A nine-country investigation conducted by the University of Oxford has concluded that social media is being used as an effective tool to spread misinformation and propaganda throughout the world.
The Computational Propaganda Research Project did not look at activity here in the UK, but examined such social media phenomena as US President Donald Trump’s use of Twitter, the prominence of Twitter bots in Russia, and how social media campaigns in China and Taiwan have been used to praise mainland China.
The Executive Summary of the report notes:
“Social media are actively used as a tool for public opinion manipulation.”
However, it goes on to point out that although examples of manipulation were noted in all nine countries studied, how this is achieved varies from one country to the next.
In authoritarian countries like Russia and China, the report notes that sites like Facebook and Twitter are used for ‘social control’. The Russian report points out that when Russian Twitter accounts putting out regular political content were analysed, 45% of them were found to be automated rather than human-controlled.
In democracies like the United States and Germany, meanwhile, attempts at manipulation are generally subtler and aimed at particular demographics, but propaganda is still evident. In the US report, a digital strategist notes that by talking loudly and regularly, both literally and metaphorically, Trump was able to set the agenda for political discussion – which may explain how he received 15% more media coverage than his rival Hillary Clinton.
How can social media sites tackle this?
The Executive Summary concludes by asserting that although social media platforms are not creating manipulative content themselves, they should take action to address the way they are being used. It calls for social media sites to “significantly redesign themselves” in the interests of protecting democracy.
What the findings highlight is that social media, though constantly evolving, is still in its infancy, and in its present state it is being used to influence opinions without users being fully aware of it. Perhaps its greatest challenge in the years ahead will be to develop more sophisticated algorithms that can assess the true worth of content.
Facebook has made moves in recent months to combat the issue of ‘fake news’, but its ways of determining the importance and value of content remain fairly crude. When Facebook’s Reactions buttons were introduced in early 2016, we noted that users might be surprised at how the site itself reacts to their reactions, and how it uses this information to decide what its users want to see. In the long term, sites like Facebook may need to come up with an intuitive way to reward meaningful, accurate discourse rather than simply assume that interactions indicate importance and quality.