The role of social bots in spreading fake news

Fake news is spread all over the internet nowadays and has been linked to recent manipulations of stock markets and elections. For that reason, tech giants have stepped up their fight against these malicious disinformation campaigns, with Google recently expanding its Fact Check tool to flag fake news in search results and Facebook enabling its fake news alert in Kenya ahead of the country's election on August 8.

But in order to limit the diffusion of fake news across the internet, it is also important to know how it spreads in the first place. That is the question tackled by scientists from Indiana University in Bloomington, who have carried out the first systematic study of how fake news spreads on Twitter.

To study the propagation of false or misleading news, the team monitored 400,000 claims made by 122 websites flagged as routine publishers of fake news by independent fact-checking sites such as snopes.com, politifact.com, and factcheck.org. The list also includes some satirical sites, such as theonion.com, "because many fake-news sources label their content as satirical, making the distinction problematic", as stated by Chengcheng Shao, leader of the project.

The team also monitored 15,000 stories written by those fact-checking websites and over a million Twitter posts mentioning them. By analyzing the last 200 tweets from each monitored Twitter account, the team determined which accounts were most likely run by humans and which by bots.
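To give a feel for what scoring an account from its recent timeline can look like, here is a minimal, self-contained sketch. It is not the study's actual classifier (real systems like Botometer combine hundreds of trained features); the Tweet type, the three signals, and the weights are all illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class Tweet:
    created_at: datetime
    is_retweet: bool
    text: str

def bot_likelihood(tweets: List[Tweet]) -> float:
    """Crude bot score in [0, 1] from an account's recent timeline
    (assumed newest-first). A toy stand-in for a trained classifier."""
    if len(tweets) < 2:
        return 0.0
    span_hours = max(
        abs((tweets[0].created_at - tweets[-1].created_at).total_seconds()) / 3600,
        1.0,
    )
    tweets_per_hour = len(tweets) / span_hours                       # bots post at high, steady rates
    retweet_ratio = sum(t.is_retweet for t in tweets) / len(tweets)  # bots mostly amplify others
    unique_ratio = len({t.text for t in tweets}) / len(tweets)       # bots repeat the same content

    score = min(tweets_per_hour / 10.0, 1.0) * 0.4
    score += retweet_ratio * 0.4
    score += (1.0 - unique_ratio) * 0.2
    return score

# Usage with fabricated data: 200 identical retweets, one every 5 minutes.
now = datetime(2017, 8, 1, 12, 0)
timeline = [Tweet(now - timedelta(minutes=5 * i), True, "Claim X is true!")
            for i in range(200)]
print(f"bot score: {bot_likelihood(timeline):.2f}")  # close to 1.0
```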

Diffusion network for the article “Spirit cooking: Clinton campaign chairman practices bizarre occult ritual”, published by the site Infowars.com four days before the 2016 U.S. election.

The scientists have also created two public platforms: Hoaxy, for analyzing fake news claims, and Botometer, for estimating how likely it is that a given Twitter account is run by a bot rather than a human. Based on their results, the team concluded that "accounts that actively spread misinformation are significantly more likely to be bots".

An interesting behavior of these bot accounts is that they are "particularly active in the early spreading phases of viral claims, and tend to target influential users". By aiming at highly connected nodes in a social network, the bots increase the likelihood that the misleading information goes viral, as the simple simulation below illustrates.
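The following sketch shows why targeting hubs pays off for bots. It runs an independent-cascade spread on a scale-free graph (built with networkx) and compares seeding the ten highest-degree nodes against ten random ones. The graph model, the infection probability p, and the seed counts are illustrative assumptions, not the study's data.

```python
import random
import networkx as nx

def simulate_cascade(graph, seeds, p=0.05, seed=42):
    """Independent-cascade spread: each newly reached node gets one
    chance to pass the claim to each neighbor with probability p."""
    rng = random.Random(seed)
    infected = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for nb in graph.neighbors(node):
                if nb not in infected and rng.random() < p:
                    infected.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return len(infected)

# Scale-free network, roughly shaped like a follower graph.
g = nx.barabasi_albert_graph(10_000, 3)

hubs = sorted(g.nodes, key=g.degree, reverse=True)[:10]  # "influential users"
randoms = random.Random(1).sample(list(g.nodes), 10)     # arbitrary accounts

print("seeding hubs:   ", simulate_cascade(g, hubs))
print("seeding randoms:", simulate_cascade(g, randoms))
```

On runs like this, seeding the hubs typically reaches a noticeably larger share of the network than seeding random nodes, which is exactly the leverage the bots exploit by pushing claims at influential accounts early on.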

Finally, the scientists suggest that "curbing social bots may be an effective strategy for mitigating the spread of online misinformation". Unfortunately, given the decentralized nature of the internet, a global strategy for restraining the activity of such bots may be extremely hard to implement.

Source: MIT Technology Review
