You can't believe everything you read, and that's never truer than when you're scrolling through TikTok and Twitter, which a recent study has identified as the platforms most likely to amplify misinformation.
While Facebook was found to contain the most false posts, these didn't spread as far as those on TikTok and Twitter: as platforms built on virality, both can push content far and wide in a short amount of time.
Social media is one of the key ways that we receive information these days, meaning it's more important than ever that what we see is true, and that when it isn't, platforms are held accountable.
The Misinformation Amplification Study
The study, which comes from the Integrity Institute, aimed to find out whether some social media platforms are more likely to spread misinformation than others.
As the group itself recognised in the study, content that contains misinformation tends to see more likes, views and comments than factual content. This means that removing disinformation would heighten trust on these platforms, but ironically, lower engagement.
To measure misinformation on the platforms, researchers collected posts, fact-checked them, and measured how much engagement each post had received compared to the poster's historical engagement. The resulting ratio is the Misinformation Amplification Factor (MAF).
Researchers estimated expected engagement from each creator's previous two weeks of posting history (or their last 15 posts), using this as a baseline against which to compare the offending misinformation post.
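The baseline-and-ratio approach above can be sketched in a few lines of Python. This is a hypothetical illustration, not the Integrity Institute's actual code: it assumes MAF is simply the flagged post's engagement divided by the mean engagement of the poster's recent posts.

```python
# Hypothetical sketch of a Misinformation Amplification Factor (MAF)
# calculation. Assumption: MAF = actual engagement / expected engagement,
# where expected engagement is the mean over the poster's baseline posts
# (previous two weeks, or their last 15 posts, per the study's method).

def misinformation_amplification_factor(baseline_engagements, post_engagement):
    """Compare a flagged post's engagement against the poster's baseline."""
    if not baseline_engagements:
        raise ValueError("need at least one baseline post")
    expected = sum(baseline_engagements) / len(baseline_engagements)
    return post_engagement / expected

# Example: a creator whose posts usually get ~200 engagements posts
# misinformation that receives 1,000 engagements.
baseline = [180, 220, 210, 190, 200]
maf = misinformation_amplification_factor(baseline, 1000)
print(round(maf, 2))  # 5.0 — the post was amplified 5x over baseline
```

A MAF of 1.0 would mean the misinformation post performed no better than the creator's usual content; values well above 1.0 indicate the platform amplified it.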
Twitter and TikTok Main Misinformation Offenders
The results from the study show that it's Twitter and TikTok that are most likely to amplify disinformation and spread it further. While the number of misinformation posts was far higher on Facebook than any other platform, it was much less likely to be boosted, resulting in a much lower MAF score.
The reason for Twitter and TikTok's boosting of misinformation? The study puts it down to virality. Of all the social media platforms investigated, these are the two that depend most on viral content. What's more, the sharing and boosting mechanisms on each are incredibly easy: a single button click and it's done. Sharing a post on Facebook involves far more steps, as it does on YouTube and Instagram.
In addition, TikTok and Twitter are very public platforms, designed to engage as many users as possible. As a result, their users' posts can spread far beyond their own social circles, unlike on Facebook, meaning the reach of the misinformation is almost unlimited.
Facebook has its own fact-checking program, which should root out many of the nefarious posts, but as the study discovered, fact-checking takes time, and previous research shows that social media posts receive 90% of their engagement on the first day. So even on a platform like Facebook, which actively weeds out false information, the damage may already be done by the time a post is downranked and its distribution limited.
It's true that social media companies haven't been sitting on their hands. YouTube, for instance, has been incredibly aggressive in demonetizing and flagging Covid-19 misinformation. Twitter even ousted a President from its platform for posting misinformation and inciting violence (even if it took four years and the call came during the President's final hours in power). However, there's clearly still more to be done, and next time you come across a viral TikTok or Twitter smash, we suggest viewing it with a modicum of suspicion.