In the summer of 2024, misinformation and disinformation played a significant role in fuelling riots across the UK, particularly surrounding the Southport stabbings. False narratives about the incident spread rapidly on social media, leading to confusion, fear, and escalating tensions between rival groups. Social media platforms became a conduit for rumours: some were deliberately fabricated, while others were taken out of context or amplified by bots. These false claims stoked public unrest, with protests and violent clashes erupting in several areas of the country. Authorities were forced to release accurate information and work with social media platforms to remove harmful content, but the speed and reach of misinformation had already caused widespread damage, eroding trust in both the media and public institutions.
The rise of social media fake news, misinformation, and disinformation presents one of the most pressing challenges in our digital world. While these terms are often used interchangeably, they each have distinct meanings that play a crucial role in understanding the key issues:
- Fake news refers to entirely fabricated stories or headlines designed to mislead, deceive, or manipulate readers. These false narratives are usually sensationalised and created with the intent to go viral, often to influence public opinion, stir emotions, or generate clicks and revenue.
- Misinformation is the broader category and includes any information that is false or inaccurate, regardless of intent. It could be a mistaken fact, a rumour, or something shared without malice, but still misleading. It is usually shared by people who believe it to be accurate and present it as fact, even though it is not.
- Disinformation, on the other hand, is false information that is deliberately created and spread with the intention to mislead. It is often used strategically to influence political views, shape public opinion, or achieve specific goals.
As the spread of falsehoods continues to proliferate across social media platforms, the pressure is mounting on companies to take action. Striking the right balance between protecting freedom of expression – the right to express and receive opinions, ideas and information – and addressing harmful content remains a significant challenge.
Governments around the world are implementing stricter online safety laws, such as the EU’s Digital Services Act and the UK’s Online Safety Act, which require platforms to remove harmful content, increase transparency, and ensure stronger protections for users against misinformation and disinformation. These laws aim to hold companies accountable for the content shared on their sites and compel them to take more proactive steps in curbing the spread of false information.
Social media platforms are increasingly deploying a range of practices, from fact-checking tools to collaborations with third-party organisations, to curb the impact of social media fake news, misinformation, and disinformation. However, this will remain an ongoing challenge as the topics that fuel misinformation and disinformation constantly shift. New trends, issues, and events emerge regularly, providing fertile ground for false narratives to spread. As these topics evolve, they continually test the effectiveness of efforts to combat them.
Social Media Platforms’ Efforts to Combat False Information and Social Media Fake News
Social media platforms face ongoing scrutiny over their role in the spread of false information. With billions of active users worldwide, platforms like Facebook, Instagram, X (Twitter) and YouTube are often the first places people turn to for news and information. This makes them both a powerful tool for spreading knowledge and a breeding ground for fake news.
To address these challenges, social media companies have implemented several practices aimed at reducing the spread of misinformation and disinformation:
- Detection Tools for Bots and Fake Accounts
Bots and fake accounts are frequently used to amplify disinformation on social media. These automated accounts can spread false narratives at an alarming rate, creating the illusion of widespread support or belief in a particular idea. To counter this, platforms run automated detection systems that look for tell-tale behavioural signals, such as account age, posting frequency and coordinated activity, and remove accounts identified as inauthentic; the sketch below illustrates the kind of signals such systems might weigh.
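As a rough illustration only, the Python sketch below scores an account on a few simple heuristics. The field names, weights and thresholds are invented for this example and are not any platform’s actual criteria; real detection systems combine thousands of signals with machine learning.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Illustrative fields only; real platforms draw on far richer signals.
    age_days: int          # how long the account has existed
    posts_per_day: float   # average posting frequency
    followers: int
    following: int
    default_avatar: bool   # still using the stock profile picture

def bot_likelihood(account: Account) -> float:
    """Return a 0-1 heuristic score; higher means more bot-like.

    Each suspicious signal adds a hand-tuned weight. This is a toy
    scoring rule for illustration, not a production detector.
    """
    score = 0.0
    if account.age_days < 30:
        score += 0.25   # very new accounts are higher risk
    if account.posts_per_day > 100:
        score += 0.30   # posting at a rate no human sustains
    if account.following > 10 * max(account.followers, 1):
        score += 0.25   # follows many, followed by few
    if account.default_avatar:
        score += 0.20   # no effort to personalise the profile
    return min(score, 1.0)

# Example: a day-old account posting 500 times a day looks highly bot-like.
suspect = Account(age_days=1, posts_per_day=500, followers=3,
                  following=2000, default_avatar=True)
print(f"bot likelihood: {bot_likelihood(suspect):.2f}")  # 1.00
```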
- Fact-Checking Partnerships
Many platforms have established partnerships with independent fact-checking organisations to help identify and flag false or misleading content.
Note: on 7th January 2025 Meta announced an overhaul of its content moderation policy, including a switch from third-party fact-checking to Community Notes. The change initially applies to the US (and not the UK or EU). Full details can be found in this blog post.
- AI-Driven Content Moderation
Artificial intelligence (AI) has become a key tool in the fight against fake news. Social media platforms have developed machine learning algorithms to detect patterns of misinformation and flag content for review. These AI systems can analyse massive amounts of data in real time, identifying potential threats like hoaxes, conspiracy theories, or misleading headlines. The simplified sketch below shows the basic shape of such a classifier.
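To make the idea concrete, here is a deliberately tiny sketch of how a machine learning classifier can flag posts for human review. It is a generic example using scikit-learn with a handful of invented training posts and an arbitrary review threshold, not a description of any platform’s actual moderation model.

```python
# Toy illustration of ML-based content flagging. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled training set (1 = looks like misinformation, 0 = benign).
# The examples are invented purely for illustration.
posts = [
    "Doctors confirm this one weird trick cures every disease overnight",
    "Secret documents prove the election results were fabricated",
    "Share before they delete this! The truth they do not want you to see",
    "Local council announces new recycling collection schedule",
    "University publishes peer-reviewed study on sleep and memory",
    "Weather service forecasts rain across the region this weekend",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features plus a linear classifier stand in for a real model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

def flag_for_review(text: str, threshold: float = 0.6) -> bool:
    """Queue a post for human review if its predicted probability of
    being misinformation exceeds the threshold."""
    prob = model.predict_proba([text])[0][1]
    return prob >= threshold

example = "Leaked memo proves a secret plan to fake the weather forecasts"
print(f"misinformation probability: {model.predict_proba([example])[0][1]:.2f}")
print(f"flag for review: {flag_for_review(example)}")
```

A model this small would misclassify constantly, which is why flagged content is passed on for review rather than removed automatically, as the next point on human moderation describes.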
- Human Moderation
Despite the advancements in AI, human moderators continue to play a crucial role in identifying and addressing misinformation and disinformation. Teams of moderators investigate suspicious activity and work to dismantle networks or coordinated campaigns that spread false narratives. Human moderation is vital in tackling nuanced or context-dependent misinformation that algorithms may miss.
- Labels and Warnings
Posts identified as misinformation or disinformation are often marked with warnings or disclaimers. These labels alert users to the potential inaccuracy of the content. In many cases, users are also directed to verified resources or fact-checked articles to provide clarity and correct the false narrative. This helps users access more accurate information to better understand the content they encounter.
- Community Guidelines on Social Media Fake News, Misinformation and Disinformation
To reduce the spread of false information, social media platforms maintain community guidelines that explicitly prohibit the spread of harmful misinformation and disinformation. This includes false health claims, political manipulation, and coordinated inauthentic behaviour. When users violate these guidelines, they may face consequences such as content removal, account suspension, or even permanent bans for repeat offenders.
You can find links to the platforms’ community guidelines below:
Community Standards for Facebook, Instagram, Messenger and Threads
LinkedIn Professional Community Policies
- User Education and Tools
Platforms are also taking proactive steps to educate users on how to identify and avoid spreading misinformation:
- Pre-emptive Prompts: Some platforms prompt users to read articles before sharing them, encouraging them to verify the information they are about to spread. These tools warn users about the risks of sharing unverified content, aiming to curb the viral spread of misleading posts.
- Information Hubs: Social media platforms have created centralised hubs that provide users with verified information on high-stakes topics like public health, elections, or climate change. These hubs aim to be a trusted source of real-time updates and counter misinformation with credible facts.
- Transparency and Accountability
In the interest of building trust with users and regulators, social media platforms have become more transparent about their efforts to combat disinformation.
Many platforms now publish regular transparency reports, outlining the steps they’ve taken to identify and remove harmful content. For example, Meta’s Community Standards Enforcement Report provides data on how much harmful content was removed, the effectiveness of detection technologies, and enforcement trends.
The LinkedIn Community Report includes details on the removal of fake accounts, which are often used to spread misinformation or engage in fraudulent activities. It also highlights the volume of content flagged for violating platform policies, including misinformation, and outlines how LinkedIn addresses coordinated inauthentic behaviour.
The X Transparency Center covers information requests, removal requests, copyright notices, trademark notices, email security, X Rules enforcement, platform manipulation, and state-backed information operations.
TikTok regularly publishes Transparency Reports to provide visibility into how it upholds its Community Guidelines and responds to law enforcement requests for information, government requests for content removals, and intellectual property removal requests.
Similarly, YouTube’s Transparency Report highlights the number of videos flagged and removed for spreading misinformation, as well as policy updates.
Social media platforms, such as Meta, LinkedIn, TikTok, and YouTube, also have specific advertising policies that require transparency for ads related to social issues, elections, and politics, including identity verification processes and the inclusion of ‘paid for by’ disclaimers to ensure users are informed about the sources of promoted content.
This ongoing accountability helps users understand how these platforms are handling misinformation and gives them insight into the challenges involved in curbing it.
You can find links to the platforms’ advertising policies below:
Google Advertising Policies (also applicable to YouTube)
The Role of Social Media Users in Combating False Information and Social Media Fake News
While social media platforms are taking action to combat misinformation and disinformation, users also play a critical role in fostering a more truthful online environment. Here are some ways users can help:
- Fact Check Information Before Sharing and Look Out for Verified Accounts
One of the simplest and most effective ways users can combat fake news is by reading, questioning and verifying information before sharing it. Fact-checking websites, such as Full Fact and Snopes, are excellent resources for validating claims before sharing them with others.
Additionally, relying on verified accounts—such as those of credible news outlets, experts, and public figures—can help ensure the information you share is trustworthy. Verified accounts are held to higher standards of accountability, making them a reliable source for accurate information and helping to counteract the spread of misinformation and disinformation.
For users considering verifying their accounts, it’s a valuable step to increase credibility and trust with your audience. Verified accounts help ensure that your communications are seen as authentic and reduce the likelihood of impersonation.
Platforms including X (Twitter), Instagram, Facebook, TikTok, LinkedIn and YouTube offer verification processes, typically requiring documentation to prove the account’s authenticity.
- Report Content and Social Media Fake News
If you come across content that seems false, misleading, or harmful, reporting it is a vital step in combating misinformation and disinformation. Social media platforms rely on users to flag suspicious content, which triggers a review process by their moderation teams. This can lead to quicker identification and removal of misleading posts, helping to ensure that users are exposed to accurate information.
For example, on Facebook, you can report a post by selecting the three dots in the corner of a post, then choosing the option to report it for misinformation.
Instagram users can report posts directly in the app, helping to combat the spread of false narratives.
Similarly, X (Twitter) allows users to flag tweets that violate their rules, such as promoting false information about health or elections.
YouTube also has a reporting tool for videos that violate their community guidelines, including spreading conspiracy theories or manipulated media.
On LinkedIn, you can report suspicious content by clicking the three dots on a post and selecting “Report this post,” helping to flag misleading or false professional information.
On TikTok, simply tap the share icon, choose ‘Report’, and select ‘Misinformation’ to flag videos that may contain misleading or harmful content.
- Improve Your Media Literacy
Media literacy is about understanding how information is presented and thinking critically about what you see, read, or hear, so that you can make informed decisions and avoid being misled by false or biased content.
Improving your media literacy is all about becoming more confident in spotting what’s real and what’s not online. Start by looking beyond the headlines—ask yourself if the story is backed by credible sources and whether it’s presenting balanced, reliable evidence. It’s also good to remember that algorithms can shape what you see, so try mixing up your news and information sources to get a broader perspective.
Useful resources to support media literacy are available from Ofcom.
Social Media Fake News, Misinformation and Disinformation: Conclusion
The fight against social media fake news, misinformation, and disinformation is an ongoing challenge, but social media platforms, alongside government actions, are taking important steps toward creating a more trustworthy digital space.
While the responsibility largely lies with platforms to implement policies and detection tools, governments are also introducing regulations to hold them accountable for the spread of harmful content.
However, this battle requires a collective effort—both platforms and users must work together to combat misinformation in all its forms. By staying vigilant, questioning the information we encounter, and promoting media literacy, we can all play a role in fostering a more accurate and informed online environment.