Artificial Intelligence (AI) is becoming an integral part of our online experience, from helping social media platforms improve their services to creating personalised content for users. But with growing concerns around data privacy, many are seeking ways to opt out of AI usage, particularly where their personal data is involved. Concerns include the potential misuse of personal information in AI training, unintended profiling, and exposure to algorithmic bias. In this blog, you will find a brief overview of the EU AI Act, learn how to opt out of AI training on Meta and LinkedIn, see how X uses your data for AI, and link it all back to the wider picture of AI features on social platforms.
The EU AI Act: A Quick Overview
The EU Artificial Intelligence Act is the European Union’s landmark regulation aimed at governing AI technologies. First proposed by the European Commission on April 21, 2021, this regulation was part of the EU’s broader digital strategy to regulate upcoming technologies. The EU AI Act came into force across all 27 EU Member States on 1 August 2024, and the majority of its provisions will be enforced from 2 August 2026. Until then, and while details are still under negotiation, providers of high-risk AI systems are encouraged to comply on a voluntary basis.
The act is designed to address the rapid development and integration of AI systems across various industries, from healthcare and finance to social media and public services.
Set to be one of the first laws globally to regulate AI, it classifies AI systems into four risk categories (unacceptable, high, limited, and minimal) and applies different levels of regulatory scrutiny accordingly.
The Act focuses on ensuring AI is used ethically, avoiding harm to individuals’ rights and freedoms, and preventing misuse of sensitive data.
Non-compliance with the Act can result in fines of up to EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
How the EU AI Act Impacts Social Media Platforms
The EU AI Act regulation has significant implications for social media platforms and social media managers.
For social media platforms, the act requires greater transparency and accountability in how they use AI, particularly concerning user data. This means that AI systems used for content recommendations, targeted advertising, and personalisation must comply with strict regulations to ensure they do not infringe on users’ rights or privacy.
For social media managers, the act reinforces the need to be aware of how AI-driven tools are used in their activities, especially when it comes to data collection, targeting, and automation. The EU AI Act introduces an added layer of responsibility. Compliance might mean reviewing how tools like AI-powered chatbots or social media analytics platforms are used, ensuring that data collection and processing are transparent and user consent is respected.
With increased regulation, managers will need to ensure that any AI tools they use, such as for analytics or content creation, align with these new standards, focusing on ethical practices and maintaining user trust.
What is AI Training and How Does It Work?
AI training is the process of teaching machines how to perform tasks by showing them lots of examples. Just like how we learn by practising, AI systems learn from data—whether it’s text, images, or other types of information. The more data they get, the better they become at understanding patterns and making decisions. For example, an AI might learn to recognise objects in pictures or understand language by analysing thousands of similar examples. Over time, this helps the AI get smarter and more accurate at doing the tasks it’s been trained to do.
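To make the idea of "learning from examples" concrete, here is a toy sketch in Python. This is my own illustration, not how any real platform trains its models: the "training" step simply counts which words appear under each label, and prediction picks the label whose examples best match the new text.

```python
from collections import Counter

# Toy training data: example texts, each labelled with a topic.
examples = [
    ("cheap pills buy now", "spam"),
    ("limited offer buy cheap", "spam"),
    ("meeting agenda for monday", "work"),
    ("project update and agenda", "work"),
]

# "Training": count how often each word appears under each label.
word_counts = {}
for text, label in examples:
    counts = word_counts.setdefault(label, Counter())
    counts.update(text.split())

def predict(text):
    # Score each label by how often its examples contained the text's words.
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(predict("buy cheap pills"))        # words seen mostly in spam examples
print(predict("monday project meeting")) # words seen mostly in work examples
```

Real AI systems use far more sophisticated methods, but the principle is the same: the more examples the system sees, the more reliable the patterns it extracts become, which is exactly why platforms want large amounts of user data.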
However, AI training can also be harmful. One major concern is bias in data: AI models learn from data that reflects societal biases, such as gender, racial, or cultural biases, leading to unfair or discriminatory outcomes. Privacy issues also arise when sensitive personal data is used without consent or adequate protection, putting individuals at risk of data breaches or misuse. Additionally, AI models trained on large amounts of data can sometimes reinforce harmful behaviours or spread misinformation, especially if the data includes misleading or harmful content. Lastly, a lack of transparency in AI systems can make it difficult to understand how decisions are made, leading to mistrust and accountability concerns.
To protect your data when using AI tools, limit the personal information you share and adjust privacy settings to control what data is collected. Use anonymisation techniques, review privacy policies, and be cautious about granting unnecessary permissions.
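As a simple illustration of what anonymisation can look like in practice, the sketch below (a hypothetical example, not tied to any particular tool) strips email addresses from free text and replaces a user ID with a one-way hash, so records can still be linked without exposing the raw identifier. The salt value is a placeholder you would replace with your own secret.

```python
import hashlib
import re

def anonymise(text):
    # Replace anything that looks like an email address with a placeholder.
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email removed]", text)

def pseudonymise_id(user_id, salt="replace-with-a-secret-salt"):
    # One-way hash: the same ID always maps to the same token,
    # but the original ID cannot be read back from it.
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

print(anonymise("Contact jane.doe@example.com about the report"))
print(pseudonymise_id("user-12345"))
```

Techniques like these reduce what an AI tool can learn about you from the data you hand over, though they are no substitute for checking the tool's privacy policy and opt-out settings.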
For some tools, including social media platforms, you can opt out of having your data used for AI training if you prefer not to participate in the process.
How to Opt-Out of Meta’s AI Training (Facebook & Instagram)
If you’ve heard rumours about preventing Meta from using your data to train its AI, you’re not alone. A post (shown below) went viral, but it was based on false information and has been flagged as such by fact-checkers. Meta has confirmed that posting such statements does not constitute a legal objection to AI data usage.
If you do wish to stop Meta from using your Facebook and Instagram data to train its AI models, here are the steps to follow:
- Access Your Account Settings on Facebook or Instagram.
- Go to Privacy and then select Privacy Settings.
- Scroll down to the Privacy section and find Generative AI Data Usage.
- Toggle off the option to allow Meta to use your data to train its AI.
By opting out, your data won’t be used to train AI models, although Meta may still use it for other platform services.
How to Opt-Out of AI Training on LinkedIn
LinkedIn has also started incorporating AI features into its platform to improve services like job recommendations, content suggestions, and ads. If you’d prefer not to have your data included in training LinkedIn’s AI models, here’s how you can opt out:
- Log in to your LinkedIn account and go to your Settings & Privacy section.
- Select Account Preferences.
- Scroll down to AI Settings.
- You can then opt out of using your data to improve AI recommendations or train LinkedIn’s AI systems.
Note: AI training is not conducted using data from members located in the EU, EEA, UK, Switzerland, Hong Kong, or Mainland China.
How X (Twitter) Uses Your Data for AI Training
Recently, X updated its Terms of Service. This includes a key change allowing the platform to use your posts for AI training, including its xAI models. By continuing to use X and agreeing to the updated terms, you’re permitting your content to be used in this way.
However, opting out of this data usage is not currently possible unless you live in the EU, where stringent data protection laws apply. For non-EU users, only conversations with X’s Grok chatbot can be excluded from AI training.
AI Features Across Social Media: What to Expect
AI is rapidly evolving, and platforms like Meta, LinkedIn, and others are introducing more AI-powered tools. The hope is to enhance user experience—from content creation tools to personalisation algorithms.
If you’re curious about what AI features are being rolled out across social platforms and how they can impact your work as a social media manager, check out my blog on AI features across social media platforms for a summary.
EU AI Act Regulation: In Conclusion
With the introduction of the EU AI Act and growing concerns around data privacy, having control over how your data is used is more important than ever. While some platforms, like Meta and LinkedIn, offer opt-out options, others, like X, don’t unless you reside in the EU. Keep up with these changes by staying vigilant about updates to the Terms of Service, and make sure you review and adjust your privacy settings regularly.
As AI evolves, so too will the policies and opt-out features. To stay informed on the latest social media features, including AI, subscribe to my weekly email newsletter.