AI training by Meta - right to object until May 26, 2025
From May 27, 2025, Meta intends to use the publicly accessible data of all adult European Facebook and Instagram users to train its generative AI models ("AI at Meta", e.g. the Meta AI chatbot in WhatsApp or language models such as Llama). Publicly accessible data of minors is not used.
If you do not mind your publicly accessible personal data being used for AI training, you do not need to take any action. Be aware, however, that without an objection Meta will use not only the data from May 27, 2025 onward, but also all data from the past.
The Data Protection Authority strongly recommends that all others file an objection by May 26, 2025. At the same time, however, please note that the effect of the objection may be limited (see below).
What data is involved? Everything that is posted publicly is public. This means that posts, photos and their captions, and videos that you have posted as publicly visible in your profile, stories or reels are used by Meta for AI training. So far, so good. However, there is also content that appears private because the profile is set to private, but that Meta nevertheless classifies as "publicly accessible". According to Meta, this includes:
- Name
- Facebook and Instagram username
- Profile picture
- Activities in public groups, on Facebook pages and channels
- Activities with public content - for example, comments, ratings or reviews on Marketplace or on a public Instagram account
- Avatars
 
The above information is therefore always considered public. If you do not want this data to be used by Meta for AI training, you must object even if your profile is set to private.
You can do this in the respective apps, or for Facebook at https://www.facebook.com/help/contact/712876720715583 and for Instagram at https://help.instagram.com/contact/767264225370182. You must be logged in to do this. You should also have received, or will soon receive, an email from Meta. Regardless of whether you have read or deleted this email, you can object at any time using the above forms or in the apps.
Attention: Limited effect of the objection
Note: Your objection only applies to your account. If someone shares one of your posts and this person has not lodged an objection and has a publicly accessible profile, your data will be used from there for training purposes. This also applies to posts by minors that are shared by an adult who has a publicly accessible profile and has not lodged an objection. This means: Outside your account = outside your sphere of influence. Therefore, filing an objection should not be expected to provide absolute protection against the use of data for AI training purposes.
Timing of the objection
To have full effect over time: object now, and no later than May 26, 2025.
If you object later, your publicly accessible personal data will be used for AI training up to the time of the objection, because an objection only takes effect for the future; and, according to the current state of the art, data that has already been fed into the training of an AI model cannot be removed again. The right to erasure (Art. 17 GDPR), which exists alongside the right to object (Art. 21 GDPR), is therefore in all likelihood practically impossible to enforce. The same applies to the right to rectification (Art. 16 GDPR).
If you have already objected at an earlier date (this was possible for the first time in 2024), you do not need to object again. However, if the email address with which you use the Meta service has changed since then, you must object again.
If you have several Facebook and Instagram accounts, you must object for each one individually. However, if the accounts are linked in an account overview (Meta's Accounts Center), a single objection covers all of them.
Special case of WhatsApp and Meta AI
With WhatsApp, there is no option to object, because there is no publicly accessible data that Meta could use to train its generative AI models. Chats are end-to-end encrypted, so Meta cannot access them; they therefore do not count as public information.
But be careful: as soon as you communicate with the Meta AI chatbot in WhatsApp or add it to a group chat, this communication is no longer end-to-end encrypted and is therefore treated as "public". All requests and messages sent to this AI can then also be used for AI training.
Because the use of WhatsApp data for training purposes is thus directly linked to using the Meta AI chatbot, the Data Protection Authority recommends: do not use the Meta AI chatbot in WhatsApp if you do not wish to make your data available for AI training.
Also note: an objection in another Meta product does not affect WhatsApp. On the one hand, data in WhatsApp - without using the Meta AI chatbot - is not publicly accessible data that Meta could use to train AI. On the other hand, there is usually no link between a WhatsApp account and an Instagram or Facebook account, so Meta has no way of knowing whether you have objected elsewhere.
Meta-AI chatbot
Meta AI has been visible for a while in the form of a blue circle. You cannot turn this circle off; you can only ignore it. Only in Facebook and Instagram can you use Meta AI without your data being used for AI training - and only if you have objected to AI training via Facebook or Instagram as described above.
Using the Meta AI chatbot in WhatsApp, on the other hand, means that the data you enter can be used for AI training. You cannot object to this in WhatsApp; you can only refrain from using the chatbot altogether.
Source: Newsletter of the Liechtenstein Data Protection Authority | Photo: istock.com | Khanchit Khirisutchalual