X Faces Multiple Complaints in Europe for Alleged Illegal Use of User Data to Train AI Models

Kaamel Lab

Following a costly lawsuit in Brazil, Elon Musk's social platform X (formerly Twitter) is now facing multiple complaints in Europe. These complaints accuse X of illegally using EU users' personal data to train its artificial intelligence technologies, including Grok.

Lawsuit Filed by the DPC

According to information released by the Irish High Court, on August 6, the Irish Data Protection Commission (DPC) filed a lawsuit against X, accusing it of collecting and using users' data to train the AI model Grok without their consent.
In July of this year, an X user discovered that the platform had automatically opted them in to having their posts and interactions with the Grok chatbot used to train X's AI system, without explicit consent. In addition, the option to refuse this data use was difficult to find, further exacerbating users' privacy concerns and distrust of the platform.
On August 5, a consumer organization filed a complaint with the DPC, alleging that X’s use of user data to train its AI models violated the General Data Protection Regulation (GDPR), particularly due to insufficient transparency regarding data usage and the cumbersome process for users to exercise their right to object. Subsequently, the DPC brought the case to court, seeking an injunction or restrictive order to prevent X from using personal data to develop, train, or improve its AI systems. The DPC also plans to submit the case to the European Data Protection Board for review.
X's privacy policy had already sparked controversy as early as September 2023, when an update stated: "We may use the information we collect and publicly available information to help train our machine learning or artificial intelligence models." Elon Musk explained at the time that X would only use public data to train AI models and would not use any private content. The issue did not gain widespread attention until recently, when users discovered that X had been using their tweet content to train the Grok AI model without explicitly informing them, and that this data collection option was enabled by default.
On August 8, the DPC announced that X had agreed to suspend the use of EU/European Economic Area (EEA) users' personal data (tweets collected from May 7, 2024, to August 1, 2024) for AI model training until a court ruling is issued. The case will be further reviewed by the court and the European Data Protection Board.

NOYB’s Complaints in 9 Countries

None Of Your Business (NOYB) is a Vienna-based non-profit organization dedicated to protecting privacy and data protection rights. Through legal action and policy advocacy, its mission is to ensure that companies and institutions comply with privacy regulations such as the GDPR.
On August 12, NOYB announced that it had filed complaints against X with data protection authorities (DPAs) in nine countries: France, Italy, the Netherlands, Austria, Belgium, Greece, Ireland, Spain, and Poland. The complaints accuse X of illegally using the data of over 60 million users within Europe to train its AI models without obtaining explicit consent, in serious violation of GDPR regulations.
NOYB claimed that since July 26, 2024, X has used a new default setting on its platform that irrevocably feeds all user data into machine learning or AI model training, without explaining the system's purpose. X attempted to justify this data use under the "legitimate interests" legal basis, but NOYB argued that such processing must instead be based on user consent.
The complaints accuse X of violating the GDPR, specifically highlighting X's lack of a valid legal basis for collecting and using personal data for AI training and its lack of transparency. They allege violations of GDPR Articles 5(1), 5(2), 6(1), 9(1), 12(1), 12(2), 13(1), 13(2), 17(1)(c), 18(1)(d), 19, 21(1), and 25. NOYB stated that, because X has already begun processing data for its AI technology and has not provided an option to delete the collected data, it has asked the regulators to initiate emergency procedures under GDPR Article 66.

Compliance Analysis

The privacy compliance issues surrounding AI model training highlight the potential conflict between technological innovation and privacy protection. On one hand, AI development requires vast amounts of data; on the other, increasingly strict privacy regulations must be respected. The key question for companies is: how can AI models be developed without violating privacy law?

Under regulations such as the GDPR, the processing of personal data must rest on a legal basis. Companies like Meta and X are under scrutiny over whether they obtained users' explicit consent before using their data to train AI models. Even when processing is based on legitimate interests, a company must demonstrate that it has not infringed users' privacy rights. Throughout this process, companies should safeguard data subjects' right to information and other rights: users should be able to understand how their data is used and to exercise their rights of access, erasure, and rectification in a straightforward manner.

When developing AI tools, companies should clearly define the legal basis for data processing, strengthen anonymization measures, increase transparency, and enhance data security management. By implementing these privacy compliance measures, companies can reduce legal risk, build user trust, and strengthen their market competitiveness. A minimal sketch of what such a "consent-gated" pre-processing step could look like appears below.
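As a rough illustration of the measures described above, the following Python sketch shows a hypothetical pre-processing filter that admits a record into an AI training set only when the user has explicitly opted in and has no standing objection, and that pseudonymizes identifiers before training. All names here (Post, consent_gated_training_stream, the opt-in and objection flags) are illustrative assumptions, not taken from X's systems or any GDPR tooling.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator
import hashlib


@dataclass
class Post:
    """Hypothetical record of a user post considered for AI training."""
    user_id: str
    text: str
    user_opted_in: bool   # explicit consent for AI training recorded
    user_objected: bool   # a standing objection (right to object) on file


def pseudonymize(user_id: str, salt: str) -> str:
    """Replace the raw identifier with a salted hash.

    Note: this is pseudonymization, not full anonymization -- the mapping
    remains recoverable by whoever holds the salt.
    """
    return hashlib.sha256((salt + user_id).encode()).hexdigest()


def consent_gated_training_stream(posts: Iterable[Post], salt: str) -> Iterator[dict]:
    """Yield only records with a valid basis for training use.

    Records without explicit opt-in, or with a standing objection,
    are dropped before they ever reach the training pipeline.
    """
    for post in posts:
        if not post.user_opted_in or post.user_objected:
            continue  # no valid basis: exclude the record entirely
        yield {
            "subject": pseudonymize(post.user_id, salt),
            "text": post.text,
        }


if __name__ == "__main__":
    sample = [
        Post("alice", "public post about AI", user_opted_in=True, user_objected=False),
        Post("bob", "another post", user_opted_in=False, user_objected=False),
    ]
    for record in consent_gated_training_stream(sample, salt="demo-salt"):
        print(record)  # only the consented record is emitted
```

This is only a sketch of the principle: the filtering decision is made before any data enters the training set, and the opt-in and objection flags would in practice have to be backed by real consent records and honored retroactively (including deletion of already collected data) to address the kinds of shortcomings raised in the complaints.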