Elon Musk’s X Hit With Nine Privacy Complaints After Harvesting EU User Data For Grok Training

X, the social media platform owned by Elon Musk, has been hit with a series of privacy complaints after it used data from European Union users to train AI models without asking people’s consent.

Last month, an eagle-eyed internet user spotted a setting indicating that X had quietly started processing regional user post data to train its Grok AI chatbot. The revelation prompted an expression of “surprise” from the Irish Data Protection Commission (DPC), the watchdog charged with overseeing X’s compliance with the EU’s General Data Protection Regulation (GDPR).

The GDPR, which can punish proven violations with fines of up to 4% of global annual turnover, requires that any use of personal data rest on a valid legal basis. The nine complaints against X, filed with data protection authorities in Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Poland and Spain, accuse it of clearing no such hurdle: it processed Europeans’ posts to train AI models without obtaining their consent.

Max Schrems, chairman of privacy rights group noyb, which is supporting the complaints, said in a statement: “We have seen countless cases of ineffective and partial enforcement by the DPC over the past few years. We want to ensure that Twitter fully complies with European law, which, at a minimum, requires asking users’ consent in this case.”

The DPC has already taken action regarding X’s processing of data for training AI models, filing a lawsuit in the Irish High Court seeking an injunction to force it to stop using the data. But noyb argues that the DPC’s actions so far are insufficient, pointing out that there is no way for X’s users to force the company to delete “data already ingested.” In response, noyb has filed complaints under the GDPR in Ireland and seven other countries.

The complaints argue that X has no valid basis to use the data of some 60 million people in the EU to train AI without obtaining their consent. The platform appears to rely on a legal basis known as “legitimate interest” for AI-related processing. However, privacy experts say it must obtain people’s consent.

“Companies that interact directly with users should simply ask them to answer yes or no before using their data. They do this regularly for many other things, which would certainly be possible for AI training as well,” suggests Schrems.

In June, Meta suspended a similar plan to process user data for AI training after noyb backed some GDPR complaints and regulators stepped in.

But X’s approach, which involves quietly appropriating user data for AI training without even notifying people, appears to have allowed it to fly under the radar for several weeks.

According to the DPC, X processed Europeans’ data for training the AI model between May 7 and August 1.

X users gained the ability to opt out of the processing via a setting added to the web version of the platform, apparently in late July. Before that, there was no way to block it at all. And of course, it’s hard to opt out of having your data used for AI training if you don’t even know it’s happening in the first place.

This is important because the GDPR explicitly aims to protect Europeans from any unexpected use of their information that could impact their rights and freedoms.

To argue against X’s choice of legal basis, noyb points to a ruling issued last summer by Europe’s highest court – relating to a competition complaint against Meta’s use of personal data for targeted advertising – in which judges ruled that a legitimate interest legal basis was not valid for this use case and that user consent must be obtained.

Noyb also points out that generative AI system vendors typically claim they are unable to comply with other core GDPR requirements, such as the right to be forgotten or the right to obtain a copy of your personal data. These concerns feature in other pending complaints against OpenAI’s ChatGPT.
