Hello all, and happy Thursday! 

ChatGPT’s standing under the GDPR has always been shaky. Just look at the brief period when Italy’s data protection authority flat-out banned ChatGPT in the country. And wherever GDPR compliance is suspect, Max Schrems is sure to be found.  

For the unfamiliar, Schrems, founder of the privacy advocacy group noyb (or “none of your business”), is best known for the invalidation of various EU-U.S. data transfer frameworks, such as the Privacy Shield, as well as various complaints and lawsuits against big tech companies. Now, Schrems and his organization have turned their sights on ChatGPT. 

One of our stories this week focuses on a recent complaint filed by noyb against ChatGPT. The basis of the complaint? ChatGPT didn’t know the date of Schrems’ birthday. 

At first blush, this might seem frivolous—but it actually underscores a deeper issue with ChatGPT and its ability to comply with the GDPR.   

Rather than state that it didn’t know Schrems’ birthday, the AI chatbot made up several dates. When generative AI solutions like ChatGPT simply invent information out of thin air, it’s called “hallucinating.” 

Under the GDPR, controllers and processors are supposed to:  

  • Identify the sources they receive personal information from. 
  • Ensure that data subjects’ personal information is accurate. 
  • Honor data subjects’ requests to correct inaccurate personal information. 
  • Honor data subjects’ requests to delete personal information. 

According to a noyb statement, OpenAI was unable to meet any of these requirements. Hence, the complaint (even if it is over something as simple as a birthday).   

Unfortunately, acting upon data subject requests in large language models like ChatGPT is a major technical challenge, and it’ll likely be some time before the industry solves the data privacy compliance issues associated with generative AI. In case you missed it, our recent webinar focused on this very issue. 

Best, 

Arlo 



Top Privacy Stories of the Week

2 Million Hit in Massive Debt Collector Data Breach—Full Names, Birth Dates and SSNs Exposed 

Financial Business and Consumer Solutions (FBCS), a debt collection agency, has revealed that it has fallen victim to a data breach in which borrower information was exposed online. Approximately 2 million individuals’ sensitive personal information was recently accessed by hackers, including full names, Social Security numbers, dates of birth, account information, and driver’s license numbers or ID card numbers. 

Read more 

Connecticut Senate Passes AI Law 

The Connecticut Senate recently passed Senate Bill 2, which aims to rein in bias in artificial intelligence decision-making and protect people from harm, including manufactured videos or deepfakes. Now, the AI law goes to the Connecticut House of Representatives. If the bill becomes law, it will go into effect on February 1, 2026. 

Read more 

UK’s Revamped Surveillance Rules Become Law Despite Industry Opposition 

The UK’s Investigatory Powers (Amendment) Act (IPAA) received royal assent on Friday, making it law and broadening the government’s ability to collect bulk communications data. The act raises concerns about potential mass surveillance and violations of individual privacy as it weakens safeguards when intelligence services collect bulk datasets of personal information, potentially enabling the harvesting of millions of facial images and social media posts. 

Read more 

Schrems’ noyb Files Complaint Against ChatGPT Over Hallucinations 

None of your business (stylized as “noyb”), a non-profit founded by privacy activist Max Schrems, filed a complaint against ChatGPT on the grounds that it violates the GDPR’s requirements on privacy and data accuracy, as well as the right to correct inaccurate information. The complaint was triggered by ChatGPT's failure to supply Schrems' correct birthday; rather than admit it lacked the data, the chatbot simply guessed. The chatbot doesn't tell users when it doesn't have the correct data to answer a request. 

Read more 

EU AI Act Unpacked: Which Types of AI Are Regulated Under the AI Act? 

What is the difference between a high-risk AI system, certain AI systems with transparency risks, general-purpose AI models, and general-purpose AI models with systemic risk? This post examines how the EU Artificial Intelligence Act (AI Act) defines AI systems and General Purpose AI (GPAI) models. 

Read more 

Osano Webinar: AI and Data Privacy: Minimizing Risk and Maximizing Opportunity 

Miss our recent webinar on AI and its implications for data privacy? Don’t worry; you can access the recording of the webinar on our site. You’ll hear from data privacy and software development experts on where AI’s data privacy risks lie, get practical advice on how to mitigate those risks, and learn which requirements you’ll likely face when implementing compliant AI. 

Watch here 

If you’re interested in working at Osano, check out our Careers page

Schedule a demo of Osano today