
Note: As of April 28th, 2023, ChatGPT is once again accessible in Italy after OpenAI implemented several measures ordered by the Italian data protection authority.

On April 3rd, 2023, Italy became the first EU country to ban ChatGPT. Germany, Ireland, France, and other countries are seriously analyzing AI’s GDPR compliance and may follow Italy’s example. What does this mean for businesses in the EU and elsewhere? Is generative AI a nonstarter for anyone who wants to respect users’ privacy? Here’s what you need to know. 

What Exactly Is ChatGPT? 

ChatGPT, which stands for Chat Generative Pre-trained Transformer, is an AI language model developed by OpenAI. It can generate human-like responses and even give you the feeling you’re having a conversation with it.  

The model is still far from perfect. Although it’s constantly ingesting new data, its knowledge of facts and events is only accurate up to 2021. It occasionally provides incorrect responses but presents them as though they were true. 

Despite the imperfections, the tool is impressive and highly useful. Not only are individuals using it for content creation, software development, meeting transcripts, and more, but many businesses are using it as well. Morgan Stanley uses it to surface relevant content to financial advisors, Stripe uses it to support customers and combat fraud, and many other businesses are using it or other AI models to engage with their customers. 

AI models like ChatGPT rely on training from a vast corpus of data to function. This includes not only publicly available texts but also what the model learns from user interactions. This collection and processing of data, which can easily include a user’s personal information, is why ChatGPT and other AI models have become a focus of data privacy concerns. 
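
Because prompts sent to a model like ChatGPT may be retained and used for training, one common mitigation is to scrub obvious personal data before text ever leaves your systems. Here’s a minimal Python sketch of the idea; the regex patterns and function name are illustrative, and a production system would use a dedicated PII-detection library or service:

```python
import re

# Illustrative patterns only -- emails and phone numbers are the easy
# cases; real PII detection needs a dedicated tool.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace recognizable personal data with placeholders before the
    text is sent to a third-party AI API."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Email john.doe@example.com or call +1 555 123 4567 about the refund."
print(redact_pii(prompt))
# -> "Email [EMAIL REDACTED] or call [PHONE REDACTED] about the refund."
```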

AI and Privacy 

At the beginning of April 2023, the Italian data protection authority banned ChatGPT and announced it would open an investigation into OpenAI for breaching the GDPR. 

The decision may seem harsh, but it’s not without precedent. 

In 2021, Australia found that Clearview AI had violated the Australian Privacy Act by collecting images and biometric data without consent. Shortly after, authorities in the UK, Canada, and France took similar action, issuing fines and ordering Clearview AI to stop any further data processing activities and delete all the data it had gathered until then.  

In the same year, the Dutch Data Protection Authority fined the Dutch Tax and Customs Administration for violating the GDPR by using a machine learning algorithm that discriminated against applicants based on their nationality. 

How Does AI Invade Privacy? 

AI privacy has been a concern in the EU and the world at large for several years now. Some of the general concerns include: 

  • Poor consent management capabilities. 
  • Unclear or unspecified legal basis for processing. 
  • Insufficient information regarding data processing activities. 
  • Profiling. 
  • Not enough protection for minors. 

Clearview AI and the Dutch tax authority’s use of AI were clear examples of misuse. They violated privacy and consent and discriminated against certain people. But what is ChatGPT being accused of? 

The concerns revolve mainly around how data is collected. According to the Italian data protection authority (DPA), OpenAI:  

  • Doesn’t inform users about data processing activities and their scope. 
  • Doesn’t have an appropriate legal basis for collecting user data. 
  • Doesn’t have any means of verifying whether a user is a minor or not.  

Since users aren’t being informed about data collection or given a choice regarding it, they can’t be said to be giving consent that meets the GDPR’s standard of “freely given, specific, informed, and unambiguous.” Italy’s DPA also asserts that OpenAI doesn’t appear to have established any of the five other legal bases for data processing besides user consent: 

  • Performance of a contract 
  • Meeting a legal obligation 
  • Protecting someone’s life 
  • Performing a task in the public’s interest 
  • Pursuing legitimate interests 

However, OpenAI’s privacy policy does assert that its legal bases for processing data include the performance of a contract, legitimate interests, consent, and meeting a legal obligation. Italy’s DPA appears to disagree with these assertions, and other DPAs are currently investigating ChatGPT. 

Another accusation brought against the AI company is that it fails to verify users’ ages. In other words, it is entirely possible for minors to access ChatGPT. Because the tool is built to learn from the text data it receives, it’s safe to assume it would learn from children’s data too. 

Italy has ordered OpenAI to cease all data processing operations involving users in Italy and has banned ChatGPT. Germany, France, and Ireland are all looking into the issue at the moment and may soon follow in Italy’s footsteps.  

The concerns are also extending beyond Europe’s borders. The Office of the Privacy Commissioner of Canada launched an investigation into OpenAI and its privacy practices in early April. Processing data without proper consent is the main accusation in this case as well.  

The Artificial Intelligence Act 

Despite all the concerns, AI is here to stay. But the current privacy issues are a clear indicator that something needs to change. The EU’s Artificial Intelligence Act (AI Act) might be that change. Proposed by the European Commission on April 21st, 2021, the AI Act would, if enacted into law, cover all types of AI across all sectors except the military. 

The AI Act aims to classify all AI apps by risk. At the moment, there are three categories: 

  • Unacceptable-risk apps, such as government-run social scoring similar to the system China currently uses. These would be banned outright. 
  • High-risk apps, such as CV-screening tools. These have a high chance of collecting data without consent, profiling users, and discriminating against them. They wouldn’t be banned, but strict legal requirements would have to be met before they could be used. 
  • Low-risk apps, which the current form of the AI Act leaves unregulated, though a code of conduct or other recommendations may be included.  

The need for such an act is clear. But some feel the AI Act’s current form doesn’t have a clear way to deal with something like ChatGPT.  

The Act would classify something like ChatGPT as low risk, so no provisions would apply to it. But critics point out that ChatGPT can be used for both low-risk activities and high-risk ones, like the creation and dissemination of misinformation. So should it be classified as high risk? Does it check all the boxes to fit into that category?  

Proposed solutions currently include imposing stricter rules for companies like OpenAI on what data they can access and how, external audits, and quality management requirements. 

Compliance When Using AI 

With all the uncertainty surrounding AI, it is natural to wonder if AI and GDPR compliance can go together. Can businesses use generative AI in their products and services? In theory, you can still use AI, at least as long as it can’t be classified as unacceptable or high-risk.  

Whether you use ChatGPT in your product or services, partner with another AI business, or develop your own generative AI, there are a few basic steps you should carry out to make sure you’re being compliant. 

Conduct a Data Protection Impact Assessment (DPIA) 

DPIAs are the backbone of laws like the GDPR. These are risk assessments that help you identify, analyze, and minimize privacy risks. A DPIA can also help you categorize risks, which may prove useful for complying with the AI Act. 

You should conduct a DPIA any time you introduce a new product or feature to check for any new privacy risks. Things to look at include the following (a code sketch follows the list): 

  • Whose data you’re processing. 
  • The nature and scope of the processing. 
  • The type of personal data used. 
  • Potential transfers to third countries. 
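
To make this concrete, here’s a minimal sketch of how those four checks could be captured as a structured record. The field names and risk heuristic are assumptions for illustration, not a prescribed DPIA format:

```python
from dataclasses import dataclass

@dataclass
class DPIARecord:
    """Illustrative shape for a DPIA entry; not a prescribed format."""
    feature: str
    data_subjects: list[str]        # whose data you're processing
    processing_scope: str           # nature and scope of the processing
    data_categories: list[str]      # types of personal data used
    third_country_transfers: bool   # transfers outside the EU/EEA?

    def needs_review(self) -> bool:
        # Flag higher-risk cases for a human privacy review.
        sensitive = {"health", "biometric", "children"}
        return self.third_country_transfers or bool(
            sensitive & {c.lower() for c in self.data_categories}
        )

assessment = DPIARecord(
    feature="AI chat assistant",
    data_subjects=["customers"],
    processing_scope="Customer support automation; transcripts stored 30 days",
    data_categories=["contact details", "chat transcripts"],
    third_country_transfers=True,
)
print(assessment.needs_review())  # True: data leaves the EU/EEA
```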

Create and Review Data Processing Agreements 

Vendor risk assessments will be more important than ever when working with AI apps. Carefully review the contracts you have with AI vendors, paying special attention to clauses related to data transfers and their data processing practices. 

Don’t forget to assess their privacy and security measures. In short, assess everything from the cybersecurity risk to the financial, legal, and operational risks. 

Make Sure All Processing Has a Legal Basis 

As with any data processing activity, you need to make sure you have an appropriate legal basis for the processing your AI apps perform.  

The GDPR gives you various options to choose from, including legitimate interest, compliance with legal obligations, or processing based on the user's consent. Generally, user consent is the most common basis under the GDPR. 

Don’t forget about the data subject’s rights. Users can revoke consent, request to access, modify, or delete their data, and more. Regardless of how your AI system learns new patterns, you’ll need to make sure users can still exercise those rights. 
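
As a rough illustration, honoring access and erasure requests can look like the sketch below. The store and function names are hypothetical, and the hard part for AI systems is noted in the comments:

```python
# Hypothetical in-memory store keyed by user ID; a real system would
# span databases, logs, backups, and vendor systems.
user_data: dict[str, dict] = {
    "user-42": {"email": "jane@example.com", "chat_history": ["Hi there"]},
}

def handle_access_request(user_id: str) -> dict:
    """Right of access: hand the user a copy of everything held on them."""
    return dict(user_data.get(user_id, {}))

def handle_erasure_request(user_id: str) -> None:
    """Right to erasure: delete the user's records. Data already absorbed
    into a trained model is far harder to remove, which is one reason
    regulators are scrutinizing AI training pipelines."""
    user_data.pop(user_id, None)

print(handle_access_request("user-42"))   # the user's stored data
handle_erasure_request("user-42")
print(handle_access_request("user-42"))   # {}
```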

Children's Data 

If you process children’s data, you need to offer adequate protection. The exact age might differ from country to country, even within the EU. In some places, you will need to be careful with data from children under 13, while in others, the age is 16. Check local regulations and choose the safest route. 
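
A sketch of what that “safest route” can look like in code, with illustrative thresholds (GDPR Article 8 sets a default of 16 but lets member states go as low as 13; verify the values for your markets before relying on them):

```python
# Illustrative ages of digital consent; verify against current local law.
AGE_OF_DIGITAL_CONSENT = {
    "DE": 16,   # Germany
    "FR": 15,   # France
    "IE": 16,   # Ireland
}
DEFAULT_AGE = 16  # the GDPR's Article 8 default

def requires_parental_consent(age: int, country_code: str) -> bool:
    """Fall back to the strictest common threshold when a country
    isn't explicitly listed."""
    return age < AGE_OF_DIGITAL_CONSENT.get(country_code, DEFAULT_AGE)

print(requires_parental_consent(14, "FR"))  # True: below France's threshold
print(requires_parental_consent(15, "FR"))  # False: at France's threshold
```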

Don’t Forget About Consent 

Consent is a critical component of GDPR compliance. Data subjects must be informed of your data processing activities and should understand exactly what they’re consenting to. 

Remember, you’ll only be able to process data for the purposes the data subject agreed to. If you need to expand that scope, you’ll need to request consent again.  

Data subjects have the right to revoke their consent, so you’ll need to keep records of consent and processing activities. The only exceptions are data that has been completely anonymized and data you must retain to comply with a legal requirement. 
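
Here’s a minimal sketch of purpose-scoped consent records, assuming a simple in-memory ledger (a real implementation would persist these records so you can demonstrate compliance later):

```python
from datetime import datetime, timezone

# Hypothetical ledger keyed by (user, purpose).
consent_ledger: dict[tuple[str, str], dict] = {}

def record_consent(user_id: str, purpose: str) -> None:
    consent_ledger[(user_id, purpose)] = {
        "granted_at": datetime.now(timezone.utc),
        "revoked_at": None,
    }

def revoke_consent(user_id: str, purpose: str) -> None:
    entry = consent_ledger.get((user_id, purpose))
    if entry:
        entry["revoked_at"] = datetime.now(timezone.utc)

def may_process(user_id: str, purpose: str) -> bool:
    """Process only for purposes the user consented to and hasn't revoked.
    A new purpose means a new consent request, not a lookup."""
    entry = consent_ledger.get((user_id, purpose))
    return entry is not None and entry["revoked_at"] is None

record_consent("user-42", "support-chat")
print(may_process("user-42", "support-chat"))       # True
print(may_process("user-42", "model-improvement"))  # False: never consented
revoke_consent("user-42", "support-chat")
print(may_process("user-42", "support-chat"))       # False: revoked
```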

Limit Profiling 

A lot of AI systems could be guilty of profiling and automated decision-making. The algorithm the Dutch Tax and Customs Administration used in 2021 was involved in these regulated activities, and ChatGPT has been accused of them as well. 

The GDPR doesn’t forbid profiling and automated decision-making, but it does place serious restrictions on them. You can engage in profiling if: 

  • You have the data subject’s explicit consent. 
  • It is authorized by Union or Member State laws. 
  • It is necessary to enter into a contract. 

In all cases, you need to protect the data subject’s freedoms and rights, including their right to request human intervention or object to the results of the automated decision-making process. 
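
Putting those restrictions together, here’s a sketch of how an automated decision could be gated in code. The scoring function is a hypothetical stand-in for a real model:

```python
from enum import Enum, auto

class LawfulCondition(Enum):
    """GDPR Article 22's three conditions for solely automated decisions."""
    EXPLICIT_CONSENT = auto()
    AUTHORIZED_BY_LAW = auto()
    NECESSARY_FOR_CONTRACT = auto()

def score_applicant(applicant: dict) -> bool:
    # Hypothetical stand-in for a real scoring model.
    return applicant.get("income", 0) >= 30_000

def decide(applicant: dict, condition: LawfulCondition | None) -> str:
    if condition is None:
        # No lawful condition applies: don't decide automatically at all.
        return "refused: route to a human reviewer"
    if applicant.get("requested_human_review"):
        # The data subject's right to human intervention always applies.
        return "escalated: a human will review this decision"
    return "approved" if score_applicant(applicant) else "declined"

print(decide({"income": 42_000}, LawfulCondition.EXPLICIT_CONSENT))  # approved
print(decide({"income": 42_000}, None))  # refused: route to a human reviewer
```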

Follow Privacy Best Practices 

All in all, it is not impossible to be GDPR compliant and use artificial intelligence, but you will need to pay close attention to how you use it. Assess the risks and don’t leave entire processes in the hands of an AI, especially when it could impact the rights of data subjects. Following the best practices identified in this article will go a long way toward supporting your compliance.  

And if you need an easier way to manage tasks like vendor risk management or fulfilling DSARs, a privacy platform like Osano could be the solution you’re looking for. Schedule a free demo today to learn how Osano can help support your compliance. 
