The European Union (EU) has taken a significant step towards regulating artificial intelligence (AI) with the passage of the EU AI Act. This comprehensive legislation establishes a framework to ensure that AI is developed and used safely, ethically, and with respect for fundamental rights.

In this article, we will delve into the genesis of the EU AI Act, dissect its structure and key provisions, explore its scope and impact on AI development, and discuss the importance of compliance with this groundbreaking regulation. 

What Is the EU AI Act?

The EU AI Act is a legislative framework that establishes a comprehensive regulatory and governance structure for AI systems deployed within the EU or targeted at EU users. The Act is intended to ensure that these systems are safe, transparent, traceable, non-discriminatory, environmentally friendly, and overseen by real human beings.

For such a nascent technology, this is naturally a tough target to hit—however, the Act has been future-proofed to the extent possible, with a flexible, but not excessively broad, definition of AI systems and regulatory obligations.

The AI Act has several distinguishing features, including:

  • A risk-based approach, which categorizes AI systems into different risk levels based on their intended use, including practices that are outright prohibited.
  • Transparency and accountability requirements, which oblige developers to provide clear information about AI systems, maintain documentation, and submit required reports.
  • Data governance standards, which obligate AI developers to use high-quality and compliant training data for AI systems.
  • And more.

The Act becomes fully applicable 24 months after its entry into force, in 2026. However, some provisions will come into effect sooner. For example, the ban on AI systems posing unacceptable risks will apply six months after the Act's entry into force.

Read on to learn more about the AI Act's requirements and significance.

Why the EU AI Act Matters for Data Privacy

As this article describes, there are plenty of reasons to want AI regulation. AI models can make biased decisions, be used maliciously to harm people and infrastructure, and more.

But of all the domains where AI could do harm, data privacy may be among the most affected and the least understood.

Data Collection and Privacy Risks in Generative AI

For one, generative AI systems must be trained on vast sets of data, and the odds are good that some of that data will include individuals' personal information.

If individuals aren't made aware that their information is being collected to train an AI system, that collection could violate laws like the GDPR and CPRA. That's true even if their personal data is scrubbed from the AI model's output, which isn't always the case.

Any information used to train the model can potentially be regurgitated in its output unless certain steps are taken first (such as using privacy-enhancing technologies to reduce the usage of personal information). And often, user interactions are used to train AI models as well. In such cases, users need to be informed and their consent needs to be secured first. 
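As a simplified illustration of the kind of preprocessing such a privacy-enhancing step might involve, the sketch below redacts obvious identifiers from text before it enters a training corpus. The patterns and function names are assumptions for demonstration only; a real pipeline would rely on dedicated PII-detection tooling and would not, on its own, establish legal compliance.

```python
import re

# Illustrative patterns only -- real privacy-enhancing pipelines use dedicated
# PII-detection tooling rather than a pair of regular expressions.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens
    before the text is added to a training corpus."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +1 555 010 2000."
print(redact_pii(sample))
# -> "Contact Jane at [EMAIL] or [PHONE]."
# Note that names are not caught here -- one reason dedicated tooling is needed.
```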

Transparency and Security Under the EU AI Act

The EU AI Act addresses these concerns by introducing specific transparency requirements. 

For instance, any content generated by AI must be clearly labeled as such, and summaries of the copyrighted data used for training must be provided. These requirements are designed to prevent unauthorized data usage and enhance user awareness, thereby reinforcing data privacy protections.

Beyond exposing personal information, AI can also be used to thwart the cybersecurity measures that protect personal information.

Experts have raised the possibility that novel computational techniques could break encryption, in which case virtually every kind of secret or private digital information would be at risk.

But even if this technological leap isn’t enabled by AI, AI is already quite capable of undermining cybersecurity measures in other ways.

AI and the Risks of Personal Data Identification

Lastly, AI can be used to identify individuals and collect their personal information. 

Consider the case of Clearview AI, which came under fire for collecting individuals' biometric data without their consent.

Clearview AI scraped the internet for photos of individuals and built profiles on them, offering its tool to law enforcement professionals. However, the tool has also been used by Clearview AI's investors for private purposes, and the company suffered a data breach that exposed its database of personal information.

From a data privacy perspective, it's clear that AI-specific regulation is important. Existing data privacy rules can handle infractions associated with the unlawful collection and processing of personal information for AI training, but AI is capable of violating personal privacy in other ways.

As a result, it needs bespoke, purpose-built regulation, which the EU AI Act seeks to provide.

The Scope of the EU AI Act: The GDPR of AI?

The EU AI Act has a broad scope that encompasses various industries and geographical boundaries. 

Although the EU AI Act primarily targets AI systems developed and deployed within the European Union, its impact extends beyond Europe's borders, just like the GDPR. The Act recognizes the global nature of AI technology and aims to establish a harmonized framework for AI regulation. 

Organizations outside the EU that offer AI products or services to users in the EU will also need to adhere to the Act. This extraterritorial reach ensures that AI systems used by EU residents, regardless of their geographical location, meet the same standards.

Dissecting the EU AI Act: Major Provisions and Requirements

Since AI is such a broad category, the AI Act adopts a risk-based approach to regulation. It defines four tiers of risk, each with different levels of obligations:

  1. Minimal or no risk
  2. Limited risk
  3. High risk
  4. Unacceptable risk

Developers are supposed to determine their level of risk themselves based on guidance from the Act. This is a crucial step: not only does it determine what requirements AI providers need to meet, but miscategorizing an AI system is itself a violation of the law.
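As a rough mental model only, this self-categorization step can be pictured as mapping an AI system's intended use onto one of the four tiers. The sketch below is purely illustrative; the use-case labels and the mapping itself are simplified assumptions, not the Act's legal criteria, which turn on detailed definitions and annexes.

```python
# Purely illustrative -- the Act's real categorization depends on detailed legal
# criteria (prohibited practices, Annex III use cases, safety-component rules),
# not a simple lookup table. The labels below are hypothetical shorthand.
PROHIBITED_USES = {"social_scoring", "untargeted_facial_scraping"}
HIGH_RISK_USES = {"biometrics", "employment_screening", "law_enforcement"}
LIMITED_RISK_USES = {"chatbot"}

def provisional_risk_tier(intended_use: str) -> str:
    """Return a provisional EU AI Act risk tier for a given intended use."""
    if intended_use in PROHIBITED_USES:
        return "unacceptable"
    if intended_use in HIGH_RISK_USES:
        return "high"
    if intended_use in LIMITED_RISK_USES:
        return "limited"
    return "minimal"

print(provisional_risk_tier("employment_screening"))  # -> high
```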

Minimal or No Risk Systems

These are systems like AI used in video games or spam filters. They pose essentially no risk to society at large and are subject to no requirements as a result.

Limited-Risk Systems

These systems must meet specific transparency obligations. Chatbots are a good example of a limited-risk system: when interacting with a chatbot, an individual must first be informed that they are engaging with a machine and be given the opportunity to speak with a human instead.
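As a minimal sketch of how that disclosure might be surfaced in practice, the snippet below opens a chat session with an up-front notice and a human hand-off option. The function name, wording, and session structure are illustrative assumptions, not language or a design mandated by the Act.

```python
def start_chat_session(user_id: str) -> dict:
    """Open a chat session that discloses the AI up front and offers a human hand-off."""
    return {
        "user_id": user_id,
        # Limited-risk transparency: tell the user they are talking to a machine.
        "disclosure": "You are chatting with an automated assistant, not a human.",
        # Offer an explicit route to a human agent.
        "options": ["continue_with_assistant", "talk_to_a_human"],
    }

session = start_chat_session("user-42")
print(session["disclosure"])
```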

High-Risk Systems

Providers of high-risk AI systems will need to meet additional obligations, including conducting fundamental rights impact assessments, adhering to data governance and record-keeping practices, providing detailed documentation, and meeting high standards for accuracy and cybersecurity. 

Furthermore, these systems must be registered in an EU-wide public database. 

Any incidents associated with high-risk AI systems must be reported as well. Incidents could include damage to a person’s health or death, serious damage to property or the environment, disruption to infrastructure, or the violation of fundamental EU rights. 

The Act identifies two categories of high-risk systems: those that serve as safety components subject to existing safety standards, and those used for specific, sensitive purposes. AI-powered medical systems might fall into the first category, while the second category covers eight broad areas:

  • Biometrics 
  • Critical infrastructure
  • Education and vocational training 
  • Employment, workers management, and access to self-employment 
  • Access to essential services 
  • Law enforcement 
  • Border control management 
  • Administration of justice and democratic processes 

Unacceptable Risk Systems

These systems, also known as prohibited AI systems, are deemed too risky to be permitted at all and are banned outright. This includes systems designed for:

  • The manipulation of human behavior to circumvent free will 
  • The exploitation of any of the vulnerabilities of a specific group of people 
  • Untargeted scraping of facial data for the creation of facial recognition databases 
  • Social scoring 
  • Emotion recognition in workplaces or educational settings 
  • Biometric categorization systems based on certain characteristics, such as political, religious, and philosophical beliefs; sexual orientation; and race. 

The use of biometric identification was a particular sticking point during debates over the AI Act. Many of these systems are of high value to law enforcement agencies, but it's still possible that governments and law enforcement could abuse them.

Ultimately, law enforcement is exempt from the ban on biometric identification under certain circumstances. Prior judicial authorization is required; the target must be suspected of a specific, serious crime, such as terrorism or kidnapping; and the use of biometric identification systems must be limited in time and location.

General-Purpose AI Models

General-purpose AI (GPAI) models, or foundation models, are a particularly difficult class of AI models to regulate. Here's how the European Parliament described foundation models:

Foundation models are a recent development, in which AI models are developed from algorithms designed to optimize for generality and versatility of output. Those models are often trained on a broad range of data sources and large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained. 

ChatGPT, for example, is a type of foundation model. Since these models can be used for a wide range of purposes, including harmful and harmless ones, a nuanced means of regulating them is needed. 

Currently, the AI Act provides two categories for foundation models: high-impact and low-impact systems. High-impact systems will be required to adhere to additional requirements, including: 

  • Conducting model evaluations.
  • Assessing and mitigating systemic risks.
  • Performing adversarial testing.
  • Reporting incidents to the European Commission.
  • Implementing additional cybersecurity measures.
  • Reporting on energy efficiency.
  • And more.

All foundation models, regardless of their risk profile, are also required to provide technical documentation, summarize training content, and meet other transparency requirements. 

While it's not the only criterion for a GPAI model to be categorized as high-impact, the AI Act presumes that models trained using a significant amount of computing power (specifically, more than 10^25 floating-point operations, or FLOPs) have capabilities and underlying data significant enough to represent a systemic risk.
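To get a feel for the scale of that threshold, a common rule of thumb (an assumption used here for illustration, not part of the Act) estimates training compute as roughly 6 × parameters × training tokens:

```python
def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the common 6 * N * D rule of thumb."""
    return 6 * parameters * training_tokens

THRESHOLD = 1e25  # FLOPs level at which the Act presumes systemic risk

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs -> above threshold: {flops > THRESHOLD}")
# -> 8.40e+23 FLOPs -> above threshold: False
```

Under this rule of thumb, the hypothetical model above lands well below the 10^25 FLOP presumption; providers would still need to weigh the Act's other criteria before settling on a categorization.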

Compliance with the EU AI Act

Complying with the EU AI Act is of paramount importance for organizations involved in AI development and usage. 

Steps Towards Compliance

Organizations must take several steps to ensure compliance with the EU AI Act. This includes conducting AI system assessments, implementing necessary safeguards, establishing effective governance mechanisms—potentially through a dedicated AI office—and adhering to transparency and disclosure requirements. 

Compliance timelines vary depending on the specific provisions of the Act. For instance, the ban on certain AI systems posing unacceptable risks will take effect six months after the Act's entry into force, while other requirements, such as those for high-risk systems, will be phased in over a longer period.

Organizations will also need to maintain detailed documentation, including technical files, data management plans, and records of transparency measures. Regular audits and impact assessments will be essential to demonstrate ongoing compliance.

It is essential to develop a comprehensive compliance strategy that aligns with the Act's provisions and assigns clear ownership for AI governance across the organization.

Penalties for Non-Compliance

The EU AI Act includes penalties for non-compliance. Organizations failing to meet the Act's requirements may face significant fines and reputational damage. Specifically, businesses that violate the AI Act are subject to fines ranging from €7.5 million (~$8.23 million), or 1.5% of turnover, to €35 million (~$38.43 million), or 7% of global turnover.

Penalties will vary depending on the severity of the violation and the risk category of the AI system involved. For instance, breaches involving high-risk AI systems could attract higher fines compared to those involving limited-risk systems.

As the EU AI Act comes into effect, it is crucial for all stakeholders involved in AI development and usage to fully understand its regulations and implications. 

By embracing responsible AI practices and complying with the Act's requirements, organizations can contribute to the development of AI technologies that benefit society while safeguarding fundamental rights and values.

Frequently Asked Questions 

When Does the EU AI Act Take Effect? 

The EU AI Act becomes fully applicable 24 months after it enters into force, in 2026. However, some provisions, such as the ban on AI systems posing unacceptable risks, will apply six months after the Act enters into force. Businesses are urged to begin their compliance efforts as soon as possible to meet these deadlines.

Who Is Subject to the EU AI Act? 

Any entity that offers AI services to any market in the EU will need to comply, regardless of whether it is based in the EU or another region. Furthermore, AI providers within the EU will be subject to the Act. Lastly, if any outputs of an AI system are used in the EU, the AI provider is subject to the Act, even if the provider and primary user are based outside of the EU.

What are the Penalties for Violating the EU AI Act? 

Businesses that violate the AI Act are subject to fines ranging from €7.5 million (~$8.23 million), or 1.5% of turnover, to €35 million (~$38.43 million), or 7% of global turnover. The severity of the fines will depend on the nature of the violation, the risk category of the AI system, and the specific provisions of the Act that were breached. 
