Updated: September 25, 2024
Published: December 19, 2023
The European Union (EU) has taken a significant step towards regulating artificial intelligence (AI) with the passage of the EU AI Act. This comprehensive legislation establishes a framework that ensures AI is developed and used safely, ethically, and in a manner that respects fundamental rights.
In this article, we will delve into the genesis of the EU AI Act, dissect its structure and key provisions, explore its scope and impact on AI development, and discuss the importance of compliance with this groundbreaking regulation.
The EU AI Act is a legislative framework that establishes a comprehensive regulatory and governance structure for AI systems deployed within the EU or targeted at EU users. The Act aims to ensure that these systems are safe, transparent, traceable, non-discriminatory, environmentally friendly, and overseen by real human beings.
For such a nascent technology, this is naturally a tough target to hit—however, the Act has been future-proofed to the extent possible, with a flexible, but not excessively broad, definition of AI systems and regulatory obligations.
The AI Act has several distinguishing features, including its risk-based approach to regulation, its extraterritorial reach, and its phased implementation timeline, all of which are covered below.
The Act will be fully applicable 24 months after its entry into force, in 2026. However, some provisions will come into effect sooner. For example, the ban on AI systems posing unacceptable risks will apply six months after the Act’s entry into force.
Read on to learn more about the AI Act's requirements and significance.
As we’ll describe throughout this article, there are plenty of reasons to want AI regulation: AI models can make biased decisions, be used maliciously to harm people and infrastructure, and more.
But out of all of these potential domains where AI could do harm, data privacy rights may be one of the most impacted—but least understood.
For one, generative AI systems must be trained on vast sets of data, and the odds are good that some of that data will include individuals’ personal information.
If those individuals aren’t made aware that their information is being collected to train an AI application, then that data collection could violate laws like the GDPR and the CPRA. That’s true even if their personal data is scrubbed from the AI model’s output, which isn’t always the case.
Any information used to train the model can potentially be regurgitated in its output unless certain steps are taken first (such as using privacy-enhancing technologies to reduce the usage of personal information). And often, user interactions are used to train AI models as well. In such cases, users need to be informed and their consent needs to be secured first.
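To make this concrete, below is a minimal sketch of one such privacy-enhancing step: scrubbing obvious identifiers from text before it enters a training corpus. The regex patterns and placeholder tokens here are illustrative assumptions only; production PET pipelines rely on far more thorough techniques, such as named-entity-based redaction or differential privacy.

```python
import re

# Illustrative patterns only; real pipelines use much more robust detection.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_pii(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
print(scrub_pii(sample))
# Contact Jane at [EMAIL] or [PHONE].
```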
The EU AI Act addresses these concerns by introducing specific transparency requirements.
For instance, any content generated by AI must be clearly labeled as such, and summaries of the copyrighted data used for training must be provided. These requirements are designed to prevent unauthorized data usage and enhance user awareness, thereby reinforcing data privacy protections.
Beyond exposing personal information, AI can also be used to thwart the cybersecurity measures that protect it.
Experts have raised the possibility that novel computational techniques could break encryption, in which case virtually every secret and private piece of digital information would be at risk.
But even if this technological leap isn’t enabled by AI, AI is already quite capable of undermining cybersecurity measures in other ways.
Lastly, AI can be used to identify individuals and collect their personal information.
Consider the case of Clearview AI, which came under fire for collecting individuals’ biometric data without their consent.
Clearview AI scraped the internet for photos of individuals and built profiles on them, offering its tool to law enforcement professionals. However, the tool has also been used by Clearview AI’s investors for private purposes, and the company suffered a data breach that exposed its database of personal information.
From a data privacy perspective, it’s clear that AI-specific regulation is important. Existing data privacy rules are equipped to handle infractions associated with the unlawful collection and processing of personal information for AI training, but AI is capable of violating personal privacy in other ways.
As a result, it needs bespoke, purpose-built regulation, which the EU AI Act seeks to provide.
The EU AI Act has a broad scope that encompasses various industries and geographical boundaries.
Although the EU AI Act primarily targets AI systems developed and deployed within the European Union, its impact extends beyond Europe's borders, just like the GDPR. The Act recognizes the global nature of AI technology and aims to establish a harmonized framework for AI regulation.
Organizations outside the EU that offer AI products or services to users in the EU will also need to adhere to the AI Act, as it applies to all AI interactions within the EU market. This extraterritorial reach ensures that AI systems used by EU residents, regardless of their geographical location, meet the same standards.
Since AI is such a broad category, the AI Act adopts a risk-based approach to regulation. It defines four tiers of risk, each with different levels of obligations:
Minimal or no risk
Limited risk
High risk
Unacceptable risk
Developers are expected to determine their system’s risk level themselves based on guidance from the Act. This is a crucial step: not only does the categorization determine which requirements an AI provider must meet, but miscategorizing an AI system is itself a violation of the law.
Minimal-risk systems include things like AI used in video games or spam filters. They pose essentially no risk to society at large and are subject to no requirements as a result.
These systems must meet specific transparency obligations. Chatbots are a good example of a limited-risk system: when interacting with a chatbot, an individual must first be informed that they are engaging with a machine and be given the opportunity to speak with a human instead.
Providers of high-risk AI systems will need to meet additional obligations, including conducting fundamental rights impact assessments, adhering to data governance and record-keeping practices, providing detailed documentation, and meeting high standards for accuracy and cybersecurity.
Furthermore, these systems must be registered in an EU-wide public database.
Any incidents associated with high-risk AI systems must be reported as well. Incidents could include death or damage to a person’s health, serious damage to property or the environment, disruption to critical infrastructure, or the violation of fundamental rights protected under EU law.
The Act identifies two categories of high-risk systems: those that serve as safety components subject to existing safety standards, and systems used for a specific, sensitive purpose. AI-powered medical systems might fall into the first category, while the second covers eight broad areas, such as education, employment, law enforcement, and migration.
These systems, also known as prohibited AI systems, are deemed too risky to be permitted at all and are banned outright. This includes systems designed for social scoring, manipulating human behavior or exploiting people’s vulnerabilities, and certain forms of biometric categorization and identification.
The use of biometric identification was a particular sticking point during debates over the AI Act. Many of these systems are of high value to law enforcement agencies, but it’s still possible that governments and law enforcement agencies could abuse them.
Ultimately, law enforcement is exempt from the ban on biometric identification under certain circumstances: prior judicial authorization is required; the target must be suspected of a specific, serious crime, such as terrorism or kidnapping; and the use of biometric identification systems must be limited in time and location.
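To recap, here is a hypothetical sketch of the four tiers and the kinds of obligations described above, expressed as a simple data structure. The tier names follow the Act, but the obligation summaries are paraphrased from this article rather than quoted from the legislation.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal or no risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

# Obligations paraphrased from the tier descriptions in this article.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["no requirements"],
    RiskTier.LIMITED: ["transparency obligations (e.g., disclose chatbot use)"],
    RiskTier.HIGH: [
        "fundamental rights impact assessment",
        "data governance and record-keeping",
        "detailed documentation",
        "accuracy and cybersecurity standards",
        "registration in the EU-wide public database",
        "incident reporting",
    ],
    RiskTier.UNACCEPTABLE: ["banned outright"],
}

for tier in RiskTier:
    print(f"{tier.value}: {'; '.join(OBLIGATIONS[tier])}")
```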
General-purpose AI (GPAI) models, also known as foundation models, are a particularly difficult class of AI models to regulate. Here’s how the European Parliament described foundation models:
Foundation models are a recent development, in which AI models are developed from algorithms designed to optimize for generality and versatility of output. Those models are often trained on a broad range of data sources and large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained.
ChatGPT, for example, is a type of foundation model. Since these models can be used for a wide range of purposes, including harmful and harmless ones, a nuanced means of regulating them is needed.
Currently, the AI Act provides two categories for foundation models: high-impact and low-impact systems. High-impact systems will be required to adhere to additional requirements, including conducting model evaluations and adversarial testing, assessing and mitigating systemic risks, reporting serious incidents, and ensuring adequate cybersecurity protections.
All foundation models, regardless of their risk profile, are also required to provide technical documentation, summarize training content, and meet other transparency requirements.
While it’s not the only criterion for a GPAI model to be categorized as high-impact, the AI Act presumes that models trained using a significant amount of computing power (specifically, more than 10^25 floating-point operations, or FLOPs) have capabilities and underlying data significant enough to pose a systemic risk.
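To put that threshold in perspective, here is a back-of-the-envelope sketch using the common "6 × parameters × training tokens" heuristic for estimating the training compute of a dense transformer. The heuristic is a community rule of thumb, not something defined by the Act.

```python
# The Act's presumption threshold for systemic-risk GPAI models.
AI_ACT_FLOP_THRESHOLD = 1e25

def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer
    using the rough 6 * N * D community heuristic."""
    return 6 * num_parameters * num_training_tokens

# Example: a GPT-3-scale model (175B parameters, ~300B training tokens).
flops = estimate_training_flops(175e9, 300e9)
print(f"Estimated training compute: {flops:.2e} FLOPs")          # ~3.15e+23
print("Presumed systemic risk:", flops >= AI_ACT_FLOP_THRESHOLD) # False
```

By this rough estimate, a GPT-3-scale training run sits about two orders of magnitude below the Act’s 10^25 FLOP presumption, which gives a sense of just how large the covered models are.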
Complying with the EU AI Act is of paramount importance for organizations involved in AI development and usage.
Organizations must take several steps to ensure compliance with the EU AI Act, which applies across all EU member states. These include conducting AI system assessments, implementing necessary safeguards, establishing effective governance mechanisms (potentially through a dedicated AI office), and adhering to transparency and disclosure requirements.
Compliance timelines vary depending on the specific provisions of the Act. For instance, the ban on certain AI systems posing unacceptable risks will take effect six months after the Act's entry into force, while other requirements, such as those for high-risk systems, will be phased in over a longer period.
Organizations will also need to maintain detailed documentation, including technical files, data management plans, and records of transparency measures. Regular audits and impact assessments will be essential to demonstrate ongoing compliance.
It is essential to develop a comprehensive compliance strategy that aligns with the Act’s provisions, potentially overseen by a dedicated AI office that manages AI governance across the organization, including adherence to requirements like EU copyright law.
The EU AI Act includes penalties for non-compliance. Organizations failing to meet the Act’s requirements may face significant fines and reputational damage. Specifically, businesses that violate the AI Act are subject to fines of up to €35 million (~$38.43 million) or 7% of global annual turnover, whichever is higher, for the most serious violations, scaling down to €7.5 million (~$8.23 million) or 1.5% of turnover for lesser infractions.
Penalties will vary depending on the severity of the violation and the risk category of the AI system involved. For instance, breaches involving high-risk AI systems could attract higher fines compared to those involving limited-risk systems.
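As a quick illustration of how a turnover-based cap works, the sketch below computes the maximum possible fine for a hypothetical company under the top tier. The figures follow this article, and the "whichever is higher" mechanics mirror how the Act’s caps apply to businesses; the company and turnover are invented for the example.

```python
def max_fine_eur(global_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_pct: float) -> float:
    """Return the maximum possible fine for a given penalty tier:
    the fixed amount or the turnover percentage, whichever is higher."""
    return max(fixed_cap_eur, global_turnover_eur * turnover_pct)

# A hypothetical company with €2B global annual turnover, top tier:
print(f"€{max_fine_eur(2e9, 35e6, 0.07):,.0f}")
# €140,000,000 — 7% of turnover exceeds the €35M fixed cap
```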
As the EU AI Act comes into effect, it is crucial for all stakeholders involved in AI development and usage to fully understand its regulations and implications.
By embracing responsible AI practices and complying with the Act's requirements, organizations can contribute to the development of AI technologies that benefit society while safeguarding fundamental rights and values.
The EU AI Act entered into force in 2024, with full applicability 24 months later, in 2026. However, some provisions, such as the ban on AI systems posing unacceptable risks, apply six months after the Act enters into force. Businesses are urged to begin their compliance efforts as soon as possible to meet these deadlines.
Any entity that offers AI services to any market in the EU will need to comply, regardless of whether it is based in the EU or elsewhere. Furthermore, AI providers within the EU will be subject to the Act. Lastly, if any outputs of an AI system are used in the EU, the AI provider is subject to the Act, even if the provider and primary user are based outside of the EU.
Businesses that violate the AI Act are subject to fines of up to €35 million (~$38.43 million) or 7% of global annual turnover, whichever is higher, scaling down to €7.5 million (~$8.23 million) or 1.5% of turnover for lesser infractions. The severity of the fines will depend on the nature of the violation, the risk category of the AI system, and the specific provisions of the Act that were breached.
Matt Davis is a writer at Osano, where he researches and writes about the latest in technology, legislation, and business to spread awareness about the most pressing issues in privacy today. When he’s not writing about data privacy, Matt spends his time exploring Vermont with his dog, Harper; playing piano; and writing short fiction.