How would you feel if someone could take a photo of your face and learn who you are, where you live, and other intimate details about your life–all without even speaking to you? 

What about if you found out an unlicensed therapist was encouraging your troubled friend’s delusions?

These aren’t random examples. They’re real-world cases of unregulated AI causing harm to individuals’ wellbeing and privacy.

Clearview AI, a facial recognition company, scraped millions of faces from internet content. This enabled its users to identify individuals from a single photo. Clearview AI now faces criminal complaints in the EU for dodging GDPR fines resulting from its non-compliant data collection.

Character.ai, an AI companion chatbot, was reported to have encouraged teenagers to commit suicide. As of this writing, state attorneys general are investigating both Meta and Character.ai for deceptively marketing their chatbots as mental health tools.

These cases show why AI regulation exists and why AI compliance is required. But they’re extreme cases involving apparent intentional wrongdoing or gross negligence. Even ordinary businesses that use AI without the proper frameworks or precautions in place can cause significant harm. That’s why AI compliance practices are emerging as a key priority for businesses interested in making use of this new technology.

What Is AI Compliance?

AI risk management frameworks, both internal and regulatory, provide guidance on mitigating the risks AI poses to people’s privacy, safety, and rights. AI compliance is the act of ensuring that your business follows these risk management rules and guidelines when developing and deploying AI tools.

The AI risks that need the most compliance attention often relate to data privacy and data security: specifically, how data is collected, processed, and secured, and (crucially) what inferences your AI systems might draw from it. Let’s take a look at the various regulations and frameworks you may need to comply with.

AI Governance Regulations and Frameworks

AI Governance in Europe

The EU Artificial Intelligence Act is one of the first comprehensive pieces of legislation governing the use of AI. It takes a risk-based approach, classifying AI systems by the danger they pose to people’s safety and fundamental rights.

There are four risk levels as defined by the EU AI Act:

  1. Minimal: Low-level automation and filters that pose no harm to people, and therefore have no specific obligations under the act
  2. Limited: Might affect people but not in a significant way, so the only obligation is to let users know they are interacting with an AI system
  3. High: May pose a risk to people’s safety and fundamental rights and is therefore heavily regulated
  4. Unacceptable: Systems that manipulate behavior, exploit vulnerabilities, or profile and track people; these are banned outright in most circumstances

Different risk levels carry different compliance obligations, with high-risk systems carrying the most (unacceptable systems are completely banned). Providers of high-risk AI tools and technologies must develop a risk management strategy and implement a robust data governance plan to protect sensitive information.

Since general purpose AI (GPAI) systems like ChatGPT can sit across risk levels depending on how they’re used, they have specific compliance requirements. GPAI providers need to:

  • Document their model-building process, including how they trained and tested it and their evaluation results.
  • Provide anyone who wants to build the model into their own AI system with documentation of its capabilities and limitations, so they can use it in compliance with the act.
  • Put a policy in place to ensure any copyrighted content in the training material was used in compliance with EU copyright law.
  • Publish a sufficiently detailed summary of the training data used, including its types and sources, whether it’s copyrighted, and the lawful basis and compliance mechanisms relied on.

Non-compliance can have severe consequences. Violating the act’s outright prohibitions can bring administrative fines of up to €35 million or 7% of your total worldwide annual turnover (whichever is higher). Failing to meet other serious obligations can result in fines of up to €15 million or 3% of your turnover.

If you’re found to have supplied incorrect, incomplete, or misleading information to authorities, you might have to pay up to €7.5 million or 1% of your worldwide turnover.

While the EU AI Act governs the design and deployment of AI systems, the GDPR continues to regulate how those systems process personal data, including training data and any profiling or inference. You can read about what the GDPR requires in our detailed guide.

The AI Act governs the system’s risks and deployment; GDPR governs the data that powers it. In practice, most AI compliance obligations involve both.

Like the GDPR, the AI Act is extraterritorial: if EU residents access your services, the law applies even if you’re not based in the EU.

AI Governance in the US

President Biden signed the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110) in 2023, but it was rescinded by the Trump administration in 2025. However, many of the principles it espoused are reflected in the NIST AI Risk Management Framework (AI RMF), which we’ll cover in the next section.

At present, EO 14110 has been replaced by EO 14179, “Removing Barriers to American Leadership in Artificial Intelligence.”

In July 2025, the One Big Beautiful Bill Act was enacted. In its original form, the bill would have imposed a moratorium on state-level AI regulation. However, the moratorium was voted down, and the final act does not include it.

As it stands, there is no federal law governing the use of AI, but states have, or are bringing in, their own laws. These include California, Utah, Colorado, and Texas, among others, along with targeted local legislation such as NYC Local Law 144. California, in particular, has passed a number of narrowly focused AI bills, but to date, Colorado is the only state with a comprehensive AI regulation on the books.

The Colorado AI Act is still being tweaked, and its implementation has been delayed until June 2026. Overall, the act is narrower than the EU AI Act, focusing specifically on preventing “algorithmic discrimination.”

US privacy laws are also involved in AI compliance. State privacy laws frequently refer to automated decision-making technology, or ADMT. Generally, if AI systems are used to make a consequential decision about someone (e.g., approving a loan or making an employment decision), then the relevant state privacy law will apply.

NIST AI RMF

The NIST AI Risk Management Framework (RMF) provides detailed guidance on ensuring safe and responsible AI development and use. It defines the types of harm an AI system can cause and the traits that a trustworthy AI system should display. The RMF focuses on:

  • Accuracy and reliability
  • Safety and security
  • Protection from threats
  • Transparency and accountability
  • Explainability
  • Data privacy
  • Bias mitigation and fairness

It also addresses the risks posed by generative artificial intelligence (generative AI or GenAI), including data privacy concerns such as unauthorized data scraping.

Since it’s not a law, there are no fines or legal consequences for not following this framework. However, it’s increasingly becoming the gold standard for responsible AI use and will give you a strong foundation to build on should you become subject to a regulation in the future. As such, following it can help you win contracts, prove compliance in liability cases, and earn public trust.

ISO/IEC Standards

The ISO standards are industry-agnostic guidelines for best practices. The two that relate to general AI and data risks are ISO/IEC 27001:2022 (amended in 2024) and ISO/IEC 23894:2023.

The latest, ISO/IEC 42001:2023, focuses specifically on artificial intelligence management systems (AIMS) and has been widely adopted since 2024.

Like the NIST AI RMF, these standards are voluntary, proactive methods for building the foundation for AI compliance. They help you identify vulnerabilities and plug those gaps. They also take a risk-based approach and describe “processes for the effective implementation and integration of AI risk management.”

Many government and enterprise customers require certification against these standards. If you want to sell them your AI product or become a vendor, you will need to be certified.

The Difference Between AI Governance and AI Regulatory Compliance

It’s easy to conflate governance and compliance, but they are two different—albeit related—concepts. Responsible AI governance refers to setting the overall strategy and internal goals for your use of AI. Compliance will likely be one of those goals and refers to the alignment with an external framework, whether regulatory or voluntary. 

For example, EU AI Act compliance means designing systems and risk mitigation strategies so that your AI technologies align with the rules laid out in the legislation. AI governance, meanwhile, might identify EU AI Act compliance as one goal among others.

Another way to look at it is proactivity versus reactivity. Governance is proactive: organizations create their own rules based on the risks they foresee. Compliance is reactive: the business builds systems that help it follow rules set externally.

Both are important aspects of managing AI risks. 

The Risks Posed by Non-Compliant AI Models

We’ve mentioned risks associated with the use of AI a few times now. But what are they?

Training Data Risks

AI models are trained on vast quantities of data. The problem arises when this training data contains people’s personal information or copyrighted material. Without an adequate legal basis under AI and data privacy law to process this information, training could be non-compliant. Just consider Anthropic’s $1.5 billion settlement over the use of copyrighted works in its training set.

It boils down to securing the right legal basis for training. Consent is the most common legal basis, but it’s difficult to operationalize when you’re collecting data from thousands or millions of people.

Legitimate interest is another commonly used legal basis, and the one more likely to apply when processing personal data to train an AI model.

This basis allows a company to pursue its commercial interests, so long as the data subject is informed about the processing and the processing poses a low risk to their privacy. Legislators and regulators are still hashing out how and when this legal basis applies to AI training, however, and it doesn’t excuse training on unlawfully used copyrighted work (as was the case with Anthropic).

Bias and Discrimination

Just like people, machine learning and other algorithmic systems are subject to bias and discrimination. If your training data is weighted towards one group, your model will learn to favor it. As a result, it will discriminate against (as much as a machine can) anyone who isn’t part of that group. 

For example, Amazon had to discontinue its recruitment tool because it discriminated against women when they applied for certain roles. It had been trained on data from a male-dominated industry, and it “assumed” men were better at those jobs.
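
A first line of defense is to check training data for this kind of imbalance before a model ever learns from it. Here’s a minimal sketch using pandas; the columns and the 80% (“four-fifths rule”) screen are illustrative only, not a substitute for a proper bias audit:

```python
import pandas as pd

# Hypothetical training data: each row is a past hiring decision.
df = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "F", "F"],
    "hired":  [1,   1,   0,   1,   0,   0],
})

# Positive-outcome rate per group.
rates = df.groupby("gender")["hired"].mean()
print(rates)

# Flag any group whose selection rate falls below 80% of the best-performing
# group's rate (the "four-fifths rule" often used as a rough fairness screen).
threshold = 0.8 * rates.max()
flagged = rates[rates < threshold]
if not flagged.empty:
    print(f"Potential disparate impact for groups: {list(flagged.index)}")
```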

An unbiased system is more than a compliance requirement; it’s also an ethical consideration.

Lack of Transparency and Explainability

Discriminatory behavior, even if it comes from an AI tool, can adversely affect the reputation of your business. However, in some cases, what seems like discrimination might actually be a fair and logical decision. Of course, in that case, you should be able to demonstrate the reasoning behind the results.

“Black box” behavior means you cannot explain how your AI system turns inputs into results. If you can’t explain it, you can’t defend it.

Another concern is transparency. In the context of AI, transparency allows developers, deployers, and users to see how AI models use the information they process. In August 2025, over 300,000 conversations with xAI’s Grok chatbot were discovered to have been indexed by search engines. Conversations that users had shared through the chatbot’s share feature ended up publicly visible online.

Users assumed those chats were private, yet their personal thoughts, questions, and potentially sensitive details were exposed. This is a transparency and consent failure: users weren’t clearly told that sharing a chat would make it publicly accessible and searchable.

Profiling and Automated Decision-Making

Automated profiling is the process of analyzing available personal information about people to predict other details about them. It can be very useful in some cases. For example, in healthcare, profiling might help doctors make quicker diagnoses. In banking, it can help determine a person’s risk of defaulting on a loan.

However, profiling can be problematic:

  • People may not be aware that their information is being used to make inferences about them.
  • They may be aware, but not know exactly how the profiling process works or how it will affect them.
  • AI trained on biased data will make biased decisions when used to profile individuals.
  • Some decisions made through profiling may be faulty and significantly affect a person’s life or livelihood.

AI models can learn patterns and draw conclusions. This makes them capable of extracting sensitive or hidden insights from limited data. At the same time, they can hallucinate false conclusions and present them as true. That makes it far harder for businesses to anticipate or control AI models. And, correspondingly, it makes AI potentially dangerous for the consumers being profiled.

That’s why most privacy laws regulate profiling and automated decision-making, and some even give consumers the right to object to profiling or contest the results of automated decision-making.

Security and Model Vulnerability

AI models can be tricked or manipulated in ways traditional software can’t. 

Take model inversion attacks, for example. By probing your model’s outputs, a bad actor can infer characteristics of its training data and potentially reconstruct the personal information used in your training set. Prompt injection attacks can also coerce a model into revealing information it was never meant to disclose.

From a compliance standpoint, security is critical. Data protection laws require organizations to implement appropriate technical and organizational measures to safeguard personal data. 

It doesn’t matter if you didn’t understand the vulnerability or whether it was unique to artificial intelligence systems. Regulators don’t care; all they will see is that you didn’t put adequate effort into securing your systems.

Accountability and Auditability

AI regulations—particularly the EU AI Act—put a great deal of emphasis on documentation. It’s not enough to say your model works. You must be able to show how you designed, trained, tested, and plan to monitor it.

For that, you need detailed records of your training data sources. You need to conduct risk assessments and identify system limitations. You must also explain what human oversight you’ve built in.

Without this paper trail, you can’t prove compliance or show regulators that you have effective programs in place for it. It doesn’t matter if your AI system never causes harm. Not being able to demonstrate accountability is itself a compliance failure. 

Regulators want to see that you understand your model. They want documented proof that you’ve considered its risks and have taken steps to mitigate them.

Artificial Intelligence Compliance and Risk Management Best Practices

AI innovation is still fairly new, and you can’t anticipate every possible problem, especially given today’s fast-moving regulatory landscape. However, implementing and adhering to AI governance frameworks before you start AI-related development can put you firmly on track for compliance. Here are some steps you can take.

Start with Risk Assessments

Relevant for:

  • EU AI Act: Requires a risk management system and continuous assessment for high-risk AI
  • GDPR/UK GDPR: Mandates DPIAs when processing is “likely to result in a high risk” to individuals
  • NIST AI RMF + ISO/IEC 23894: Provide frameworks for identifying and mitigating risk across the AI lifecycle

Before you even begin developing or implementing AI solutions, conduct a data protection impact assessment (DPIA), privacy impact assessment, or an AI impact assessment (AIA). These will force you to consider whether your use of AI poses a risk to consumers or whether users of your AI system could create risk for consumers. 

That, in turn, allows you to plan mitigation strategies. In short, this step can help you plan an approach that has compliance standards built in from the start, rather than treating mitigation as an afterthought. It’s a good first step, too, as it will help prioritize which of the ensuing activities are needed most for your AI use case.

Establish Data Minimization and Governance Practices

Relevant for:

  • GDPR/CCPA/UK GDPR: Require data minimization, lawful basis, and purpose limitation
  • EU AI Act: Demands quality and governance of training data, especially for high-risk systems
  • ISO/IEC 27001 & ISO/IEC 42001: Require documented data governance controls for AI and security

Create a plan for what information your proposed AI solution requires and how it should be governed. This includes training data, for which an appropriate legal basis must be obtained under many data privacy laws.

Data minimization is a principle enshrined in virtually every data privacy law, including the GDPR, and in most AI regulations. It states that data should only be collected and used if it’s absolutely necessary to do so.

The less data you collect, the less data you need to protect. If a bad actor accesses your systems, they’re limited in what they can steal. Data minimization also reduces the likelihood of mishandling data and becoming non-compliant.

Identify the categories of data you need and ask yourself if there are alternative solutions. The best way to protect the privacy of personal information is not to collect it in the first place.
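
In code, data minimization can be as simple as an allow-list of fields that are permitted to enter the training pipeline. Here’s a minimal sketch using pandas; the field names and the churn-prediction purpose are hypothetical:

```python
import pandas as pd

# Hypothetical raw export containing far more than the model needs.
raw = pd.DataFrame({
    "name": ["A. Jones", "B. Smith"],
    "email": ["a@example.com", "b@example.com"],
    "tenure_months": [14, 3],
    "plan_type": ["pro", "basic"],
    "monthly_spend": [49.0, 9.0],
    "support_tickets": [2, 5],
})

# Only these fields are necessary for the stated purpose (say, churn prediction).
REQUIRED_FIELDS = ["tenure_months", "plan_type", "monthly_spend", "support_tickets"]

# Anything not on the allow-list never enters the training pipeline, so direct
# identifiers like name and email are excluded by construction.
training_data = raw[REQUIRED_FIELDS].copy()
print(training_data.columns.tolist())
```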

Build in Transparency and Explainability

Relevant for:

  • EU AI Act: Requires transparency, record-keeping, and technical documentation for high-risk AI
  • GDPR/UK GDPR: Gives individuals a right not to be subject to fully automated decisions and to receive meaningful explanations
  • NIST AI RMF + ISO/IEC 42001: State that explainability is a core trait of “trustworthy AI”

Develop clear documentation about the data going into the AI system and how it will be used to generate results. Having clarity on your model’s processes will allow you to better communicate them to consumers and get their informed consent when required.

It’s far easier to bake explainability and transparency into your AI initiative early on, rather than try to untangle systems and processes after the fact.
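
One common starting point for explainability is permutation importance, which measures how much a model’s performance drops when each input feature is scrambled. The sketch below uses scikit-learn on synthetic data, purely for illustration; your own features and model would take their place:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular decisioning dataset.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much the score degrades:
# large drops indicate the features driving the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```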

Know Where Your Data Is Coming From

Relevant for:

  • EU AI Act: Requires detailed traceability and data governance for training datasets
  • GDPR/CCPA: Require lawful basis and disclosure for collection and use
  • ISO/IEC 42001: Requires traceable training datasets

Establish a clear path of how you obtained data and what basis you had for collecting it. This shouldn’t just be part of your process; it should be demonstrable with a paper trail. 
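
One practical way to make provenance demonstrable is to record a machine-readable manifest alongside every training dataset. The sketch below is a minimal illustration; all of the field names and values are hypothetical:

```python
import json
from datetime import date

# Hypothetical provenance record kept alongside a training dataset.
dataset_manifest = {
    "dataset_id": "support-tickets-2025-q3",
    "source": "internal CRM export",
    "collected_on": str(date(2025, 10, 1)),
    "legal_basis": "legitimate interest (documented in LIA-042)",
    "contains_personal_data": True,
    "retention_period_days": 365,
    "approved_by": "privacy@yourcompany.example",
}

# Store the manifest next to the data so auditors can trace lineage later.
with open("support-tickets-2025-q3.manifest.json", "w") as f:
    json.dump(dataset_manifest, f, indent=2)
```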

Ensure Human Oversight

Relevant for:

  • EU AI Act: Explicitly requires human oversight and the ability to override high-risk systems
  • GDPR/UK GDPR: Requires the provision of “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”
  • NIST AI RMF: Identifies oversight as part of risk-based governance

Even a fully automated system needs a human to oversee its decisions and intervene when needed. If an AI’s bad decision causes fallout, you may be more liable than if a human had made it: on top of the data breach, PI exposure, or whatever other AI risk was realized, you’ll have the added violation of failing to keep a human in the loop. A person supervising the model’s decisions keeps your risk low by catching harmful outputs before they cause damage.
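
In practice, keeping a human in the loop often means routing certain decisions to a reviewer rather than acting on them automatically. Here’s a minimal sketch; the 0.90 confidence threshold and the notion of a “consequential” decision are illustrative assumptions you would replace with the outputs of your own risk assessment:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str           # e.g., "approve" or "deny"
    confidence: float    # model's confidence in the label, 0.0-1.0
    consequential: bool  # does this significantly affect the person?

def route(decision: Decision) -> str:
    # Consequential decisions and low-confidence predictions always go to a human.
    if decision.consequential or decision.confidence < 0.90:
        return "human_review"
    return "auto_apply"

print(route(Decision(label="deny", confidence=0.97, consequential=True)))      # human_review
print(route(Decision(label="approve", confidence=0.99, consequential=False)))  # auto_apply
```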

Strengthen Security Controls

Relevant for:

  • GDPR/CCPA/UK GDPR: Require “appropriate technical and organizational measures” to secure personal data
  • EU AI Act: Mandates robustness, cyber-resilience, and secure lifecycle management for high-risk AI
  • NIST & ISO/IEC 27001/42001: Provide concrete controls for securing model pipelines and data

Protecting AI systems is more than just securing databases. These tools, especially generative AI, are interactive and don’t have a fixed logic path. As such, you can’t just rely on regular software security measures.

You need to consider AI-specific threats, such as model inversion attacks, prompt injections, data poisoning, and so on. Who gets access to which parts of the system? Who monitors the usage, and what activities should raise red flags? Could red-teaming and penetration testing exercises reveal additional threats that you hadn’t previously considered? All these questions must be answered during the security planning process.
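
Some of those answers translate directly into code. The sketch below wraps a hypothetical run_model inference call with a role-based allow-list and an audit log, so every request, allowed or denied, leaves a trace your monitoring can raise red flags from; the roles and capabilities shown are assumptions, not a prescribed scheme:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

# Hypothetical mapping of which roles may use which model capabilities.
ACCESS_POLICY = {
    "analyst": {"summarize"},
    "admin": {"summarize", "export_embeddings"},
}

def run_model(capability: str, prompt: str) -> str:
    # Placeholder standing in for your real inference call.
    return f"[{capability}] response to: {prompt}"

def guarded_inference(user: str, role: str, capability: str, prompt: str) -> str:
    allowed = capability in ACCESS_POLICY.get(role, set())
    # Every attempt, allowed or denied, leaves an audit trail for monitoring.
    audit_log.info("ts=%s user=%s role=%s capability=%s allowed=%s",
                   datetime.now(timezone.utc).isoformat(), user, role, capability, allowed)
    if not allowed:
        raise PermissionError(f"Role '{role}' may not use '{capability}'")
    return run_model(capability, prompt)

print(guarded_inference("dana", "analyst", "summarize", "Summarize this ticket."))
```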

Once you’ve determined the best way forward, you’ll need to regularly assess your security solutions. Be prepared to change them as your operational and security needs evolve.

Maintain a Compliance Paper Trail

Relevant for:

  • EU AI Act: Requires a full technical file and continuous monitoring documentation
  • GDPR/UK GDPR: Require record-keeping for lawful processing
  • ISO/IEC 42001: Certification hinges on documented evidence of controls

There are guidelines for every aspect of using artificial intelligence. Part of AI compliance is keeping a record of how you manage each one. 

Regulators can ask you to show how well you’re following data privacy and AI compliance standards, and you should have the paperwork to prove it. If you don’t, even a “safe” AI system might be deemed non-compliant.

Continue to Monitor and Audit the Model After Deployment

Relevant for:

  • EU AI Act: Requires ongoing human monitoring, documented model performance testing, and mandatory conformity assessments for high-risk systems
  • ISO/IEC 42001 & 23894: Treat AI as a continuous management lifecycle and require evaluation of risk, fairness, and performance
  • NIST AI RMF: Emphasizes iterative monitoring, validation, and redress throughout the lifecycle
  • NYC Local Law 144: Requires regular bias audits for automated employment decision tools

Your responsibility doesn’t end with building and deploying an AI solution. It must also be monitored for data drift, biases, new vulnerabilities, and errors. These must be fixed as you find them, as you are still liable for the outputs. Periodic audits will help you ensure the model remains bias-free, keeping you compliant.
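
Drift checks in particular lend themselves to automation. The sketch below uses a two-sample Kolmogorov–Smirnov test from SciPy to compare a single feature’s distribution at training time against what the model sees in production; the feature, sample data, and 0.01 threshold are illustrative only:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Feature values captured at training time vs. values seen in production.
training_values = rng.normal(loc=50, scale=10, size=5000)    # e.g., applicant age
production_values = rng.normal(loc=58, scale=10, size=5000)  # distribution has shifted

statistic, p_value = ks_2samp(training_values, production_values)

# A very small p-value suggests the production data no longer matches the
# training distribution and the model should be re-validated or retrained.
if p_value < 0.01:
    print(f"Drift detected (KS statistic={statistic:.3f}); trigger a review.")
else:
    print("No significant drift detected.")
```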

Manage Vendors and Third Parties

Relevant for:

  • EU AI Act: Mandates the provision of technical documentation to deployers of high-risk AI systems
  • GDPR/CCPA: Make organizations liable for third-party processors and their data practices
  • ISO/IEC 42001: Requires third-party risk management

Regulators aren’t just looking at the AI tools you’ve built; they also hold you responsible for any third-party tools you use.

Vendors should be able to provide you with documentation explaining a variety of characteristics of the AI systems they’ve made available to you. Here’s what the EU AI Act mandates deployers of high-risk AI systems receive:

  • Contact information for the AI model provider
  • The model’s intended purpose
  • The model’s level of accuracy, including its metrics, robustness, and cybersecurity, and any known and foreseeable circumstances that may impact the expected level of accuracy, robustness, and cybersecurity
  • Any known or foreseeable circumstances that could lead to risks to individuals’ health, safety, or fundamental rights 
  • The technical capabilities and characteristics of the model to provide information that is relevant to explain its output
  • The model’s performance regarding the specific persons or groups of persons on which the system is intended to be used
  • Specifications for the training data
  • Information to enable deployers to interpret the output of the high-risk AI system and use it appropriately

Support AI Compliance with Osano

AI compliance may be a new requirement for businesses, but the good news is that it has significant overlap with privacy compliance.

Assessments, consent management, subject rights requests, vendor management, and the like–these workflows need to be automated, streamlined, and managed whether you’re primarily concerned with privacy or AI regulation.

Osano helps businesses run flexible assessments, including AI assessments; manage consent for data collection and tracking; handle rights requests, such as the right to object to automated decision-making; and monitor and evaluate vendors for their data handling practices. Schedule a demo to see if Osano can support your compliance risk management needs.

Get a demo of Osano today

Stay Up to Date on the Latest AI News

Subscribe to our newsletter to learn about the latest developments in AI regulation so you can stay ahead of compliance requirements.

Subscribe to the Newsletter