EU AI Act Explained: What Businesses Need to Know

Understand the EU AI Act's impact on businesses with this comprehensive guide outlining risk levels, compliance requirements, and penalties for non-compliance.

The EU AI Act is a new set of rules governing the development, deployment, and use of artificial intelligence (AI) systems in the European Union. It introduces a risk-based approach, dividing AI systems into four levels:

  • Unacceptable Risk: These AI systems are banned due to clear threats to rights and values.
  • High Risk: These AI systems have strict requirements due to potential harm, including:
    • Data Governance: ensure high-quality data, address biases, and document data sources
    • Transparency: disclose the AI system's purpose, capabilities, limitations, and risks
    • Human Oversight: enable human review, monitoring, and control
    • Technical Robustness: ensure resilience against attacks, errors, and inaccuracies
    • Record-Keeping: maintain comprehensive logs and documentation
    • Conformity Assessment: undergo assessment by an approved body and register compliant systems
  • Limited Risk: These AI systems must disclose that users are interacting with AI and flag artificially generated content.
  • Minimal Risk: These AI systems have no specific rules, but guidelines are recommended.

The Act aims to create a level playing field for businesses, build trust in AI systems, and protect fundamental rights and values. Non-compliance can result in significant fines, up to €35 million or 7% of global annual turnover for the most severe violations.

To comply, businesses should:

  • Conduct an AI audit to identify all AI systems and their risk levels
  • Develop compliance plans for each system
  • Update policies and documents to align with transparency, fairness, and non-discrimination principles
  • Train employees on the new rules and their roles
  • Implement AI governance frameworks, including risk management, logging, and human oversight processes
  • Seek expert guidance and advice from legal, technical, and AI ethics experts

Current Laws vs. EU AI Act


The EU AI Act brings a new approach to regulating AI systems within the European Union. Here's how it differs from existing laws:

  • Focus: Current laws like the GDPR cover data protection and privacy but lack AI-specific rules; the AI Act provides a dedicated framework focused solely on regulating AI systems.
  • Risk-Based Approach: Current laws have no explicit risk categorization for AI; the AI Act classifies AI systems into four risk levels (unacceptable, high, limited, minimal) with corresponding requirements.
  • Transparency: Current laws impose limited transparency requirements on AI systems; the AI Act mandates disclosure of AI use, system information, and decision-making processes for high-risk systems.
  • Accuracy and Bias: Current laws contain no specific provisions for AI accuracy or bias mitigation; the AI Act requires high-risk AI systems to be accurate, minimize errors, and address potential biases.
  • Human Oversight: Current laws have no explicit human oversight requirements; the AI Act requires oversight mechanisms for high-risk AI systems to ensure accountability and control.
  • Compliance and Certification: Current laws include no AI-specific compliance or certification processes; the AI Act introduces conformity assessments and certification for high-risk AI systems.
  • Governance and Enforcement: Current laws rely on existing data protection authorities; the AI Act establishes a new EU-level regulatory body (the AI Office) to oversee compliance and enforcement.
  • Penalties: Current laws provide limited penalties for data protection violations; the AI Act imposes significant fines (up to €35 million or 7% of global turnover) for non-compliance.

The EU AI Act aims to provide a dedicated legal framework tailored to the challenges and risks associated with AI systems. By introducing risk-based rules, transparency requirements, and governance mechanisms, the Act seeks to promote responsible AI development and deployment while protecting rights and enabling innovation within the EU.

Risk Levels for AI Systems

The EU AI Act divides AI systems into four risk levels. The level determines the rules you must follow when using AI in the EU.

Unacceptable Risk

These AI systems are banned. They pose a clear threat to rights and values. Examples include:

  • Systems that categorize people based on race, gender, or beliefs
  • Systems that score people based on their behavior or traits
  • Systems that manipulate human decisions or exploit vulnerabilities
  • Scraping facial images from the internet or CCTV for facial recognition databases

High Risk

These AI systems have strict rules due to potential harm. They must undergo testing, risk assessments, and have safeguards. Examples include:

  • Critical Infrastructure: energy, transportation
  • Education and Employment: exam scoring, resume filtering
  • Law Enforcement and Judicial: evidence evaluation, biometric identification
  • Medical and Healthcare: medical devices, healthcare applications

Limited Risk

These AI systems must be transparent about being AI and disclose if content is artificial or manipulated. Examples:

  • Chatbots
  • Deepfake generators

Minimal Risk

These AI systems have no specific rules. They can be freely used, but codes of conduct are encouraged. Examples:

  • AI-enabled video games
  • Spam filters

The risk levels aim to balance innovation with mitigating AI risks. Organizations must assess their AI systems and follow the rules for the applicable risk level.

Compliance Requirements

Risk Assessment and Classification

You must carefully assess the risks of your AI systems. This will determine the rules you need to follow under the EU AI Act. The Act divides AI systems into four risk levels:

  • Unacceptable Risk: These AI systems are banned due to clear threats to rights and values.
  • High Risk: These AI systems have strict rules due to potential harm.
  • Limited Risk: These AI systems must be transparent about being AI and disclose if content is artificial or manipulated.
  • Minimal Risk: These AI systems have no specific rules, but codes of conduct are encouraged.
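As an illustration only, a first-pass triage of an internal AI inventory could be sketched in Python. The keyword map below is an assumption built from this article's examples; the legal classification is defined by the Act's annexes and requires legal review, not string matching:

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # no specific rules

# Illustrative keyword map drawn from the examples in this article.
# A real classification must follow the Act's annexes, with legal review.
_TRIAGE = {
    RiskLevel.UNACCEPTABLE: {"social scoring", "facial image scraping"},
    RiskLevel.HIGH: {"resume filtering", "exam scoring", "medical device"},
    RiskLevel.LIMITED: {"chatbot", "deepfake generator"},
}

def triage(use_case: str) -> RiskLevel:
    """Rough first-pass triage of an AI use case into a risk level."""
    for level, examples in _TRIAGE.items():
        if use_case.lower() in examples:
            return level
    return RiskLevel.MINIMAL  # default pending proper legal review

print(triage("Chatbot").value)  # limited
```

The point of such a sketch is to give each system a provisional label that a compliance team can then confirm or override.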

Requirements for High-Risk AI Systems

If your AI system is classified as high-risk, you must follow these key requirements:

Data Governance

  • Ensure high-quality training, validation, and testing data
  • Address potential biases and inaccuracies in data
  • Document data sources and processing steps

Transparency and Disclosure

  • Disclose the AI system's purpose, capabilities, and limitations
  • Explain key characteristics and known risks

Human Oversight

  • Implement processes for human review, monitoring, and control
  • Enable human intervention when needed

Technical Robustness and Cybersecurity

  • Ensure resilience against attacks, data poisoning, and errors
  • Implement measures for accuracy and cybersecurity

Record-Keeping and Documentation

  • Maintain comprehensive logs and documentation
  • Record activities, risk management measures, and outcomes
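A minimal sketch of what automated record-keeping could look like in practice. The `log_decision` helper and its field names are assumptions for illustration; the Act specifies what must be recorded, not how:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_decision(system_id: str, inputs: dict, output, operator: str) -> dict:
    """Append one structured audit record for a high-risk AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,
        "output": output,
        "operator": operator,  # supports human-oversight traceability
    }
    audit_log.info(json.dumps(record))  # in practice: durable, tamper-evident storage
    return record

rec = log_decision("cv-screener-v2", {"applicant_id": "A-1043"},
                   "shortlisted", "hr_reviewer_7")
```

Structured records like this make it possible to reconstruct who was affected by a decision, what the system produced, and which human was accountable.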

Conformity Assessment and Registration

  • Undergo a conformity assessment by an approved body
  • Register compliant systems in an EU database

General-Purpose AI Models

Providers of general-purpose AI models (GPAIMs) must meet additional transparency obligations, and models classified as posing systemic risk carry further governance and risk management duties.

Penalties for Non-Compliance

Failure to comply with the EU AI Act can result in significant fines of up to €35 million or 7% of your global annual turnover, whichever is higher.

To ensure compliance, you must carefully evaluate your AI systems, implement necessary measures based on their risk level, and follow the Act's requirements.


Getting Ready for the New Rules

1. Take Stock of Your AI Systems

Make a list of all the AI systems you use, whether developed in-house or bought from third parties. Determine their risk levels and intended uses to understand how the new rules will affect your business. This inventory will help you plan how to keep using or selling AI systems in the EU market.
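The inventory step can be as simple as a structured register. A sketch in Python, where the field names are assumptions for illustration rather than a format mandated by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    vendor: str            # "in-house" or a third-party supplier
    intended_use: str
    risk_level: str        # unacceptable / high / limited / minimal
    deployed_in_eu: bool

inventory = [
    AISystemRecord("cv-screener", "in-house", "resume filtering", "high", True),
    AISystemRecord("support-bot", "AcmeAI", "customer chatbot", "limited", True),
    AISystemRecord("spam-filter", "in-house", "email filtering", "minimal", True),
]

# Systems needing a full compliance plan: high-risk ones used in the EU
needs_plan = [s.name for s in inventory
              if s.risk_level == "high" and s.deployed_in_eu]
print(needs_plan)  # ['cv-screener']
```

Even a simple register like this lets you filter systems by risk level and jurisdiction, which feeds directly into the compliance plans in step 2.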

2. Develop Compliance Plans

Based on your AI inventory, create plans to meet the requirements for each system. Depending on the risk level, you may need to:

  • Assess risks
  • Check for biases
  • Implement measures for transparency, explainability, and non-discrimination

3. Update Policies and Documents

Review and update your internal policies and customer-facing documents to align with principles like transparency, fairness, explainability, and non-discrimination in automated decision-making. Clear policies will help you address challenges to AI decisions and respond to regulatory queries.

4. Train Your Employees

Educate your employees about the new rules and how they impact their roles. Training is crucial to meet human oversight requirements and demonstrate effective risk management.

5. Establish AI Governance and Monitoring

Set up AI governance frameworks, including:

  • Risk management systems
  • Logging capabilities
  • Human oversight processes

Continuously monitor regulatory updates, guidelines, and interpretations to adapt to changing requirements.

6. Seek Expert Advice and Guidance

Consult legal, technical, and AI ethics experts to navigate the complex regulatory landscape. Collaborate with regulatory authorities to seek guidance, clarify requirements, and foster transparency and compliance.

In summary, the key steps are:

  • AI Inventory: identify all AI systems and their risk levels
  • Compliance Plans: develop plans to meet requirements for each system
  • Policy Updates: align policies with transparency, fairness, and non-discrimination principles
  • Employee Training: educate employees on the new rules and their roles
  • AI Governance: implement risk management, logging, and human oversight processes
  • Expert Guidance: consult experts and authorities for advice and clarification

By taking these proactive steps, businesses can demonstrate responsible AI development, maintain compliance with the new rules, and unlock the potential of AI technologies while mitigating risks.

Penalties for Non-Compliance

The EU AI Act introduces significant fines for failing to follow its rules. These penalties aim to ensure businesses prioritize responsible AI development and use.

Penalty Levels

The Act has different penalty levels based on the violation:

  • Prohibited AI Systems: Using banned AI systems that pose unacceptable risks can result in fines up to €35 million or 7% of global annual turnover, whichever is higher.

  • High-Risk AI Systems: Not complying with requirements for high-risk AI systems (e.g., data governance, transparency, human oversight) can lead to fines up to €15 million or 3% of global annual turnover.

  • Providing Incorrect Information: Providing incorrect, incomplete, or misleading information to authorities can result in fines up to €7.5 million or 1.5% of global annual turnover.
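The fine ceilings above follow a "whichever is higher" rule, so for large companies the turnover percentage dominates the fixed cap. A quick sketch of the maximum-exposure arithmetic:

```python
def max_fine(tier_cap_eur: float, turnover_pct: float,
             global_turnover_eur: float) -> float:
    """Maximum fine: the fixed cap or the turnover share, whichever is higher."""
    return max(tier_cap_eur, turnover_pct * global_turnover_eur)

# Prohibited-practice tier (€35M or 7%) for a firm with €1bn global turnover:
print(max_fine(35_000_000, 0.07, 1_000_000_000))  # 70000000.0 (the 7% share dominates)

# For a firm with €100M turnover, the €35M fixed cap is the higher figure:
print(max_fine(35_000_000, 0.07, 100_000_000))    # 35000000
```

This is why the same violation exposes a multinational to a far larger fine than a small firm, before the proportionality factors below are even considered.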

Factors Considered

When determining the specific penalty amount, the following factors will be considered:

  • Nature, severity, and duration of the violation
  • Whether the violation was intentional or due to negligence
  • Actions taken to mitigate or remedy the violation
  • Any previous violations or fines
  • Company size and market share
  • Financial benefits gained from the violation
  • Whether the AI system was used for professional or personal activities

This approach aims to ensure penalties are proportionate to the circumstances of each case.

Comparison to GDPR

The AI Act's penalties are similar to, and in some cases higher than, those under the General Data Protection Regulation (GDPR). For example, the maximum GDPR fine is €20 million or 4% of global annual turnover, while the AI Act's highest penalty can reach 7% of global turnover.

This alignment with GDPR penalties highlights the EU's commitment to ensuring businesses take AI regulation as seriously as data protection regulations.

Compliance Importance

Given the substantial financial and reputational risks of non-compliance, businesses must prioritize understanding and adhering to the EU AI Act's requirements. Proactive measures, such as:

  • Conducting risk assessments
  • Implementing robust governance frameworks
  • Seeking expert guidance

...can help organizations navigate the complex regulatory landscape and mitigate potential penalties.

Key Takeaways

The EU AI Act sets new rules for using artificial intelligence (AI) in the European Union. Here are the main points:

  1. Risk Levels for AI Systems:

    • Unacceptable Risk: These AI systems are banned due to risks to rights and values.
    • High Risk: These AI systems have strict rules due to potential harm.
    • Limited Risk: These AI systems must be clear about being AI and disclose if content is artificial.
    • Minimal Risk: These AI systems have no specific rules, but guidelines are recommended.
  2. Requirements for High-Risk AI:

    • Conduct risk assessments
    • Implement data governance practices
    • Ensure human oversight and transparency
    • Maintain detailed documentation
  3. Territorial Scope: Even businesses outside the EU may need to comply if their high-risk AI systems are used within the EU or if their AI outputs impact EU citizens.

  4. Penalties for Non-Compliance:

    • Prohibited AI Systems: Up to €35 million or 7% of global annual turnover
    • High-Risk AI Systems: Up to €15 million or 3% of global annual turnover
    • Providing Incorrect Information: Up to €7.5 million or 1.5% of global annual turnover
  5. Preparation Steps:

    • Conduct an AI audit to identify all AI systems and their risk levels
    • Develop compliance plans for each system
    • Update policies and documents to align with transparency, fairness, and non-discrimination principles
    • Train employees on the new rules and their roles
    • Implement AI governance frameworks, including risk management, logging, and human oversight processes
    • Seek expert guidance and advice from legal, technical, and AI ethics experts

The EU AI Act aims to ensure responsible AI development and use while mitigating risks. By prioritizing compliance, businesses can avoid penalties and contribute to the ethical deployment of AI technologies, benefiting society.

FAQs

What are the main points of the EU AI Act?

The EU AI Act sets new rules for using artificial intelligence (AI) in the European Union. Here are the key points:

  • Risk-Based Approach: AI systems are divided into four risk levels:

    • Unacceptable Risk: These AI systems are banned due to risks to rights and values.
    • High Risk: These AI systems have strict rules due to potential harm.
    • Limited Risk: These AI systems must clearly state they are AI and disclose if content is artificial.
    • Minimal Risk: These AI systems have no specific rules, but guidelines are recommended.
  • Requirements for High-Risk AI: Systems classified as high-risk, such as those used in employment, healthcare, or law enforcement, must:

    • Undergo risk assessments
    • Implement data governance practices
    • Ensure human oversight and transparency
    • Maintain detailed documentation
  • Territorial Scope: Even businesses outside the EU may need to comply if their high-risk AI systems are used within the EU or if their AI outputs impact EU citizens.

Who needs to comply with the EU AI Act?

The EU AI Act applies to a wide range of entities involved in the development, deployment, or use of AI systems within the EU market, including:

  • Providers: Companies or individuals that develop and market AI systems, regardless of their location.
  • Deployers: Organizations or individuals that use AI systems under their authority within the EU, even if they are located outside the EU.
  • Importers and distributors: Entities that import or distribute AI systems within the EU market.

If your business develops, deploys, or uses high-risk AI systems within the EU, or if your AI system's outputs are used within the EU, you will need to comply with the Act's requirements.

What does the EU AI Act mean for businesses?

The EU AI Act has significant implications for businesses operating within or serving the EU market:

  • Compliance Costs: Following the Act's requirements, such as conducting risk assessments, implementing safeguards, and maintaining documentation, may incur substantial costs, especially for high-risk AI systems.

  • Operational Changes: Businesses may need to modify their AI systems, data practices, and governance processes to align with the Act's standards, potentially impacting their operations and workflows.

  • Competitive Landscape: The Act aims to create a level playing field for AI development and deployment within the EU, potentially reshaping the competitive landscape for businesses operating in this space.

  • Penalties for Non-Compliance: Failure to comply with the Act can result in significant fines, up to €35 million or 7% of a company's global annual turnover for the most severe violations.

To mitigate risks and capitalize on opportunities, businesses should:

  • Assess their AI systems
  • Develop compliance strategies
  • Seek expert guidance to navigate the new regulatory landscape effectively
