The EU AI Act

Donna AI Resource Hub

Summary of the AI Act and Its Implications

Overview 

The conclusion of negotiations between the EU Council and Parliament on the Artificial Intelligence Act (AI Act) marks a significant milestone, bringing AI into mainstream regulatory focus. Similar to the General Data Protection Regulation (GDPR), the AI Act is expected to have widespread impact across various industries, necessitating awareness and compliance from all sectors.


Key Points: 

1. Purpose and Goals

  • The AI Act aims to promote human-centric, trustworthy AI while protecting health, safety, and fundamental rights as outlined in the EU Charter of Fundamental Rights.

  • It seeks to support innovation and improve internal market functioning through enhanced competitiveness.


2. Definition and Scope

  • The Act defines AI systems broadly, including systems using machine learning as well as logic- and knowledge-based approaches to produce outputs such as content, predictions, recommendations, or decisions.

  • There is ongoing debate about the broadness of this definition and its applicability to non-AI technologies.


3. Regulatory Framework: Risk-Based Approach

  • Prohibited AI Practices: Includes AI systems that exploit vulnerabilities, use manipulative techniques, perform social scoring, or carry out untargeted facial-recognition scraping. Examples include (exceptions exist for law enforcement with prior authorization):

    • Cognitive Behavioral Manipulation: Techniques aimed at circumventing a person's free will.

    • Social Scoring: Systems that evaluate individuals based on behavior or personal characteristics.

    • Exploitation of Vulnerabilities: AI that takes advantage of age, disability, or socio-economic circumstances.

    • Biometric Categorization: Inferences about sensitive attributes such as political views, sexual orientation, religious beliefs, and race.

    • Emotion Recognition: Use in workplaces or educational settings, except for certain safety applications (e.g., detecting whether a driver is falling asleep).

    • Predictive Policing: Some methods of predicting criminal behavior.

    • Facial Web-Scraping: Untargeted collection of facial images from the internet or CCTV for facial recognition databases.

    • Note that the boundaries of this category are not yet clear: for example, it is uncertain whether online gambling mechanics, gambling-style nudges, dark patterns, digital nudging, or social scoring on social media fall within it.

  • High-Risk AI Systems: Must undergo rigorous conformity assessments and meet strict requirements.

    • Includes AI used in critical infrastructure, education, recruitment, evidence evaluation, migration, and the administration of justice.

    • Conformity assessments cover data quality, transparency, human oversight, accuracy, cybersecurity, and more.

    • AI performing narrow procedural tasks, or AI embedded in already-regulated products (e.g., machinery, toys), may not always be classified as high risk.

  • Limited Risk AI Systems: Require transparency, ensuring users are informed when interacting with AI (e.g., chatbots and deepfakes).

  • Minimal Risk AI Systems: Include most current AI applications and are largely unregulated.
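
To make the four tiers above concrete, here is a minimal sketch in Python of how a firm might triage its AI inventory. The tier names, trigger lists, and the classify helper are illustrative assumptions, not the Act's legal tests, which require case-by-case analysis of its articles and annexes.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative trigger lists only; keyword matching is no substitute
# for legal analysis of the Act's actual tests.
PROHIBITED_USES = {"social_scoring", "untargeted_face_scraping",
                   "manipulative_techniques", "vulnerability_exploitation"}
HIGH_RISK_DOMAINS = {"critical_infrastructure", "education", "recruitment",
                     "evidence_evaluation", "migration", "justice"}
TRANSPARENCY_USES = {"chatbot", "deepfake"}

def classify(use_case: str) -> RiskTier:
    """Map a (hypothetical) use-case tag to an AI Act risk tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("recruitment"))  # RiskTier.HIGH
print(classify("chatbot"))      # RiskTier.LIMITED
```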


4. Obligations for Providers and Deployers

  • Providers: Must establish risk management systems, ensure data governance, provide technical documentation, and design for human oversight, accuracy, and cybersecurity.

  • Deployers: Must comply with transparency requirements and ensure AI literacy among staff.


5. Governance and Enforcement

  • An AI Office within the European Commission will oversee enforcement and compliance. Non-compliance can result in:

    • Fines: Severe financial penalties ranging from €7.5 million or 1% of worldwide annual turnover (for supplying misleading information) up to €35 million or 7% of worldwide annual turnover (for prohibited practices), whichever is higher in each case.

    • Enforcement Action: Authorities can prohibit the use of AI systems, demand full access to system data and source code, and implement proactive market surveillance.
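
As a worked illustration of the "whichever is higher" mechanic, the sketch below computes maximum exposure for a hypothetical company. The tier figures are the ranges quoted above; the function and tier names are our own labels.

```python
# Penalty caps: the higher of a fixed amount and a percentage of
# worldwide annual turnover, per the ranges quoted above.
PENALTY_TIERS = {
    "misleading_information": (7_500_000, 0.01),
    "prohibited_practice":    (35_000_000, 0.07),
}

def max_fine(violation: str, worldwide_turnover_eur: float) -> float:
    fixed_cap, turnover_pct = PENALTY_TIERS[violation]
    return max(fixed_cap, turnover_pct * worldwide_turnover_eur)

# A company with EUR 2bn worldwide turnover:
print(max_fine("prohibited_practice", 2_000_000_000))    # 140,000,000.0
print(max_fine("misleading_information", 2_000_000_000)) # 20,000,000.0
```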


6. Impact and Implementation

  • The Act's provisions will apply at different intervals:

    • six months for prohibited AI practices,

    • 12 months for general-purpose AI, and

    • up to 36 months for high-risk AI systems.

  • The Commission will issue guidelines to provide clarity on the Act’s application, particularly for high-risk categories.
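
The staggered timeline above can be worked out from the Act's entry into force (1 August 2024). A minimal sketch using simple month arithmetic; note that the Act pins exact application dates (for example, 2 February 2025 for the prohibitions), so the computed dates are approximations.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (stdlib only)."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the AI Act entered into force on 1 Aug 2024

milestones = {
    "prohibited AI practices": 6,
    "general-purpose AI obligations": 12,
    "high-risk AI systems (latest)": 36,
}

for name, offset in milestones.items():
    # The Act fixes the official dates to the day; this arithmetic
    # is an approximation for planning purposes.
    print(f"{name}: from ~{add_months(ENTRY_INTO_FORCE, offset)}")
```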


7. Special Considerations

  • Exemptions exist for national security, military, and purely personal, non-professional use.

  • The Act applies to both public and private organizations within and outside the EU, impacting any AI system affecting people in the EU.


The AI Act has numerous implications for legal professionals, both in terms of compliance and the broader impact on legal practice. Here are the detailed points:

1. Awareness and Education

  • Legal professionals must familiarize themselves with the AI Act's provisions, similar to the GDPR, to advise clients accurately and ensure their own practices comply.

  • Continuous education on AI technologies and the evolving regulatory landscape will be crucial.


2. Compliance Responsibilities

  • Law firms using AI systems in their operations (e.g., for document review, case prediction, or client interactions) must determine whether these systems fall under high-risk, limited risk, or minimal risk categories.

  • Firms must ensure that any high-risk AI systems used comply with the rigorous conformity assessments, including data quality, documentation, traceability, transparency, human oversight, accuracy, cybersecurity, and robustness.


3. Role as Deployers

  • Even if a law firm does not develop AI systems, it may deploy them in practice. Deployers have specific obligations under the AI Act, such as ensuring transparency and maintaining AI literacy among staff.

  • Firms must assess if their use of AI systems poses significant risks to health, safety, or fundamental rights, particularly in employment-related applications.


4. Advising Clients

  • Legal professionals must be prepared to advise clients across various industries on compliance with the AI Act. This includes assessing clients' AI systems, determining their risk categories, and guiding them through conformity assessments.

  • Specialized advice will be required for clients in sectors heavily impacted by the AI Act, such as healthcare, finance, education, and law enforcement.

  • It can be very difficult to spot potentially prohibited AI, as it may be hidden further down the value chain.


5. Contracts and Agreements

  • Contracts involving AI systems will need to include clauses ensuring compliance with the AI Act. This includes agreements with AI providers, deployers, and other stakeholders.

  • Legal professionals must draft, review, and negotiate terms that address responsibilities for risk management, data governance, technical documentation, and incident reporting.


6. Data Protection and Privacy

  • Given the overlap between AI systems and personal data processing, compliance with both the AI Act and GDPR will be necessary. Legal professionals must ensure AI systems adhere to data protection principles.

  • Firms must advise clients on managing data governance practices to align with both regulatory frameworks.


7. Litigation and Dispute Resolution

  • The AI Act will likely give rise to new areas of litigation, particularly concerning non-compliance, data breaches, and impacts on fundamental rights.

  • Legal professionals must be prepared to handle disputes arising from AI system implementations and advise on alternative dispute resolution mechanisms involving AI.


8. Professional Conduct and Liability

  • Use of AI systems in legal practice must comply with professional conduct guidelines to avoid issues of negligence or breaches of duty.

  • Lawyers must be vigilant about the ethical implications of AI, ensuring AI usage does not compromise client confidentiality or the integrity of legal advice.


9. AI Literacy and Training

  • Law firms must invest in training programs to enhance AI literacy among their staff, ensuring they understand the capabilities, limitations, and regulatory requirements of AI systems.

  • This includes training on recognizing when an AI system is in use and understanding its compliance obligations under the AI Act.


10. Monitoring and Reporting

  • Firms using high-risk AI systems must implement mechanisms for ongoing monitoring and reporting to ensure continuous compliance with the AI Act.

  • Incident reporting protocols must be established to address any breaches or issues promptly.


11. Impact on Legal Services Market 

  • The AI Act may drive changes in the legal services market, with increased demand for expertise in AI compliance, risk assessment, and litigation.

  • Firms may need to develop new practice areas or enhance existing ones to meet client needs related to AI regulations.


12. Engagement with Regulatory Authorities

  • Legal professionals will need to interact with the European AI Office and national competent authorities regarding compliance issues, incident reporting, and conformity assessments.

  • Active participation in consultations and standardization processes may be necessary to influence the development of guidelines and standards under the AI Act.


13. Future-Proofing Legal Practice

  • The AI Act's dynamic nature, with potential modifications and additions to high-risk use cases, requires law firms to adopt a proactive approach to compliance.

  • Staying updated on regulatory changes and emerging AI technologies will be essential for future-proofing legal practice.


Assessing AI Utilization in Business Operations

Key Steps for Compliance

  1. Identify AI systems used and supplied by the business

  2. Assess whether the AI Act applies

  3. Assess your role in the value chain

  4. Identify the risk category

  5. Identify compliance obligations
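
One way to operationalize the five steps above is a per-system assessment record. A minimal sketch, with hypothetical field names that are not prescribed by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemAssessment:
    """One record per AI system, following the five steps above.
    Field names are illustrative, not mandated by the Act."""
    name: str                              # step 1: identified system
    in_scope: bool | None = None           # step 2: does the AI Act apply?
    value_chain_role: str | None = None    # step 3: provider / deployer / ...
    risk_category: str | None = None       # step 4: prohibited / high / limited / minimal
    obligations: list[str] = field(default_factory=list)  # step 5

inventory = [AISystemAssessment("contract-review assistant")]
inventory[0].in_scope = True
inventory[0].value_chain_role = "deployer"
inventory[0].risk_category = "limited"
inventory[0].obligations = ["inform users they are interacting with AI",
                            "maintain staff AI literacy"]
print(inventory[0])
```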


Initial Assessment 

  • Begin by auditing all AI systems and models within the enterprise.

  • Determine if your business develops AI tools or applications.

  • Evaluate if your business purchases AI tools or applications.

  • Assess if your business markets AI tools or applications.

  • Check if your business offers AI applications under a different brand name.


Detailed Evaluation

  • Complete the EU AI Act Readiness Questionnaires.

  • Investigate if AI applications are integrated into your website or app.

  • Examine if AI is used in your hiring processes.

  • Identify whether any of your engineers or developers have access to and use AI coding tools.


Evaluating Your Position in the AI Value Chain

  • Level 1: Lead GPAI Provider (e.g., OpenAI)

    • Holds primary compliance obligations for general-purpose AI models.

  • Level 2: Downstream Provider (e.g., Harvey AI; if Harvey fine-tunes the model, it is itself treated as a lead GPAI provider for the modified model)

    • May have compliance obligations for GPAI models.

    • Must ensure compliance with high-risk AI use cases as necessary.

  • Level 3: Deployer

    • Must comply with high-risk AI regulations, especially if deploying new high-risk use cases.

    • Shares compliance responsibilities with upstream providers.


Key Points:

  • Compliance duties are concurrent across the value chain.

  • Compliance burdens are distributed among all stakeholders, with an emphasis on information disclosure and adherence to high-risk AI use case requirements.
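
The three levels above can be captured as a simple role-to-obligations lookup. The summaries below paraphrase the points above rather than the Act's wording:

```python
# Headline obligations per value-chain level, paraphrasing the summary above.
VALUE_CHAIN_OBLIGATIONS = {
    "lead_gpai_provider": [
        "primary compliance obligations for general-purpose AI models",
    ],
    "downstream_provider": [
        "possible GPAI obligations; fine-tuning the model makes you a lead GPAI provider",
        "ensure compliance for high-risk AI use cases as necessary",
    ],
    "deployer": [
        "comply with high-risk AI regulations, especially for new high-risk use cases",
        "share compliance responsibilities with upstream providers",
    ],
}

def obligations_for(role: str) -> list[str]:
    """Look up headline obligations; an unknown role signals a triage gap."""
    return VALUE_CHAIN_OBLIGATIONS.get(
        role, ["unknown role: revisit the value-chain assessment"])

for item in obligations_for("deployer"):
    print("-", item)
```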


General Purpose AI Obligations Under the EU AI Act 

The EU AI Act places significant responsibilities on providers of General Purpose AI (GPAI) models. These obligations ensure that AI systems are trustworthy, transparent, and accountable. Providers must comply with several key requirements:

1. Transparency and Documentation

  • Providers must disclose detailed information about the AI model, including the datasets used for training and the methodologies applied.

  • They must also outline the capabilities and limitations of the AI systems [1, 6].


2. Risk Management 

  • High-risk AI systems must undergo rigorous risk assessments, including adversarial testing, to identify and mitigate potential risks.

  • Continuous monitoring and reporting of system operations and outcomes are required to track and address any incidents [3, 4].


3. Compliance and Governance

  • Providers are subject to strict compliance checks and must ensure their AI systems adhere to the standards set by the AI Act.

  • This includes implementing automated event logs and maintaining transparency about the system's operations [2].
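
On the automated event-log point, here is a minimal sketch of structured logging around an AI system interaction. The schema is an assumption on our part: the Act requires traceable logging capability, not this particular format.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_event_log")

def log_ai_event(system_id: str, event_type: str, detail: dict) -> None:
    """Emit one structured, timestamped event per AI-system interaction.
    Field names are illustrative, not mandated by the Act."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,
        "detail": detail,
    }
    logger.info(json.dumps(record))

log_ai_event("cv-screening-v2", "inference",
             {"input_id": "application-123", "human_review": True})
```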


4. Additional Obligations for Systemic Risk AI

  • AI models trained with very large amounts of compute (above 10^25 FLOPs) are classified as posing systemic risk and must fulfill extra requirements, such as detailed risk evaluations and enhanced transparency [1].
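
The systemic-risk classification reduces to a numeric gate on cumulative training compute. A minimal sketch, with hypothetical compute figures:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training-compute threshold

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if a GPAI model's training compute exceeds the Act's threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical figures: a frontier-scale training run vs. a modest fine-tune.
print(presumed_systemic_risk(5e25))  # True  -> extra systemic-risk duties
print(presumed_systemic_risk(3e22))  # False
```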


These measures aim to ensure that General Purpose AI systems deployed in the EU are safe, respect fundamental rights, and promote innovation while mitigating risks associated with advanced AI technologies.


Transparency Obligations Under the AI Act

The AI Act imposes several transparency obligations on AI providers to ensure clarity and accountability in the use of AI systems. AI-generated text that is published to inform the public on matters of public interest must be disclosed as artificially generated. Providers are also required to inform users when they are interacting with an AI system, to avoid any confusion. Furthermore, any content generated by "deepfake" technology, including images, audio, or video, must be clearly identified as such. Additionally, the use of emotion recognition or biometric categorization technologies must be disclosed where such use is legally permitted. These measures are designed to enhance transparency and protect individuals from potential misuse of AI technologies.
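
As an illustration of these disclosure duties, here is a sketch of how a provider might attach disclosure notices to AI outputs. The label structure is an assumption on our part; the Act mandates disclosure, not this schema.

```python
from dataclasses import dataclass

@dataclass
class LabeledOutput:
    """AI output plus the disclosures the Act calls for.
    This label format is illustrative, not mandated."""
    content: str
    ai_generated: bool
    is_deepfake: bool = False
    notice: str = ""

def disclose(content: str, is_deepfake: bool = False) -> LabeledOutput:
    notice = ("This image/audio/video was artificially generated or manipulated."
              if is_deepfake else "You are interacting with an AI system.")
    return LabeledOutput(content, ai_generated=True,
                         is_deepfake=is_deepfake, notice=notice)

print(disclose("Here is your draft clause...").notice)
```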


The UK Position (Legislation-Light Approach)

The UK Government aims to balance innovation with regulation in the field of AI. Individual regulators will define sector-specific approaches within their own remits.


The regulatory framework focuses on five core principles:

  1. Safety, Security, and Robustness

  2. Transparency and Explainability

  3. Fairness

  4. Accountability and Governance

  5. Contestability and Redress

Initially, the regulation will be non-statutory; although a private member's bill has been introduced, it is unlikely to become law. The government's response is ongoing, and the UK Labour Party is likely to push forward with AI regulation if it wins the next election.


 
