Ciara O'Buachalla

The EU AI Act and Its Implications for Lawyers

Introduction

On the 2nd of February 2024, the European Union finally reached consensus on a regulatory framework that aspires to be the global blueprint for AI governance: the AI Act ("the Act"). The EU's legislative journey towards the Act has been marked by robust debate and negotiation, reflecting a collective effort to balance fostering innovation with ensuring the safety and rights of individuals in the digital age.


Who, What, Why, When, Where, and How of the Act:

  • Who: Regulates AI developers, deployers, and users within the EU and beyond.

  • What: Provides a legal framework for the development and use of AI. The Act doesn't offer a specific definition, but describes AI systems as "software that, for a given set of human-defined objectives, can generate outcomes such as predictions, recommendations or decisions influencing relevant human outcomes, by using, among other techniques, machine learning, statistics and search." In practice, this means AI systems analyze data, learn from it, and use the learned patterns to achieve specific goals.

  • Why: Establishes standards to ensure AI’s ethical use and to protect EU citizens’ fundamental rights.

  • When: Developed over several years, with a proposed timeline for implementation over the next 6-36 months.

  • Where: The Act applies to entities that supply or operate AI systems in the EU, whether based inside or outside the EU; to entities within the EU that use AI systems; and to suppliers and users of AI systems based outside the EU, where the output of those systems is used within the EU. The term "provider" refers to the party responsible for creating an AI system or introducing it to the EU market.

  • How: Through a risk-based approach, setting compliance obligations and fines for non-compliance.


The Risk-Based Approach

The Act classifies AI systems as posing unacceptable, high, limited, or minimal risk, with corresponding regulatory strictness: unacceptable-risk AI is banned, high-risk AI requires extensive compliance, limited-risk AI carries transparency obligations, and minimal-risk AI is subject to minimal requirements. (A short code sketch after the list below summarizes this tiered structure.)

  1. Unacceptable Risk AI: These systems are banned due to their inherent dangers, such as real-time facial recognition in public spaces, social scoring, and generative AI used for malicious purposes like creating deepfakes for disinformation or inciting violence.

  2. High-Risk AI: These systems require stringent compliance measures due to their significant impact on safety, fundamental rights, or fairness. This includes AI used in critical infrastructure (e.g., energy grids) or in sensitive domains (e.g., recruitment, where an AI-powered recruitment system biased towards specific demographics could jeopardize equal opportunities). Compliance obligations could include:

    1. Transparency and explainability: Ensuring users understand how the AI generates outputs and mitigating bias.

    2. Data governance: Implementing robust data security and privacy measures for the training data used.

    3. Human oversight: Establishing mechanisms for human intervention to prevent harmful outputs.

    4. Risk assessment and mitigation: Conducting thorough risk assessments and adopting measures to address identified risks.

  3. Limited Risk AI: These systems pose some potential risks but require less stringent measures, possibly focusing on data protection and transparency. Examples include chatbots with basic functionalities or spam filters. Imagine a customer service chatbot providing basic information retrieval, which may occasionally misinterpret user intent but has minimal overall impact.

  4. Minimal or No Risk AI: These systems pose negligible risks and require no specific regulations. They include simple games or basic image filters. Think of a mobile game using AI for character movement with no personal data involved, posing no significant risk to users.
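
For readers who prefer to see the structure in code, the tier-to-obligations mapping can be summarized as a simple lookup. This is a minimal, illustrative sketch: the tier names follow the Act, but the obligation strings are shorthand summaries of the points above, not statutory language.

```python
# Illustrative sketch of the Act's four risk tiers and the kind of
# obligations each tier attracts. Obligation strings are informal
# summaries of the list above, not legal text.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # extensive compliance obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific requirements


OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "transparency and explainability",
        "data governance for training data",
        "human oversight mechanisms",
        "risk assessment and mitigation",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: [],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]


print(obligations_for(RiskTier.HIGH))
```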


Timeline for Implementation

The Act proposes a phased implementation, allowing organizations time to adapt.

  • Early 2024: The provisional agreement needs to be formally adopted by both the Council and Parliament. This might involve minor adjustments during legal scrubbing.

  • 2024-2025: Once adopted, a transition period of 18-24 months is likely before the AI Act becomes fully enforceable. This will allow organizations time to adapt and comply.

  • 2026: The AI Act is expected to be fully applicable by 2026.


Fines

For individual offenders: Up to €30,000 or 3% of their global annual turnover, whichever is higher.

For companies:

  • For non-compliance with prohibitions: Up to €35 million or 7% of their global annual turnover, whichever is higher.

  • For breach of obligations for high-risk AI: Up to €15 million or 3% of their global annual turnover, whichever is higher.

  • For providing incorrect information: Up to €7.5 million or 1.5% of their global annual turnover, whichever is higher.

The Act considers the size, nature, and severity of the infringement, intention, and any remedial measures taken when determining the fine amount, and it provides for reduced fines for SMEs and startups. Besides fines, the Act empowers member states to impose other corrective measures, such as temporary bans on using a non-compliant AI system or even suspending its operation. The sketch below illustrates the "whichever is higher" ceiling arithmetic.
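
Because the "whichever is higher" wording is easy to misread, here is a minimal sketch of the ceiling calculation, using the company-tier figures quoted above. It computes only the statutory maximum; the actual fine in any case would be set by the regulator after weighing the factors just described.

```python
# Minimal sketch: a fine ceiling is the greater of a fixed amount or a
# percentage of global annual turnover. Figures mirror the company tiers
# quoted in this post; they are caps, not the fine actually imposed.

def fine_ceiling(fixed_cap_eur: float, turnover_pct: float,
                 global_annual_turnover_eur: float) -> float:
    """Upper bound on a fine under a 'whichever is higher' rule."""
    return max(fixed_cap_eur, turnover_pct * global_annual_turnover_eur)


# Example: a company with €2 billion global turnover breaching a prohibition.
# 7% of €2bn (€140m) exceeds the €35m fixed amount, so the cap is €140m.
ceiling = fine_ceiling(35_000_000, 0.07, 2_000_000_000)
print(f"Maximum fine: €{ceiling:,.0f}")  # Maximum fine: €140,000,000
```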


The Impact of the Act on Law Firms

Legal tech providers must navigate the Act's requirements, potentially adjusting their AI offerings. Law firms must ensure their use of AI in practice complies with the new regulations, influencing how legal services are delivered.

Advising their clients

Law firms will encounter a wave of work related to advising clients on compliance with the new AI Act, especially for those operating within the EU or serving EU citizens. This involves understanding the categorization of AI systems under the Act and ensuring clients' AI applications comply with the relevant requirements.

The Act establishes extensive compliance obligations for high-risk AI systems, including risk management systems, data governance, technical documentation, and transparency requirements. Law firms will need to guide clients through these obligations to avoid legal and financial penalties.

For law firms using AI technologies

The Act mandates adherence to specific transparency and disclosure requirements, especially for AI systems that interact with natural persons. This means that legal tech tools like chatbots or document automation systems used by law firms must clearly inform users that they are interacting with AI.

Law firms will also need to ensure that any high-risk AI systems they use comply with the Act's provisions on data governance, accuracy, and human oversight, among others. This includes internal systems for document review, case prediction, or any other AI tool that could fall under the high-risk category.

Developing their own LLMs and in-house legal AI tools

Law firms developing their own AI tools will need to navigate the AI Act's requirements from the ground up. This includes registering high-risk AI systems in an EU database before deployment and ensuring these tools meet all regulatory requirements regarding transparency, safety, and data governance.

The AI Act encourages the development of innovative AI systems through regulatory sandboxes, which may offer law firms opportunities to test and refine their AI tools in a controlled environment with regulatory support. This is particularly relevant for smaller providers and startups.

Using out-of-the-box legal AI tech products

When adopting third-party AI solutions, law firms must verify that these products are compliant with the AI Act, especially if they are classified as high-risk. This involves understanding the product's risk category, its data handling practices, and any transparency obligations.


Determining the AI category under the EU AI Act for a law firm using AI for client advising requires assessing the specific functionalities and potential risks involved (a rough triage sketch in code follows this list):

  1. High-Risk Category: If the AI directly provides legal advice or makes decisions that significantly impact clients (e.g., recommending specific legal actions, determining eligibility for benefits), it could fall under the high-risk category. This is due to the potential for harm if biased or inaccurate advice leads to adverse legal consequences for clients. Additionally, if the AI processes sensitive personal data like financial information, health records, or criminal history, it might be classified as high-risk due to privacy concerns and potential discriminatory outcomes.

  2. Limited Risk Category: On the other hand, if the AI is used for limited legal tasks like document review, contract analysis, or legal research, it might fall under the limited risk category. These tasks involve less direct impact on clients' final decisions and potentially handle less sensitive data. Additionally, if the AI mainly provides informational support without directly advising clients on legal actions, it might be considered limited risk. This assumes the information accuracy is adequately ensured and doesn't significantly influence client decisions.

  3. Minimal or No Risk Category: If the AI is used for basic tasks like scheduling appointments, managing client documents, or providing general legal information resources, it might fall under the minimal or no risk category. These tasks pose minimal impact on clients' legal outcomes and potentially handle little to no sensitive data.
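
The triage above can be expressed as a rough decision sketch. The input flags and the order of the checks are illustrative assumptions distilled from the three categories just described, not criteria taken from the Act; a real classification requires legal analysis of the specific system and its context of use.

```python
# Hypothetical first-pass triage of a law firm's AI use case into the
# three categories discussed above. The flags are invented for
# illustration and do not come from the text of the Act.

def triage_law_firm_ai(advises_on_legal_action: bool,
                       processes_sensitive_data: bool,
                       supports_legal_work: bool) -> str:
    """Rough risk triage; not a substitute for legal analysis."""
    if advises_on_legal_action or processes_sensitive_data:
        return "high"     # direct client impact or sensitive personal data
    if supports_legal_work:
        return "limited"  # e.g., document review, contract analysis, research
    return "minimal"      # e.g., scheduling, document management, general info


# A document-review tool that handles sensitive client files:
print(triage_law_firm_ai(advises_on_legal_action=False,
                         processes_sensitive_data=True,
                         supports_legal_work=True))  # -> "high"
```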


Practical Steps for Compliance

For law firms and in-house lawyers working towards compliance with the EU AI Act, the following practical steps are advisable:

  1. Review and Classify AI Tools: Conduct an inventory to classify AI tools and systems according to the AI Act’s risk categories, focusing on identifying high-risk or limited-risk systems (a minimal inventory sketch follows this list).

  2. Risk Assessments for High-Risk Systems: Perform thorough risk assessments for AI systems classified as high-risk, implementing measures to comply with requirements related to transparency, data governance, and cybersecurity.

  3. Data Protection Practices: Enhance data protection practices in alignment with GDPR, especially for AI systems that process personal data.

  4. Training and Awareness: Develop and deliver training programs for legal professionals on the AI Act's requirements and compliance obligations.

  5. Policies for Ethical AI Use: Establish clear policies for the ethical use of AI, addressing potential biases and ensuring fairness in AI applications.

  6. Communication with AI Vendors: Maintain open lines of communication with AI vendors to ensure their tools comply with the AI Act, integrating contractual safeguards to mitigate risk.

  7. Collaboration with Tech Suppliers with Strong AI Governance: Work closely with technology suppliers that demonstrate robust AI governance practices, ensuring that their AI tools and systems meet legal and ethical standards.
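
As a starting point for step 1, the inventory can be as simple as a structured register of each tool and its assessed tier. The sketch below is a hypothetical, minimal model: the field names and example entries are invented for illustration, and a real register would also track vendor contracts, data protection impact assessments, and review dates.

```python
# Minimal sketch of an AI-tool inventory (step 1) kept as an in-memory
# register. Tool names and vendors below are fictional examples.
from dataclasses import dataclass, field


@dataclass
class AIToolRecord:
    name: str
    vendor: str
    purpose: str
    risk_tier: str  # "unacceptable" | "high" | "limited" | "minimal"
    processes_personal_data: bool
    compliance_notes: list[str] = field(default_factory=list)


inventory = [
    AIToolRecord("ContractReviewBot", "ExampleVendor Ltd", "contract analysis",
                 "limited", processes_personal_data=True),
    AIToolRecord("IntakeChatbot", "In-house", "client FAQ triage",
                 "limited", processes_personal_data=False,
                 compliance_notes=["must disclose AI interaction to users"]),
]

# Flag the records that need a full high-risk assessment (step 2).
needs_assessment = [t.name for t in inventory if t.risk_tier == "high"]
print(needs_assessment or "No high-risk tools registered")
```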


Final Thoughts

As AI continues to evolve, so too will its regulation. The AI Act is not the final word but the beginning of an ongoing dialogue and evolution in the regulatory approach to AI, setting a precedent for jurisdictions worldwide. The Act is poised to revolutionize industries by mandating a comprehensive framework for the ethical use, development, and deployment of artificial intelligence across Europe. This landmark legislation aims to foster innovation while ensuring AI technologies are safe, transparent, and aligned with data protection standards. By setting clear guidelines, the Act is expected to bolster consumer and business confidence in AI applications, encouraging their broader adoption and integration into various services and operations.

In the legal sector, the Act offers unique opportunities for law firms to lead in compliance advisory, navigate the integration of AI across jurisdictions, and innovate within legal tech under a well-defined regulatory environment. It signals a shift towards more efficient, accurate, and accessible legal services, leveraging AI to benefit the industry and its clientele. Overall, the Act represents a significant stride towards harmonizing technological advancement with ethical and societal values across all sectors.
