
Navigating the EU AI Act: A Comprehensive Overview

Artificial Intelligence Regulations

In a significant stride toward regulating artificial intelligence (AI), the European Union proposed the EU AI Act as a draft regulation on April 21, 2021. Fast forward to December 8, 2023, when a political agreement among the three EU institutions marked a crucial step toward making this regulatory framework a reality.

The EU AI Act sets out a robust body of rules for both providers and users of AI systems, delineating transparency and reporting obligations for entities operating in the EU market. Its reach extends beyond European companies: any AI system that affects individuals within the EU falls under the act, regardless of where the system was developed or deployed.

While a major portion of the EU AI Act was agreed upon in December, some technical details are yet to be finalized. Technical teams will continue to collaborate to refine these details, with the Parliament’s Internal Market and Civil Liberties Committees slated to vote on the agreement around January 25, 2024. Following this vote, the final text of the EU AI Act is expected to be published in the Official Journal of the EU in spring 2024, initiating enforcement timelines.

Enforcement under the EU AI Act brings a tiered structure of fines, signaling a stern approach to non-compliance. Violations could incur fines of up to 7% of global annual turnover or €35 million for prohibited AI activities, while other infractions might result in penalties of up to 3% of global annual turnover or €15 million. Businesses supplying incorrect information may face fines of up to 1.5% of global annual turnover or €7.5 million, with specific caps in place for SMEs and startups.
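As a rough illustration of how these caps combine, the Python sketch below computes the maximum fine per violation tier. It assumes the commonly reported reading of the Act, under which the cap is whichever of the two figures is higher for larger firms and whichever is lower for SMEs and startups; the tier names and the `fine_cap` function are illustrative, not terms from the Act, and none of this is legal advice.

```python
# Minimal sketch, not legal advice. Assumes the commonly reported reading:
# the cap is whichever is HIGHER of a turnover percentage and a fixed amount,
# except for SMEs/startups, where it is whichever is LOWER. Tier names and
# this function are illustrative, not definitions from the Act's text.

TIERS = {
    "prohibited_practices":  (0.07,  35_000_000),  # 7% or EUR 35M
    "other_violations":      (0.03,  15_000_000),  # 3% or EUR 15M
    "incorrect_information": (0.015,  7_500_000),  # 1.5% or EUR 7.5M
}

def fine_cap(violation: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the maximum possible fine for a violation tier (the cap only;
    actual penalties are set case by case and may be far lower)."""
    pct, fixed = TIERS[violation]
    turnover_based = pct * global_turnover_eur
    return min(turnover_based, fixed) if is_sme else max(turnover_based, fixed)

# A firm with EUR 1B global turnover committing a prohibited practice:
# max(0.07 * 1_000_000_000, 35_000_000) = EUR 70M cap.
print(f"EUR {fine_cap('prohibited_practices', 1_000_000_000):,.0f}")
```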

For businesses engaged in AI, the onus of compliance with the EU AI Act lies squarely on their shoulders, necessitating adequate preparation during the lead-up to enforcement. The extent of compliance obligations is contingent upon the level of risk posed by an AI system along the value chain.

Under the AI Act’s tiered compliance framework, the most rigorous requirements are directed at AI systems classified as “high-risk” and at general-purpose AI systems deemed high-impact with “systemic risks.” Depending on the risk threshold of their systems, businesses may need to undertake various responsibilities, including conducting risk assessments, undergoing conformity assessments against EU-approved technical standards, maintaining meticulous technical documentation, and keeping records.

Transparency and disclosure obligations also play a crucial role, varying based on the risk level (summarized in the sketch after this list):

  1. Prohibited: These AI systems are banned outright; rather than carrying transparency obligations, they must be removed from the market.
  2. High-risk: Registering high-risk AI systems on the EU database before market placement becomes mandatory.
  3. Limited-risk: Informing individuals exposed to permitted emotion recognition or biometric categorization systems and obtaining their consent. Disclosure and clear labeling are required for visual or audio “deep fake” content manipulated by AI.
  4. Minimal risk: No transparency obligations are imposed.
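To make the tiering concrete, here is a minimal Python lookup that mirrors the four tiers above; the tier keys and obligation strings paraphrase this article rather than quoting the Act.

```python
# Illustrative only: a lookup table mirroring the four transparency tiers
# above. Keys and obligation strings paraphrase this article, not the Act.

TRANSPARENCY_OBLIGATIONS: dict[str, list[str]] = {
    "prohibited":   ["remove the system from the EU market"],
    "high_risk":    ["register the system in the EU database before market placement"],
    "limited_risk": [
        "inform individuals exposed to emotion recognition or biometric "
        "categorization systems and obtain their consent",
        "disclose and clearly label AI-manipulated 'deep fake' audio/visual content",
    ],
    "minimal_risk": [],  # no transparency obligations
}

def obligations_for(tier: str) -> list[str]:
    """Return the transparency obligations for a risk tier, or raise on typos."""
    if tier not in TRANSPARENCY_OBLIGATIONS:
        raise ValueError(f"unknown risk tier: {tier!r}")
    return TRANSPARENCY_OBLIGATIONS[tier]

print(obligations_for("limited_risk"))
```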

Furthermore, businesses must ensure that AI systems continue to meet the requirements corresponding to their risk level; when the original provider or a third party introduces substantial modifications to a system’s intended purpose, the compliance duties can shift to the party that made the modification.
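One way to picture that shift in responsibility is the small helper below; it is purely illustrative and assumes the simplified reading that a party substantially modifying a system’s intended purpose takes on the provider’s duties.

```python
# Purely illustrative; the names and rule here are a simplified reading,
# not text from the Act.

def responsible_party(original_provider: str,
                      modifier: str | None = None,
                      substantial_modification: bool = False) -> str:
    """Whoever substantially modifies a system's intended purpose is treated
    as the provider and inherits its compliance duties (simplified)."""
    if modifier is not None and substantial_modification:
        return modifier
    return original_provider

print(responsible_party("Acme AI", modifier="Integrator GmbH",
                        substantial_modification=True))  # -> Integrator GmbH
```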

As the EU AI Act progresses toward finalization and implementation, businesses operating in the AI space must remain vigilant, aligning their practices with the forthcoming requirements to ensure compliance and ethical AI use.
