EU AI Act Compliance
Are you developing a solution based on Artificial Intelligence, or integrating AI into your processes, products, or services? Get ready: you will need to comply with a new regulation, the EU AI Act.
How can Westminster & Partners help your organization?
- AI Startups
- Venture Capitalists
30+ European AI Startups Compliant
10+ CE Markings Granted
5+ European Jurisdictions
- Creating a mapping of AI systems to identify existing and future AI projects
- Defining the project team
- Estimating the budget for the AI project
- Estimating the budget for the associated compliance
- Assessing the risk level associated with different AI systems
The AI Act aims to ensure that artificial intelligence systems and models marketed within the European Union are used ethically and safely, in keeping with the fundamental rights recognized by the EU.
It is also intended to strengthen the competitiveness of European businesses and foster innovation in AI.
By reducing the risks of AI misuse, it strengthens users' confidence in AI and encourages its adoption.
The AI Act applies to all providers, distributors, and deployers of AI systems and models: legal entities (companies, foundations, associations, research laboratories, etc.) that have their registered office in the European Union, as well as entities located outside the EU that market their AI system or model within the European Union.
The level of regulation and associated obligations depend on the risk level posed by the AI system or model:
- “Minimal Risk,” i.e., AI systems that present minimal risk to individuals’ rights and safety, such as spam filters or AI-enabled video games. These systems face no mandatory obligations under the Act, although voluntary codes of conduct are encouraged.
- “Limited Risk,” i.e., AI systems subject to specific transparency and identification requirements, such as informing users that they are interacting with an AI system. In addition, general-purpose AI models, which perform broadly applicable functions such as image and speech recognition, translation, and content generation, must comply with copyright law and publish summaries of the content used for training. Models that pose systemic risks are subject to more stringent requirements, such as model evaluations, risk assessments, and testing and reporting obligations.
- “High Risk,” i.e., AI systems such as those used in healthcare or critical infrastructure. These systems are subject to stricter requirements, including technical documentation, data governance, human oversight, security, conformity assessments, and reporting obligations.
- “Unacceptable Risk,” i.e., AI systems that present a clear threat to the safety, livelihoods, and rights of people. These systems are banned (with narrow exceptions for law enforcement).
Different deadlines, ranging from 6 to 36 months after the Act’s entry into force, apply depending on the risk level of AI systems and models. Regardless of the deadline, it is essential to prepare and anticipate compliance, as it may disrupt companies’ technology, product, and legal roadmaps.
While this progressive timeline was designed by the regulator to give companies enough time to comply, it is crucial to anticipate the next steps.