Yesterday, the European Parliament formally endorsed the Artificial Intelligence Act, marking a significant milestone in the regulation of AI and its rapidly evolving role within businesses. The Act is likely to take effect from May.
The Act is the first comprehensive legal framework for managing the risks posed by AI within businesses, with a strong emphasis on managing high-risk AI. All businesses with touchpoints in the EU will be affected by this regulation, which has been created to ensure the responsible development and deployment of AI technologies. Creative industries that use AI to generate artwork, music, images or gaming content already face the risk of copyright infringement. Businesses failing to comply with The Act could be fined up to €35m or 7% of global annual turnover per breach. We encourage businesses to risk-assess their use of AI and to conduct a gap analysis of their compliance with The Act.
Whilst some areas of The Act are still to be resolved, we want to share what you need to know, what your obligations are, and how you can prepare. This blog builds on the advice given in Toro's recent AI webinar, which can be watched here.
What do we know?
Risk-Based Approach
The EU AI Act classifies AI systems into four risk tiers:
- Unacceptable risk
- High risk
- Limited risk
- Minimal risk
Companies developing high-risk AI systems, such as those used in healthcare, transportation, finance and law enforcement, need to proactively assess and prepare to comply with their new obligations. These include:
- Fundamental rights impact and conformity assessments
- Operational monitoring
- Risk and quality management systems
- Public registration
- Other transparency requirements
Prohibited Practices
The new regulations prohibit specific AI applications that manipulate human behaviour or exploit the vulnerabilities of certain groups, helping to ensure the ethical deployment of AI technologies. Prohibited practices include biometric categorisation systems based on sensitive characteristics and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
Transparency & Accountability
The regulation underscores the importance of transparency and accountability in AI development and deployment. Developers must provide comprehensive documentation on how their AI systems operate, including data usage, the algorithms employed, and potential biases. Organisations must establish mechanisms for human oversight and accountability to address any negative impacts of AI systems. General-purpose AI (GPAI) systems and their models must meet transparency requirements, comply with EU copyright law, and publish detailed summaries of their training content. General-purpose AI systems are systems with a wide range of possible uses; they can learn and apply knowledge across many different tasks, with ChatGPT being a well-known example. More powerful GPAI models posing systemic risks are subject to additional requirements, such as model evaluations, systemic risk assessments, and incident reporting. Additionally, artificial or manipulated images, audio or video content ("deepfakes") must be clearly labelled.
Measures to Support Innovation & SMEs
National-level regulatory sandboxes and real-world testing facilities will be established to support SMEs and start-ups in developing and training innovative AI technologies before market release.
What should you be doing?
The first step we’d recommend is to conduct a gap analysis and create an internal road map to compliance.
Start by asking yourself the following questions:
- Ownership – Do you have an inventory of where AI is being used within your organisation? Who is accountable for the AI systems you have in place? Have leaders been engaged with the systems you are using, and do they understand the impact of the new EU AI Act?
- Risk – Have you assessed where AI is being used and what the potential risk is for each of these use cases?
- Governance – Do you have an AI governance policy and framework to ensure responsible AI development and deployment across your organisation?
- Ongoing Compliance – Have you established monitoring mechanisms to oversee AI systems' compliance with the new Act's requirements? Do you have a process to regularly review and update these mechanisms as the regulations evolve?
These questions should be used as the starting point for building and implementing an AI governance strategy within your organisation. This strategy will involve mapping out and categorising the AI systems that you use or plan to use, based on the risk levels laid out in the framework; the sketch below shows one way this might look in practice.
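By way of illustration only, here is a minimal sketch of what a first-pass AI inventory and risk categorisation might look like if kept in code. The system names, owners and fields are entirely hypothetical, and The Act does not prescribe any particular format for an inventory; the point is simply to record each system, who owns it, and which of the four risk tiers it falls into.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # The four risk tiers defined by the EU AI Act
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str        # internal name of the system
    owner: str       # who is accountable for its use
    purpose: str     # what the system is used for
    tier: RiskTier   # assessed risk tier under The Act

# Hypothetical entries for illustration only
inventory = [
    AISystem("cv-screening-tool", "Head of HR",
             "Shortlisting job applicants", RiskTier.HIGH),
    AISystem("marketing-copy-assistant", "Marketing Lead",
             "Drafting campaign copy", RiskTier.MINIMAL),
]

# Flag the systems that attract the heaviest obligations
for system in inventory:
    if system.tier is RiskTier.UNACCEPTABLE:
        print(f"{system.name}: prohibited practice - must be discontinued")
    elif system.tier is RiskTier.HIGH:
        print(f"{system.name}: high risk - conformity assessment, "
              "monitoring and registration required")
```

However you record it, a register along these lines gives you the ownership and risk answers from the questions above, and makes it much easier to see which systems carry high-risk obligations.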
The approval of the EU AI Act represents a huge milestone in aligning technological progress with the protection of fundamental rights. As it prepares to enter into force, UK businesses need to start taking proactive steps to align with its requirements and maintain competitiveness in the European market.
It's important that you stay informed and monitor developments and updates to the EU AI Act and, if needed, speak to compliance experts such as Toro for insights and guidance. Toro will also be posting regular updates covering changes in the legislation over the coming months.
If you would like to speak to a member of the Toro team about what the AI Act means for your business and how we can help get you prepared, then please get in touch.