EU AI regulation is moving at pace. What you need to think about now.

Introduction

The EU is rapidly progressing its rules to regulate AI. While the rules are likely to come into force at some point in 2024, the current shape of the regulation offers an interesting insight into what will be expected of organisations. TL;DR: start by assessing how you want to use AI, and whether that use is acceptable.

What is AI?

As defined by the OECD: "An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment." (https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449)

Different risk levels

The new rules establish obligations for providers and users depending on the level of risk posed by the AI system. While many AI systems pose minimal risk, they still need to be assessed.

Risk level: Unacceptable
Action: Banned
Examples include:
  - Cognitive behavioural manipulation of people or specific vulnerable groups: an example given is voice-activated toys that encourage dangerous behaviour in children.
  - Social scoring: classifying people based on behaviour, socio-economic status, or personal characteristics.
  - Biometric identification and categorisation of people.
  - Real-time and remote biometric identification systems, such as facial recognition.

Risk level: High risk
Action: Lots, including:
  - Ongoing assessment
  - Registration
  - Risk management
  - Data governance
  - Monitoring
  - Record-keeping practices
  - Detailed documentation
  - Transparency
  - Human oversight
  - Standards for accuracy and robustness
  - Cybersecurity
Examples include:
  - Management and operation of critical infrastructure.
  - Education and vocational training.
  - Employment, worker management and access to self-employment.
  - Access to and enjoyment of essential private services and public services and benefits.
  - Law enforcement.
  - Migration, asylum and border control management.
  - Assistance in legal interpretation and application of the law.

Risk level: Limited risk
Action: Transparency requirements
Examples include:
  - AI systems that generate or manipulate image, audio or video content.

Who does it apply to?

It looks like it will apply if you are (a) operating in the EU or (b) targeting services at EU residents.

What should you be doing?

This is moving quickly, and the specific rules and regulations are still crystallising. A good start is to:

  1. Formalise your AI governance. How do you oversee your AI development? Start by ensuring you have skilled, responsible people who know both how you are using AI and the standards you apply to it. Don't just leave it to IT.
  2. Create an inventory. List where you are using AI, and how. Start to build your documentation on use, including (where relevant) design, components, training data and failure modes.
  3. Begin to assess the risk. For each use case, consider (a) whether it's acceptable under the regulation and (b) whether it's high or limited risk.
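
The inventory and risk-assessment steps above can be sketched as a simple record per use case. This is purely illustrative: the field names, risk tiers and example use cases below are our own assumptions, not a format the regulation prescribes.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright under the draft rules
    HIGH = "high"                  # registration, documentation, oversight, etc.
    LIMITED = "limited"            # transparency requirements
    MINIMAL = "minimal"            # still worth recording and reviewing

@dataclass
class AIUseCase:
    name: str
    purpose: str
    risk_level: RiskLevel
    training_data: str = "unknown"            # document where relevant
    known_failure_modes: list = field(default_factory=list)

# Hypothetical inventory entries
inventory = [
    AIUseCase("cv-screening", "Rank job applicants", RiskLevel.HIGH),
    AIUseCase("marketing-images", "Generate campaign imagery", RiskLevel.LIMITED),
]

# Step 3: flag anything unacceptable, and list what needs the high-risk controls
banned = [u.name for u in inventory if u.risk_level is RiskLevel.UNACCEPTABLE]
high_risk = [u.name for u in inventory if u.risk_level is RiskLevel.HIGH]
```

Even a lightweight register like this gives you something to review as the requirements firm up, and a place to attach the documentation the high-risk tier will demand.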

This is really about starting a process, rather than trying to guess what the regulations will cover, but the above three steps offer a fair starting point. We’ll keep an eye on the regulations as they develop, and will issue more detailed guidance as the requirements become clear.
