AI Assurance techniques

There’s a lot of noise around AI regulation, governance and control at the moment. While different groups often seem to be taking different approaches, one area of apparent convergence is AI assurance.

Put simply: how does an organisation ensure that the AI it uses or develops stays within acceptable risk, legal, quality and ethical boundaries?

Most standards in this area focus on the same core techniques. For each technique below, we describe what it is and offer a tip for applying it.

Risk and impact assessments

The aim of risk and impact assessments is to determine whether a system, model or implementation is likely to have an effect on given risk parameters, and to agree actions to manage that risk within acceptable bounds.

The risk parameters could include financial, environmental, bias, equality, human rights and data privacy risks: there are many sources that highlight them.

Tip: involve diverse specialists in completing this exercise. Hire them if you need to; you’ll need a broad range of skills around the table.
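
To make this concrete, here is a minimal sketch of how a risk register entry might be captured in code. The field names, the 1–5 likelihood and impact scales, the likelihood-times-impact score and the escalation threshold are all illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One entry in an AI risk register (illustrative fields, not a standard)."""
    parameter: str          # e.g. "data privacy", "bias", "environmental"
    description: str        # what could go wrong, in plain language
    likelihood: int         # assumed scale: 1 (rare) to 5 (almost certain)
    impact: int             # assumed scale: 1 (negligible) to 5 (severe)
    mitigations: list[str] = field(default_factory=list)
    owner: str = "unassigned"

    @property
    def score(self) -> int:
        # A common (but assumed) convention: likelihood x impact.
        return self.likelihood * self.impact

# Example: flag entries above an agreed risk appetite for escalation.
register = [
    RiskEntry("bias", "Model underperforms for minority groups", 3, 4,
              ["bias audit before release"], owner="ML lead"),
    RiskEntry("data privacy", "Training data contains personal data", 2, 5,
              ["DPIA", "anonymisation review"], owner="DPO"),
]
RISK_APPETITE = 10  # assumed threshold, agreed by the review board
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    status = "ESCALATE" if entry.score > RISK_APPETITE else "monitor"
    print(f"{entry.parameter}: score {entry.score} -> {status}")
```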

Audit and assessment

There are several different types of audit and assessment you could look at. These include:

Bias audit: Checking inputs and outputs to determine whether there is unfair bias (see the sketch after this section’s tip).

Compliance audit: Checking compliance with a law or regulation (and there are plenty being developed).

Conformity assessments and certification: Checking compliance with an external standard.

Tip: it’s likely you’ll need a combination of audit types in your project: some based on subjective assessments, others on prescriptive standards. Schedule them properly to avoid audit fatigue, and use a mix of people to get a broad view.
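
As a flavour of what a bias audit can look like in practice, the sketch below runs a simple demographic-parity check on model decisions. The group labels, the sample data and the 0.8 threshold (a heuristic sometimes called the four-fifths rule) are assumptions for illustration; choose real thresholds with legal and ethics input.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, decision) pairs, where decision is 0 or 1."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_ratio(outcomes):
    """Minimum selection rate divided by maximum: 1.0 means perfectly equal rates."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit sample: (protected group, model decision).
sample = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
          ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
ratio, rates = demographic_parity_ratio(sample)
print(f"selection rates: {rates}, parity ratio: {ratio:.2f}")
if ratio < 0.8:  # assumed threshold
    print("Potential unfair bias: investigate before deployment.")
```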

Performance testing

Checking the performance of a system against quantitative requirements or benchmarks.

Tip: define acceptable performance benchmarks up front.
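
In practice this can be as simple as gating releases on pre-agreed thresholds, as in the sketch below. The metric names, threshold values and the evaluate_model placeholder are hypothetical; substitute your own evaluation harness and agreed benchmarks.

```python
# A minimal sketch of benchmark-gated performance testing.
BENCHMARKS = {
    "accuracy": 0.90,       # minimum acceptable (assumed)
    "latency_p95_ms": 200,  # maximum acceptable (assumed)
}

def evaluate_model() -> dict:
    # Placeholder: in practice, run the model on a held-out test set
    # and measure latency under representative load.
    return {"accuracy": 0.93, "latency_p95_ms": 180}

def check_benchmarks(measured: dict) -> bool:
    ok = True
    for metric, threshold in BENCHMARKS.items():
        value = measured[metric]
        # Assumed convention: "_ms" metrics are upper bounds, others lower bounds.
        passed = value <= threshold if metric.endswith("_ms") else value >= threshold
        print(f"{metric}: {value} ({'PASS' if passed else 'FAIL'}, threshold {threshold})")
        ok &= passed
    return ok

if __name__ == "__main__":
    assert check_benchmarks(evaluate_model()), "Performance benchmarks not met"
```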

Formal verification

Formal verification checks whether a system satisfies its requirements using formal mathematical models, or uses those models to prove properties of its outputs.

Tip: ensure you’re clear about (and have specified) the required characteristics of the output. Your risk assessment should be helpful here.
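
To illustrate, the sketch below uses the z3 SMT solver (pip install z3-solver) to prove an output property of a tiny linear scoring model: that its score stays within declared bounds for every input in a declared range. The toy model, its weights and the property are assumptions chosen for illustration; real systems need far richer specifications.

```python
from z3 import Real, Solver, And, Or, sat  # pip install z3-solver

# A toy scoring model: score = 0.6*income + 0.4*history, inputs in [0, 1].
income, history = Real("income"), Real("history")
score = 0.6 * income + 0.4 * history

s = Solver()
s.add(And(income >= 0, income <= 1, history >= 0, history <= 1))
# Ask the solver for a counterexample to the property 0 <= score <= 1.
s.add(Or(score < 0, score > 1))

if s.check() == sat:
    print("Property violated, counterexample:", s.model())
else:
    print("Verified: score is always within [0, 1] for in-range inputs.")
```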

There is a range of methods for assuring AI models, each designed for a different part of the development and deployment lifecycle and intended to be used by a different team. Together, they can offer broad assurance for your system or model. We’ll write more on each type and share examples as AI assurance develops.
