An overview of the Auditing Framework for Artificial Intelligence and its core components

Reuben Binns, our Research Fellow in AI, and Valeria Gallo, ICO Technology Policy Adviser, outline the proposed structure, core components and areas of focus of the ICO’s new auditing framework for artificial intelligence (AI).

Last week Simon McDougall invited comment from organisations on the development of our auditing framework for AI. The framework will support the work of our investigation and assurance teams when assessing the compliance of data controllers using AI and help guide organisations on the management of data protection risks arising from AI applications.

As promised, we will be posting regularly about the development of the framework and our thinking, and we want your feedback at all stages.

In this post we will briefly outline the framework’s two key components (Figure 1), which are:

  1. governance and accountability; and
  2. AI-specific risk areas.

The governance and accountability component will discuss the measures an organisation must have in place to be compliant with data protection requirements.

The second component will focus on the potential data protection risks that may arise in a number of AI-specific areas, and on the risk management practices needed to address them.

Figure 1 – Framework overview

Governance and accountability

Accountability is a legal obligation for any organisation processing personal data and a key principle of the General Data Protection Regulation (GDPR). The ICO already has detailed guidance about this. However, when adopting AI applications, data controllers will have to reassess whether their existing governance and risk management practices remain fit for purpose.

This is because AI applications can exacerbate existing data protection risks, introduce new ones, or generally make risks more difficult to spot or manage. At the same time, detriment to data subjects may increase due to the speed and scale of AI applications.

Against this background, boards and senior leaders may need to reconsider (or in many cases define) their data protection risk appetite and examine how AI applications, individually and collectively, fit within the chosen parameters.

And while AI increases the importance of embedding data protection by design and by default into an organisation’s culture and processes, the technical complexity of AI applications can make this harder to achieve.

To design and implement effective data protection measures, organisations need to be able to understand and manage the key risk areas specific to AI. This is what the second component of the framework focuses on.

AI-specific risk areas

We have identified eight AI-specific risk areas that the framework will cover:

  1. Fairness and transparency in profiling – including issues of bias and discrimination, interpretability of AI applications, and explainability of AI decisions to data subjects.
  2. Accuracy – covering both accuracy of data used in AI applications and of data derived from them.
  3. Fully automated decision-making models – including the classification of AI solutions (fully automated vs. non-fully automated) based on the degree of human intervention, and issues around human review of fully automated decision-making.
  4. Security and cyber – including testing and verification challenges, outsourcing risks, and re-identification risks.
  5. Trade-offs – covering challenges of balancing different constraints when optimising AI models (e.g. accuracy vs. privacy; see the sketch after this list).
  6. Data minimisation and purpose limitation.
  7. Exercising of rights – including the right to be forgotten, the right to data portability, and the right of access to personal data.
  8. Impact on broader public interests and rights – such as freedom of association and freedom of speech (NB the framework will only consider these issues as they pertain to data protection legislation, not broader public policy objectives).
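
To make the trade-offs in area 5 more concrete, the sketch below (illustrative only, not part of the framework) shows one common form the accuracy vs. privacy tension takes: releasing a statistic with noise calibrated for differential privacy, where a stronger privacy guarantee (a smaller epsilon) directly reduces the accuracy of the released value. The private_mean function and its parameters are hypothetical choices for this example.

    import numpy as np

    def private_mean(values, lower, upper, epsilon):
        """Release the mean of `values` with epsilon-differential privacy
        by adding Laplace noise calibrated to the query's sensitivity."""
        values = np.clip(values, lower, upper)       # bound each record's influence
        sensitivity = (upper - lower) / len(values)  # max change one record can cause
        noise = np.random.laplace(scale=sensitivity / epsilon)
        return values.mean() + noise

    rng = np.random.default_rng(0)
    ages = rng.integers(18, 90, size=1_000)

    # Stronger privacy (smaller epsilon) means a noisier, less accurate answer.
    for eps in (10.0, 1.0, 0.1, 0.01):
        estimate = private_mean(ages, lower=18, upper=90, epsilon=eps)
        print(f"epsilon={eps}: true mean={ages.mean():.2f}, private estimate={estimate:.2f}")

Choosing an acceptable epsilon is exactly the kind of use-case-specific judgement such trade-offs involve: there is no single setting that is right for every application.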

In the framework we will discuss each of these areas with an analysis of the associated data protection risks, and we will list a number of organisational and technical controls that we would consider good practice. However, many of the controls organisations may need to adopt will be use-case specific, so it is not possible to include an exhaustive or definitive list in the framework.

Next steps

We aim to post to this blog every two to three weeks for the next six months or so. Each post will take a deep dive into one of the AI-specific risk areas listed above and explore the associated risks. Wherever possible, we will refer to a simplified AI application lifecycle (Figure 2) to highlight the stages at which risks are most likely to manifest themselves, or where the proposed controls are likely to be most effective. Where appropriate, we will also highlight the key implications for the different roles within an organisation, e.g. boards, Data Protection Officers or data scientists.

Figure 2 – Simplified AI application lifecycle

Your feedback

We welcome your feedback on any element of the proposed framework. In particular we would appreciate your views on the following:

  1. What do you think of the overall structure of the framework?
  2. What do you think of the AI-specific risk areas we identified?
  3. Are there any gaps in the risk areas?
  4. What particular topics or challenges should we address under each area?

It is worth noting that our work will focus exclusively on the data protection challenges introduced or heightened by AI. As such, general data protection considerations (e.g. lawful basis for processing or general security), which are already covered by existing ICO guidance, will not be addressed. If you believe there is a need for further clarification, please let us know.

We encourage you to share your views by leaving a comment below or by emailing us at AIAuditingFramework@ico.org.uk.

Dr Reuben Binns, an influential figure in the emerging AI and data protection policy community, is joining the ICO on a fixed-term fellowship. During his two-year term, Dr Binns will research and investigate a framework for auditing algorithms and conduct further in-depth research activities in AI and machine learning.

Valeria Gallo is currently seconded to the ICO as a Technology Policy Adviser. She works with Reuben Binns, our Artificial Intelligence (AI) Research Fellow, on the development of the ICO Auditing Framework for AI. Prior to her secondment, Valeria was responsible for analysing and developing thought leadership on the impact of technological innovation on regulation and supervision of financial services firms.

