UK Information Commissioner’s Office releases strategy for use of AI and biometrics

July 24, 2025

The UK Information Commissioner has responded to growing concern about the development and use of AI and biometrics, and their impact on privacy, by releasing an AI and biometrics strategy.

The strategy, with footnotes omitted, provides:

Artificial intelligence (AI) is fast becoming part of everyday life. It shapes how decisions are made, how services are delivered and, through biometric technologies, how people are identified. AI’s adaptability, scalability and ability to solve complex problems promise advances across science, public services and the economy.

Biometric technologies, powered by AI, can help organisations operate more efficiently and securely and support law enforcement in keeping our communities safe.

Realising these opportunities depends on people trusting that organisations are using these technologies responsibly and in compliance with the law. From a data protection perspective, people need to be able to trust that organisations using this technology:

    • are transparent about the personal information they use;
    • use this personal information fairly; and
    • take appropriate care, putting in place governance and technical measures to protect people from harm. 

A lack of transparency about how organisations use personal information risks undermining public trust in AI and biometric technologies. Without that trust, people are less likely to support or engage with AI-powered services. This creates a barrier to responsible adoption across the UK economy.

Public concerns are especially strong in high-impact cases:

    • In policing, 54% of adults have some concerns that facial recognition technology would impact civil liberties and infringe on people’s right to privacy.
    • In recruitment, 64% believe employers will rely too heavily on AI, and 61% are concerned it will perform worse than human decision-makers when assessing individual circumstances.
    • In public services, concern about the use of AI to determine welfare eligibility has risen from 44% in 2022/23 to 59% in 2024/25.

Public concern is not limited to outcomes. It extends to how organisations use personal information to build AI systems in the first place. These perceptions risk hampering the uptake of AI and biometric technologies. In 2024, just 8% of UK organisations reported using AI decision-making tools when processing personal information, and 7% reported using facial or biometric recognition. Both were up only marginally from the previous year.

Our objective is to empower organisations to use these complex and evolving AI and biometric technologies in line with data protection law. This means people are protected and have increased trust and confidence in how organisations are using these technologies.

We will use our regulatory guidance and tools to signal clear expectations and provide certainty on how data protection law applies. This will help organisations across the public and private sectors ensure their governance and use of personal information results in responsible innovation, prevents harm and promotes trust. 

However, we will not hesitate to use our formal powers to safeguard people’s rights if organisations are using personal information recklessly or seeking to avoid their responsibilities. By intervening proportionately, we will create a fairer playing field for compliant organisations and ensure robust protections for people. 

This strategy sets out how we will:

    • set clear expectations for responsible AI through a statutory code of practice for organisations developing or deploying AI and automated decision-making, to enable innovation while safeguarding privacy;
    • secure public confidence in generative AI foundation models by working with developers to ensure they use people’s information responsibly and lawfully in training these models;
    • ensure that automated decision-making (ADM) systems are governed and used in a way that is fair to people, focusing on how they are used in recruitment and in public services; and
    • ensure the fair and proportionate use of facial recognition technology (FRT), working with law enforcement to ensure that the technology is effective and people’s rights are protected.

As these technologies evolve, new risks are emerging. AI systems that are increasingly capable of acting autonomously – so-called agentic AI – raise questions around accountability and redress. Meanwhile, some systems make speculative inferences about people’s intentions or emotions based on their physical or behavioural characteristics. These developments demand careful scrutiny. We will remain responsive to new issues that emerge and be transparent when our focus needs to shift.

This strategy supports our ICO25 strategic enduring objectives to:

    • promote responsible innovation and sustainable economic growth; and
    • safeguard and empower people, particularly those who need extra support to protect themselves. 

It also reinforces our ongoing commitment to supporting economic growth by addressing the risk that regulatory uncertainty becomes a barrier to organisations innovating with, and adopting, new technologies.

We’ve already acted to safeguard people and enable innovation in some of the most significant areas of AI and biometric technologies. Highlights include:

More broadly, our innovation services provide organisations with expertise and advice to help them innovate responsibly, respecting people’s privacy. Current and past participants have included organisations innovating with AI and biometric technologies. 

We will build on this work over the next year (2025/26), promoting high standards of data protection and trust in the use of AI and biometric technologies.

Trust in AI and biometric technologies depends on responsible innovation that safeguards people’s rights. There are two significant challenges to ensuring this is the case: 

Firstly, private and public sector organisations can lack the regulatory certainty and confidence to invest in and use AI and biometric technologies compliantly. 

Concerns about data protection and privacy regulation can be a barrier to AI adoption. A 2024 Bank of England and FCA survey of 118 firms found that these concerns can be seen as a leading constraint on the adoption of AI in the financial services sector.

In the public sector, 56% of government bodies surveyed by the National Audit Office cited privacy, data protection and cyber security as key barriers, with some struggling to navigate existing guidance.

The Biometrics Institute’s 2024 annual industry survey found that 58% of respondents viewed privacy and data protection concerns as the main obstacle to market growth.

Secondly, a lack of transparency and confidence about how personal information is used in these technologies can undermine public trust.

Government research in 2024 found that public perceptions of AI are dominated by concerns, particularly among the digitally disengaged, with 91% of this population seeing decisions made without human input as a major risk. DRCF research from the same year found only moderate public trust in generative AI outputs.

ICO research with the Ada Lovelace Institute and Hopkins Van Mil from 2022 highlighted how little awareness there can be among the public about how biometric technology is used or regulated. It also emphasised the importance of organisations being transparent about where they use biometrics and what information is processed.

These challenges need addressing if the benefits of AI and biometric technologies are to be fully realised. 

We’ve said before that if people don’t trust a technology, they’re less likely to use it or agree to their own information being used to power it, hampering innovation in the process. We will support responsible use of AI and biometric technologies to help ensure this isn’t the case in the UK. Wherever personal information is processed by these technologies, clear, proportionate and robust standards of data protection will apply to prevent harm and promote trust.

AI and biometric technologies are already in use across a wide range of contexts – from law enforcement to education to healthcare. While our regulatory interest spans the full breadth of these developments, this strategy targets three priority situations where:

    • the stakes for people are high;
    • public concern is clear; and
    • regulatory clarity can have the most immediate impact.

These are high-impact cases with significant potential for public benefit, but they also concentrate many of the risks people care most about:

    • the development of foundation models — large-scale models trained on vast datasets and adaptable to a wide range of downstream tasks;
    • the use of automated decision-making in recruitment and public services; and
    • the use of facial recognition technology by police forces.

By raising standards here, we aim to create clear regulatory expectations and scalable good practice that will influence the wider AI and biometrics landscape.

Within these cases, we consistently see public concern forming around three cross-cutting issues:

    • transparency and explainability;
    • bias and discrimination; and
    • rights and redress.

Transparency and explainability

People expect to understand when and how AI systems affect them. But across generative AI, ADM and FRT, the picture is often unclear.

In recruitment, people want to know when organisations use automated tools and how they make decisions. 

In generative AI, users call for greater clarity on how organisations develop and train tools. 

And in policing, concerns about fairness and privacy are closely linked to how well police forces can explain and justify FRT decisions.

What the evidence shows:

Generative AI: 26% of users cite data protection as a concern; 15% are concerned about transparency in how tools are developed; and 14% highlight a lack of clarity around sources and results.

– Understanding consumer use of generative AI, DRCF, 2025

Automated decision-making: People expect to be informed when ADM is used in recruitment, and want clarity on the information and logic behind decisions:

“You should be told before you apply that [ADM] is being used so you can make an informed decision whether you want to continue your application.”

– Research participant, Understanding public perceptions towards automated decision-making in recruitment, ICO, 2025

Facial recognition technology: People who have concerns about civil liberties, privacy and a lack of transparency are less comfortable with police use of FRT.

“It just depends on who’s using it [live facial recognition] and for what reason… If it’s just random or wherever they feel like it, then that doesn’t feel fair.”

– Research participant, Understanding the UK public’s views and experiences of biometric technologies, ICO, 2025

Bias and discrimination

People are concerned that AI systems can replicate or amplify bias, particularly when trained on flawed, incomplete or unrepresentative information.

In recruitment, people fear that automated tools may reflect and reinforce social inequalities, particularly in how candidates are ranked or filtered. 

In generative AI, users have observed biased outputs and expect developers to take visible action to address them. 

In facial recognition, public trust is shaped by whether systems perform consistently across different demographic groups. Independent testing of facial recognition algorithms demonstrates differential performance according to gender and ethnicity.

What the evidence shows:

Generative AI: Around 10% of users report observing bias in outputs. Public confidence is strongly linked to active steps taken by developers to address bias, which ranks among the top five drivers of trust.

– Understanding consumer use of generative AI, DRCF, 2025

Automated decision-making: While people can see benefits from ADM, such as increased efficiency, there are also concerns that ADM systems in recruitment may reinforce demographic bias.

“Depending on what information it’s been trained on, there’s a very, very high potential for bias, particularly if you look at the sort of biases that have already been introduced into AI… they still tend to quite like young, white men.” 

– Research participant, Understanding public perceptions towards automated decision-making in recruitment, ICO, 2025

Facial recognition technology: 49% of people believe FRT systems show bias against certain groups, including on the basis of gender or ethnicity.

“The problem I would see would be if it has been trained on predominantly white rather than interracial or different coloured faces.” 

– Research participant, Understanding the UK public’s views and experiences of biometric technologies, ICO, 2025

Risks of bias and discrimination are particularly acute in speculative AI systems that attempt to infer traits, emotions or intentions from physical or behavioural characteristics, such as emotion recognition tools used in recruitment. Research shows that the science underpinning these systems is highly contested and that their use can lead to harm.

Rights and redress

Public confidence in AI and biometric technologies is shaped most powerfully by what happens when things go wrong.

People are concerned about the serious consequences of inaccurate outputs and unfair outcomes, for example:

    • being wrongly identified by facial recognition;
    • being overlooked for a job due to flawed automation; or
    • losing access to benefits through error-prone ADM. 

In generative AI, models can reproduce sensitive personal information, including medical details or child sexual abuse material. These are not abstract risks — they affect lives and livelihoods.

People want to know that:

    • systems are accurate;
    • safeguards are in place; and
    • there are clear ways to challenge and correct outcomes when harm occurs.

What the evidence shows:

Generative AI: 70% of people believe generative AI developers must do more to prevent tools creating harmful content. 58% of people feel uncomfortable about their personal information being used to train these models. 

– Understanding consumer use of generative AI, DRCF, 2025

Automated decision-making: Concerns grow as ADM is used more extensively in recruitment, with fully automated decisions seen as impersonal and exclusionary.

“It can be exclusionary, could be unfair to someone with a neurological issue, for example… it can just rule out a lot of people.”

– Research participant, Understanding public perceptions towards automated decision-making in recruitment, ICO, 2025

Facial recognition technology: Trust in police use of FRT depends heavily on perceived accuracy. 83% of people who believe the technology is accurate are comfortable with its use; only 30% are comfortable if they believe it is not.

– Understanding the UK public’s views and experiences of biometric technologies, ICO, 2025

As AI systems become more agentic, the ability to trace decisions back to a human controller becomes harder — and people may struggle to understand how decisions were made or how to challenge them. This creates real risks around accountability and redress. 

Over the next year (2025/26), we will take targeted action across the types of cases we have prioritised. We will ensure that organisations can develop and deploy AI and biometric technologies with confidence and that people are safeguarded from harm.

Give organisations certainty on how they can use AI and ADM responsibly under data protection law

We will:

    • consult on an update to our ADM and profiling guidance by autumn 2025, reflecting proposed reforms in the Data (Use and Access) Bill; and
    • develop a statutory code of practice on AI and ADM, providing clear and practical guidance on transparency and explainability, bias and discrimination, and rights and redress, so organisations have certainty on how to deploy AI in ways that uphold people’s rights and build public confidence.

Ensure high standards of automated decision-making in central government, so that decisions that affect people are fair and accountable

We will:

    • learn from early adopters of ADM, such as the Department for Work and Pensions and others, and communicate our findings across central government to support the scaling of responsible use; and
    • set out regulatory expectations, securing assurance that departments are using ADM responsibly and with appropriate safeguards. 

Set clear expectations for the responsible use of automated decision-making in recruitment

We will:

    • scrutinise the use of ADM in recruitment by major employers and recruitment platforms, identifying risks related to transparency, discrimination and redress; and
    • publish findings and regulatory expectations, holding employers to account if they fail to respect people’s information rights.

Scrutinise foundation model developers to ensure they are protecting people’s information and preventing harm

We will:

    • secure assurances from developers that personal information used in model training is safeguarded, with appropriate controls to prevent misuse or reproduction of sensitive information, including child sexual abuse material; and
    • set clear regulatory expectations, where needed, to strengthen compliance (including for the use of special category data), and take action if unlawful model training creates risks or harm.

Support and ensure the proportionate and rights-respecting use of facial recognition technology by the police

We will:

    • publish guidance clarifying how police forces can govern and use FRT in line with data protection law, with advice on organisational and technical measures to minimise risks;
    • audit police forces using FRT and publish our findings, securing assurance that deployments are well-governed and people’s rights are protected; and
    • provide expert advice to government on proposed changes to the law, ensuring any future use of FRT remains proportionate and publicly trusted. 

Anticipate and act on emerging AI risks

We will:

    • engage with industry to assess the data protection implications of agentic AI, publishing a Tech Futures report examining issues such as accountability and redress, before consulting on emerging data protection challenges; and
    • set a high bar for the lawful use of AI systems that infer subjective traits, intentions or emotions based on physical or behavioural characteristics, conducting ongoing surveillance of use cases and taking action where such systems cause harm or infringe people’s rights.

Agentic AI refers to AI systems composed of agents that can behave and interact autonomously in order to achieve their objectives. These agents are small, specialised pieces of software that can make decisions and operate cooperatively or independently to achieve system objectives. Advances in agentic AI are driven by the integration of large language models (LLMs) with agent-based systems. By providing reasoning and discovery abilities, LLMs enhance an agent’s autonomy. This enables the agent to determine the most appropriate course of action to meet system objectives.

Artificial intelligence (AI) refers to a broad range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking. This processing falls within the scope of data protection law if personal information is used to train, test or deploy an AI system.

Automated decision-making (ADM) is the process of making a decision by automated means without any human involvement. Data protection law provides people with specific rights and protections if organisations are carrying out solely ADM that has legal or similarly significant effects on them. Examples include automated decisions to award a financial loan or benefit, or evaluations within recruitment tests.

Biometric technology refers to systems that use biometric data to recognise or identify a person. Examples include facial recognition, fingerprint scanning and voice recognition systems.

Facial recognition technology (FRT) refers to the technologies that enable the automated recognition of people based on facial features extracted from digital facial images.

Foundation models are base models for AI systems that are trained on large amounts of data. They can generate outputs such as text, images and audio and be adapted to a range of tasks.

Generative AI refers to a type of AI that can generate outputs that resemble human-created content. Most of the current generative AI systems are based on the transformer architecture. This architecture is a type of deep learning model designed to process sequences of data, eg text, by focusing on the relationship between different parts of the input.

Large language models (LLMs) are a type of generative AI with the ability to produce human-like text, code and translations.
