The UK Information Commissioner raises concerns about the “staggeringly inaccurate” facial recognition systems used by the police

May 16, 2018

Facial recognition technology has long been touted as an effective tool in crime prevention and investigation, and as important for national security. It is also promoted as a way of improving efficiency in business and on social media. Unfortunately, the hype does not match the facts. The algorithms, and the quality of the images that power facial recognition systems, are often below par, leading to many false positives. The technology is also plagued by racial bias against minorities.

The Australian Government has caught the facial recognition bug, touting it as a cure-all and wanting to use driving licence photos to combat a range of ills, such as fraud. Support for the technology is far from universal, but the quality of the debate has been abysmal.

In the United Kingdom, the Information Commissioner has raised real concerns about the poor quality of the facial recognition technology used by the police. More to the point, the Commissioner is likely to do something about it.

On Monday the Commissioner set out these concerns, stating:

Technological advances in the last 20 years have rapidly increased the ability of online systems to identify individuals. These advances can make many transactions straightforward, such as passing through passport control or unlocking a mobile phone, but they can also increase the risk of intruding into our privacy.

Technology represents both a risk and an opportunity, and this is why I have recently published our first Technology Strategy, which addresses these new technological developments and ensures the ICO can deliver the outcomes the public expect of us.

One particular development is the use of biometric data, including databases of facial images, in conjunction with Automatic Facial Recognition Technology (FRT). The technology has been available for some time, but its ability to link to different online databases and to mobile and fixed camera systems in real time greatly increases its reach and impact.

In this blog, I want to focus particularly on how FRT is used in law enforcement. FRT is increasingly deployed by police forces at public events like the Notting Hill Carnival or big football matches, for example last year’s Champions League final in Cardiff.

There may be significant public safety benefits from using FRT: it can enable the police to apprehend offenders and prevent crimes from occurring.

But how facial recognition technology is used in public spaces can be particularly intrusive. It’s a real step change in the way law-abiding people are monitored as they go about their daily lives. There is a lack of transparency about its use, and there is a real risk that the public safety benefits derived from FRT will not be gained if public trust is not addressed.

A robust response to the many unanswered questions around FRT is vital to gain this trust. How does the use of FRT in this way comply with the law? How effective and accurate is the technology? How do forces guard against bias? What protections are there for people who are of no interest to the police? How do the systems guard against false positives and their negative impact?

At another level, I have been deeply concerned about the absence of national level co-ordination in assessing the privacy risks and a comprehensive governance framework to oversee FRT deployment. I therefore welcome Baroness Williams’ recent confirmation of the establishment of an oversight panel of which I, alongside the Biometrics Commissioner and the Surveillance Camera Commissioner (SCC), will be a member.

I also welcome the recent appointment of a National Police Chiefs’ Council (NPCC) lead for the governance of the use of FRT in public spaces.

A key component of any FRT system is the underlying database of images the system matches against. The use of images collected when individuals are taken into custody is of concern; there are over 19 million images in the Police National Database (PND). I am also considering the transparency and proportionality of retaining these photographs as a separate issue, particularly for those arrested but not charged with certain offences. The Biometrics Commissioner has also raised these concerns.

For the use of FRT to be legal, police forces must have clear evidence to demonstrate that the use of FRT in public spaces is effective in resolving the problem it aims to address, and that no less intrusive technology or methods are available to address that problem. Strengthened data protection rules coming into law next week require organisations to assess the risks of using new and intrusive technologies, particularly those involving biometric data, in a data protection impact assessment, and to provide it to my office when the risks are difficult to address.

I will also carefully consider the reports recently issued by civil society groups: Big Brother Watch in the UK and the Electronic Frontier Foundation in the US.

I have identified the use of FRT by law enforcement as a priority area for my office, and I recently wrote to the Home Office and the NPCC setting out my concerns. Should my concerns not be addressed, I will consider what legal action is needed to ensure the right protections are in place for the public.

The BBC has reported on this today in its article Face recognition police tools ‘staggeringly inaccurate’, which provides:

Police must address concerns over the use of facial recognition systems or they may face legal action, the UK’s privacy watchdog says.

Information Commissioner Elizabeth Denham said the issue had become a “priority” for her office.

An investigation by campaign group Big Brother Watch suggested the technology flagged up a “staggering” number of innocent people as suspects.

But police have defended the technology and say safeguards are in place.

Which police forces are using facial recognition?

Big Brother Watch submitted freedom of information requests to every police force in the UK.

Two police forces acknowledged they were currently testing facial recognition cameras.

The Metropolitan Police used facial recognition at London’s Notting Hill carnival in 2016 and 2017 and at a Remembrance Sunday event.

Its system incorrectly flagged 102 people as potential suspects and led to no arrests.

In figures given to Big Brother Watch, South Wales Police said its technology had made 2,685 “matches” between May 2017 and March 2018 – but 2,451 were false alarms.
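That works out to a false alarm rate of just over 91% (2,451 false alarms out of 2,685 alerts).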

Leicestershire Police tested facial recognition in 2015, but is no longer using it at events.

How does it work?

Police facial recognition cameras have been trialled at events such as football matches, festivals and parades.

High-definition cameras detect all the faces in a crowd and compare them with existing police photographs, such as mugshots from previous arrests.

Any potential matches are flagged for a police officer to investigate further.
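To make that pipeline concrete, here is a minimal sketch using the open-source Python face_recognition library. This is an illustration of the general detect-encode-compare approach, not the software any UK force actually deploys; the file names, the watchlist, and the 0.6 distance threshold are assumptions made for the sketch.

```python
# Illustrative watchlist-matching pipeline; NOT any police force's system.
# File names and the 0.6 distance threshold are assumptions for the sketch.
import face_recognition

# Build the watchlist from known photographs (e.g. custody images).
watchlist_files = ["suspect_a.jpg", "suspect_b.jpg"]  # hypothetical files
watchlist = [
    face_recognition.face_encodings(face_recognition.load_image_file(f))[0]
    for f in watchlist_files
]

# One frame from a fixed or mobile camera feed covering the crowd.
frame = face_recognition.load_image_file("crowd_frame.jpg")

# Step 1: detect every face in the frame; Step 2: encode each one.
locations = face_recognition.face_locations(frame)
encodings = face_recognition.face_encodings(frame, locations)

for location, encoding in zip(locations, encodings):
    # Distance from this face to every watchlist entry (lower = more similar).
    distances = face_recognition.face_distance(watchlist, encoding)
    best = int(distances.argmin())
    # Step 3: flag candidate matches for a human operator. The threshold
    # sets the false-positive / false-negative trade-off at the heart of
    # the Commissioner's concerns.
    if distances[best] < 0.6:
        print(f"Possible match: watchlist entry {best} at {location}, "
              f"distance {distances[best]:.2f} - refer to an officer")
```

Everything downstream of the alert (the human check, whether and how long the flagged image is retained) is policy rather than algorithm, which is why the governance questions raised above matter as much as the software itself.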

How have the police forces responded?

South Wales Police has defended its use of facial recognition software and says the system has improved with time.

“When we first deployed and we were learning how to use it… some of the digital images we used weren’t of sufficient quality,” said Deputy Chief Constable Richard Lewis. “Because of the poor quality, it was identifying people wrongly. They weren’t able to get the detail from the picture.”

It said a “number of safeguards” prevented any action being taken against innocent people.

“Firstly, the operator in the van is able to see that the person identified in the picture is clearly not the same person, and it’s literally disregarded at that point,” said Mr Lewis.

“On a much smaller number of occasions, officers went and spoke to the individual… realised it wasn’t them, and offered them the opportunity to come and see the van.

“At no time was anybody arrested wrongly, nobody’s liberty was taken away from them.”

‘Checks and balances’

The Metropolitan Police told the BBC it was testing facial recognition to see whether it could “assist police in identifying known offenders in large events, in order to protect the wider public”.

“Regarding ‘false’ positive matches – we do not consider these as false positive matches because additional checks and balances are in place to confirm identification following system alerts,” it said in a statement.

“All alerts against the watch list are deleted after 30 days. Faces in the video stream that do not generate an alert are deleted immediately.”
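As a rough illustration, the retention rule the Met describes could be expressed as follows. The record structure and in-memory store are hypothetical assumptions made for this sketch, not details of the Met’s actual system.

```python
# Sketch of the stated retention rule: alerts are kept for 30 days, faces
# that generate no alert are never stored. Structures here are hypothetical.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

alert_store = []  # list of {"encoding": ..., "created": datetime} records

def handle_detection(encoding, matched_watchlist: bool) -> None:
    """Keep the face only if it generated a watchlist alert."""
    if matched_watchlist:
        alert_store.append({"encoding": encoding,
                            "created": datetime.now(timezone.utc)})
    # Faces that produced no alert fall out of scope here and are
    # discarded immediately, as the Met's statement describes.

def purge_expired_alerts() -> None:
    """Delete alert records once they are older than 30 days."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    alert_store[:] = [r for r in alert_store if r["created"] >= cutoff]
```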

But Big Brother Watch said it was concerned that facial recognition cameras would affect “individuals’ right to a private life and freedom of expression”.

It also raised concerns that photos of any “false alarms” were sometimes kept by police for weeks.

“Automated facial recognition technology is currently used by UK police forces without a clear legal basis, oversight or governmental strategy,” the group said.

What does Big Brother Watch want?

Big Brother Watch wants police to stop using facial recognition technology. It has also called on the government to make sure that the police do not keep the photos of innocent people.

Information Commissioner Elizabeth Denham said police had to demonstrate that facial recognition was “effective” and that no less intrusive methods were available.

“Should my concerns not be addressed, I will consider what legal action is needed to ensure the right protections are in place for the public,” said Ms Denham.

The Home Office told the BBC it plans to publish its biometrics strategy in June, and it “continues to support police to respond to changing criminal activity and new demands”.

“When trialling facial recognition technologies, forces must show regard to relevant policies, including the Surveillance Camera Code of Practice and the Information Commissioner’s guide,” it said in a statement.
