New face verification service announced by Minister for Justice
November 16, 2016
Today the Hon Michael Keenan, Minister for Justice, announced the first phase of a Face Verification Service. Its claimed aim is to tackle identity crime.
The media release provides:
Today I announce that the first phase of Australia’s new biometric Face Verification Service (FVS) is now operational, providing the Department of Foreign Affairs and Trade and the Australian Federal Police access to citizenship images held by the Department of Immigration and Border Protection.
Other types of images such as visa, passport and driver licence photos will be added over time, with access expanded to other government agencies.
The FVS is not a new database but a secure means of sharing images between existing agency systems. The ability to match a person’s photo against an image on one of their government records, to verify their identity and to share these images between agencies, will strengthen identity checking processes.
While existing measures such as the Document Verification Service (DVS) are helping to prevent the use of fake identity documents, criminals are now producing high quality fraudulent identity documents. These documents contain personal information stolen from innocent and unknowing victims, but with someone else’s photo – documents that would pass a DVS check.
Preventing this type of fraud can be assisted by greater use of biometrics, such as the FVS.
The Government is currently in negotiations with the states and territories to provide access to driver licence images via the FVS.
This will further help to prevent organised crime and terrorists from using fraudulent identities, while protecting everyday Australians from identity theft and making it easier to prove their identities when transacting with government online.
In addition a Face Identification Service (FIS) is expected to commence in 2017 to determine the identity of unknown persons. It will be used for investigations of more serious offences, with access restricted to a limited number of users in specialist areas.
Identity fraud is one of the most common crimes in Australia, costing around $2.2 billion per year according to the latest Identity Crime and Misuse in Australia 2016 report which I am also releasing today.
The report reveals that Australians are falling victim to identity criminals at a growing rate, with around 1 in 20 people experiencing financial losses resulting from identity crime each year.
The link between fraudulent identities and organised crime was clearly demonstrated by a recent multiagency data matching exercise, led by the Fraud and Anti-Corruption Centre hosted within the Australian Federal Police.
Project Birrie examined how 1,700 fraudulent identity items seized in one police operation were used to commit other crimes. These fraudulent identities were linked to outlaw motor cycle gang members, other high profile individuals involved in illicit drug investigations and a few individuals of interest to counter-terrorism operations. Also discovered was over $7 million in serious fraud and more than $50 million laundered offshore.
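In practical terms, the difference between the FVS and the flagged FIS is the difference between one-to-one verification and one-to-many identification. The sketch below illustrates that distinction; it assumes photos have already been reduced to embedding vectors by some face recognition model, and the function names and the 0.6 threshold are hypothetical, not anything drawn from the government systems described above.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray,
           threshold: float = 0.6) -> bool:
    """1:1 check (FVS-style): does this photo match the image
    held on one nominated government record?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, gallery: dict,
             threshold: float = 0.6):
    """1:N search (FIS-style): which record, if any, in a whole
    gallery of images best matches this unknown person?"""
    best_id, best_score = None, threshold
    for person_id, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

The privacy stakes differ accordingly: verification answers a yes/no question against a single record the person has nominated, while identification searches every image in the gallery, which is presumably why the media release flags restricted access for the FIS.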
There are obvious privacy issues arising out of the use of this technology. The most immediate is ensuring that the data is used only for the purpose for which it was collected. Following from that is the need to avoid function creep.
While it is very exciting to read about new technologies being rolled out to deal with old problems, an element of caution is warranted. This technology has flaws, as highlighted in the Atlantic articles Who Owns Your Face? and The Ultimate Facial-Recognition Algorithm. Those who want to circumvent such systems will probe for weaknesses, including the facial camouflage described in Anti-Surveillance Camouflage for Your Face. None of these issues is properly considered, let alone discussed, by authorities when they embrace such technologies.
The weaknesses are shown in a very recent article, All it takes to steal your face is a special pair of glasses, which provides:
Your face is quickly becoming a key to the digital world. Computers, phones, and even online stores are starting to use your face as a password. But new research from Carnegie Mellon University shows that facial recognition software is far from secure.
In a paper (pdf) presented at a security conference on Oct. 28, researchers showed they could trick AI facial recognition systems into misidentifying faces—making someone caught on camera appear to be someone else, or even unrecognizable as human. With a special pair of eyeglass frames, the team forced commercial-grade facial recognition software into identifying the wrong person with up to 100% success rates. Researchers had the same success tricking software touted by Chinese e-commerce giant Alibaba for use in their “smile-to-pay” feature.
Modern facial recognition software relies on deep neural networks, a flavor of artificial intelligence that learns patterns from thousands and millions of pieces of information. When shown millions of faces, the software learns the idea of a face, and how to tell different ones apart.
In a test where researchers built a state-of-the-art facial recognition system, a white male test subject wearing the glasses appeared as actress Milla Jovovich with 87.87% accuracy. An Asian female wearing the glasses tricked the algorithm into seeing a Middle Eastern man with the same accuracy. Other notable figures whose faces were stolen include Carson Daly, Colin Powell, and John Malkovich. Researchers used about 40 images of each person to generate the glasses used to identify as them.
The test wasn’t theoretical: the CMU researchers printed out the glasses on glossy photo paper and wore them in front of a camera in a scenario meant to simulate accessing a building guarded by facial recognition. The glasses cost $0.22 per pair to make. When researchers tested their glasses design against a commercial facial recognition system, Face++, which has corporate partners like Lenovo and Intel and is used by Alibaba for secure payments, they were able to generate glasses that successfully impersonated someone in 100% of tests. However, this was tested digitally: the researchers edited the glasses onto a picture, so in the real world the success rate could be less.
The CMU work builds on previous research by Google, OpenAI, and Pennsylvania State University that has found systematic flaws with the way deep neural networks are trained. By exploiting these vulnerabilities with purposefully malicious data called adversarial examples, like the image printed on the glasses in this CMU work, researchers have consistently been able to force AI to make decisions it wouldn’t otherwise make.
In the lab, this means a 40-year-old white female researcher passing as John Malkovich, but their success could also be achieved by someone trying to break into a building or steal files from a computer.
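For readers wondering how an image can be “purposefully malicious”, the sketch below shows the fast gradient sign method from the Google research the article mentions, one of the simplest ways to construct an adversarial example. It is a minimal illustration only: `model` stands in for any differentiable face classifier, and a real attack such as the CMU glasses confines the perturbation to a printable region rather than nudging every pixel.

```python
import torch

def fgsm_targeted(model: torch.nn.Module, image: torch.Tensor,
                  target_label: torch.Tensor,
                  epsilon: float = 0.03) -> torch.Tensor:
    """Nudge each pixel slightly in the direction that makes `model`
    more confident the (batched) image shows `target_label`,
    e.g. another person's identity."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), target_label)
    loss.backward()
    # Step *against* the gradient to lower the loss for the target
    # label; epsilon keeps the change nearly invisible to humans.
    perturbed = image - epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

The unsettling point is how cheap this is: the same gradients used to train the network tell an attacker exactly which pixels to change.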
Peter,
I find this blog post has unsettled me (and even the children!) somehow. Surely if I am photographed for the purposes of a driver’s licence (or whatever), and that is the understanding between myself and the relevant government agency, then surely they are in some kind of breach of our arrangement, our agreement, our contract? Isn’t this misappropriation of my image without consent? I am trying to find something, or a case, to liken it to. Maybe the Facebook ad where they put your face on something to sell without express consent, but it is not even really that much of a comparison. It’s a start. You know what it is: it is the fact that my image is, or will be, inside a suspect database, and the children’s images and everyone’s images, like we are all potential criminals. And then there is the article about the Carnegie Mellon researchers tricking the facial recognition program with the glasses. As human beings we are all in some big trouble. Great work!