China publishes security measures on the use of facial recognition technology

March 23, 2025

In one of those “one for the books” events, the Cyberspace Administration of China, in collaboration with the Ministry of Public Security, has published security measures for the use of facial recognition technology. The measures will take effect on 1 June 2025. Given how intrusive Chinese authorities have been in the past with surveillance and the use of facial recognition technology, it will be interesting to see how much real change results.

The measures apply to activities that use facial recognition technology (biometric recognition technology that uses facial information to identify an individual) to process facial information within China.

Interestingly, the measures exclude from their scope the processing of facial information for research and development or algorithm training purposes.

Under the measures, facial recognition activities must comply with applicable laws and regulations and, inter alia:

  • have a specific purpose;
  • be necessary;
  • minimise the impact on personal rights and interests; and
  • implement strict protection measures.

Personal information handlers must, inter alia:

  • before processing, inform individuals in a prominent manner, and in clear and understandable language, of certain information, such as the handler’s contact information and the purposes and methods of processing;
  • inform individuals of any changes to the information provided to them;
  • when the processing is based on consent, obtain voluntary and explicit consent, including providing the right to withdraw consent;
  • when processing a minor’s information, obtain the consent of a parent or other guardian;
  • store facial information on facial recognition devices and not transmit it through the internet;
  • conduct a Personal Information Protection Impact Assessment (PIPIA) and include the contents outlined in the measures; and
  • if processing data of more than 100,000 individuals, notify the provincial-level or higher cybersecurity and informatization department within 30 working days, and provide the information outlined in the measures.

The measures require personal information handlers not to use facial recognition as the only verification method. Facial recognition equipment may be installed in public places if it is necessary for public safety, the collection area is reasonably determined under the law, and prominent warning signs are displayed.

Interestingly the measures require adopting security measures, including data encryption, security auditing, access control, authorisation management, and intrusion detection.

Any organisation or individual can file a complaint with the department responsible for personal information protection regarding an illegal use of facial recognition technology.

Reuters’ article on the subject provides:

China’s cyberspace regulator on Friday published regulations governing the use of facial recognition technology, separately stating that individuals should not be forced to verify their identity using such technology.
China is at the forefront of facial recognition technology, which is deployed by all levels of its public security apparatus to track down criminals, as well as monitor dissenters, petitioners and ethnic minorities. The new rules do not mention security authorities’ use of facial recognition technology.
The Cyberspace Administration of China (CAC) said the regulations were published in response to growing concerns within society about the risks the widespread use of facial recognition technology posed to data privacy.
“Individuals who do not agree to identity verification through facial information should be provided with other reasonable and convenient options,” CAC said on its website.
It specified that the regulations were aimed at curbing increasingly common practices such as using facial recognition technology for hotel check-ins or to enter a gated community.
The regulations, approved by China’s Ministry of Public Security and due to take effect in June, emphasise the need for companies collecting data from facial recognition cameras to ensure they only process an individual’s facial data after obtaining their consent.
The regulations did not specify how this would apply in public spaces but noted that signs should be on display wherever facial recognition technology is deployed, a practice already widespread in Chinese cities.
Home-grown companies like Sensetime and Megvii invest tens of millions of dollars every year researching and developing the latest AI-driven visual imaging technologies that are fuelling increasingly sophisticated facial recognition software.
The spread of facial recognition technology into everyday life in China has led to an increase in societal anxiety about privacy in recent years.
A survey conducted in 2021 by a think tank affiliated with state-run media outlet The Beijing News found that 75% of respondents were concerned about facial recognition and 87% opposed the use of the technology in public places of business.
In July 2021, China’s Supreme Court banned use of the technology to verify identities in public places like shopping malls and hotels, and allowed for residents to request alternative methods of verification to enter their neighbourhood.
In November that year, the Personal Information Protection Law took effect, mandating user consent for the collection of facial data and imposing heavy fines on non-compliant companies.

The South China Morning Post article provides:

The use of facial recognition identification should not be forced upon people, and service providers will be required to offer alternative ID methods, under regulations due to come into effect in China on June 1.

The new rules mark Beijing’s first major attempt to regulate facial recognition, a technology widely adopted around the country – such as at hotel check-ins, entrances to gated communities and to make digital payments.

Jointly released by the Cyberspace Administration of China and Ministry of Public Security on Friday, the final version of “regulations for the safe application of facial recognition technology” comes nearly two years after a public consultation on creating comprehensive guidelines.

The regulations aimed to address “growing concerns” among the public about the risks posed to personal data privacy and security, the authorities said.

China is a global leader in the adoption of facial recognition technology, driven by its robust internet industry and relatively lax regulatory environment on privacy protection. It has also heavily integrated facial recognition into its security surveillance network.

The new regulation mandates that “voluntary and explicit consent made on the premise of full knowledge” must be obtained “when processing facial information based on individual consent”.

Individuals shall also have the right to withdraw consent, and the body that processes the personal information should provide “a convenient way” for such withdrawal.

Also, when alternative methods to achieve the same ID verification are available, facial recognition shall not be offered as the only option. If someone refuses facial verification, “reasonable and convenient” alternatives shall be provided.

On data security, the new regulations specify that facial information shall not be transmitted externally through the internet, unless otherwise provided by laws and administrative regulations or with the individual’s separate consent.

The retention period of facial information shall also not exceed the shortest time necessary for processing.

Further, facial recognition applications shall adopt necessary security measures such as data encryption, security auditing, access control, authorisation management and intrusion detection to ensure data security.

Facial ID processors are also required to register with their provincial cyber administration body within 30 working days when they hold more than 100,000 facial data sets.

In strict moves on privacy protection, the regulations ban facial recognition equipment in private spaces such as hotel rooms, public bathrooms and dressing rooms.

The pervasive use of facial recognition technology in daily life in China has prompted increasing concerns about privacy and security.

In July 2021, the Supreme People’s Court issued a judicial interpretation that effectively banned the use of the technology to verify identities in public places like shopping malls and hotels without consent. The ruling also allowed residents to request alternative methods of verification to enter their neighbourhoods, emphasising the need for consent and providing options for those who refuse facial recognition.

That November, China’s personal information protection law took effect, mandating consent for the collection of facial data and imposing heavy fines on companies that fail to comply.

In 2022, a resident of the northern city of Tianjin sued his estate management company over making facial recognition the sole ID method for entry. The court ruled in favour of the resident and ordered the company to provide alternatives.

Facial recognition is topical in the Australian context given that the Privacy Commissioner, in a determination dated 19 November 2024, found that Bunnings interfered with customers’ privacy when deploying facial recognition technology. On the same date the Privacy Commissioner issued a guide on the use of facial recognition technology.

Bunnings promptly announced it was seeking a review in a somewhat maudlin, self-serving statement in which it recites justifications often heard when organisations are criticised for wrongful use of surveillance, including the tried and true “ends justify the means”. It skates around the Privacy Commissioner’s findings about consent (and most other key findings). It provides:

Bunnings will seek review of the Privacy Commissioner’s Determination, before the Administrative Review Tribunal following its investigation into our trial of facial recognition technology (FRT).

We had hoped that based on our submissions, the Commissioner would accept our position that the use of FRT appropriately balanced our privacy obligations and the need to protect our team, customers, and suppliers against the ongoing and increasing exposure to violent and organised crime, perpetrated by a small number of known and repeat offenders.

The Commissioner acknowledged that FRT had the potential to protect against serious issues, such as crime and violent behaviour. This was the very reason Bunnings used the technology.

Our use of FRT was never about convenience or saving money but was all about safeguarding our business and protecting our team, customers, and suppliers from violent, aggressive behaviour, criminal conduct and preventing them from being physically or mentally harmed by these individuals. It was not used in isolation but in combination with various other security measures and tools to deliver a safer store environment.

FRT was trialled at a limited number of Bunnings stores in Victoria and New South Wales between 2018–2021, with strict controls around its use, with the sole and clear intent of keeping team members and customers safe and preventing unlawful activity. We know that some 70 per cent of incidents are caused by the same group of people. While we can physically ban them from our stores, with thousands of daily visitors, it is virtually impossible to enforce these bans. FRT provided the fastest and most accurate way of identifying these individuals and quickly removing them from our stores.

The trial demonstrated the use of FRT was effective in creating a safer environment for our team members and customers, with stores participating in the trial having a clear reduction of incidents, compared to stores without FRT. We also saw a significant reduction in theft in the stores where FRT was used.

We believe that customer privacy was not at risk. The electronic data was never used for marketing purposes or to track customer behaviour. Unless matched against a specific database of people known to, or banned from stores for abusive, violent behaviour or criminal conduct, the electronic data of the vast majority of people was processed and deleted in 0.00417 seconds – less than the blink of an eye.

Every day we work hard to earn the trust of our team, suppliers, and customers and this includes keeping people safe in and around our stores. It’s our highest priority and a responsibility we take very seriously.

Across the retail sector, abuse, threats and assaults in stores continue to rise, with a 50 per cent increase at Bunnings last year alone. Statistics don’t convey the real impact it has on the lives of our team and our customers, and we provided the OAIC with numerous examples of violent and abusive situations in our stores. We are deeply disappointed with the Commissioner’s determination, given the significant amount of information shared which illustrated the risks to our team and customers from anti-social behaviour.

Everyone deserves to feel safe at work. No one should have to come to work and face verbal abuse, threats, physical violence or have weapons pulled on them.

FRT was an important tool for helping to keep our team members and customers safe from repeat offenders. Safety of our team, customers and visitors is not an issue justified by numbers. We believe that in the context of the privacy laws, if we protect even one person from injury or trauma in our stores the use of FRT has been justifiable.

The significant challenges facing front-line workers at Bunnings, and other retailers, is now widely understood, supported by our union leaders, and appreciated by state and territory governments around the country. Bunnings, along with other retailers and industry associations, has been consulting with state governments to amend legislation to provide better protection for our team and customers.

On background:

    1. The technology complemented extensive training, resources, leadership tools and policies Bunnings has in place to equip its team to handle threatening situations. Only a small team had access to the data and on positive identification, there was a clear process whereby it was checked to avoid false
    2. To the extent that Bunnings does collect personal information in the course of our business, this is explained in our Bunnings Privacy Policy, which is available on our website. In addition, we let our customers know how we handle that information through signs at the various entry points to our stores, this includes a conditions of entry notice and a privacy information poster.
    3. We acknowledge that when we first started using FRT we did not specify this on our conditions of entry poster, however, we did make changes during the trial to refer to our use of FRT on both our entry sign and in our privacy policy.
    4. For the 12 months ending April 2024, there were about 700,000 retail crime events recorded by Australian retailers with 16% of those constituting threatening or violent behaviour, and 60% of store thefts are conducted by the same 10% of people.
    5. Theft is a major driver of abusive or threatening encounters, with one in five instances of recorded theft in Bunnings stores also involving verbal or physical abuse towards team.
    6. We would never act in a way that we believe would jeopardise customer

Under the Privacy Act 1988 the review will be undertaken by the Administrative Review Tribunal in a hearing de novo. This process is one of the weaknesses of the legislation; the determination should be the subject of an appeal to the Federal Court, which is where the matter is likely to end up. On a practical note, the predecessor to the Administrative Review Tribunal, the Administrative Appeals Tribunal, had a dismal record in its review of privacy-related cases. One example among more than a few is its interpretation of personal information in the Ben Grubb decision, which was completely at odds with overseas jurisdictions. How it could find that mobile network data generated by an individual’s activity did not constitute personal information is still baffling. That led to a less than impressive Federal Court decision, with the rejection of the Privacy Commissioner’s appeal in Privacy Commissioner v Telstra Corporation Limited [2017] FCAFC 4. To be fair, as an appeals court its role was restricted to grounds of law.

It will be some time before the Bunnings case is fully resolved. 
