Facial Recognition Laws in China


In the last part of this series, we will take a look at the existing laws in China which relate to facial recognition technology (FRT) and how they fail to adequately protect the human rights of Chinese citizens. We will also look at how FRT developed in China is being exported around the world, particularly to India, and what this means for Indian citizens.

The Cybersecurity Law of the People’s Republic of China, which came into force on June 1, 2017, was enacted to ensure cybersecurity and to safeguard cyberspace sovereignty and national security. The law states that “Network operators carrying out business and service activities must follow laws and administrative regulations…”. These legal obligations lay down certain requirements for the collection, use and protection of personally identifiable information (PII), which includes ‘biometric data’. However, the law focuses more on cybersecurity, in the context of national security, than on the protection of ‘biometric data’, which finds a mention only in the definition section (Article 76).

Article 76 Clause (5) of the law defines ‘personal information’ (PI) as “all kinds of information, recorded electronically or through other means, that taken alone or together with other information, is sufficient to identify a natural person’s identity, including but not limited to natural persons’ full names, birth dates, national identification numbers, personal biometric information, addresses, telephone numbers, and so forth.” (emphasis added)

The regulation which relates more specifically to data protection is China’s Personal Information Security Specification, which came into force on October 1, 2020. As per its introduction, the specification “targets security challenges to PI, and regulates related behaviors by PI controllers during information processing such as collection, retention, use, sharing, transfer, and public disclosure” while also “protecting individuals’ lawful rights and interests and society’s public interests to the greatest degree” by laying down guidelines for data handling and protection.

Under the specification, ‘personal information’ includes “names, dates of birth, identity card numbers, biometric information, addresses, telecommunication contact methods, communication records and contents, account passwords, property information, credit information, location data, accommodation information, health and physiological information, transaction data, etc.” However, the specification is non-binding and does not impose any penalties. China is also reportedly working on a new data privacy law with a stronger focus on biometrics; however, enforcement of these laws remains an issue.

Additionally, on April 23, 2021, China published a draft standard on Security Requirements of Facial Recognition Data, which lays down requirements for the collection, processing, sharing and transfer of facial recognition related data and is open for public consultation. This standard, however, is also non-mandatory. The requirements laid down in the standard are that:

  1. use of this data should be for identification purposes only, and no predictions should be made on its basis,

  2. FRT should be used only when no alternative technology is available to fulfil the purpose of security or convenience,

  3. individuals under 14 should not be identified on the basis of FRT,

  4. FRT data should not be stored without obtaining consent, and

  5. FRT data generated or collected in China should be stored locally.

Is something better than nothing? The reality of FRT development and use in China

Of the laws and regulations discussed above, those in force do not explicitly regulate FRT, and those that do are either non-binding or still at the draft stage. Thus, while China claims to regulate FRT and respect citizens’ rights, the reality of surveillance in China, especially FRT surveillance, stands in stark contrast.

According to a 2018 BBC news report, “(a)n estimated 170 million CCTV cameras are already in place and some 400 million new ones are expected (to) be installed in the next three years.” According to a study of 6,100 Chinese citizens conducted by the Nandu Personal Information Protection Research Center in 2019, “83% of respondents indicated that they would like to have more control over their data and 75% would prefer the option to have traditional methods of identification over FRT”.

More troublesome are the reports which provide insight into how this technology is being developed and used in China against ethnic minorities such as the Uighurs, most of whom are Muslim. A software engineer told the BBC, “The Chinese government use(s) Uighurs as test subjects for various experiments just like rats are used in laboratories.” There are multiple reports of Chinese companies marketing or obtaining patents for FRT which specifically picks out the Uighur minority:

  1. A patent filed in July 2018 by Huawei and the Chinese Academy of Sciences describes a face recognition product capable of identifying people on the basis of their ethnicity.

  2. Huawei and Megvii, another Chinese technology company, started collaborating in 2018 to “test an artificial-intelligence camera system that could scan faces in a crowd and estimate each person’s age, sex and ethnicity” and could trigger a ‘Uighur alarm’ alerting the Chinese government when the software identified someone from the persecuted minority group.

  3. Similarly, Alibaba's FRT was also reported to specifically pick out the Uighur minority.

  4. Hikvision has also been reported to have developed and deployed FRT capable of minority analytics in China.

China has justified its surveillance actions on the basis that it is trying to weed out extremist factions from the majority-Muslim Uighur community to protect public security. According to Alim, a Uighur man in his 20s who was detained after facial recognition software identified him as Uighur, “(c)ontrolling the Uighurs has also become a test case for marketing Chinese technological prowess around the world. A hundred government agencies and companies from two dozen countries, including the US, France, Israel and the Philippines, now participate in the highly influential annual China-Eurasia Security Expo in Urumqi, the capital of the Uighur region. The ethos at the expo, and in the Chinese techno-security industry as a whole, is that Muslim populations need to be managed and made productive.” Over the course of a month in 2019, law enforcement in the central Chinese city of Sanmenxia reportedly ran 5 lakh facial scans to screen whether residents were Uighurs.

Here, it is also important to look at the concept of ‘Potemkin AI’, coined by Jathan Sadowski, who theorises that many instances of “artificial intelligence” are merely artificial displays of its power and potential: even though a system purports to be powered by sophisticated software, it actually relies on humans acting as robots. According to him, “(w)hether it’s content moderation for social media or image recognition for police surveillance, claims abound about the effectiveness of AI-powered analytics, when, in reality, the cognitive labor comes from an office building full of (low-waged) workers.” As has been reported, “While facial recognition technology uses aspects like skin tone and face shapes to sort images in photos or videos, it must be told by humans to categorize people based on social definitions of race or ethnicity. Chinese police, with the help of the start-ups, have done that.” The point to be noted here is that “it matters less if the system actually works that way than if people believe it does and act accordingly”. Thus, FRT surveillance in China is arguably less about accurately identifying any links to extremism and more about inducing self-regulation among the Uighurs, pressuring them to stamp out their religious identity.

This has also led to China’s emergence as a leader in “terror capitalism”, a fairly new term coined by Darren Byler, which describes how the exploitation of subjugated populations is justified by defining them as potential terrorists or security threats. According to Byler, terror capitalism generates profits by the following process:

  1. Profitable government contracts are given to private companies in order to build and deploy policing technologies that surveil and manage target groups.

  2. Then, using the vast amounts of biometric and social media data extracted from those groups, the private companies improve their technologies and sell retail versions of them to other states and institutions, such as schools.

  3. Finally, all this turns the target groups into a ready source of cheap labor – either through direct coercion or indirectly through stigma.

Note: Darren Byler is a postdoctoral researcher in the ChinaMade project at the University of Colorado, Boulder. He received his PhD from the Department of Anthropology at the University of Washington in 2018. His research focuses on Uyghur dispossession, infrastructural power and "terror capitalism" in the city of Ürümchi, the capital of Chinese Central Asia (Xinjiang).

How does this affect India?

China is the biggest supplier of surveillance technologies worldwide: technology linked to Chinese companies, particularly Huawei, Hikvision, Dahua, and ZTE, supplies AI surveillance capabilities in 63 countries, with Huawei alone responsible for 50 of them. According to Kai-Fu Lee, an investor in the Chinese company Megvii and a supporter of the expansion of Chinese AI, China has an advantage in developing AI because its leaders are less fussed by “legal intricacies” or “moral consensus”.

As noted above, these systems are developed by the police working with these companies, with the police supplying the database of criminals. Efforts to create such databases in India have accelerated in the past couple of years (CCTNS, NatGrid). In 2019, the Delhi Government hired Hikvision to set up 1,50,000 CCTV cameras in the city. Inaccurate systems that have been developed through, and for, violations of privacy and human rights should not be implemented in India.

Further, the cost of implementing these systems is enormous. According to IFF’s Project Panoptic, approximately INR 1,248.82 crore has been spent on FRT systems in the country to date. With the country in the throes of a once-in-a-century pandemic that has devastated millions, resources should be directed not towards building a surveillance regime that will further traumatise Indians, but towards improving the quality of life in the country, especially the crumbling public health infrastructure. Finally, remember that all of this is being done in the absence of a data protection law in India.

Published By
Anushka Jain

Policy Counsel, IFF
