Brief explainer on NITI Aayog's discussion paper on facial recognition technology


The NITI Aayog has published the third paper in its Responsible AI series, which focuses on AI-based facial recognition technology. The paper, titled “Responsible AI for All: Adopting the Framework – A use case approach on Facial Recognition Technology”, puts forth recommendations for applications of facial recognition technology within India and contains a case study of the Ministry of Civil Aviation's DigiYatra Programme. Comments on the paper must be sent to Anna Roy, Senior Adviser, NITI Aayog, at [email protected] by November 30, 2022.

What does the paper say?

The paper is divided into two parts: the first sets the context, while the second provides a deep dive into the DigiYatra Programme’s use of facial recognition technology (FRT) to enable paperless travel.

Part 1

The first section in this part underlines the need for “Responsible AI”, as the increasing use of AI and algorithmic systems in both the public and private sectors necessitates a discussion on the ethical risks emanating from these use cases. The second section then provides insight into how FRT operates, stating that FRT “primarily seeks to accomplish three functions- facial detection, feature extraction, and facial recognition”. It identifies two formats in which FRT is used: 1:1 FRT systems (verification of identity) and 1:n FRT systems (identification). Recognising the growing use of FRT throughout the world, and especially in India, the paper lists the generation of vast amounts of facial image and video data, advancements in image recognition technology, and the ubiquitous presence of closed-circuit television (CCTV) cameras as some of the probable causes of this uptick.
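The difference between the two formats can be sketched in a few lines of Python. This is purely an illustrative simplification and is not drawn from the paper: the toy embeddings, the similarity threshold, and the function names below are all assumptions made for the example; real FRT systems compare high-dimensional feature vectors produced by a trained model.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (illustrative stand-in)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, enrolled, threshold=0.8):
    """1:1 FRT (verification): does the probe face match one enrolled identity?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    """1:n FRT (identification): which gallery identity, if any, best matches?"""
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# Toy two-dimensional "embeddings" standing in for real feature vectors.
gallery = {"alice": [1.0, 0.0], "bob": [0.0, 1.0]}
probe = [0.9, 0.1]

print(verify(probe, gallery["alice"]))  # 1:1 check against a single identity
print(identify(probe, gallery))         # 1:n search across the whole gallery
```

The sketch also makes the risk asymmetry concrete: a 1:1 check answers one yes/no question, whereas a 1:n search scans every enrolled person, which is why identification use cases attract greater scrutiny.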

The paper then splits FRT use into two categories: non-security and security. Examples of non-security use include verification and authentication of an individual's identity, providing greater ease of access to certain services (such as contactless onboarding at airports), or easing usability (such as unlocking a smartphone). Examples of security use, on the other hand, include general law and order functions such as investigation, identification of missing persons, identification of persons of interest to law enforcement, monitoring of crowds, and screening of public spaces for violations of masking protocols during the COVID-19 pandemic. Within this category there are further sub-categories: automated FRT (such as matching persons against witness sketches or an existing set of suspects) and live FRT (such as monitoring for crowd control). Here, the paper states that “(e)ven in surveillance, it is the use of live FRT, which is increasingly being debated from legal and ethical standpoints, globally”. This statement implies that non-live FRT use is less concerning; however, such an assertion would be incorrect.

The third section of this part contains illustrative examples of FRT use in India and abroad. The fourth section discusses the risks associated with the use of FRT systems and categorises them into design-based risks and rights-based risks.

Design-based risks:
- Inaccuracy due to technical factors
- Inaccuracy due to bias caused by underrepresentation
- Inaccuracy due to lack of training of human operators
- Inaccuracy due to glitches or perturbations

Rights-based risks:
- Issues of informational autonomy
- Threat to non-participants in deployment of FRT systems
- Legal thresholds applicable to FRT systems as per the Supreme Court’s decision in Puttaswamy v Union of India (the right to privacy decision)
- Anonymity as a facet of privacy
- Security risks due to data breaches and unauthorised access
- Accountability, legal liability and grievance redressal
- Opaque nature of FRT systems

The last section of this part looks at how FRT regulation is evolving in jurisdictions such as the European Union, the United Kingdom, the United States, Australia, and Canada.

Part 2

The first section in this part gives an overview of the DigiYatra scheme, a biometric boarding system involving the authentication and creation of a digital identity for a passenger and the subsequent verification of this identity at different checkpoints in an airport. It states that the scheme is “purely voluntary” and offers alternatives at all stages. The second section lists the potential benefits of the scheme, such as lower congestion at airports; a seamless, paperless, and contactless passenger experience; and lower operational costs and enhanced civil aviation capabilities.

The third section discusses certain legal aspects of the scheme, including how the data privacy of users will be ensured, how Aadhaar-based authentication will be carried out under the scheme, and how information security will be maintained.

The fourth section contains an analysis based on the application of the Responsible AI principles to the scheme, as well as recommendations arising from that analysis, such as including specific security-based exceptions to ensure that the principle of privacy and security is not hampered by broad national security requirements. The fifth section contains actionable recommendations to ensure responsible use of FRT in future applications, with specific recommendations on governing legislation and policy, developers and vendors of FRT systems, procurement, and impacted consumers. The recommendations made are expansive, and we will be responding to them specifically in our consultation response.

Our concerns

It is commendable that the NITI Aayog has published this paper to specifically address the concerns surrounding the use of FRT in India. However, several concerns remain. The paper fails to adequately address the ethical issues surrounding the use of FRT by law enforcement agencies, and it suggests “explainable FRT systems” as a solution to issues of bias and inaccuracy without considering whether such a solution can be implemented at scale. We will be sending our response and recommendations to the NITI Aayog as part of the consultation process to highlight these and other outstanding concerns.

Published By
Anushka Jain

Policy Counsel, IFF
