Emotion Recognition Technology: An Explainer

tl;dr

In this explainer, we look at what emotion recognition technology is and how it works, the issues raised by its use by different authorities, the rights at stake, and the laws (if any) in place to regulate its use and misuse.

What is emotion recognition technology?

In the short-lived but extremely engaging television series “Lie to Me”, the protagonist is the world's leading deception expert, who studies facial expressions and involuntary body language to identify when a person is lying. He does this by studying “micro-expressions”: involuntary facial expressions that appear on a person's face for a very short time and give away their actual feelings before the person masks them in an attempt to deceive or lie.

The protagonist of this show was based on the real-life work of American psychologist Paul Ekman, who is renowned for his theory of universal emotions, which holds that, “(o)f all the human emotions we experience, there are seven universal emotions that we all feel, transcending language, regional, cultural, and ethnic differences”. The theory identifies seven universal facial expressions corresponding to these emotions, namely anger, disgust, fear, surprise, happiness, sadness and contempt, and it is the basis for most of the recent development in artificial intelligence (AI) based emotion recognition technology.

Emotion recognition technology uses AI to identify and categorise emotions into these seven universal emotions, or a combination thereof, based on the facial expressions it perceives in the subject, and it is used in conjunction with facial recognition technology. We recently came across this technology when the Lucknow Police announced its intention to use it to track expressions of “distress” on the faces of women who come under the gaze of AI-enabled cameras in public places. The cameras would then automatically alert the nearest police station even before the woman in question takes any action to report an issue herself. Another troubling instance of the use of this technology came last year, when Canon Information Technology, a Chinese subsidiary of Japanese camera maker Canon, unveiled a workspace management system that only allows smiling employees to enter the office and book conference rooms.
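To make the pipeline concrete, here is a minimal, hypothetical sketch of how such a system might be wired together: a face detector (OpenCV's stock Haar cascade) finds faces in a camera frame, and each face crop is passed to an emotion classifier. The `classify_emotion` function and the alert threshold are placeholders invented for illustration; real deployments use proprietary models, and nothing here reflects any specific vendor's implementation.

```python
# Hypothetical sketch of an emotion recognition pipeline.
# The face detector is OpenCV's stock Haar cascade; the emotion
# classifier is a placeholder -- real systems run proprietary deep
# learning models trained on labelled facial-expression datasets.
import cv2

# Ekman's seven universal emotion categories
EMOTIONS = ["anger", "disgust", "fear", "surprise",
            "happiness", "sadness", "contempt"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_crop):
    """Placeholder: a real system would run a trained classifier here
    and return a score for each of the seven emotion labels."""
    return {label: 1.0 / len(EMOTIONS) for label in EMOTIONS}

def process_frame(frame):
    # Detect faces in a grayscale version of the frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1,
                                           minNeighbors=5)
    for (x, y, w, h) in faces:
        scores = classify_emotion(gray[y:y + h, x:x + w])
        # A deployment like the one described above would raise an
        # alert when a "distress"-like score crosses some threshold;
        # the label and the 0.8 cut-off here are arbitrary assumptions.
        if scores.get("fear", 0) > 0.8:
            print("alert: possible distress detected")
```

Note how much of such a system is a design choice rather than settled science: the label set, the training data behind the classifier, and the threshold at which an alert fires. Each of these is a point where the accuracy and bias problems discussed below enter the pipeline.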

Why is it an issue?

“Emotion-recognition systems share a similar set of blueprints and founding assumptions: that there is a small number of distinct and universal emotional categories, that we involuntarily reveal these emotions on our faces, and that they can be detected by machines.” However, the biggest criticism levelled against the use of emotion recognition technology is that facial expressions do not accurately correspond to a person's emotional state. A number of factors play an important role in displaying how a person is feeling, including body language, tone of voice, changes in skin tone and personality, as well as the context in which these emotions are generated and expressed. These factors, however, cannot be recognised by the technology, which limits its accuracy. It is also important to note here that while facial recognition technology, which is used in conjunction with emotion recognition technology, can theoretically be developed to be more accurate, complete accuracy in emotion recognition technology cannot be achieved. The game linked here will help you experience how emotion recognition works and how easy it is to hoodwink the technology.

This is because the science behind emotion recognition, i.e. the theory of universal emotions itself, has been at the receiving end of criticism. After reviewing more than 1,000 studies, a group of scientists brought together by the Association for Psychological Science concluded that the relationship between facial expression and emotion is nebulous, convoluted and far from universal. Emotion recognition also stems, in part, from the problematic pseudo-science of physiognomy, which purports to judge a person's character from their facial features or expressions. The idea behind physiognomy was that, “if you looked like an animal, say, a sheep, you also had its personality”. This philosophy of essentially judging a book by its cover was seen to be wildly racist and was largely discredited in the 15th century. However, a few fringe “experts” continued to work on it and developed linkages with criminology, such as the criminologist Cesare Lombroso, who “argued that you could pick a criminal out of a crowd by analyzing their features”. (Read a brief history of physiognomy here.)

However, the use of emotion recognition is still picking up pace, and the emotion detection industry is projected to almost double from $19.5bn in 2020 to $37.1bn by 2026. Use cases for emotion recognition have been cropping up consistently in the past year, such as use by/for:

  1. Police;

  2. Hiring processes;

  3. Workplace surveillance;

  4. Classroom monitoring; and

  5. Mass surveillance.

However, the use of emotion recognition technology by the Police or similar agencies has been criticised for displaying racial bias. The use of emotion recognition has also been criticised when utilised for hiring purposes, as it leads to certain candidates, such as those who are visibly nervous or conversing in their second language, being unfairly disadvantaged.

Do we have a right to privacy for our emotions?

The question that arises next is: are emotions protected under the right to privacy? Privacy includes bodily integrity, personal autonomy, informational self-determination, protection from state surveillance, dignity, confidentiality, protection from compelled speech, and the freedom to dissent, move or think. In short, the right to privacy has to be determined on a case-by-case basis. If we assume that privacy is about exercising the choice to withhold information which others have no need to know, then emotions can be categorised as protected under the right to privacy.

Additionally, under the draft Personal Data Protection Bill, 2019, biometric data, which is classified as sensitive personal data, is defined as, “facial images, fingerprints, iris scans, or any other similar personal data resulting from measurements or technical processing operations carried out on physical, physiological, or behavioural characteristics of a data principal, which allow or confirm the unique identification of that natural person”. Since emotion recognition works by technically processing facial images and other behavioural characteristics, the data it generates would fall within this definition. Thus, it can be surmised that we do have a right to privacy for our emotions, and any intrusion into it, at least by the State, has to satisfy the thresholds of legality, necessity and proportionality laid down by the Hon'ble Supreme Court of India in the Right to Privacy judgment.

Published By
Anushka Jain
Policy Counsel, IFF