
Emotion recognition – The good, the bad, and the ugly
Emotion detection and recognition is a $20 billion industry. But is it being used for social good, to emotionally manipulate consumers, or even to embed prejudice into devices?
Emotion recognition technology has the potential to assist those on the autistic spectrum; it can allow companies access to people’s emotional lives; it can also embed human prejudices relating to race and sexuality into everyday devices.
How this technology is developed and used, and what facet is prioritised, is in the hands of researchers and big business.
But are they prioritising the good, the bad, or the ugly side of this technology?
What is emotion detection and recognition?
Emotion detection and recognition (EDR) is a form of artificial intelligence (AI) in which computers scan faces and machine learning algorithms interpret the facial expressions they find, assigning each one an emotional label.
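In practice, most EDR systems follow the same rough pipeline: detect a face in an image, crop and normalise it, then pass it to a trained classifier that assigns one of a fixed set of emotion labels. The Python sketch below illustrates that flow under some assumptions: the face detector is OpenCV's stock Haar cascade, while the 48×48 input size and the `classify_face` placeholder stand in for whatever trained model a real system would use.

```python
# A minimal sketch of an emotion detection and recognition (EDR) pipeline:
# 1) locate a face in an image, 2) crop and normalise it, 3) pass it to a
# classifier that outputs one of a fixed set of emotion labels.
import cv2
import numpy as np

# The six "basic" emotions discussed in the article.
EMOTIONS = ["anger", "surprise", "fear", "disgust", "sadness", "happiness"]

# OpenCV's bundled frontal-face Haar cascade.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_face(face: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would run a trained model here and return
    # a probability per emotion. Random scores keep the sketch self-contained.
    return np.random.dirichlet(np.ones(len(EMOTIONS)))

def detect_emotions(image_path: str):
    """Return a (bounding box, predicted emotion) pair for each face found."""
    image = cv2.imread(image_path)
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Step 1: find candidate face regions.
    faces = face_detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)

    results = []
    for (x, y, w, h) in faces:
        # Step 2: crop the face and normalise it to the classifier's input size.
        face = cv2.resize(grey[y:y + h, x:x + w], (48, 48)).astype(np.float32) / 255.0

        # Step 3: score the face against each emotion label and keep the best.
        scores = classify_face(face)
        results.append(((x, y, w, h), EMOTIONS[int(np.argmax(scores))]))
    return results
```

In a deployed system the placeholder classifier would be a model trained on labelled facial expression data, but the overall shape of the pipeline stays the same.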
The research on emotion detection was pioneered by American psychologist Paul Ekman, who proposed that deceit could be detected by analysing people's micro-expressions.
He argued that certain emotions, such as anger, surprise, fear, disgust, sadness, and happiness, are universal and show in micro-expressions before a person has time to hide them.
Ekman's research was used to develop the SPOT program, which the US Transportation Security Administration used from 2007 to identify potential terrorists.
SPOT later proved problematic following reports that the program was being applied arbitrarily or, worse, used for racial profiling.
However, the fundamentals of this research have been developed further by researchers like Rana el Kaliouby, and advances in AI capabilities mean that computers can learn to recognise emotions from people’s facial expressions.
This, according to Kaliouby, has the potential to transform a cold, detached connection to devices into an emotionally responsive one.
Lisa Feldman Barrett, Professor of Psychology at Northeastern University, warns that the technology does have its limitations.
Her research found that emotions are extremely complex and are expressed with subtle cultural variations, making them difficult for an algorithm to learn.
The good…
Kaliouby’s research led her to develop an emotional hearing aid for children with Asperger’s.
The technology she has developed reads cognitive and emotional states from another person's face and suggests appropriate ways of responding – giving people with Asperger's the opportunity to engage empathetically.
For those individuals and families who struggle to experience empathy within their relationships, this technology has the potential to be a real lifeline to meaningful connection.
However, this kind of niche application – whilst providing a true social good – does not generate the same financial returns as the technology's marketing applications.
The bad…
Despite the cultural complexities of emotions, and despite Kaliouby admitting that "the technology is not 100% foolproof", companies like Facebook have taken an interest in the technology and its applications.
Andrew McStay, Reader in Advertising and Digital Media at Bangor University, found that Facebook has plans to tailor content according to users' emotional responses.
He also found that a sample of the general public had concerns about companies reading their emotions, with over 50% saying they were not happy with any form of emotion recognition technology.
With other companies like Microsoft, IBM, Amazon, and Apple also investing in emotion recognition technology, the likelihood of this tech meeting public resistance is potentially high.
And the ugly…
A recent study by Inioluwa Deborah Raji and Joy Buolamwini found that facial recognition software (a key component of emotion recognition) from several major companies had racial and gender biases embedded in its algorithms – with particular difficulty recognising black women.
When presented with evidence of these biases, the companies took steps to improve the accuracy of their software.
However, Raji and Buolamwini warn that:
[T]he potential for weaponization and abuse of facial analysis technologies cannot be ignored nor the threats to privacy or breaches of civil liberties diminished even as accuracy disparities decrease.
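To make the problem concrete, the sketch below shows the general idea behind this kind of bias audit (the data and group labels are purely illustrative, not taken from the study): rather than reporting a single aggregate accuracy figure, a classifier is scored separately for each demographic group so that disparities between groups become visible.

```python
# A minimal sketch of a disaggregated accuracy audit, assuming we already have
# a model's predictions, the true labels, and a demographic group tag for each
# example. All data here is illustrative only.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return overall accuracy and a per-group breakdown."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_group

# Toy data: a single aggregate figure can hide a large gap between groups.
predictions = ["happy", "happy", "sad", "happy", "sad", "happy", "sad", "happy"]
labels      = ["happy", "sad",   "sad", "happy", "sad", "sad",   "sad", "happy"]
groups      = ["A",     "B",     "A",   "A",     "B",   "B",     "B",   "A"]

overall, per_group = accuracy_by_group(predictions, labels, groups)
print(f"overall accuracy: {overall:.2f}")   # 0.75
for group, acc in sorted(per_group.items()):
    print(f"group {group}: {acc:.2f}")      # A: 1.00, B: 0.50
```

On this toy data, the overall figure of 75% hides the fact that the model is right every time for one group and only half the time for the other – exactly the kind of disparity an aggregate benchmark never reveals.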
Equally concerning is a study revealing how facial recognition software could be used to try to identify a person's sexual orientation, and the potential abuses that this kind of tech invites.
Meanwhile, companies like Faception claim to be able to read a person's entire personality from their facial expressions, with the aim of detecting hidden criminality so that people can be stopped before they commit a crime – à la the film Minority Report.
With the potential for exploitation and abuse, along with the reality that emotion recognition is not infallible, caution is needed as researchers and big business push forward with this technology.
As Andrew McStay suggests:
There’s nothing definitively wrong with technology that interacts with emotions. The question is whether they can be shaped to serve, enhance and entertain, rather than exploit.
The limitations of emotion recognition and the possible abuses are perhaps more important than this extraordinary technology’s potential for profit.
In order for it to be used for the benefit of all, accountability needs to be at the heart of its continued progress and development.