Artificial Intelligence, News, Technology

Clearview AI: The Debate of Privacy vs Security

Abhinav Raj, Writer

Manhattan-based start-up Clearview AI, which provides facial recognition software to law enforcement, sparks controversy after a data breach

On February 26, 2020, Forbes reported that Manhattan-based Clearview AI’s databases had suffered a massive breach, potentially exposing the firm’s customer list, the number of searches each customer had made and the number of accounts each had set up.

The firm’s attorney, Tor Ekeland, downplayed the incident, saying that data breaches were a part of life in the 21st century; the firm had patched the flaw and was working to strengthen its security.

Data breaches have indeed been on the rise as more and more businesses come to rely on digital data. Clearview AI’s breach would not have been as disconcerting were it not for the nature of its business: the Manhattan-based firm allows clients to match faces against an estimated 3 billion photos indexed from the open web, a feat achievable only through the collection of sensitive personal information.

Clearview AI’s driving principle is to provide law enforcement with a powerful, efficient facial recognition tool to identify and track down perpetrators of violent crimes, including but not limited to sex traffickers, terrorists, paedophiles and child molesters. It works by comparing uploaded images against its vast repository of over 3 billion publicly available images ‘scraped’ from millions of websites and social media platforms, with a claimed accuracy of 99.6%. Clearview is currently used by over 600 law enforcement agencies to track down criminals at large.
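Clearview’s actual pipeline is proprietary, but facial recognition systems of this general kind typically convert each face image into a fixed-length numeric “embedding” vector and then search a database for the most similar vector. The sketch below illustrates that matching step only, with made-up identities and tiny hypothetical embeddings; the names, dimensions and threshold are all assumptions for illustration, not Clearview’s method.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query: np.ndarray, database: dict, threshold: float = 0.9):
    """Return the identity whose embedding is most similar to the query,
    provided it exceeds the confidence threshold; otherwise None."""
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Hypothetical 4-dimensional embeddings (real systems use 128+ dimensions).
db = {
    "person_a": np.array([0.9, 0.1, 0.0, 0.1]),
    "person_b": np.array([0.0, 1.0, 0.2, 0.0]),
}
query = np.array([0.88, 0.12, 0.01, 0.09])  # embedding close to person_a
print(best_match(query, db))
```

In practice, a similarity threshold like the one above is what separates a confident match from a “no result” — which is also where claims such as a 99.6% accuracy rate would be measured.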

Permissible until Invasive

In an interview with CBS News, Clearview AI CEO Hoan Ton-That addressed public distrust in the handling of the application, stating that the technology would not be made available to the general public as long as he remained in charge. Ton-That further reassured that it is intended only as an after-the-fact research tool, not a 24/7 surveillance system. If history is any guide, however, technology more often than not strays from its intended use over time.

The conception of Clearview AI has stirred many ethical and moral debates surrounding its use, sparking multiple fires along the way. Recently, Twitter sent the firm a cease-and-desist notice stating that the app’s modus operandi of ‘scraping’ violated its terms of service; a few weeks later, Facebook, YouTube and Google followed suit.

Ton-That argued that the firm had a First Amendment right to public information; however, this defence was disputed by multiple lawyers.

“It’s really frightening if we get into a world where someone can say, ‘The First Amendment allows me to violate everyone’s privacy'”, said Tiffany C. Li, a technology attorney based in New Haven, Connecticut. 

Among the 600 law enforcement agencies using Clearview AI was the New Jersey State Police, but that changed when New Jersey Attorney General Gurbir S. Grewal became aware of the development. Grewal ordered state prosecutors to stop using the facial recognition application immediately until more information became available.

“I’m not categorically opposed to facial recognition technology. I think used properly, it can help us solve criminal cases more quickly. It can help us apprehend child abusers, domestic terrorists”, said Grewal to CBS.

“What I am opposed to is the wide-scale collection of biometric information and the use of it without proper safeguards by law enforcement”, he added. 

The firm has come under heavy fire for its scraping practices, which invite important discourse over their ethics. Earlier this year, a class-action lawsuit was filed against the company in Illinois, claiming it had violated the state’s Biometric Information Privacy Act (BIPA), which safeguards residents against the use of their biometric data without consent. Needless to say, the recent data breach has only added fuel to a fire sparked not long ago.

While Clearview AI is a ground-breaking tool that may redefine the ways of criminal investigation, strengthen law enforcement and ensure the prevalence of justice, the apprehensions surrounding its use and possible abuse are not unfounded.

CEO Hoan Ton-That asked law enforcement and the general public to put their trust in the firm, but the firm failed to keep their data secure — raising the question of whether an application like Clearview AI could do more harm than good, or whether such an application should exist at all. Firms like Clearview AI must realise that trust in 2020 is expensive, and when it is misplaced, individuals often pay a hefty price.

Privacy and security are often seen as a zero-sum game, but it does not have to be so. In the advancing world of tech, the future should be one focused on strengthening the security of both the individual and society, without compromising or threatening their privacy.

Perhaps the words of Marlon Brando will be remembered in a different, yet ever-relevant context: “Privacy is not something that I’m merely entitled to, it’s an absolute prerequisite.”
