ICE Strikes a Deal With Controversial Facial Recognition Firm Clearview AI
U.S. Immigration and Customs Enforcement paid $224K for “Clearview Licences” to investigate online child exploitation cases
Privacy rights advocates woke up to find their nightmares had taken form overnight.
The U.S. Immigration and Customs Enforcement (ICE) signed a contract with Manhattan-based facial recognition firm Clearview AI last week despite unremitting uproar from privacy activists protesting its data scraping practices.
ICE paid $224,000 to license the firm’s technology, according to a tweet from tech watchdog Tech Inquiry, which has also filed a Freedom of Information Act request with ICE for the contract in question, including emails associated with the award, any modifications to it and any sub-awards.
Clearview AI claims it can identify a person with 99.6% accuracy by matching an image of them against its vast repository of over 3 billion images scraped from across the open web.
An ICE spokesperson told Business Insider that Clearview AI’s facial recognition software is used by Homeland Security Investigations’ Child Exploitation Investigations Unit to investigate child exploitation and related cybercriminal activity.
“HSI’s Child Exploitation Investigations Unit employs the latest technology to collect evidence and track the activities of individuals and organized groups who sexually exploit children using websites, chat rooms, peer-to-peer trading, and other internet-based platforms,” said the spokesperson.
“This is an established procedure that is consistent with other law enforcement agencies.”
Another federal purchase order from December 2019 uncovered by Tech Inquiry revealed that Clearview entered into a contract with the U.S. Air Force worth $50,000 responding to a call for ‘innovative defense-related dual-purpose technologies/solutions with a clear airforce [sic] stakeholder.’
Privacy rights advocates have long been sceptical of the pairing of third-party facial recognition tech and law enforcement, but Clearview AI became a cause célèbre after a New York Times report stressed the grave threat the tool posed to privacy, describing it as ‘a tool that could end your ability to walk down the street anonymously’.
Surveilling or Unveiling?
An array of lawsuits lined up against Clearview shortly after the NYT’s January article went live. The American Civil Liberties Union is currently suing the company for allegedly collecting and storing citizens’ biometric data without notice or consent and selling access to law enforcement, in violation of the Illinois Biometric Information Privacy Act (BIPA).
The facial recognition company intends to mount a First Amendment defence in court and has retained Floyd Abrams, one of the preeminent First Amendment lawyers in America, to represent it.
The ACLU is only the start of the list of governmental and non-governmental bodies sceptical of Clearview. Last month, privacy regulators in the U.K. and Australia launched a joint international probe into the firm.
“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview AI Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” read a July 9 statement on the ICO’s website.
The European Data Protection Board has said Clearview AI’s services are likely incompatible with EU data protection provisions, according to Reuters.
“The EDPB is therefore of the opinion that the use of a service such as Clearview AI by law enforcement authorities in the European Union would, as it stands, likely not be consistent with the EU data protection regime,” said the EDPB.
The probe deepens, and Clearview has no one to blame but its own callousness. The February 26 data breach has likely contributed to the mistrust the firm now faces.
In retrospect, Clearview AI claims to strengthen law enforcement with a facial recognition service built on stockpiles of data collected through practices of questionable legality, and it has proven unable to protect that very data. The irony writes itself.
In ‘Clearview AI: The Debate of Privacy vs Security’ I wrote that privacy and security tend to be seen as a zero-sum game, but compromising individual privacy to strengthen societal security is the wrong formula in a lengthy system of equations. The inverse proportionality between privacy and security is an illusion, and it is not paradoxical to conclude that both can be consolidated.
The debate over Clearview AI, privacy and security is not over just yet, and multiple questions remain unanswered. It would be foolhardy to assume one could arrive at a conclusion with just a fraction of the picture.
However, it seems that we already have the ICE’s verdict.
What is yours?