IBM Calls for Stringent Export Policies on Facial Recognition Tech
The information technology company asserted the U.S. should impose new export limits on facial recognition tech to repressive regimes
With great power comes a great prospect of abuse. Modern facial recognition technology is one such power.
On Friday, IBM Corp. urged the U.S. Department of Commerce to adopt new mechanisms restricting the export of facial recognition technology to repressive regimes, impeding the reach of police states.
In a statement, the information technology firm called for new export restrictions on “the type of facial recognition system most likely to be used in mass surveillance systems, racial profiling or other human rights violations.”
IBM exited facial recognition and analysis software development earlier this year.
In a letter to the United States Congress in July, Chief Executive Officer Arvind Krishna announced the firm’s decision to cease the research and development of facial recognition technology, citing concerns about human rights violations and infringement of privacy.
“IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency”, wrote Krishna.
In its recommendations to the U.S. Department of Commerce, IBM urged a focus on the export of facial recognition technologies employing “1-to-many” matching, which is commonly used in law enforcement. “1-to-many” or 1:N facial recognition systems, according to IBM, are “most likely to be used in mass surveillance systems, racial profiling or other human rights violations.”
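To make the distinction concrete: a 1:1 system verifies that a probe face matches one claimed identity, while a 1:N system searches a probe against an entire enrolled database. The sketch below illustrates 1:N matching over face embeddings using cosine similarity; the embedding dimension, threshold, and data are hypothetical stand-ins, not any vendor’s actual pipeline.

```python
import numpy as np

def identify(probe: np.ndarray, gallery: np.ndarray, threshold: float = 0.6):
    """1:N search: return (index, score) of the best gallery match,
    or None if no enrolled identity scores above the threshold."""
    # Normalize embeddings so the dot product equals cosine similarity.
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe  # one similarity score per enrolled identity
    best = int(np.argmax(scores))
    return (best, float(scores[best])) if scores[best] >= threshold else None

# Hypothetical 128-dimensional embeddings for three enrolled identities.
rng = np.random.default_rng(0)
db = rng.normal(size=(3, 128))

# A probe close to identity 1 should match; unrelated noise should not.
match = identify(db[1] + 0.05 * rng.normal(size=128), db)
no_match = identify(rng.normal(size=128), db)
```

The key property driving IBM’s concern is visible here: the search cost and the surveillance reach both scale with the size of the gallery, so the same algorithm that checks a passport photo can, with a large enough database and camera network, scan entire populations.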
The firm has also urged regulating the export of “both the high-resolution cameras used to collect data and the software algorithms used to analyze and match that data against a database of images,” to impede the rapid collection of high-fidelity images and the ability of facial recognition algorithms to train on large datasets. In its blog, IBM has proposed limiting “the ability of certain foreign governments to obtain the large-scale computing components required to implement an integrated facial recognition system,” though it did not specify the governments in question.
In a July 2018 hearing, the Congressional-Executive Commission on China, the United States’ independent human rights monitoring agency, found that “Uyghurs and other primarily Muslim ethnic minorities in the Xinjiang Uyghur Autonomous Region (XUAR) have been subjected to arbitrary detention, torture, egregious restrictions on religious practice and culture, and a digitized surveillance system so pervasive that every aspect of daily life is monitored—through facial recognition cameras, mobile phone scans, DNA collection, and an extensive and intrusive police presence.”
An August 2020 report by the Guardian documented the human rights violations committed by the CCP against the Uyghurs, featuring a minute-long clip shot by a Uyghur model held in a Chinese detention centre under unsanitary and life-threatening conditions.
Over the last few years, facial recognition technology has been deployed so extensively for mass surveillance in China that it has become ubiquitous.
The number of facial recognition cameras in use grew at an exponential rate, from 176 million in 2017 to 626 million in 2020, according to a report from the Diplomat. The CCP has capitalized on the COVID-19 outbreak to expand the deployment of its mass surveillance system, subjecting residents to “unprecedented monitoring.”
If there is anything to learn from the tragic tales of state suppression of dissidents recounted by the media, it’s that innovations in technology must be subjected to ethical and moral scrutiny.
Technology isn’t inherently biased, but humans aren’t free of bias either. Biases are instilled, inherited, and learned; and we’re no strangers to the fact that algorithms, too, can exhibit bias.
AI and ML algorithms aren’t separate entities, but rather, a reflection of who we are, and our view of the world.
We drive the future of the technology we innovate, and with the same hands that write code, we’re empowered to write the future we step into.