IBM Shuts Down Facial Recognition Tech Development Due to Inaccuracy Issues
Tech giant IBM will no longer develop or sell facial recognition software for mass surveillance. The move came amid the protests that followed the death of George Floyd, which renewed concerns about how accurately face-scanning software handles race and gender, and about how police use facial recognition technology (FRT) to track demonstrators and monitor neighborhoods.
IBM CEO Arvind Krishna announced the decision in a letter to a group of Democrats working on police reform legislation in the United States Congress. Krishna questioned whether FRT should be deployed by the police at all, writing:
“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
Just recently, tests performed by Comparitech showed that Amazon’s Rekognition face recognition technology incorrectly matched more than a hundred U.S. and U.K. politicians with photos of arrested people.
Comparitech also noted a racial skew in Rekognition’s errors, stating that “out of the 12 politicians who were misidentified at a confidence threshold of 90 percent or higher, six were not white. That means half of the misidentified people were people of color, even though non-whites only make up about one-fifth of the U.S. Congress and one-tenth of the U.K. parliament.”
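The disparity Comparitech describes can be made concrete with a little arithmetic. The figures below come from the quote above; the over-representation comparison is only a back-of-the-envelope illustration, not part of Comparitech’s own analysis:

```python
# Figures reported by Comparitech for matches at a >= 90% confidence threshold.
misidentified_total = 12      # politicians misidentified
misidentified_nonwhite = 6    # of those, six were not white

share_nonwhite_errors = misidentified_nonwhite / misidentified_total  # 0.5

# Rough baselines cited in the article: non-whites make up about one-fifth
# of the U.S. Congress and one-tenth of the U.K. Parliament.
baseline_us = 1 / 5
baseline_uk = 1 / 10

print(f"non-white share of misidentifications: {share_nonwhite_errors:.0%}")
print(f"over-representation vs U.S. Congress: {share_nonwhite_errors / baseline_us:.1f}x")
print(f"over-representation vs U.K. Parliament: {share_nonwhite_errors / baseline_uk:.1f}x")
```

On these numbers, people of color appear among the false matches at roughly 2.5 times their share of the U.S. Congress and 5 times their share of the U.K. Parliament.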
Issues Surrounding FRT
In the meantime, FRT use cases keep multiplying: the technology is used by casinos, dating sites, law enforcement and security agencies, credit card companies, hotels, bars and restaurants, and even social media platforms, among others.
A survey of FRT policy and implementation issues from the Center for Catastrophe Preparedness and Response stressed that the benefits of FRT must be weighed against the possible adverse effects on subjects’ freedom and autonomy.
The report suggested that before deploying FRT, organizations should consider performance, evaluation, operation, policy concerns, and moral and political considerations.
According to the report, any FRT deployment should also address the new security risks it may create, the protection of stored data (gallery images, probe images, and the records associated with them), and the secure transmission of related information. The report concluded:
“There are good reasons to believe that it will still be some time before FRT will be able to identify ‘a face in the crowd’ (in uncontrolled environments) with any reasonable level of accuracy and consistency. It might be that this is ultimately an unattainable goal, especially for larger populations. Not because the technology is not good enough but because there is not enough information (or variation) in faces to discriminate over large populations—i.e. with large populations it will create many biometric doubles that then need to be sorted out using another biometric.”