IBM has announced that the company will no longer develop, research, or offer facial recognition technology, a massive step towards a less dystopian future.
Facial recognition has seen its fair share of controversy and backlash in recent years. From citywide bans to contentious launches, the technology has raised eyebrows and ruffled feathers across the political landscape, and rightfully so given the potentially dangerous nature of the software.
Now IBM, one of the oldest and most influential tech companies in the world, has drawn a line in the sand on the technology's development, setting the stage for a global discussion on its use.
IBM Sunsets Facial Recognition Technology
In a letter to Congress, Arvind Krishna, CEO of IBM, outlined a number of specific policy proposals that he believes will help the country achieve at least a modicum of racial equality. Among those policy proposals was a call to establish more “responsible technology policies,” in which he stated that the company “no longer offers general purpose IBM facial recognition or analysis software.”
“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.”
Krishna goes on to explain the value of technology like facial recognition and artificial intelligence in terms of its potential to keep citizens safe when used properly. Unfortunately, due to a lack of accuracy, the technology is simply not ready to be used to that end until meaningful oversight is in place.
“Vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”
Unfortunately, that responsibility has been largely ignored by the tech industry, which is why Krishna called for “a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” And we can all agree, it's long overdue.
The Problematic Nature of Facial Recognition
As is the case with most technology, facial recognition is far from perfect. And while a lack of perfection is fine when it comes to laptops and TVs, facial recognition technology requires at least near-perfect functionality to be even remotely useful. Unfortunately, that's not even close to the case.
Study after study after study has shown that facial recognition technology is notably biased along gender and racial lines. In fact, many of these software programs misidentify suspects more often than they correctly identify them. This means facial recognition technology has the potential to exacerbate the already questionable practices of law enforcement agencies towards people of color rather than improve them.
Unfortunately, whether in the pursuit of innovation or wealth, tech companies have been decidedly hesitant to welcome regulation into their industry. However, unlike with laptops and TVs, the wide-reaching implications and dangerous consequences of integrating facial recognition technology into law enforcement agencies make oversight less a matter of dollars and cents and more a matter of accountability.
Facial Recognition and Protesting
Although IBM's letter makes no mention of the Black Lives Matter movement, you'd be hard-pressed to convince anyone that this move wasn't primarily fueled by the police reform protests around the world. Given the dozens of questionable actions taken by law enforcement agencies in the US to dismantle protests, it's safe to assume that IBM recognized the potentially dangerous implications of facial recognition technology on that scale.
Misidentifying suspects is one of the more pressing concerns when it comes to facial recognition technology, but it's certainly not the only one, particularly given the state of police-citizen relations today. With combative protests taking the world by storm, police have shown little concern for appropriate operating procedure, and the use of facial recognition technology could make it much worse.
Seattle PD dragging mother out of her car while her 9 year old is in the backseat. Officer tells her she has “multiple counts of assaulting an officer. You assaulted me and you assaulted another officer” at a protest couple days ago.
Cops are seeking revenge on protestors. pic.twitter.com/3aZGVowwLd
— DÆ (@daeshikjr) June 6, 2020
Concerns about being identified and prosecuted for protesting are very real in the movement, with organizers imploring protesters to limit social media sharing when people's faces are visible. There's even been a strong push by some to specifically blur faces and scrub metadata from images, in hopes of preserving the right to protest without fear of prosecution after the fact. Had facial recognition technology been more wholly implemented in law enforcement agencies before these protests, the risk could have been much higher.
Tech companies have taken action amid the global police reform protests in a number of ways – from limiting offensive content to donating to charitable causes that promote the voices of people of color. However, by scrapping an undeniably lucrative initiative in pursuit of a less divided world, IBM has raised the bar on what it means to be part of the movement.