It's 2019 and robots are everywhere. They're in our pockets, our cars, our coffee makers, and yes, even our toilets. But there are some jobs robots just aren't cut out for, and our societal inability to tell the difference has caused some serious problems.
Technology evolves pretty fast. In fact, the only thing that evolves faster is our ability to integrate new technology into our daily lives. Smartphones have almost become a greater necessity than oxygen in their short decade of existence, virtual assistants are bringing the robot helper dream into reality for millions of households worldwide, and artificially intelligent software is automating more jobs than ever before.
However, while robots are particularly adept at answering trivia questions and reporting the weather, a lot of jobs require empathy, common sense, and other personality traits found exclusively in humans. Sometimes, in our drive to be as efficient as possible and use tech to ease our own workloads, we forget that.
Hiring New Employees
Looking for a job is an incredibly stressful process, but hiring someone can be just as intense. After reading through hundreds of resumes and conducting dozens of interviews, it's understandable that recruiters would eventually turn to robots to lighten the load. Unfortunately, as it became apparent last year, relying on robots to be unbiased is a much bigger ask than you might realize.
According to Reuters, Amazon had to scrap its artificially intelligent recruiting software for being biased against women candidates. The software, originally developed in 2014, was trained on resumes submitted by applicants over a 10-year period. Because of the tech industry's infamous gender gap, most of those candidates were men, which led the software to penalize candidates with the word “women's” anywhere on their CV, as in “women's chess club captain.”
This is one of the largest obstacles for artificial intelligence. Because AI learns from societal patterns and data, and our society has a tendency to be a bit sexist, systems like Amazon's recruiting software inherit those sexist patterns. To make this kind of software practical, developers need to find a way to keep bias out of the training process. Unfortunately, we haven't figured out how to do that with humans yet, let alone robots, so maybe we should stick to good old-fashioned human hiring.
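To see how a model picks up bias from skewed data, here's a toy sketch. The data and the scoring rule are invented for illustration (Amazon's actual system was far more complex): a naive resume scorer trained on historically imbalanced hiring outcomes ends up assigning a negative weight to the token “women's”, even though the token says nothing about ability.

```python
from collections import Counter

# Invented historical data: resumes (as bags of words) with hire outcomes.
# Because past hires skewed male, the token "women's" appears mostly in
# rejected resumes -- a product of the imbalance, not of job performance.
history = [
    (["python", "leadership", "chess"], 1),
    (["python", "sql", "captain"], 1),
    (["java", "leadership"], 1),
    (["python", "women's", "chess", "captain"], 0),
    (["sql", "women's", "leadership"], 0),
]

def token_scores(history):
    """Naive per-token score: P(hired | token present) minus the base rate."""
    base = sum(label for _, label in history) / len(history)
    hired, total = Counter(), Counter()
    for words, label in history:
        for w in set(words):
            total[w] += 1
            hired[w] += label
    return {w: hired[w] / total[w] - base for w in total}

scores = token_scores(history)
# The scorer has "learned" that "women's" is a negative signal,
# purely from the skew in its training data.
print(scores["women's"])  # negative
print(scores["python"])   # positive
```

The point of the sketch: nothing in the code mentions gender, yet the output is gender-biased, because the bias lives in the data the model was fed.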
Firing Employees
Getting fired sucks. Losing your source of income is enough to make some people seriously worried about what their future holds, and there's no robot that can replicate the human emotions needed to calmly and reasonably guide them through the transition. But that doesn't mean some companies haven't tried.
One story in particular illustrates the problem with an automated firing system. The saga saw Mr. Ibrahim Diallo fired from his job without reason or explanation. When he arrived at work, his key card wasn't functioning properly. After being let in by a security guard, he found himself locked out of all work stations and computers. He talked to his manager, who assured him it was a mistake and that he was not fired. Shortly after that, two men arrived at his office with instructions to remove him from the building.
“I was fired. There was nothing my manager could do about it. There was nothing the director could do about it. They stood powerless as I packed my stuff and left the building,” wrote Diallo in a blog post detailing the situation.
Diallo's company employed an automated system that, once an employee is marked as fired, triggers a series of processes: deactivating key cards, revoking access to work stations, and alerting security that a now-ex-employee needs to be removed. Each process begins as soon as the previous one ends, with zero human interaction necessary.
“The system was out for blood and I was its very first victim.”
However, as Diallo's manager explained to him, he hadn't actually been fired. A former manager had forgotten to file his paperwork correctly, causing his contract to lapse and the system to list him as a former employee. The “firing” process then kicked in, and there was nothing anyone could do. It took three weeks for the company's managers and directors to undo the mistake, costing Diallo three weeks' salary and everyone's trust in the automated firing system.
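A minimal sketch of the kind of no-human-in-the-loop pipeline Diallo describes (all names and steps here are invented for illustration, not his employer's actual system): every offboarding step keys off a single contract flag, so a paperwork lapse triggers the exact same cascade as a real termination.

```python
# Hypothetical offboarding automation. The flaw: a lapsed contract and a
# genuine firing look identical to the system, and no step waits for a
# human sign-off before running.

STEPS = ["deactivate key card", "revoke workstation access", "alert security"]

def sync_employee(name, contract_active):
    """Return the offboarding actions the system would run for this employee."""
    if contract_active:
        return []  # nothing to do while a valid contract is on file
    # Each step begins as soon as the previous one ends.
    return [f"{step}: {name}" for step in STEPS]

# A forgotten contract renewal produces the full firing cascade:
print(sync_employee("Ibrahim", contract_active=False))
```

The lesson of the story isn't smarter code; it's a human checkpoint, such as requiring a manager's confirmation before the cascade is allowed to start.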
Delivering Bad News
Doctors are some of the most well-trained professionals in the world, but technology has certainly improved their ability to do their job. In fact, robotic surgeries have become fairly common in hospitals around the country, helping skilled surgeons perform even more life-saving procedures. However, bedside manner is certainly a tool in a good doctor's utility belt, and robots just aren't there yet. We just wish someone had told that to the following doctor…
At the Kaiser Permanente Medical Center in Fremont, California, one doctor thought it would be a good idea to deliver some bad news to a patient by way of a robot fitted with a video-link screen. After the doctor told the 78-year-old man and his family that he had only a few days to live, the family expressed their extreme and understandable frustration at what they called “an atrocity of how care and technology are colliding.”
Robotic surgery is one thing. But telling someone that they're going to die is not a job for a robot, no matter how creepily realistic some of them get. Human empathy, compassion, and even touch aren't yet replicable in robots, and they are absolutely necessary when it comes to these kinds of medical duties.
Judging and Sentencing
Much like hiring new candidates, firing old employees, and practicing medicine on scared patients, being a judge is a particularly nuanced profession that requires a human touch to be done right. However, with the criminal justice system experiencing notable problems with speed, employing an algorithm to handle the smaller details can be an attractive alternative to month-long wait times. It also, however, has some pretty serious consequences if no one checks up on it.
In a ProPublica report titled Machine Bias, researchers looked into a computer program used in Florida to evaluate criminal defendants on their likelihood of committing another crime. They found that not only were these assessments wildly inaccurate, they also skewed significantly along racial lines, falsely flagging black defendants as future criminals at nearly twice the rate of white defendants.
These risk assessments are used in at least 10 US states to help judges decide on everything from bond amounts to the length of prison stays. Had ProPublica not taken the initiative to look into them, the system would be an even more biased mess.