August 21, 2017
By 2020, it’s projected that 85 percent of customer interactions will be managed without a human. Machine learning already performs a wide range of functions in our everyday lives. Artificial intelligence (AI) algorithms support trend analysis in industries such as finance and healthcare. AI helps us navigate busy roads so we reach our destinations quickly and safely, and in the near future it may drive our cars for us autonomously.
AI is also transforming Google Search to provide better answers to search queries. Many have heralded AI as an extremely disruptive technology and have predicted that it will fundamentally change the way we live our lives – something akin to the industrial revolution. However, amidst the wonderful predictions for how AI will revolutionize industries and save lives, there are also concerns about what could happen as AI technology advances.
According to the technological singularity hypothesis, artificial intelligence could one day evolve to a super intelligence that surpasses the intellect of the greatest human minds. When this happens, technological advancement will become a runaway train, dramatically altering the face of the Earth and potentially annihilating the human race.
The fear of a technological singularity and its cataclysmic aftermath has been fueled by popular culture, with films like I, Robot and The Terminator achieving notoriety. While the prospect of Skynet taking over the Earth remains firmly in the fantasy realm of sci-fi, great thinkers like Elon Musk, Bill Gates and Stephen Hawking have suggested that rapidly learning AI machines could cause more harm than good.
In a Q&A session, Stephen Hawking used the example of an AI-powered hydroelectric project. In flooding an anthill, the AI would be acting out of competence, not out of malice – but if we replace the ants with humans, then we could have a serious problem. While there certainly are concerns about militarized drones going rogue and machines causing injuries in the workplace, one under-discussed fear surrounding AI technology is information security.
Telemedicine allows doctors and patients to communicate using videoconferencing and email. With devices that relay a patient’s vital signs, chronic conditions can be managed remotely. In the field of telemedicine, AI can be used to analyze data and help doctors make better decisions.
In Chesterfield, Missouri, a team of doctors and nurses works in a Virtual Care Center, providing remote support to intensive-care units and emergency rooms in smaller hospitals. With data on ICU patients constantly being transmitted, doctors in the Virtual Care Center can immediately spot problems and request assistance at the remote location.
While this has led to a 35 percent decrease in the average length of stay for patients and 30 percent fewer deaths than expected, security concerns may plague the mass adoption of telemedicine. Author Michael Magrath states in an eSignLive blog:
“There is danger, however, that the exchange of data between a patient at home and the remote clinician is lacking even the very basic security mechanisms put in place by other heavy lifters in the digital services community such as banking and personal finance.”
In addition to the threat of data leakage, Magrath suggests that imposters posing as physicians could be another problem to overcome.
In the aftermath of the 9/11 terrorist attacks, there was a call for fingerprint recognition to be widely implemented. With so much biometric data on record, it’s important to consider not only what would happen if the data were leaked, but also what would happen if it were modified.
In the case of telemedicine, the accidental or malicious modification of someone’s health data could lead to a misdiagnosis and improper treatment – putting lives at risk. When AI learning is used to forecast financial markets, modified data could have catastrophic consequences for investors and entire economies.
AI algorithms, just like other software, are susceptible to cyberattacks. Before we allow artificial intelligence to autonomously make high-stakes decisions, it’s critical that cyber security evolves to defend against these attacks.
Fortunately, AI is being used to enhance cyber security.
In April 2016, researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory announced an AI-powered system for detecting malware. Utilizing both human expertise and machine learning, the system can detect 85 percent of cyberattacks – roughly three times better than previous benchmarks.
First, the system autonomously combs through data and identifies suspicious activity by clustering the data into meaningful patterns. Next, the data is given to human experts, who confirm whether or not the suspicious events were real attacks.
This feedback is incorporated into the system’s algorithm, making it more effective at identifying threats in the future. Google is also experimenting with AI to enhance security.
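The loop described above – flag suspicious outliers, have an analyst confirm them, then fold that feedback back into the detector – can be sketched in a few lines. This is a minimal illustrative toy, not the actual MIT system: the event scores, the z-score outlier rule, and the threshold-nudging heuristic are all assumptions made for the example.

```python
# Toy human-in-the-loop anomaly detection, loosely inspired by the
# feedback loop described above. All values and heuristics are
# illustrative assumptions, not the real MIT implementation.

def flag_suspicious(events, threshold):
    """Flag events whose activity score deviates strongly from the mean."""
    scores = [e["score"] for e in events]
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    std = var ** 0.5 or 1.0  # avoid division by zero
    return [e for e in events if abs(e["score"] - mean) / std > threshold]

def refine_threshold(threshold, flagged, analyst_labels):
    """Nudge the detection threshold using analyst feedback on flagged events."""
    false_positives = sum(1 for e in flagged if not analyst_labels.get(e["id"]))
    if flagged and false_positives / len(flagged) > 0.5:
        return threshold * 1.1   # too noisy: flag less aggressively
    return threshold * 0.95      # mostly real attacks: cast a wider net

# Hypothetical event stream; two events (ids 4 and 7) are anomalous.
events = [{"id": i, "score": s}
          for i, s in enumerate([1, 2, 1, 2, 50, 1, 2, 48])]
threshold = 1.0

flagged = flag_suspicious(events, threshold)
# The "analyst" confirms which flagged events were real attacks.
analyst_labels = {4: True, 7: True}
threshold = refine_threshold(threshold, flagged, analyst_labels)

print(sorted(e["id"] for e in flagged))  # → [4, 7]
print(round(threshold, 2))               # → 0.95
```

Because the analyst confirmed every flagged event as a real attack, the sketch lowers the threshold slightly so future sweeps catch more candidates – the same direction of adjustment the MIT researchers describe, though their system learns it with far more sophisticated models.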
Most people hate filling out CAPTCHA forms to prove they’re human. As a more convenient option, Google has been developing Invisible reCAPTCHA – an AI-powered system that determines your humanity without requiring you to interact with it.
Other AI-infused security solutions such as Harvest.AI, Darktrace and PatternEx are gaining recognition, and may prove invaluable when it comes to defending against the next generation of cyberattacks.
So long as cybersecurity continues to innovate and people feel safe, it’s unlikely that the potential security risks will limit the widespread consumer adoption of AI technologies.
Read more about advances in artificial intelligence at TechCo