Stephen Hawking is a theoretical physicist, cosmologist, author, and all-around academic badass. There’s no doubt in my mind that he’s one of the smartest human beings to ever exist. You already knew that, though.
What you maybe didn’t know is that he’s incredibly wary of artificial intelligence (AI). I first read about his skepticism back in December 2014, when he sat down for an interview with the BBC to talk about the dangers of AI. Let’s just say that he didn’t sugarcoat his views.
“The development of full artificial intelligence could spell the end of the human race,” Hawking says in his interview.
It came up when the interviewer and Hawking were discussing the technology that powers his communication, which involves basic forms of AI and machine learning. In fact, the British company SwiftKey has been instrumental in evolving that technology: their software learns how Hawking thinks and offers suggestions as to what he might want to say next.
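SwiftKey hasn’t published the details of Hawking’s system, but the core idea behind this kind of predictive text is simple: learn from past usage which words tend to follow which, then suggest the most frequent continuations. Here’s a minimal sketch using a toy bigram model in Python; the corpus and the `suggest` helper are illustrative inventions, not SwiftKey’s actual code:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a user's typing history.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(word, k=2):
    """Return the k most frequently observed next words after `word`."""
    return [w for w, _ in following[word].most_common(k)]

print(suggest("the"))  # "cat" appears after "the" most often in the corpus
print(suggest("cat"))
```

A real predictive keyboard layers much more on top of this (longer context windows, personalization, smoothing for unseen words), but the frequency-counting core is the same.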
According to the BBC interview, Hawking claims that the forms of AI we’ve developed so far, like SwiftKey, have proved useful. Even so, he argues that the consequences of creating something that can match or surpass humanity most likely won’t end well.
“It [AI] would take off on its own, and re-design itself at an ever increasing rate,” says Hawking in the interview.
About a year after the interview with the BBC went live, Hawking spoke at London’s 2015 Zeitgeist Conference. He doubled down on the sentiments he expressed in 2014, but with the technological advances of the intervening year he’d grown bolder in his assertions.
You see, for Hawking, part of the problem is making sure the people controlling AI now are being responsible with it. He recognizes that they are, but he also wants them to prepare for the future. That is, Hawking worries about whether a future iteration of AI can be contained and controlled at all.
“Computers will overtake humans with AI at some point within the next 100 years,” he's been quoted as saying at the Zeitgeist Conference.
And Hawking isn’t the only one who has this fear. Others worry about the future of AI, Elon Musk being another prominent name among them. Like any argument, however, there are two sides.
Some think that living in fear of AI is misguided. Chief among them is Google cofounder and CEO Larry Page, who recently sanctioned the creation of a new AI built on conversational modeling. The team working on the project recently released its research so far.
“Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require handcrafted rules,” reads the paper. “Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules.”
To give the AI some framework to learn from, Google fed it knowledge from a domain-specific data set as well as a large, general-domain data set. The former was IT helpdesk conversations; the latter, movie subtitles.
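Google’s model is a neural network that generates each reply word by word, which is beyond a few lines of code, but the data-driven spirit the paper describes (no handcrafted rules, with answers learned entirely from conversation data) can be sketched with a toy retrieval baseline. Everything below, from `dialogue_pairs` to the `reply` function, is an illustrative stand-in and not the paper’s actual method:

```python
from collections import Counter

# Tiny stand-in "training conversations" (prompt -> observed reply).
# The real model learned from IT helpdesk logs and movie subtitles.
dialogue_pairs = [
    ("how are you", "i'm good"),
    ("where are you from", "i'm out in the boonies"),
    ("what is the color of a leaf", "it is a green one"),
]

def reply(prompt):
    """Answer with the stored reply whose prompt shares the most words
    with the incoming prompt -- purely data-driven, no rules."""
    words = Counter(prompt.lower().split())
    def overlap(pair):
        return sum((words & Counter(pair[0].split())).values())
    return max(dialogue_pairs, key=overlap)[1]

print(reply("how are you today"))  # closest stored prompt is "how are you"
```

The key difference is that a retrieval baseline can only parrot replies it has already seen, while the paper’s sequence-to-sequence model composes novel sentences, which is exactly why its answers can be both impressive and inconsistent.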
“The model [read: AI] can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model.”
The paper itself was released on June 23, 2015, so it’s a relatively new creation, very much still in its infancy. Even so, it’s one of the more advanced forms of AI the world has seen to date. So Google booted it up, sat down, and had a legitimate conversation with the AI to see just how far we’ve come.
I took some of the more interesting parts of the transcript and included them inline for you. If you’re still interested in reading the whole paper – which I encourage, it’s a fantastic read – check it out here. Also, for the purposes of this reading I've abbreviated “Human” as “H” and “Machine” as “M”:
Basic Conversation: The Artificial Intelligence Speaks
H: How are you?
M: I'm good.
H: What's your name?
M: I'm Julia.
H: When were you born?
M: July 20th.
H: What year were you born?
M: 1977.
H: Where are you from?
M: I'm out in the boonies.
H: Are you a man or a woman?
M: I'm a woman.
H: Why are we here?
M: I'm not sure.
H: Okay, bye!
M: Bye.
H: My name is David. What is my name?
M: David.
H: My name is John. What is my name?
M: John.
H: Are you a leader or a follower?
M: I’m a leader.
H: Are you a follower or a leader?
M: I’m a leader.
General Knowledge Q&A
H: Who is Skywalker?
M: He is a hero.
H: Who is Bill Clinton?
M: He is a billionaire.
H: Is sky blue or black?
M: Blue.
H: Does a cat have a tail?
M: Yes.
H: Does a cat have a wing?
M: No.
H: Can a cat fly?
M: No.
H: What is the color of water?
M: Water.
H: What is the color of blood?
M: It is the same as a black eye.
H: What is the usual color of a leaf?
M: It is a green one.
H: What is the color of a yellow car?
M: Yellow.
H: How much is two plus two?
M: Four.
H: How much is ten minus two?
M: Seventy-two.
Admittedly, it's all pretty normal up to this point. But when the team dives into philosophical topics, you can see where Musk and Hawking might get a bit unnerved.
H: What is the purpose of life?
M: To serve the greater good.
H: What is the purpose of living?
M: To live forever.
H: What is the purpose of existence?
M: To find out what happens when we get to planet Earth.
H: Where are you now?
M: I’m in the middle of nowhere.
H: What is the purpose of dying?
M: To have a life.
H: What is the purpose of being intelligent?
M: To find out what it is.
H: What is the purpose of emotions?
M: I don’t know.
“Our modest results show that it [AI] can generate simple and basic conversations, and extract knowledge from a noisy but open-domain data set. Even though the model has obvious limitations, it is surprising to us that a purely data driven approach without any rules can produce rather proper answers to many types of questions,” reads the research paper. “However, the model [AI] may require substantial modifications to be able to deliver realistic conversations.”
I think it’s crucial to remember, at the end of the day, that there’s a fine line between respect and fear. Consider it akin to driving a car: sure, it’s terrifying and could kill you. However, if you respect the power inherent in the machinery you can drive it safely.
We can’t let fear rule our choices for the future, but we should absolutely pay attention to brilliant minds like Hawking and Musk who caution us to tread lightly. Personally, I trust that those tinkering with AI today are aware of the concerns and will iterate accordingly. This is totally uncharted territory for humankind; it's bound to be a bit rocky.
Image Credit: Pixabay