Artificial intelligence isn’t going to take over the world just yet, with the president of Microsoft insisting that the risk of a super-intelligent AI breakthrough is “years” or even “decades” away from being a reality.
For the last year, the meteoric rise of generative AI technology like ChatGPT has rocked the business world, leading to a wide range of advancements that have some users worried about a dystopian future of robot overlords.
However, many experts in the field have noted that this kind of dystopian future is nowhere near as close as the average user might think, no matter how many times you’ve seen The Terminator.
Artificial General Intelligence Still a Ways Off, According to Microsoft President
According to a Reuters report, the president of Microsoft, Brad Smith, assured reporters on Thursday that a dangerous AI breakthrough was unlikely in the near future.
“There’s absolutely no probability that you’re going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It’s going to take years, if not many decades, but I still think the time to focus on safety is now.” – Brad Smith, Microsoft president
The potentially dangerous breakthrough in question relates to artificial general intelligence (AGI), an evolution of AI technology that could have some dire impacts on the world as a whole without the proper safeguards in place.
What Is Artificial General Intelligence (AGI)?
Artificial General Intelligence, commonly known as AGI, is the next step beyond the current AI standard, known as Artificial Narrow Intelligence (ANI). Services like ChatGPT and Google Bard are ANI platforms, because they are designed to perform a narrow set of tasks, such as writing content or creating images.
AGI, on the other hand, is more complex, learning like humans do to perform a wide range of different tasks better than any human could possibly hope to. Essentially, without the proper safeguards, AGI is the robot overlord in sci-fi movies that can control everything and make your life miserable in a connected world.
Fortunately, most experts note that this level of AI advancement is a ways off, and that, as long as safety is prioritized in the early stages, it shouldn’t lead to the extinction of all humanity.
AI Safety vs Innovation
If you followed all the madness over at OpenAI last week, you know that AI safety versus innovation has been a big debate in the tech community over the past year. And while the technology has the potential to improve productivity across the business world, it’s worth noting that a bit of hesitance for the sake of safety could go a long way.
“What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus – there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control.” – Brad Smith, Microsoft president
As the exponential growth of technology over the last few decades has shown, there’s no telling what kind of breakthrough is right around the corner. And if you don’t prepare for the bad with the good, you could end up with a lot more problems than solutions.