As the dust settles on OpenAI’s latest drama, a letter has surfaced from several staff researchers citing concerns about an AI superintelligence model under development that could potentially pose a threat to humanity, according to those close to the source. The previously undisclosed letter is understood to be the real reason Sam Altman was fired from the company.
The model, known internally as Project Q*, could represent a major breakthrough in the company’s pursuit of artificial general intelligence (AGI) – a highly autonomous branch of AI superintelligence capable of cumulative learning and outperforming humans in most tasks. And you were worried about ChatGPT taking all our jobs?
With Sam Altman now firmly back at the company and a new OpenAI board in place, here are all of the details of Project Q*, as well as the potential implications of AGI in the bigger picture.
What is Project Q* AGI?
Project Q* – pronounced ‘Q-Star’ – refers to a new AI model being developed by AI powerhouse and ChatGPT owner OpenAI.
According to people close to the source, Q* could mark a major leap forward in the field of artificial intelligence by radically improving AI reasoning, and inching the company closer to a major breakthrough in the development of AGI.
Unlike current generative AI models, which create responses based on information they have previously learned, AGI is an autonomous system that can apply reason to decisions, granting it human-level problem-solving capabilities.
While AGI is still yet to be fully realized, many experts believe the technology will also be capable of cumulative learning, another trait of human cognition that could give it the power to improve upon itself exponentially.
Several sources claim that Q* has been able to demonstrate these attributes when solving mathematical problems. Thanks to the model’s vast computing power, Q* was able to outperform grade-school students, implying that its reasoning skills and cognitive capabilities supersede the functions of current AI technology.
It’s unclear how long Project Q* has been under development, or what its applications may be, but OpenAI notified staff members and board members about Project Q* before the events of last weekend.
OpenAI Researchers Express Ethical Concerns About Project Q*
While OpenAI’s CEO Sam Altman feels confident AGI technology will drive innovation forward, several researchers were quick to cite the potential dangers of Project Q*.
After the researchers discovered the project, they wrote a letter to OpenAI’s board of directors, warning them about the powerful algorithm’s potential danger to humanity. Specific concerns around AI ethics noted in the letter weren’t revealed, but the warnings were foreboding enough for the board to dismiss Altman from his position as Chief Executive last Friday.
Following Altman’s shock departure, which was originally attributed to “poor communication”, he found a new position at Microsoft. This prompted 700 of OpenAI’s 770 staff members to threaten to do the same if the CEO wasn’t reinstated as the start-up’s commander in chief.
With the company at risk of crumbling, OpenAI’s board was forced to reappoint Altman to the top position – resulting in a massive overhaul of the company’s committee and highlighting deep divisions among its leadership.
Now with Altman back at the helm, and Project Q* likely to get the green light, what does this mean for those at the bottom of the AI food chain?
Should We Be Concerned About AGI and Project Q*?
While this is the closest OpenAI has ever come to bringing the concept of AGI into existence, it isn’t the first time that Altman has hit headlines when talking about the technology.
In September, the Silicon Valley kingpin landed himself in hot water after comparing AGI to a “median human you could hire as a co-worker”, chiming with comments he made last year about how this AI could “do anything that you’d be happy with a remote co-worker doing”, including learning how to be a doctor and a very competent coder.
“For me, AGI…is the equivalent of a median human that you could hire as a co-worker.” – Sam Altman, CEO of OpenAI
While comparing AGI to the intelligence of a ‘median human’ isn’t anything new, Altman’s use of the term was deemed “icky” by Henry Shevlin, an AI ethicist and professor at the University of Cambridge, as concerns around AI’s impact on job security escalate.
Potential breakthroughs towards AGI are also ringing alarm bells for other AI researchers, who claim the technology is being developed faster than its full impact can be realised.
OpenAI believes that the positive upshots of AGI make the potential minefield of risks worth it. However, as the disruptive start-up continues to push forward with its development, many fear that Altman’s company is prioritizing commercial success over the good of its users and broader society.