OpenAI’s questionable security profile has been thrust into the spotlight once more, with the New York Times recently revealing that the company fell victim to a clandestine hack in April 2023.
While no consumer or partner data was compromised in the attack, the bad actor did retrieve private information about the company’s AI technology through an online forum created for OpenAI employees.
OpenAI decided to keep this breach secret, as it didn’t believe the incident posed a risk to national security. However, with company insiders claiming that OpenAI is not currently well equipped to ward off attacks from entities like the Chinese government, we explore how concerned ChatGPT users should be about OpenAI’s seemingly lackluster approach to data security.
OpenAI Secrets Were Stolen In 2023, But the Company Kept Quiet
In April 2023, a hacker stole private details about the design of OpenAI’s technologies after gaining access to the company’s internal messaging systems.
While the hack exposed information from the employee forum, no customer or partner data was compromised or stolen, and the hacker did not infiltrate the systems OpenAI uses to build its artificial intelligence, two anonymous sources told The New York Times.
OpenAI executives disclosed the incident to staffers at a company all-hands meeting that same month. However, since OpenAI did not consider it a threat to national security, it decided to keep the attack private and did not notify law enforcement agencies like the FBI.
What’s more, with OpenAI’s commitment to security already being called into question this year after flaws were found in its GPT store plugins, it’s likely the AI powerhouse is doing what it can to evade further public scrutiny.
OpenAI Isn’t Doing Enough To Protect Itself From Foreign Governments, Says Former Staffer
It turns out that OpenAI’s decision to keep the attack concealed from the public wasn’t enough to assuage security concerns within the company itself. After the incident took place, several employees expressed fears about the risks the company could pose to US national security, including former technical program manager Leopold Aschenbrenner.
In a memo sent to OpenAI’s board of directors, Aschenbrenner warned that the company wasn’t taking sufficient preventive measures to stop foreign adversaries like China from stealing its confidential data. He also argued that, in the event of an attack, OpenAI’s security wasn’t strong enough to stop such an infiltration.
“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation,” OpenAI spokeswoman Liz Bourgeois told The New York Times.
The former technical program manager alleged that OpenAI fired him for politically motivated reasons after he leaked other company information in the spring. OpenAI was quick to dispute these claims, however, stating that the leak wasn’t what led to his separation and that Aschenbrenner’s characterizations of its security were inaccurate.
So, Does OpenAI Really Pose a Risk to National Security?
With so many security concerns swirling around OpenAI, it seems logical to wonder if the company really does pose a threat to the nation’s security.
According to Anthropic co-founder Daniela Amodei, Americans shouldn’t start worrying yet. “If it were owned by someone else, could that be hugely harmful to a lot of society? Our answer is ‘No, probably not,’” Amodei told The Times last month, speaking about the hypothetical event of OpenAI being targeted by a politically motivated data breach.
However, she didn’t claim that such an incident would be completely risk-free. “Could it accelerate something for a bad actor down the road? Maybe. It is really speculative,” she added.
OpenAI says it is taking major steps to improve its security profile, recently creating a Safety and Security Committee to focus on handling the potential risks posed by AI technologies.
However, despite the positive steps being taken by US AI powerhouses, China’s rapid acceleration in the industry is becoming hard to ignore. By some metrics, China has recently overtaken the US as the biggest producer of AI talent, generating almost half of the world’s top AI researchers.
While it’s impossible to predict whether OpenAI will fall victim to another attack anytime soon, there are steps consumers can take to keep their data safe on platforms like ChatGPT today. As a general rule of thumb, we’d recommend avoiding sharing sensitive or private information with the chatbot. Thanks to a recent update, you can also opt out of ChatGPT collecting your information for training purposes.
Learn more about this update, and find out how to temporarily or permanently disable ChatGPT from training on your data here.