Microsoft has just rolled out its first generative AI model that’s fully isolated from the internet, giving the US intelligence community a way to analyze top-secret information without compromising security.
The new AI model, which is already being used to answer questions, was released on Thursday – a year after the CIA canned its own ChatGPT-like chatbot for failing to protect sensitive data adequately.
While doubts about the model’s accuracy remain, given GPT-4’s well-documented hallucination problems, the internet-free model represents a major step forward in Microsoft’s AI development and makes the US the first country known to be using generative AI to analyze intelligence data.
Microsoft Deploys an AI Model For the Intelligence Community
Since ChatGPT was first released in 2022, generative AI models have helped a wide range of sectors push their frontiers forward. Now, the US intelligence community has been afforded the same privilege, thanks to Microsoft’s new AI model specifically designed to analyze top-secret information.
Unlike most AI models, which rely on cloud services to infer patterns from data, Microsoft’s new model is completely divorced from the internet. This allows spy agencies to scrutinize classified data with the tool without subjecting it to security risks like data breaches or hacking attempts.
According to William Chappell, Microsoft’s Chief Technology Officer for Strategic Missions and Technology, the model was adapted from an AI supercomputer in Iowa and took 18 months to complete. Chappell explains that the effort started out as a passion project, and that his team wasn’t sure how to go about it when development began in 2022.
“This is the first time we’ve ever had an isolated version – when isolated means it’s not connected to the internet – and it’s on a special network that’s only accessible by the US government,” – William Chappell, Microsoft’s chief technology officer for strategic missions and technology
Keeping the AI model separated from the internet allows it to stay “clean”, since secret information won’t be funneled back into the wider platform. The CIA tried to create its own AI tool last year to compete with global AI powers like China, but that model was largely disregarded due to security concerns.
Can Microsoft’s New AI Model Be Trusted?
Microsoft’s model can now, in theory, be accessed by 10,000 members of the US intelligence community. According to Sheetal Patel, the CIA’s assistant director for the Transnational and Technology Mission Center, the development gives agencies like the CIA a significant leg-up over international counterparts that don’t currently have access to this technology.
“There is a race to get generative AI onto intelligence data,” Patel recently announced at a security conference at Vanderbilt University. “The first country to use generative AI for their intelligence would win that race. And I want it to be us.”
However, despite the exciting opportunities the tool affords intelligence agencies, doubts about its reliability remain. Since large language models like GPT-4 generate output based on statistical probabilities, they’ve been known to ‘hallucinate’ from time to time, in other words, to draw false or fabricated conclusions.
Hallucinations may only cause minor inconveniences for those using such tools for basic tasks like math homework. Using the technology to analyze classified, sensitive information, however, carries a much higher risk. Microsoft maintains that its new AI model is safer than private chatbots, but neither the software manufacturer nor the CIA has commented on how the model will be audited for accuracy.
Microsoft Steps Up Its Competitive Advantage
In what’s shaping up to be a busy month for Microsoft, the Washington-based company also recently revealed that it’s working on its biggest in-house model yet – MAI-1.
The model is being built 100% internally and is rumored to have around 500 billion parameters. While that figure is still dwarfed by GPT-4’s reported 1.76 trillion parameters, it would be the largest parameter count of any Microsoft-built model to date, representing a major leap forward in AI for the company.
Proving that big things can also come in small packages, Microsoft also released the first of three small language models, Phi-3-Mini, last month. The model is trained on a much smaller data set than alternatives like GPT-4, allowing it to run locally on smartphones and laptops at lower cost. As the AI landscape grows increasingly competitive, Microsoft is clearly upping its ante when it comes to AI development, and it appears to be paying off.