Google’s Gemini Pro AI model was once pitched as the middle tier of the tech company’s AI family, but the just-released Gemini 1.5 Pro is already more powerful than the old higher-tier option.
Like any big AI model, Gemini 1.5 Pro is jam-packed with tools and use cases, with Google positioning it as a one-stop shop for businesses small and large to streamline and supercharge their operations. But just what can the new model do?
The tech giant is eager to explain just that in Las Vegas this week at the Google Cloud Next 2024 event. Here’s what has been revealed so far about 1.5 Pro and the tools that can integrate with the large language model.
The Updated Gemini 1.5 Pro Is Now in Public Preview
Surprising no one, generative AI was the headline topic, with Google’s Gemini leading the way thanks to the public preview rollout of the latest model, Gemini 1.5 Pro.
It’s available now within the company’s Vertex AI machine learning platform, so you’ll need Vertex AI access in order to use the latest model.
It might seem like every AI company is constantly releasing its best and most feature-rich model yet, and that’s exactly what Google says it has with Gemini 1.5 Pro. Possible use cases include university courses, financial services, and startups that need to code faster.
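For teams that already have Vertex AI set up, trying the model out is a short script with the Vertex AI Python SDK. Below is a minimal sketch, assuming a configured Google Cloud project; the project ID and the preview model name are placeholders and may differ in your environment.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholder project and region; use your own Google Cloud settings.
vertexai.init(project="my-project", location="us-central1")

# Assumed preview model ID; the exact name may differ by release.
model = GenerativeModel("gemini-1.5-pro-preview-0409")

response = model.generate_content(
    "Draft a three-bullet summary of what grounding means for enterprise AI."
)
print(response.text)
```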
It Has Better Grounding With Google Search
In the AI biz, “grounding” refers to a generative AI system’s ability to connect its generated responses to a real-world knowledge base.
Google is announcing improved grounding capabilities within Vertex AI. A big part of the improvement, available now in the 1.5 Pro public preview, is an integration that lets the model ground its generative responses directly in the immense data of Google Search.
Search is “perhaps the world’s most trusted source of factual information,” according to Google Cloud CEO Thomas Kurian during the announcement. Those who always add “reddit” to the end of their searches may disagree, but there’s no denying that the Google Search homepage is one of the most used websites of all time.
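For developers, wiring Search grounding into a Vertex AI request might look roughly like the sketch below. The API surface was still in preview at the time of writing, so the module paths, project ID, and model name here are assumptions that may have shifted.

```python
import vertexai
from vertexai.preview.generative_models import GenerativeModel, Tool, grounding

vertexai.init(project="my-project", location="us-central1")  # placeholder project

# Attach Google Search as a grounding source so the model can back up
# its answers with live Search results instead of training data alone.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

model = GenerativeModel("gemini-1.5-pro-preview-0409")  # assumed preview model ID
response = model.generate_content(
    "What did Google announce at Cloud Next 2024?",
    tools=[search_tool],
)
print(response.text)
```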
Gemini 1.5 Pro Can Process Audio Now
Users can now upload audio files to Gemini 1.5 Pro. The model can interpret these files, processing them into summaries, reports, or other formats. It’s a useful perk for those sitting on audio recordings of events, meetings, or earnings calls.
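In practice, that could look something like the sketch below, which points the model at an audio file in Cloud Storage and asks for a summary. The bucket path, project ID, and model name are hypothetical placeholders.

```python
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")  # placeholder project
model = GenerativeModel("gemini-1.5-pro-preview-0409")  # assumed preview model ID

# Hypothetical Cloud Storage path to a recorded earnings call.
audio = Part.from_uri("gs://my-bucket/earnings-call.mp3", mime_type="audio/mpeg")

response = model.generate_content([
    audio,
    "Summarize this earnings call and list any action items that were mentioned.",
])
print(response.text)
```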
Gemini in Looker Will Process Data With AI…
The “Gemini in Looker” service is also part of the public preview, giving users a 1.5 Pro-powered ability to “chat” with their data in Looker, Google’s enterprise business intelligence platform.
…And Gemini Can Now Code Better, Too
Gemini 1.5 Pro integration is also arriving for the coding community with the new Gemini Code Assist tool, which offers enterprise-level code completion and assistance. It’s a challenger to GitHub’s AI tool, Copilot, and it brings a hefty 1 million-token context window with it, the largest window yet.
Users will be able to run Gemini Code Assist on-premises as well as in the cloud.
This will be a tool to keep an eye on: The potential for a functional AI coding tool is huge, and coders everywhere are definitely excited about exploring the possibilities. But writing complex code with an AI has meant risking AI-generated bugs that are incredibly tough to root out. Can Gemini Code Assist beat the curse? We’ll find out.
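To see why that matters, here’s a hypothetical example (not output from Gemini Code Assist) of the kind of bug AI completion can introduce: code that reads fine and runs without errors, but quietly returns the wrong results.

```python
def paginate(items, page, page_size=10):
    """Return one page of results (pages are 1-indexed)."""
    # Looks plausible, but treats `page` as 0-indexed:
    # asking for page 1 silently skips the first page of results.
    start = page * page_size
    return items[start:start + page_size]
```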
Gemini’s New AI Threat Detection Tools
The “Gemini in Threat Intelligence” tool helps businesses by delivering threat research that flags specific problems early with a little help from natural language prompts.
Similar functions include Gemini in Security Operations and Gemini in Security Command Center, which both add an AI layer to their respective areas. The goal across all these tools? Helping businesses flag threats before any harm is done.
Google’s Finding New Ways to Integrate AI With Work
In addition to the above news, Google had plenty of deeper technical announcements to share.
The tech giant is now in the Arm-based CPU business, for instance, with the reveal of its new Axion chip, which it says offers a truly impressive “up to 50% better performance and up to 60% better energy-efficiency than comparable current-generation x86-based instances.”
Plus, Google’s AI Hypercomputer architecture is getting a big boost: A3 Mega VMs powered by NVIDIA H100 Tensor Core GPUs offer higher performance, with Cloud TPU v5p becoming generally available to boot.
The big bet behind the artificial intelligence industry is that AI models can keep improving fast enough to eventually justify integration into every facet of our lives. AI proponents still have a long way to go on that rocky road. Still, Google’s latest round of AI tool reveals and model advancements represents the company’s best answer yet.