Is there anything ChatGPT can't do? Microsoft has just announced that the AI-powered chatbot will soon be integrated into cybersecurity offerings from the Redmond-based tech giant.
The rollout of AI-powered technology over the last few months has been nothing short of meteoric. As soon as Microsoft invested in OpenAI and gained deeper access to the ChatGPT software that came with the deal, the tech industry erupted in a battle for AI supremacy that has produced a wide range of AI alternatives.
The original still holds its spot at the top, though, with Microsoft and OpenAI collaborating on a bevy of new features that could make business easier for everyone at your company.
Microsoft Launches ‘Security Copilot’
In a company blog post on Tuesday, Microsoft announced that it is launching a ChatGPT-powered chatbot for cybersecurity professionals that can help them understand and analyze cyberattacks against businesses.
The tool, dubbed “Microsoft Security Copilot,” integrates with Microsoft security products as an automated helper that guides you through the complexities of cybersecurity.
“This is really a better together story. Security Copilot is not only an OpenAI large language model, but rather it contains a network effect, enabling organizations to truly defend at machine speed.” – Charlie Bell, executive vice president for security, compliance, identity and management at Microsoft
Considering the significant rise in security breaches and data leaks over the last few years, a bit of help from AI is likely a much-appreciated boon for cybersecurity professionals. Still, is all this AI-powered functionality rolling out a bit too fast?
The Meteoric Rise of AI
ChatGPT gained popularity only a few short months ago, but the impact it has had on the business world has been staggering. Microsoft's $10 billion investment in OpenAI, the company behind ChatGPT, alone was enough to change the tide of the machine learning economy for businesses and employees alike.
Still, ChatGPT and the rest of the AI-powered business tools are far from perfect, which even Microsoft will admit.
“Security Copilot doesn’t always get everything right. AI-generated content can contain mistakes.” – Vasu Jakkal, corporate vice president of security, compliance, identity, and management at Microsoft
Imperfections aside, the future of AI-powered tools like this one looks bright and, as the gold rush for alternatives suggests, quite lucrative.
Because these tools can be used for everything from coding to content creation at breakneck speed, they've become increasingly attractive to virtually every business in the world. But does this kind of technology require a bit more regulation to avoid the potential pitfalls?
Considering ChatGPT and similar AI-powered tools are projected to affect up to 80% of all jobs in some capacity, it's safe to say that something needs to be addressed before it's too late. Some tech pioneers are suggesting a six-month pause on development, just so regulators can catch up with the rapid progress of the tech. Still, in an industry known for its mantra to “move fast and break things,” we aren't holding our breath.