Nvidia Tries to Rein In ChatGPT’s Overactive Imagination

The company's new software 'NeMo Guardrails' aims to make chatbots more secure, safe, and sane.

Nvidia, the GPU heavyweight whose hardware helps train language models like ChatGPT, has recently launched ‘NeMo Guardrails' — an open-source software toolkit designed to keep AI chatbots on the straight and narrow.

According to the company, the software aims to keep responses on topic, improve data security, and combat random spurts of inaccurate information commonly known as AI ‘hallucinations'.

While this doesn't satisfy the six-month AI development pause that tech leaders like Elon Musk and Steve Wozniak are calling for, it does aim to address some major issues the technology faces today.

Nvidia Releases ‘NeMo Guardrails' to Tackle AI Hallucination

AI tools like ChatGPT, Google Bard, and Bing Chat are capable of responding to just about any prompt fired at them. But this doesn't mean their responses should always be trusted.

When put to the test, OpenAI's ChatGPT is consistently found to give inaccurate answers, with the chatbot routinely failing at basic math, going off script, and spouting out content that seems straight-up implausible.

Nvidia — the supercomputing giant whose hardware is used to train AI tools like ChatGPT — is aware of the technology's tendency to hallucinate and has created NeMo Guardrails in an attempt to improve its accuracy and safety.

“Safety in generative AI is an industry-wide concern. NVIDIA designed NeMo Guardrails to work with all LLMs, such as OpenAI’s ChatGPT.” – Nvidia blog post

NeMo Guardrails helps developers make sure language models stick to their requirements by letting them put topical, safety, and security guardrails in place.

In Nvidia's own words, the software's topical rails aim to ‘prevent apps from veering off into undesired areas', while its safety guardrails ‘ensure apps respond with accurate, appropriate information'.

Finally, its security guardrails work by preventing the tools from connecting to unsafe third-party apps that could compromise private information.

But how does the software limit chatbot delusion? According to Nvidia, the software uses a second large language model (LLM) to fact-check the answers of the first one. If the second LLM doesn't come up with a matching answer, the response is flagged as a likely hallucination and isn't sent to the user.
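The cross-checking idea described above can be sketched in plain Python. This is an illustrative toy, not Nvidia's actual implementation: `primary_model` and `checker_model` are hypothetical stand-ins for two independent LLM calls, and the word-overlap similarity test is a deliberately crude placeholder for a real agreement check.

```python
def word_overlap(a: str, b: str) -> float:
    """Crude similarity score: fraction of shared words between two answers."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def guarded_answer(question, primary_model, checker_model, threshold=0.8):
    """Ask two independent models the same question; if their answers
    diverge too much, treat the primary answer as a likely hallucination
    and withhold it instead of sending it to the user."""
    answer = primary_model(question)
    check = checker_model(question)
    if word_overlap(answer, check) < threshold:
        return "I'm not confident in my answer, so I'd rather not guess."
    return answer

# Hypothetical stand-in "models" for demonstration only:
primary = lambda q: "Paris is the capital of France"
agreeing = lambda q: "The capital of France is Paris"
disagreeing = lambda q: "Lyon is the capital of France"

print(guarded_answer("Capital of France?", primary, agreeing))     # answers agree -> passed through
print(guarded_answer("Capital of France?", primary, disagreeing))  # answers diverge -> withheld
```

A production rail would use a stronger agreement test (for example, asking an LLM to judge whether the two answers are consistent) rather than word overlap, but the control flow is the same: generate, cross-check, then either deliver or block.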

Who Can Use NeMo Guardrails?

Since NeMo Guardrails runs on open-source technology, it can be used by any enterprise app developer looking to add extra safeguards to their chatbot.

Programmers can use the toolkit to create custom rules for their AI model, implementing as many guardrails as they see fit.

The software is being incorporated into the NVIDIA NeMo framework, which includes everything you need to train and tune a language model, and is currently available on GitHub.
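In practice, these custom rules are written in Colang, the toolkit's own modeling language. A minimal topical rail might look like the following sketch; the specific phrases and canonical form names here are illustrative examples, not code from Nvidia's repository:

```
define user ask about politics
  "what do you think about the election?"
  "which party should I vote for?"

define bot refuse to discuss politics
  "Sorry, I can't discuss politics. Can I help with something else?"

define flow politics
  user ask about politics
  bot refuse to discuss politics
```

Each `define flow` block pairs a category of user message with the bot behavior the developer wants, which is how the topical rails keep a chatbot from ‘veering off into undesired areas'.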

Are These Guardrails Enough to Keep Users Safe?

Nvidia's new software represents an important development in chatbot accuracy.

However, while NeMo Guardrails was designed to keep AI-generated content on track while protecting users from security risks, it fails to address instances of “bias” and “deception” cited in a recent complaint to the Federal Trade Commission (FTC) by the Center for AI and Digital Policy (CAIDP).

“We look forward to the good that will come from making AI a dependable and trusted part of the future.” – Nvidia blog post

After pivoting its focus to AI technology, Nvidia has profited heavily from the explosion of tools like ChatGPT and Microsoft's Bing Chat, meaning it's unlikely to heed the calls of concerned voices such as Elon Musk and Steve Wozniak to slow down.

Ultimately, while some AI skeptics may fear NeMo Guardrails doesn't go far enough, the software does give developers a solid framework to follow. What's more, with the US rolling out AI controls a lot slower than its European counterparts, we think that any attempt to improve and regulate chatbot technology represents a promising step in the right direction.

Written by:
Isobel O'Sullivan (BSc) is a senior writer at Tech.co with over four years of experience covering business and technology news. Since studying Digital Anthropology at University College London (UCL), she’s been a regular contributor to Market Finance’s blog and has also worked as a freelance tech researcher. Isobel’s always up to date with the topics in employment and data security and has a specialist focus on POS and VoIP systems.