Artificial intelligence startup OpenAI has made significant changes to its GPT pricing, slashing GPT-3.5 input tokens by 50% among a raft of other changes.
The company has also gone to great lengths to address technical issues that led some users to claim GPT-4 had become “lazy” in its responses to some coding questions, effectively not answering them.
While ChatGPT continues to reign supreme as the business world’s chatbot of choice, there are more options out there, and they’re increasingly catering to business users too.
OpenAI Slashes GPT Prices
OpenAI published a blog post this week announcing that the company is “releasing new models, reducing prices for GPT-3.5 Turbo, and introducing new ways for developers to manage API keys and understand API usage.”
The most exciting update is the reduced pricing for GPT-3.5, the language model that currently powers the free version of ChatGPT and is available to businesses at a generally cheaper rate than GPT-4.
In the next week or so, OpenAI is introducing a new GPT-3.5 Turbo model, gpt-3.5-turbo-0125. “For the third time in the past year, we will be decreasing prices on GPT-3.5 Turbo to help our customers scale,” the company explains.
“Input prices for the new model are reduced by 50% to $0.0005 /1K tokens and output prices are reduced by 25% to $0.0015 /1K tokens.”
Tokens are words/bits of words that AI models can process and generate. Input tokens refer to the text and information users put into the models, and output tokens refer to model responses.
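To see what those per-1K-token rates mean in practice, here is a minimal sketch of how you might estimate the cost of a single API request at the new gpt-3.5-turbo-0125 prices. The function name and example token counts are illustrative, not part of any OpenAI library:

```python
# Estimate the cost of one API request at the new gpt-3.5-turbo-0125
# rates quoted above: $0.0005 per 1K input tokens, $0.0015 per 1K output tokens.
INPUT_PRICE_PER_1K = 0.0005   # USD per 1,000 input tokens
OUTPUT_PRICE_PER_1K = 0.0015  # USD per 1,000 output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + \
           (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a 2,000-token prompt that produces a 500-token reply
print(f"${estimate_cost(2000, 500):.4f}")  # → $0.0018
```

Even a fairly long prompt and response costs a fraction of a cent at these rates, which is the point of the price cut for businesses running the model at scale.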
A Wake-Up Call: GPT-4 Isn’t Being Lazy Anymore
OpenAI has also announced a preview of a new version of GPT-4, called GPT-4-0125. As with the new GPT-3.5 Turbo model, the name presumably refers to the announcement date (January 25th).
“This model completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of ‘laziness’ where the model doesn’t complete a task,” OpenAI explains.
The idea that ChatGPT had become strangely lazy was widely reported in the build-up to the holiday season, with some publications even speculating that the chatbot was “winding down” for the Christmas period, as humans tend to do.
The “Winter Hypothesis” was an intriguing one – one person found that feeding the chatbot a May date before asking coding questions seemed to improve its answers. However, another researcher was unable to reproduce the results with a statistically significant difference. Either way, according to OpenAI, GPT-4 will now be providing more “thorough” answers to questions.
Other Chatbot Options
ChatGPT is still the most-used chatbot in the world. In our recent Impact of Tech on the Workplace report, more than twice as many businesses said they used ChatGPT as said they used Bard, the second most popular AI tool among businesses.
OpenAI’s creation has a huge range of options for developers and large businesses who want to incorporate it into their software infrastructure. Right now, if you want to build a custom ChatGPT that your business can use internally to help with everyday work tasks, it’s the best AI chatbot for the job.
But it’s not the only option out there. For instance, Claude – a chatbot created by Anthropic – is a highly capable chatbot with a free plan as well as a Pro version that costs the same as ChatGPT Plus ($20 per month).
What’s more, the huge number of users trying to access ChatGPT means it often goes down, so it’s always useful to know what the different ChatGPT alternatives can do and how much they’ll cost you to use. And, who knows – you might just find you prefer one of them.