ChatGPT Owner OpenAI Announces API Updates and Price Reduction

OpenAI has introduced updates to its API and to the GPT-3.5 Turbo and GPT-4 models, along with significant cost reductions.

With all eyes on the development of artificial intelligence, OpenAI has upped the excitement this week by announcing an update to its pioneering text generation models, as well as a cost reduction.

No matter the sector or use case, OpenAI’s ChatGPT has changed the landscape of how we develop and direct artificial intelligence, and this week’s model updates push things forward again.

These updates look set to provide even more advanced functionality and will arm developers with an extensive toolbox for creating sophisticated AI applications.

Say Hello to Function Calling

Revealed in the OpenAI announcement, function calling is a new, more reliable way to connect GPT’s capabilities with APIs and other external tools. For GPT-3.5 Turbo and GPT-4, developers can now describe functions to the model using JSON Schema and have it respond with the name of the function to call and JSON arguments matching that function’s signature; the developer’s own code then executes the call.

For example, function calling can be used to build chatbots that answer questions by calling external tools (much like ChatGPT plugins), extract structured data from text, or convert natural language into function calls, API calls or database queries.
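As a rough sketch of how this looks in practice (using the legacy openai Python package, pre-1.0, and a hypothetical get_current_weather function), a request describes the function in JSON Schema and then reads back the arguments the model chooses:

```python
import json
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Describe a hypothetical function to the model using JSON Schema
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name, e.g. Boston"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather like in Boston?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether a call is needed
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the function name and JSON arguments; your code runs the call
    args = json.loads(message["function_call"]["arguments"])
    print(message["function_call"]["name"], args)
```

Your application then runs the real function and can pass its result back to the model in a follow-up message so it can compose a natural-language answer.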

“These models have been fine-tuned to both detect when a function needs to be called and to respond with JSON that adheres to the function signature.” – OpenAI’s statement on the release.

Introducing the New GPT Models

Starting with GPT-4, gpt-4-0613 is an updated and improved model with function calling. gpt-4-32k-0613 includes the same improvements, plus an extended context length that means better comprehension of longer texts.

Both are currently only available through the waitlist; however, OpenAI is keen to open access up to more developers as soon as possible.

Next up we have GPT-3.5 Turbo. gpt-3.5-turbo-0613 includes the same function calling as GPT-4, along with more reliable steerability via the system message. Together, these features let developers guide the model’s responses even more precisely.
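For illustration, a system message sits at the start of the conversation and pins down tone and format; the sketch below uses the legacy openai Python package and an arbitrary prompt:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The system message steers persona, tone and format for the whole conversation
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {"role": "system", "content": "You are a concise assistant. Answer in one sentence."},
        {"role": "user", "content": "What does function calling let developers do?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```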

gpt-3.5-turbo-16k offers four times the context length of the standard model and can handle around 20 pages of text in a single request. It is priced at $0.003 per 1K input tokens and $0.004 per 1K output tokens.
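As a back-of-the-envelope illustration of those rates (the token counts below are invented for the example):

```python
# Hypothetical request that nearly fills the 16K context window
input_tokens = 15_000
output_tokens = 1_000

cost = (input_tokens / 1000) * 0.003 + (output_tokens / 1000) * 0.004
print(f"${cost:.3f}")  # about $0.049 for the whole request
```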

Be Sure You’re Ready for the OpenAI Changes

Yesterday saw the start of the upgrade and deprecation process for the initial versions of these models. It’s important to note that applications using the stable model names (gpt-3.5-turbo, gpt-4 and gpt-4-32k) will be automatically upgraded to the new models on June 27, 2023.

While it’s always best to be prepared for the switchover, if you do need more time to transition, OpenAI is letting developers continue using the older models for the time being. You just need to specify gpt-3.5-turbo-0301, gpt-4-0314 or gpt-4-32k-0314 in the API request, as in the sketch below.
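A minimal sketch of pinning one of those legacy snapshots (legacy openai Python package; swap in whichever older model your application relies on):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Pin the March snapshot explicitly instead of the auto-upgrading "gpt-3.5-turbo" alias
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0301",  # or "gpt-4-0314" / "gpt-4-32k-0314"
    messages=[{"role": "user", "content": "Hello!"}],
)
```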

Set a reminder in your diaries, though, as these older models are only accessible until September 13, 2023.

OpenAI Costs are Coming Down Too

We’ve already shared the updated token costs for gpt-3.5-turbo-16k, but that’s not where the savings end. By making its systems more efficient, OpenAI is saving money and passing those savings on to developers.

Effective from yesterday, the most popular embeddings model, text-embedding-ada-002, now costs $0.0001 per 1K tokens, a cost reduction of 75%.
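For reference, a minimal embeddings call with that model looks roughly like this (legacy openai Python package; the input string is arbitrary):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Embed a short string with text-embedding-ada-002
response = openai.Embedding.create(
    model="text-embedding-ada-002",
    input="OpenAI has cut its embedding prices by 75%.",
)
vector = response["data"][0]["embedding"]
print(len(vector))  # ada-002 returns a 1536-dimension vector
```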

Similarly, the most popular chat model, gpt-3.5-turbo, which powers ChatGPT, sees a 25% cost reduction on input tokens. Developers can now use the model for $0.0015 per 1K input tokens and $0.002 per 1K output tokens.

What’s Next for OpenAI and ChatGPT?

Despite reservations about the safety and security of ChatGPT, excitement for the model remains high and the innovation continues, much of it driven by developer feedback.

“Developer feedback is a cornerstone of our platform’s evolution and we will continue to make improvements based on the suggestions we hear.” – OpenAI’s statement on the release.

OpenAI’s strategy is to make incremental updates to existing models rather than larger, full-scale releases.

But before anyone gets too excited at the thought of a much-anticipated GPT-4 successor arriving soon, CEO Sam Altman has poured cold water on the idea. He told The Economic Times that OpenAI has “a lot of work to do” before starting work on that model.
