July 20, 2023 update:
We previously communicated to developers that the gpt-3.5-turbo-0301, gpt-4-0314, and gpt-4-32k-0314 models were scheduled for sunset on Sept 13, 2023. After reviewing feedback from customers and our community, we are extending support for those models until at least June 13, 2024.
When we release new model versions, our top priority is to make newer models smarter across the board. We are targeting improvements on a large number of axes, such as instruction following, factual accuracy, and refusal behavior. For instance, the gpt-4-0613 model introduced last month delivered a significant improvement in function calling.
We look at a large number of evaluation metrics to determine if a new model should be released. While the majority of metrics have improved, there may be some tasks where the performance gets worse. This is why we allow API users to pin the model version. For example, you can use gpt-4-0314 instead of the generic gpt-4, which points to the latest model version. Each individually pinned model is stable, meaning that we won’t make changes that impact the outputs.
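For example, a pinned request looks like the one below; this is a minimal sketch that assumes the OPENAI_API_KEY environment variable is set, and the only difference from using the generic alias is the dated model name.

curl https://api.openai.com/v1/chat/completions -u :$OPENAI_API_KEY -H 'Content-Type: application/json' -d '{
  "model": "gpt-4-0314",
  "messages": [
    {"role": "user", "content": "Hello!"}
  ]
}'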
We are working hard to ensure that new versions result in improvements across a comprehensive range of tasks. That said, our evaluation methodology isn’t perfect, and we’re constantly improving it. One way to help us ensure new models get better at domains you care about is to contribute to the OpenAI Evals library to report shortcomings in our models.
We understand that model upgrades and behavior changes can be disruptive to your applications. We are working on ways to give developers more stability and visibility into how we release and deprecate models.
We released gpt-3.5-turbo and gpt-4 earlier this year, and in just a few short months we have seen incredible applications built by developers on top of these models.
Today, we’re following up with some exciting updates:
- new function calling capability in the Chat Completions API
- updated and more steerable versions of gpt-4 and gpt-3.5-turbo
- new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
- 75% cost reduction on our state-of-the-art embeddings model
- 25% cost reduction on input tokens for gpt-3.5-turbo
- announcing the deprecation timeline for the gpt-3.5-turbo-0301 and gpt-4-0314 models
All of these models come with the same data privacy and security guarantees we introduced on March 1 — customers own all outputs generated from their requests and their API data will not be used for training.
Function calling
Developers can now describe functions to gpt-4-0613 and gpt-3.5-turbo-0613, and have the model intelligently choose to output a JSON object containing arguments to call those functions. This is a new way to more reliably connect GPT’s capabilities with external tools and APIs.
These models have been fine-tuned to both detect when a function needs to be called (depending on the user’s input) and to respond with JSON that adheres to the function signature. Function calling allows developers to more reliably get structured data back from the model. For example, developers can:
- Create chatbots that answer questions by calling external tools (e.g., like ChatGPT Plugins). Convert queries such as “Email Anya to see if she wants to get coffee next Friday” to a function call like send_email(to: string, body: string), or “What’s the weather like in Boston?” to get_current_weather(location: string, unit: 'celsius' | 'fahrenheit').
- Convert natural language into API calls or database queries. Convert “Who are my top ten customers this month?” to an internal API call such as get_customers_by_revenue(start_date: string, end_date: string, limit: int), or “How many orders did Acme, Inc. place last month?” to a SQL query using sql_query(query: string).
- Extract structured data from text. Define a function called extract_people_data(people: [{name: string, birthday: string, location: string}]) to extract all people mentioned in a Wikipedia article; a sketch of one possible schema follows this list.
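As a sketch of that last use case, one possible JSON Schema definition for the extract_people_data function could look like the following; the exact fields and which of them are required are illustrative choices, not a prescribed format.

{
  "name": "extract_people_data",
  "description": "Extract all people mentioned in the provided text",
  "parameters": {
    "type": "object",
    "properties": {
      "people": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "name": {"type": "string"},
            "birthday": {"type": "string"},
            "location": {"type": "string"}
          },
          "required": ["name"]
        }
      }
    },
    "required": ["people"]
  }
}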
These use cases are enabled by new API parameters in our /v1/chat/completions endpoint, functions and function_call, that allow developers to describe functions to the model via JSON Schema and optionally ask it to call a specific function. Get started with our developer documentation and add evals if you find cases where function calling could be improved.
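By default the model decides on its own whether to call one of the supplied functions (function_call defaults to "auto" when functions are present); passing "none" prevents any call, and passing an object naming a function forces one. As a brief sketch, here is a trimmed-down version of the weather request from the example below with the call forced:

curl https://api.openai.com/v1/chat/completions -u :$OPENAI_API_KEY -H 'Content-Type: application/json' -d '{
  "model": "gpt-3.5-turbo-0613",
  "messages": [
    {"role": "user", "content": "What is the weather like in Boston?"}
  ],
  "function_call": {"name": "get_current_weather"},
  "functions": [
    {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          }
        },
        "required": ["location"]
      }
    }
  ]
}'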
Function calling example
User input: “What’s the weather like in Boston right now?”

Call the model with the user’s input and the function definitions:
curl https://api.openai.com/v1/chat/completions -u :$OPENAI_API_KEY -H 'Content-Type: application/json' -d '{
"model": "gpt-3.5-turbo-0613",
"messages": [
{"role": "user", "content": "What is the weather like in Boston?"}
],
"functions": [
{
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
]
}'
{
"id": "chatcmpl-123",
...
"choices": [{
"index": 0,
"message": {
"role": "assistant",
"content": null,
"function_call": {
"name": "get_current_weather",
"arguments": "{ "location": "Boston, MA"}"
}
},
"finish_reason": "function_call"
}]
}
Use the model response to call your API:

curl https://weatherapi.com/...

{ "temperature": 22, "unit": "celsius", "description": "Sunny" }

Send the API response back to the model so it can summarize the result for the user:
curl https://api.openai.com/v1/chat/completions -u :$OPENAI_API_KEY -H 'Content-Type: application/json' -d '{
"model": "gpt-3.5-turbo-0613",
"messages": [
{"role": "user", "content": "What is the weather like in Boston?"},
{"role": "assistant", "content": null, "function_call": {"name": "get_current_weather", "arguments": "{ "location": "Boston, MA"}"}},
{"role": "function", "name": "get_current_weather", "content": "{"temperature": "22", "unit": "celsius", "description": "Sunny"}"}
],
"functions": [
{
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
]
}'
{
"id": "chatcmpl-123",
...
"choices": [{
"index": 0,
"message": {
"role": "assistant",
"content": "The weather in Boston is currently sunny with a temperature of 22 degrees Celsius.",
},
"finish_reason": "stop"
}]
}
The weather in Boston is currently sunny with a temperature of 22 degrees Celsius.
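In client code, the finish_reason field is the branch point: "function_call" means the assistant message carries a function name and a JSON-encoded arguments string instead of user-facing text, while "stop" means the content field holds the final answer. A minimal shell sketch, assuming jq is installed and the API response has been saved to a file named response.json (a filename chosen here purely for illustration):

# Decide whether the model wants a function call or has produced a final answer.
finish_reason=$(jq -r '.choices[0].finish_reason' response.json)

if [ "$finish_reason" = "function_call" ]; then
  # Extract the function name and its JSON-encoded arguments string,
  # then call your own API with those arguments.
  name=$(jq -r '.choices[0].message.function_call.name' response.json)
  args=$(jq -r '.choices[0].message.function_call.arguments' response.json)
  echo "Model requested: $name $args"
else
  # The model answered directly; print its message content.
  jq -r '.choices[0].message.content' response.json
fi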