This is a tutorial on using the OpenAI LLM API, focusing on messages and function calls, without Python, TypeScript, or any other programming language. The only requirements are curl (an HTTP client) and an OpenAI API key.
Why Bother?
“Why waste my time, when I can just import an API package?”
Sure, that works, until you go deeper. What if…
- you do not have access to / permission for / trust in the API package?
- you want to avoid software bloat?
- you want to understand what is happening?
- you want to make your own AI Agents?
Before, there was only /chat/completions. Now there are /responses, function calls, tool calls, computer calls, image calls, search calls, skills, and more.
I will show the bare minimum to interact with an LLM API:
- Prompt completions and context, and
- Function calls.
Prompt Completions and Context
In this section, I show how to exchange messages with an LLM. The provider is OpenAI at:
https://api.openai.com/v1/responses
HTTP Method: POST
The /responses endpoint accepts application/json data. Using curl, create a POST request with a JSON body.
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{ ... json data goes here ... }'
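The $OPENAI_API_KEY above is a shell environment variable holding your API key. If you have not set it yet, export it first (a bash example; substitute your real key):

export OPENAI_API_KEY="sk-your-key-here"

Every request in this tutorial reuses this variable in the Authorization header.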
I wish to clarify two things. Suppose I want to exchange messages with an LLM…
- What JSON data do I need?
- May I see an example for exchanging messages?
What JSON data do I need?
Input
Start with an empty JSON object.
{
request data goes here ...
}
Select the model:
{
"model": "gpt-4.1"
}
Set "input" field value to an array [ ... ]. (Do not forget the comma.)
{
"model": "gpt-4.1",
"input": [
array items go here ...
]
}
The array items are explained next.
Input Items
The API defines many object types you can put in the “input array” [ ... ], far too many to list them all. Instead, I show only four. Two object types may be created by your client program:
- EasyInputMessage, and
- FunctionCallOutput.
Two object types may be created by the server:
- ResponseOutputMessage, and
- ResponseFunctionToolCall.
In this section, I show EasyInputMessage and ResponseOutputMessage types. These are enough for prompts with context. In the Function Call section, I will show FunctionCallOutput and ResponseFunctionToolCall types.
EasyInputMessage (Client)
Your client program sends prompts to the LLM inside an EasyInputMessage. The prompt text goes in the "content" field.
EasyInputMessage schema:
{
"content": string (this is where your prompt goes),
"role": "user" | "assistant" | "system" | "developer",
"type": "message"
}
Example:
{
"content": "This is a prompt sent to the LLM.",
"role": "user",
"type": "message"
}
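The same schema serves the other roles. For example, a hypothetical system instruction changes only the "role" value:

{
  "content": "You are a terse assistant. Answer in one sentence.",
  "role": "system",
  "type": "message"
}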
ResponseOutputMessage (Server)
The LLM answers with a ResponseOutputMessage object. It is more complex than EasyInputMessage because its "content" field value is an array that may contain two possible object types. The array items are either of type ResponseOutputRefusal (the LLM refused to answer) or of type ResponseOutputText (the LLM answered). I will first show the schemas of these two object types, and then the schema for ResponseOutputMessage.
ResponseOutputRefusal schema:
{
"refusal": string,
"type": "refusal"
}
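An example refusal (illustrative text, not an actual server response):

{
  "refusal": "I cannot help with that request.",
  "type": "refusal"
}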
ResponseOutputText schema:
{
"annotations": [ FileCitation | URLCitation |
ContainerFileCitation | FilePath ],
"logprobs": [ logprobs object ],
"text": string,
"type": "output_text"
}
The ResponseOutputText schema is non-trivial. The values of the “annotations” and “logprobs” fields are complex. It is best to simply ignore them unless needed.
With that in mind, here is the schema for ResponseOutputMessage.
ResponseOutputMessage schema:
{
"id": string,
"content": [ ResponseOutputText | ResponseOutputRefusal ],
"role": "assistant",
"status": "in_progress" | "completed" | "incomplete",
"type": "message"
}
To show an example ResponseOutputMessage, I will make an API request and show the response.
May I see an example for exchanging messages?
Sending A Single Message
The prompt is: “Hi!”.
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"input":
[
{
"content": "Hi!", "role": "user", "type": "message"
}
]
}'
This is the value of the “output” part of the response.
...
"output": [
{
"id": (ommited),
"type": "message",
"status": "completed",
"content": [
{
"type": "output_text",
"annotations": [],
"logprobs": [],
"text": "Hello! How can I help you today?"
}
],
"role": "assistant"
}
],
...
In this simple case, the output is an array with one item: an object of type ResponseOutputMessage. That object's “content” field is, in turn, an array with one item: an object of type ResponseOutputText.
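If you have jq installed, you can pull out just the answer text (a convenience sketch; jq is not required for anything else in this tutorial):

curl -s https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4.1",
    "input": [ { "content": "Hi!", "role": "user", "type": "message" } ]
  }' | jq -r '.output[0].content[0].text'

This prints only the "text" value of the first output item.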
Creating Context
To continue the LLM conversation, you need to merge the client's prompt and the server's output response. This combined message history is known as the context.
- Copy the ResponseOutputMessage object from the “output array”.
- Append a new EasyInputMessage object as the next prompt.
Make sure to add commas between the items of the “input array” when doing it manually. (A sketch that automates this follows the example output below.)
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"input":
[
{
"content": "Hi!", "role": "user", "type": "message"
},
{
"id": (ommited),
"type": "message",
"status": "completed",
"content": [
{
"type": "output_text", "annotations": [], "logprobs": [],
"text": "Hello! How can I help you today?"
}
],
"role": "assistant"
},
{
"content": "Say hi again.", "role": "user", "type": "message"
}
]
}'
Here is the output.
...
"output": [
{
"id": (ommitted),
"type": "message",
"status": "completed",
"content": [
{
"type": "output_text",
"annotations": [],
"logprobs": [],
"text": "Hi again!"
}
],
"role": "assistant"
}
],
...
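Copying JSON by hand gets tedious quickly. Here is a minimal sketch that automates the copy-and-append step with jq (assumes jq is installed; the variable names are my own):

# First exchange: send the prompt and keep the whole response.
RESPONSE=$(curl -s https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4.1",
    "input": [ { "content": "Hi!", "role": "user", "type": "message" } ]
  }')

# Build the next "input array": first prompt + server output + next prompt.
# The jq + operator concatenates arrays.
INPUT=$(echo "$RESPONSE" | jq \
  '[{ "content": "Hi!", "role": "user", "type": "message" }]
   + .output
   + [{ "content": "Say hi again.", "role": "user", "type": "message" }]')

# Second exchange: send the grown context.
curl -s https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d "{ \"model\": \"gpt-4.1\", \"input\": $INPUT }"

Each further exchange repeats the same splice, so the context grows one round at a time.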
Section Summary
- The LLM API accepts JSON data as an input and writes JSON data as an output.
- To exchange messages, a model and an input array must be set.
- The elements of the input array are JSON objects that follow the EasyInputMessage schema or ResponseOutputMessage schema.
Function Calls
Section Overview
In this section, I show how to exchange function-call-ready messages with an LLM. I will use LLMs by OpenAI, which are available at:
https://api.openai.com/v1/responses
HTTP Method: POST
The /responses endpoint accepts application/json data. Using curl, create a POST request with a JSON body.
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{ ... json data goes here ... }'
In the previous section, I showed the request's “input” field and the response's “output” field. The values of both fields are arrays. So far, the only object types in these arrays were EasyInputMessage and ResponseOutputMessage. That will change now.
Request
{
"model": "gpt-4.1",
"input": [ ... ]
}
Response
{ ...
"output": [ ... ]
...
}
The two new object types I will show are:
- FunctionCallOutput, and
- ResponseFunctionToolCall.
I wish to clarify two things. Suppose I want to exchange messages with an LLM and allow it to use some function calls with my client program…
- What JSON data do I need?
- May I see an example for exchanging messages with function calls?
What JSON data do I need?
To exchange function-call-ready messages, set the “tools” field in the request.
{
"model": "gpt-4.1",
"input": [ ... ],
"tools": [ ... ]
}
Tool Items
Each item in the “tools array” is an object { ... }. The API supports several different object types. I will show only one:
- FunctionTool.
A FunctionTool object has a name, a description, and parameters that the client program must set. The “name” field names the function, the “description” field describes it, and the “parameters” field describes its arguments.
FunctionTool schema:
{
"type": "function",
"name": string,
"description": string,
"parameters": object
}
Parameters are described in the “properties” field. Each parameter is yet another object.
FunctionTool Parameters schema:
{
"type": "object",
"properties": object,
"required": [ strings ]
}
The value of the “required” field is an array that contains strings naming parameters that are required.
FunctionTool Parameters Properties schema:
{
argument_name:
{
"type": argument_type,
"description": argument_desc
},
...
}
The key argument_name is a string that names the function argument. The value argument_type is a string that names the argument's type. The value argument_desc is a string that describes the argument.
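For instance, a hypothetical function taking a city name and a unit could fill the properties object like this:

{
  "city": {
    "type": "string",
    "description": "Name of the city."
  },
  "units": {
    "type": "string",
    "description": "Temperature units, for example celsius or fahrenheit."
  }
}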
Request With Tools
Now that you have seen the structure of a FunctionTool, here is what an example function-call-ready request looks like:
{
"model": "gpt-4.1",
"input":
[
{
"content": "Which natural number comes after 1678931?",
"role": "user", "type": "message"
}
],
"tools":
[
{
"name": "next_natural",
"type": "function",
"description": "next_natural takes as input a natural number.
Returns the first natural number that is greater than the argument.",
"parameters": {
"type": "object",
"properties": {
"number" : {
"type": "number",
"description": "The input natural number."
}
},
"required": ["number"]
}
}
]
}
The request includes the “tools” field, for which the value is an array with exactly one FunctionTool object. When the request defines a FunctionTool, two things can happen:
- the FunctionTool may be ignored, or
- a response to use the FunctionTool may be created.
Your client program must support both scenarios. It can check the type of each output item. If the type is ResponseOutputMessage, the FunctionTool was ignored. If the type is ResponseFunctionToolCall, the client must perform the function call.
In other words, the server returns a response whose “output” field value is an array, each element of which is either a:
- ResponseOutputMessage, or
- ResponseFunctionToolCall.
Scenario A:
client --> request: EasyInputMessage and tools --> server
client <-- ResponseOutputMessage <-- server
Scenario B:
client --> request: EasyInputMessage and tools --> server
client <-- ResponseFunctionToolCall <-- server
client --> request: FunctionCallOutput and tools --> server
client <-- ResponseOutputMessage or ResponseFunctionToolCall <-- server
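In client code, Scenario B starts as a type check on the first output item. A minimal sketch with jq (RESPONSE holds a saved server response; the branch bodies are placeholders):

TYPE=$(echo "$RESPONSE" | jq -r '.output[0].type')
if [ "$TYPE" = "function_call" ]; then
  # Scenario B: perform the function, then send a FunctionCallOutput back.
  echo "function call requested"
else
  # Scenario A: a normal message; read the answer text.
  echo "$RESPONSE" | jq -r '.output[0].content[0].text'
fi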
ResponseFunctionToolCall (Server)
Note that the server creates this object.
ResponseFunctionToolCall schema:
{
"arguments": string,
"call_id": string,
"name": string,
"type": "function_call",
"id": string,
"status": "in_progress" | "completed" | "incomplete"
}
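Note that the "arguments" value is a string containing JSON, not a JSON object itself. Your client must decode it before reading the argument values. A sketch with jq, assuming the next_natural function from the examples below:

# "arguments" is JSON encoded as a string, so pipe it through jq twice.
echo "$RESPONSE" | jq -r '.output[0].arguments' | jq '.number'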
FunctionCallOutput (Client)
Note that the client creates this object. When creating this object, the value of the "call_id" field is copied from the matching ResponseFunctionToolCall object.
Schema:
{
"call_id": string,
"output": string | (there is more but I ignore that),
"type": "function_call_output",
"id": string (mostly ignore this),
"status": "in_progress" | "completed" | "incomplete"
}
Example:
{
"call_id": "call_random123", (generated by server)
"output": "fizzbuzz",
"type": "function_call_output",
"id": "123456"
"status": "completed"
}
May I see an example for exchanging messages with function calls?
Example FunctionToolCall Request
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"input":
[
{
"content": "Which natural number comes after 1678931?",
"role": "user", "type": "message"
}
],
"tools":
[
{
"name": "next_natural",
"type": "function",
"description": "next_natural takes as input a natural number.
Returns the first natural number that is greater than the argument.",
"parameters": {
"type": "object",
"properties": {
"number" : {
"type": "number",
"description": "The input natural number."
}
},
"required": ["number"]
}
}
]
}'
Example FunctionToolCall Response
...
"output": [
{
"id": (omitted),
"type": "function_call",
"status": "completed",
"arguments": "{\"number\":1678931}",
"call_id": (omitted),
"name": "next_natural"
}
],
...
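At this point, the client performs the call itself and assembles the FunctionCallOutput. A minimal sketch (assumes jq; RESPONSE holds the JSON above):

CALL_ID=$(echo "$RESPONSE" | jq -r '.output[0].call_id')
NUMBER=$(echo "$RESPONSE" | jq -r '.output[0].arguments' | jq -r '.number')

# The client actually computes next_natural here.
RESULT=$((NUMBER + 1))

# Assemble the FunctionCallOutput object for the next request.
jq -n --arg call_id "$CALL_ID" --arg output "$RESULT" \
  '{ "call_id": $call_id, "output": $output, "type": "function_call_output" }'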
Example FunctionToolCallOutput Request
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"input":
[
{
"content": "Which natural number comes after 1678931?",
"role": "user", "type": "message"
},
{
"id": (omitted),
"type": "function_call",
"status": "completed",
"arguments": "{\"number\":1678931}",
"call_id": "call_(same call id)",
"name": "next_natural"
},
{
"call_id": "call_(same call id)",
"output": "1678932",
"type": "function_call_output"
}
],
"tools":
[
{
"name": "next_natural",
"type": "function",
"description": "next_natural takes as input a natural number.
Returns the first natural number that is greater than the argument.",
"parameters":
{
"type": "object",
"properties":
{
"number":
{
"type": "number",
"description": "The input natural number."
}
},
"required": ["number"]
}
}
]
}'
Example FunctionToolCallOutput Response
...
"output": [
{
"id": (omitted),
"type": "message",
"status": "completed",
"content": [
{
"type": "output_text",
"annotations": [],
"logprobs": [],
"text": "The natural number that comes after 1,678,931 is 1,678,932."
}
],
"role": "assistant"
}
],
...
Section Summary
- The LLM API accepts JSON data as an input and writes JSON data as an output.
- To exchange function-call-ready messages, a model, an input array, and a tools array must be set.
- The elements of the input array are JSON objects that follow the EasyInputMessage, ResponseOutputMessage, ResponseFunctionToolCall, or FunctionCallOutput schema.