Tool Use & Function Calling
A purely conversational LLM is a brain in a jar. It can think, but it cannot act. Tool Use (or Function Calling) gives the model "hands." It allows the LLM to interact with the external world—searching the web, querying databases, or sending emails.
How Tool Use Works
Models do not actually "call" the function. They generate text that represents a function call.
- Definition: You provide a list of tools (function signatures) to the model.
- Invocation: The model decides to use a tool and outputs a structured request (e.g., JSON) with arguments.
- Execution: Generation pauses; your code parses the model's request and executes the actual function on your server/client.
- Observation: You feed the function's result back to the model as a new message.
- Response: The model incorporates the result to generate the final answer.
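Concretely, steps 2 and 4 are just structured messages. The exact payload shape varies by provider; the snippets below are illustrative, matching the field names used in the loop later in this section, for the `get_weather` tool defined next.

```json
{
  "tool_calls": [
    {
      "id": "call_abc123",
      "name": "get_weather",
      "args": { "location": "San Francisco, CA", "unit": "celsius" }
    }
  ]
}
```

And the observation your code feeds back as a new message:

```json
{ "role": "tool", "tool_call_id": "call_abc123", "content": "{\"temp\": 18, \"unit\": \"celsius\"}" }
```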
Defining Tools
The standard format for defining tools is JSON Schema:
```json
{
  "name": "get_weather",
  "description": "Get current weather for a location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": { "type": "string", "description": "City and state, e.g. San Francisco, CA" },
      "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] }
    },
    "required": ["location"]
  }
}
```

> [!TIP]
> The Description is Key: The model uses the function `description` and parameter `description` fields to decide when and how to use the tool. Write them like documentation for a junior developer.
The Tool Loop (ReAct)
This cycle of Thought -> Action -> Observation is often called the ReAct loop.
```javascript
const MAX_STEPS = 10; // safety cap to prevent runaway loops (value is illustrative)
let steps = 0;

while (steps++ < MAX_STEPS) {
  // 1. Ask the model, passing the history and tool definitions
  const response = await model.chat({ messages, tools });

  // 2. Check for tool calls
  if (response.tool_calls) {
    const outputs = [];

    // 3. Execute each requested tool locally
    for (const call of response.tool_calls) {
      const result = await executeLocalFunction(call.name, call.args);
      outputs.push({
        tool_call_id: call.id,
        role: "tool",
        content: result
      });
    }

    // 4. Feed the assistant turn and tool results back into history
    messages.push(response.message);
    messages.push(...outputs);
    // Loop continues...
  } else {
    // Final answer: the model has no more tools to call
    return response.content;
  }
}

throw new Error("Agent exceeded MAX_STEPS without producing a final answer");
```

Handling Hallucinations & Errors
Argument Hallucination
The model might invent arguments that don't exist in your schema.
- Fix: Strict validation (e.g., with Zod) before execution, as sketched below. If validation fails, return a "System Error" message to the model explaining why the arguments were invalid, so it can retry.
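A minimal sketch using Zod, assuming a local `getWeather` implementation behind the schema defined earlier:

```typescript
import { z } from "zod";

// Zod schema mirroring the get_weather JSON Schema from above
const GetWeatherArgs = z.object({
  location: z.string(),
  unit: z.enum(["celsius", "fahrenheit"]).optional(),
});

function runGetWeather(rawArgs: unknown): string {
  const parsed = GetWeatherArgs.safeParse(rawArgs);
  if (!parsed.success) {
    // Return the validation error as the tool output so the model can retry
    return `System Error: invalid arguments - ${parsed.error.message}`;
  }
  return JSON.stringify(getWeather(parsed.data)); // getWeather: assumed local implementation
}
```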
Tool Selection Errors
The model might choose the wrong tool.
- Fix: Reduce the number of tools. If you have 50 tools, don't dump them all in the context. Use a retrieval step to select the 5 most relevant tools for the current query, as sketched below.
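One way to implement that retrieval step, assuming an `embed` helper that returns an embedding vector for a string (e.g., backed by an embeddings API):

```typescript
interface Tool { name: string; description: string; }

declare function embed(text: string): Promise<number[]>; // assumed embeddings helper

function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Rank tools by similarity between the user query and each tool's description
async function selectTools(query: string, tools: Tool[], k = 5): Promise<Tool[]> {
  const queryVec = await embed(query);
  const scored = await Promise.all(
    tools.map(async (tool) => ({
      tool,
      score: cosineSimilarity(queryVec, await embed(tool.description)),
    }))
  );
  return scored
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((s) => s.tool);
}
```

In practice you would precompute and cache the tool-description embeddings rather than re-embedding them on every query.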
Timeout / Failure
The API might be down.
- Fix: Return the error string to the model, e.g. `Error: API 503 Service Unavailable`. A smart agent might decide to retry or apologize to the user. A wrapper like the one below keeps the loop alive.
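A minimal sketch wrapping the `executeLocalFunction` helper from the loop above, so failures become observations instead of crashes:

```typescript
declare function executeLocalFunction(name: string, args: unknown): Promise<string>; // from the loop above

async function safeExecute(name: string, args: unknown): Promise<string> {
  try {
    return await executeLocalFunction(name, args);
  } catch (err) {
    // Surface the failure to the model rather than crashing the agent loop
    const message = err instanceof Error ? err.message : String(err);
    return `Error: ${message}. You may retry, try another tool, or inform the user.`;
  }
}
```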
Design Patterns
- Granularity: Prefer atomic tools (`search_google`, `scrape_page`) over monolithic ones (`research_topic`). Atomic tools allow the agent to compose unique workflows.
- Read-Only vs. Side-Effects: Be careful with tools that modify state (`delete_user`). Always require a human-in-the-loop confirmation step for destructive actions (sketched after this list).
- Context Hygiene: Tool outputs can be huge (e.g., a whole HTML page). Truncate or summarize tool outputs before feeding them back to the context window to save tokens (also sketched below).
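A minimal sketch of the last two patterns, assuming an `askHuman` confirmation hook and the `executeLocalFunction` helper from the loop above; the tool names and the output budget are illustrative:

```typescript
declare function askHuman(prompt: string): Promise<boolean>; // assumed UI hook
declare function executeLocalFunction(name: string, args: unknown): Promise<string>;

// Tools with side effects that require explicit human approval (illustrative names)
const DESTRUCTIVE_TOOLS = new Set(["delete_user", "send_email"]);

const MAX_TOOL_OUTPUT_CHARS = 4000; // assumed budget; tune per model and context size

function truncateOutput(output: string): string {
  if (output.length <= MAX_TOOL_OUTPUT_CHARS) return output;
  return output.slice(0, MAX_TOOL_OUTPUT_CHARS) + "\n[...output truncated...]";
}

async function guardedExecute(name: string, args: unknown): Promise<string> {
  // Human-in-the-loop gate for destructive actions
  if (DESTRUCTIVE_TOOLS.has(name)) {
    const approved = await askHuman(`Allow ${name} with ${JSON.stringify(args)}?`);
    if (!approved) return "Error: action rejected by human reviewer.";
  }
  // Context hygiene: cap the size of what goes back into the context window
  return truncateOutput(await executeLocalFunction(name, args));
}
```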
Summary
Tool use transforms an LLM from a chatbot into an Agent. By defining clear schemas and managing the execution loop robustly, you enable your AI to perform real work in the real world.