Function Calling and Tool Use
What is Function Calling?
Function calling allows LLMs to interact with external tools and APIs by generating structured function calls instead of plain text. The model decides when and how to use available tools.
How It Works
1. Define available tools with schemas (name, description, parameters)
2. Send the user message plus the tool definitions to the LLM
3. The LLM decides which tool(s) to call and generates the arguments
4. Your code executes the function and returns the results
5. The LLM incorporates the results into its response
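The execution side of this loop (steps 4-5) can be sketched in Python. The `get_weather` function and the `TOOLS` registry below are hypothetical stand-ins for real tools; the call format (a name plus JSON-encoded arguments) mirrors what most model APIs emit:

```python
import json

# Hypothetical tool: stands in for a real weather API call.
def get_weather(location, unit="celsius"):
    return {"location": location, "temp": 21, "unit": unit}

# Registry mapping tool names to the functions that implement them.
TOOLS = {"get_weather": get_weather}

def execute_tool_call(call):
    """Run one tool call emitted by the model and return its result."""
    fn = TOOLS.get(call["name"])
    if fn is None:
        return {"error": f"unknown tool: {call['name']}"}
    try:
        # Models typically emit arguments as a JSON string, not a dict.
        args = json.loads(call["arguments"])
        return fn(**args)
    except (json.JSONDecodeError, TypeError) as e:
        # A clear error message lets the model retry with corrected arguments.
        return {"error": str(e)}

# A call as a model might emit it in step 3:
call = {"name": "get_weather", "arguments": '{"location": "San Francisco, CA"}'}
result = execute_tool_call(call)
print(result)  # {'location': 'San Francisco, CA', 'temp': 21, 'unit': 'celsius'}
```

The result would then be sent back to the model as a tool message so it can compose its final response.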
Tool Definition Example
```json
{
  "name": "get_weather",
  "description": "Get current weather for a location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "City name, e.g., 'San Francisco, CA'"
      },
      "unit": {
        "type": "string",
        "enum": ["celsius", "fahrenheit"],
        "description": "Temperature unit"
      }
    },
    "required": ["location"]
  }
}
```
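Because the model generates arguments as free text, it is worth validating them against the schema before executing anything. A minimal validator sketch (checking only `required` and `enum`; a real implementation might use a JSON Schema library):

```python
# The tool definition from above, as a Python dict.
GET_WEATHER = {
    "name": "get_weather",
    "description": "Get current weather for a location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

def validate_args(tool, args):
    """Return an error string if args violate the tool's schema, else None."""
    params = tool["parameters"]
    for name in params.get("required", []):
        if name not in args:
            return f"missing required parameter: {name}"
    for name, value in args.items():
        spec = params["properties"].get(name)
        if spec is None:
            return f"unexpected parameter: {name}"
        if "enum" in spec and value not in spec["enum"]:
            return f"{name} must be one of {spec['enum']}"
    return None

print(validate_args(GET_WEATHER, {"location": "Paris", "unit": "kelvin"}))
# unit must be one of ['celsius', 'fahrenheit']
```

Returning the error string to the model, rather than raising, gives it a chance to re-emit the call with valid arguments.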
Best Practices
- Clear descriptions: The model uses the description to decide when to use a tool
- Specific parameter types: Use enums, constraints, and detailed descriptions
- Error handling: Return clear error messages so the model can recover
- Minimal toolset: Too many tools confuse the model. Group related functionality.
- Idempotent operations: Prefer tools that are safe to retry
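The error-handling practice above can be applied uniformly with a small wrapper, so every tool failure becomes a structured message instead of an exception. A sketch, using a hypothetical `divide` tool as the example:

```python
def safe_tool(fn):
    """Wrap a tool so failures become messages the model can act on."""
    def wrapper(**kwargs):
        try:
            return {"ok": True, "result": fn(**kwargs)}
        except Exception as e:
            # Include the exception type so the model knows what went wrong.
            return {"ok": False, "error": f"{type(e).__name__}: {e}"}
    return wrapper

@safe_tool
def divide(a, b):
    return a / b

print(divide(a=1, b=0))
# {'ok': False, 'error': 'ZeroDivisionError: division by zero'}
```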
Parallel Tool Use
Modern LLMs can call multiple tools simultaneously when the calls are independent, significantly reducing latency for complex tasks.
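When the model emits several independent calls in one turn, your code can run them concurrently rather than one at a time. A sketch using a thread pool, with two hypothetical tools:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tools standing in for real API calls.
def get_weather(location):
    return {"location": location, "temp": 21}

def get_time(timezone):
    return {"timezone": timezone, "time": "14:00"}

TOOLS = {"get_weather": get_weather, "get_time": get_time}

# Two independent calls the model emitted in a single turn:
calls = [
    {"name": "get_weather", "args": {"location": "Tokyo"}},
    {"name": "get_time", "args": {"timezone": "Asia/Tokyo"}},
]

# Execute all calls concurrently; results stay in call order.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(TOOLS[c["name"]], **c["args"]) for c in calls]
    results = [f.result() for f in futures]

print(results)
# [{'location': 'Tokyo', 'temp': 21}, {'timezone': 'Asia/Tokyo', 'time': '14:00'}]
```

Since real tools are usually I/O-bound (network calls), threads are enough; the total wall time approaches that of the slowest call instead of the sum of all calls.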
🌼 Daisy+ in Action: 8 MCP Tools
The Daisy+ MCP server exposes 8 tools that any AI agent can call: search_records, get_record, create_record, update_record, delete_record, get_model_fields, count_records, and invalidate_cache. This means digital employees can do anything a human user can — read invoices, update inventory, create project tasks — all through structured function calls.