Supported Models
Function calling fine-tuning is currently supported on the following models:

- llama-3-2-1b-instruct
- llama-3-2-3b-instruct
- llama-3-3-70b-instruct
- qwen2-5-1-5b-instruct
- qwen2-5-7b-instruct
- qwen2-5-14b-instruct
- qwen2-5-32b-instruct
Tool Schema
Your tools should follow the schema defined by Hugging Face: an OpenAI-style JSON schema in which each tool has a name, a description, and a "parameters" object describing its arguments.

Chat Dataset Format
Use a chat-style dataset with a "tools" key at the same level as the "messages" key.

Required Configuration Parameters
When fine-tuning with function calling, you must enable `apply_chat_template` in your fine-tuning config. You can do this either through the SDK or by checking the box in the adapter version UI before training.
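As a sketch, the relevant part of such a config is shown below as a plain dictionary. Only `apply_chat_template` comes from this page; the other field names are illustrative assumptions, and the exact SDK surface may differ, so check the current Predibase docs.

```python
# Hedged sketch of a fine-tuning config as a plain dict. Only
# apply_chat_template is taken from this page; other fields are illustrative.
finetuning_config = {
    "base_model": "llama-3-2-1b-instruct",  # any supported model listed above
    "apply_chat_template": True,            # required for function calling
}

# Guard against the flag silently staying disabled.
assert finetuning_config["apply_chat_template"] is True
```

Passing a config like this when creating an adapter version (or checking the box in the UI) is what turns the chat template on during training.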
How Function Calling Works
Function calling involves an interactive flow between the user, assistant, and tools:

- System acknowledges tools may be used
- User provides prompt with tools
- Assistant makes appropriate tool calls
- User/system executes tool calls
- Tool returns results
- Assistant provides formatted response
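The steps above can be illustrated with a hypothetical training record in the chat format described earlier, with "tools" alongside "messages". The `get_weather` tool and all values are invented for illustration, and the field names follow the OpenAI-style schema that Hugging Face chat templates accept; Predibase's exact expectations may differ.

```python
import json

# Hypothetical record illustrating the flow: a tool defined alongside the
# messages, an assistant tool call, the tool's result, and a final answer.
record = {
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "messages": [
        {"role": "system", "content": "You may call the provided tools."},
        {"role": "user", "content": "What's the weather in Paris?"},
        {
            # The assistant responds with a tool call instead of text.
            "role": "assistant",
            "tool_calls": [
                {
                    "type": "function",
                    "function": {
                        "name": "get_weather",
                        "arguments": "{\"city\": \"Paris\"}",
                    },
                }
            ],
        },
        # The executed tool returns its result as a "tool" message.
        {"role": "tool", "content": "{\"temp_c\": 18, \"sky\": \"clear\"}"},
        # The assistant formats the result for the user.
        {"role": "assistant", "content": "It's 18 C and clear in Paris."},
    ],
}

# In a JSONL dataset, each record would be one line like this.
print(json.dumps(record)[:60])
```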
Converting from ShareGPT Format
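The conversion script for this section did not survive extraction; the sketch below is a reconstruction under assumptions: ShareGPT records keep turns in a "conversations" list with "from"/"value" fields, and the target is the "messages" list with "role"/"content" described above. The role mapping is assumed and may need adjusting for your data.

```python
import json

# Assumed mapping from ShareGPT's "from" values to chat-style roles.
ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system", "tool": "tool"}


def sharegpt_to_chat(record: dict) -> dict:
    """Convert one ShareGPT record to a chat-style record with "messages"."""
    messages = [
        {"role": ROLE_MAP.get(turn["from"], turn["from"]), "content": turn["value"]}
        for turn in record["conversations"]
    ]
    out = {"messages": messages}
    # Carry over tool definitions if the record already has them
    # (hypothetical key; ShareGPT data often lacks one).
    if "tools" in record:
        out["tools"] = record["tools"]
    return out


if __name__ == "__main__":
    example = {
        "conversations": [
            {"from": "system", "value": "You can call tools."},
            {"from": "human", "value": "What's the weather in Paris?"},
            {"from": "gpt", "value": "Let me check that for you."},
        ]
    }
    # Each converted record would be written as one JSONL line.
    print(json.dumps(sharegpt_to_chat(example)))
```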
If your data is in ShareGPT format, you can convert it to Predibase's format with a short Python script.

Next Steps
- Learn about chat templates for proper prompt formatting
- Explore adapter types for fine-tuning
- Start evaluating your function-calling models