Fine-tuning is the process of adapting a pre-trained model to a specific task or
domain. Predibase supports fine-tuning via the UI and Python SDK.
1
Upload a Dataset
The first step in fine-tuning is to upload a Dataset that contains examples of
the task you want the model to learn. You can upload one with the Python SDK,
as shown below, or through the UI.
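For example, a minimal sketch using the SDK's `pb.datasets.from_file` helper; the file path and dataset name here are placeholders:

```python
from predibase import Predibase

pb = Predibase(api_token="<PREDIBASE_API_TOKEN>")

# Upload a local file as a Dataset (path and name are illustrative)
dataset = pb.datasets.from_file(
    "/path/to/train.jsonl",
    name="my-finetuning-dataset",
)
```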
2
Create an Adapter
The next step is to create an Adapter: a small set of auxiliary parameters
that are added to the base model to learn the specific task.
```python
from predibase import SFTConfig

# Create an adapter repository
repo = pb.repos.create(
    name="my-adapter-repo",
    description="My first adapter repo",
    exists_ok=True,
)

# Start a fine-tuning job; blocks until training is finished
adapter = pb.adapters.create(
    config=SFTConfig(
        base_model="qwen3-8b",
        apply_chat_template=True,
    ),
    dataset=dataset,
    repo="my-adapter-repo",
    description="initial model with defaults",
)
```
3
Prompt your Adapter
Once you have a fine-tuned adapter (or even during training, once you have a
checkpoint), you can prompt your adapter using an existing deployment of the
same base model to spot-check its quality. Shared endpoints are available for quick testing without needing to first create a private deployment.
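For example, a sketch of spot-checking against a shared endpoint; it assumes the `pb` client from earlier, and the shared deployment name "qwen3-8b" is illustrative:

```python
# Connect to a shared endpoint for the base model (name is illustrative)
client = pb.deployments.client("qwen3-8b")

# Prompt with your adapter applied on top of the base model
resp = client.generate(
    "Summarize the benefits of adapter-based fine-tuning.",
    adapter_id="my-adapter-repo/1",  # <repo name>/<version>
)
print(resp.generated_text)
```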
When you’re ready to serve your fine-tuned adapter in production, create a
Private Deployment.
```python
from predibase import DeploymentConfig

# Create a private serverless deployment of the base model
pb.deployments.create(
    name="my-qwen3-8b",
    config=DeploymentConfig(
        base_model="qwen3-8b",  # Must be the same base model the adapter was trained on
        min_replicas=0,
        max_replicas=1,
    ),
)

client = pb.deployments.client("my-qwen3-8b")

prompt = "What are the key benefits of fine-tuning?"  # any prompt suited to your task
resp = client.generate(
    prompt,
    adapter_id="my-adapter-repo/1",
)
print(resp.generated_text)
```
Note that you are creating a base model deployment that supports multi-LoRA:
your newly trained adapter, as well as any other adapter trained on the same
base model, can be prompted by specifying its adapter_id in the generate call.
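For instance, the same deployment client can serve different adapters per request, or the base model alone. A sketch; the second adapter repo name is hypothetical:

```python
# Each request can target a different adapter on the same deployment
resp_a = client.generate(prompt, adapter_id="my-adapter-repo/1")
resp_b = client.generate(prompt, adapter_id="other-adapter-repo/1")  # hypothetical second adapter

# Omit adapter_id to prompt the base model directly
resp_base = client.generate(prompt)
```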