Fine-tuning is the process of adapting a pre-trained model to a specific task or domain. Predibase supports fine-tuning through both the UI and the Python SDK; the steps below use the SDK.

1. Upload a Dataset

The first step in fine-tuning is to upload a Dataset that contains examples of the task you want the model to learn. You may also upload a dataset using the UI.

from predibase import Predibase, SFTConfig
pb = Predibase(api_token="<API_TOKEN>")  # your Predibase API token

dataset = pb.datasets.from_file(
    "/path/to/dataset.csv",
    name="my_dataset",
)
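
For reference, here is a minimal sketch of what such a file might contain. The prompt/completion column layout below is an assumption for a simple completion-style task; check the dataset requirements for your chosen task and config.

import csv

# Build a small example dataset (assumed prompt/completion schema;
# the exact columns depend on your task and training config).
rows = [
    {"prompt": "Classify the sentiment: 'The battery dies in an hour.'", "completion": "negative"},
    {"prompt": "Classify the sentiment: 'Setup took thirty seconds.'", "completion": "positive"},
]

with open("/path/to/dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["prompt", "completion"])
    writer.writeheader()
    writer.writerows(rows)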

2. Create an Adapter

The next step is to create an Adapter — a small set of auxiliary parameters that are added to the base model to learn the specific task.

# Create an adapter repository
repo = pb.repos.create(
    name="my-adapter-repo",
    description="My first adapter repo",
    exists_ok=True,
)

# Start a fine-tuning job, blocks until training is finished
adapter = pb.adapters.create(
    config=SFTConfig(
        base_model="qwen3-8b",
        apply_chat_template=True
    ),
    dataset=dataset,
    repo="my-adapter-repo",
    description="initial model with defaults",
)

3. Prompt your Adapter

Once you have a fine-tuned adapter (or a checkpoint, even while training is still in progress), you can prompt it against an existing deployment of the same base model to spot-check its quality. Shared endpoints are available for quick testing without first creating a private deployment.

client = pb.deployments.client("qwen3-8b")

prompt = "<YOUR PROMPT HERE>"  # example input, formatted like your training data
resp = client.generate(
    prompt,
    adapter_id="my-adapter-repo/1",  # <repo name>/<adapter version>
    max_new_tokens=512,
)
print(resp.generated_text)

4. Evaluate your Adapter

Before going to production, you may want to Evaluate your adapter to ensure it meets the desired response quality.

# evaluation_dataset: held-out (prompt, label) pairs not used for training
correct = 0
for prompt, label in evaluation_dataset:
    resp = client.generate(
        prompt,
        adapter_id="my-adapter-repo/1"
    )
    # Exact-match scoring: the generation must equal the label verbatim
    if resp.generated_text == label:
        correct += 1
accuracy = correct / len(evaluation_dataset)
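
Exact string matching is strict; depending on the task, a light normalization before comparing can give a fairer picture of quality. The sketch below is plain Python and assumes nothing beyond the client used above.

def normalize(text: str) -> str:
    # Trim surrounding whitespace and lowercase so "Positive " still matches "positive"
    return text.strip().lower()

correct = sum(
    normalize(client.generate(p, adapter_id="my-adapter-repo/1").generated_text) == normalize(label)
    for p, label in evaluation_dataset
)
accuracy = correct / len(evaluation_dataset)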

5. Create a Production Deployment

When you’re ready to serve your fine-tuned adapter in production, create a Private Deployment.

from predibase import DeploymentConfig

pb.deployments.create(
    name="my-qwen3-8b",
    config=DeploymentConfig(
        base_model="qwen3-8b", # Must be the same base model as the adapter was trained on
        min_replicas=0,
        max_replicas=1
    )
)

client = pb.deployments.client("my-qwen3-8b")
resp = client.generate(
    prompt,
    adapter_id="my-adapter-repo/1"
)
print(resp.generated_text)

Note that this is a base model deployment with multi-LoRA support: your newly trained adapter, along with any other adapters trained on the same base model, can be prompted against it by specifying adapter_id in the generate call.
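
For example, the same deployment can serve multiple adapters side by side, as long as they were trained on the same base model. The second adapter repo below is hypothetical and only illustrates the pattern.

client = pb.deployments.client("my-qwen3-8b")

# Prompt the adapter trained in this guide
resp_a = client.generate(prompt, adapter_id="my-adapter-repo/1")

# Prompt a different adapter trained on the same base model
# ("another-adapter-repo" is a hypothetical second repo)
resp_b = client.generate(prompt, adapter_id="another-adapter-repo/1")

# Omit adapter_id to prompt the base model itself
resp_base = client.generate(prompt)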