Use a Fine-tuned Model

Once your fine-tuning job is complete, you have a few options for next steps:

Analyze Logs and Metrics

Predibase supports experiment tracking via Model Repositories and performance analysis in the SDK and in the Predibase UI.

When training in the SDK, you can stream status updates and metrics by calling `.get()` on the job returned by `llm.finetune()`.

After kicking off a training job in the SDK, you'll be provided with a link to the Model Version Page for that model. From there, the Predibase UI offers various visualizations and charts to help you analyze and compare model performance.
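As a sketch of that flow, kicking off a job and blocking on its metric stream might look like the following. The base model URI, dataset name, and repo name are illustrative, and the keyword arguments follow the Predibase Python SDK's fine-tuning interface; substitute your own values.

```python
from predibase import PredibaseClient

pc = PredibaseClient()  # reads your Predibase API token from the environment

# Illustrative names; substitute your own base model, dataset, and repo.
llm = pc.LLM("hf://meta-llama/Llama-2-7b-hf")
job = llm.finetune(
    prompt_template="Summarize the following text: {text}",
    target="summary",
    dataset=pc.get_dataset("my_dataset"),
    repo="my-finetuned-repo",
)

# get() blocks until training finishes, streaming updates and metrics
# as it goes, and returns the resulting model version.
model = job.get()
```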

Run Inference

You can run inference on your newly fine-tuned model via Serverless Endpoints or Dedicated Deployments, depending on your use case.
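As one possible sketch of the serverless path, you can attach your fine-tuned adapter to a shared base deployment and prompt it. The deployment URI and repo name below are illustrative placeholders.

```python
from predibase import PredibaseClient

pc = PredibaseClient()

# Attach the fine-tuned adapter to a serverless base deployment.
# The deployment URI and repo name are illustrative.
base_llm = pc.LLM("pb://deployments/llama-2-7b")
adapter = pc.get_model("my-finetuned-repo")
ft_llm = base_llm.with_adapter(adapter)

result = ft_llm.prompt("Summarize the following text: ...", max_new_tokens=128)
print(result.response)
```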

Download Your Model

You can download your model artifacts using the following commands:

# Fetch a model version from a repository; the version number is optional
# and defaults to the latest version.
model = pc.get_model("<your_finetuned_model_repo_name>", <optional_model_version_number>)
model.download(name="llm.zip", location="/path/to/folder")

Since Predibase supports adapter-based fine-tuning, the exported model files will contain only the adapter weights, not the full LLM weights.
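Because the archive holds only adapter weights, it is small enough to inspect locally. A minimal sketch using the standard library (the helper name and path are illustrative; the exact file names inside the archive depend on your export):

```python
import zipfile

def list_artifacts(zip_path: str) -> list[str]:
    """Return the file names inside a downloaded model archive."""
    with zipfile.ZipFile(zip_path) as zf:
        return zf.namelist()

# Example (path is hypothetical):
# print(list_artifacts("/path/to/folder/llm.zip"))
```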