

```python
# Pick a Hugging Face LLM to fine-tune.
# The URI must look something like "hf://meta-llama/Llama-2-7b-hf"
llm = pc.LLM(uri)

# Asynchronous fine-tuning: returns a ModelFuture immediately
llm.finetune(prompt_template=None, target=None, dataset=None, engine=None, config=None, repo=None)

# Synchronous (blocking) fine-tuning: call .get() on the returned future
llm.finetune(prompt_template=None, target=None, dataset=None, engine=None, config=None, repo=None).get()
```

This method allows you to train a fine-tuned LLM without deploying it. To learn more about fine-tuning, see our primer.


Where possible, Predibase will use sensible defaults for your fine-tuning job (including generating a training config, selecting an engine for you, etc.).

prompt_template: Optional[str]

The prompt, in template-string form, to use when fine-tuning the LLM. The name of the dataset feature to use as input should be surrounded by curly braces, e.g. {column_name}.
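For illustration, here is a minimal sketch of how a bracketed template maps a dataset row to prompt text. The `ref` column name is borrowed from the example later on this page, and the rendering is shown with plain Python string formatting as an assumption, not Predibase's internal implementation:

```python
# Hypothetical illustration: fill a bracketed template from a dataset row.
# `ref` is a column name taken from the example usage below.
prompt_template = "Given a target sentence: {ref}, say something."
row = {"ref": "name[The Eagle], eatType[coffee shop]"}
prompt = prompt_template.format(**row)
print(prompt)
# → Given a target sentence: name[The Eagle], eatType[coffee shop], say something.
```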

target: Optional[str]

The name of the column or feature in the dataset to finetune against.

dataset: Optional[Union[str, Dataset]]

The dataset to use for finetuning (and which should contain the target above). This can either be a Predibase Dataset object, or a raw string mapping to the name of one of your Predibase Datasets.

engine: Optional[Union[str, Engine]]

The engine to use for finetuning. This can either be a Predibase Engine object or a raw string mapping to the name of one of your Predibase Engines.

config: Optional[Union[str, Dict]]

The model config to use for training.

repo: Optional[str]

The name of the model repo that will be created for training.


Returns:

llm.finetune: A ModelFuture object representing the training job kicked off by Predibase.
llm.finetune(...).get(): A Model object holding the trained, fine-tuned LLM.
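The asynchronous/synchronous split follows the familiar future pattern: the call returns a handle immediately, and calling `.get()` on that handle blocks until training completes. As a rough analogy only, using Python's standard `concurrent.futures` rather than the Predibase SDK:

```python
# Analogy for the ModelFuture pattern using Python's standard library;
# this is NOT Predibase code, just the same submit/block shape.
from concurrent.futures import ThreadPoolExecutor

def train():
    # Stand-in for a long-running fine-tuning job
    return "trained-model"

with ThreadPoolExecutor() as pool:
    future = pool.submit(train)  # returns immediately, like llm.finetune(...)
    model = future.result()      # blocks until done, like ModelFuture.get()

print(model)  # → trained-model
```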

Example Usage:

```python
fine_tuned_llm = llm.finetune(
    prompt_template="Given a target sentence: {ref}, say something.",  # `ref` is a column in `viggo`
    target="mr",  # `mr` is a column in `viggo`
    dataset="viggo",
    repo="llama-2-7b-viggo",
)
# Model repository llama-2-7b-viggo already exists and new models will be added to it.
# Monitoring status of model training...
# Compute summary:
#   Cloud: aws
#   * g4dn.2xlarge (x2)
# Training job submitted to Predibase. Track progress here:
```

Supported OSS LLMs

You may fine-tune any Hugging Face model meeting the following criteria:

  • Has the "Text Generation" and "Transformer" tags
  • Does not have a "custom_code" tag
  • Maximum of 7 billion parameters

You may attempt to fine-tune larger LLMs; however, you may encounter training failures if the right parameters are not configured. We are continuing to build out support for larger and more diverse LLMs.
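The eligibility criteria above can be sketched as a simple predicate. This is a hypothetical illustration: the tag names and the way metadata is represented here are assumptions for the sketch, not values queried from the Hugging Face Hub API:

```python
# Minimal sketch of the eligibility rules above, applied to hypothetical
# model metadata. Tag spellings are assumptions, not official Hub slugs.
def is_finetunable(tags, num_params):
    return (
        "text-generation" in tags
        and "transformers" in tags
        and "custom_code" not in tags
        and num_params <= 7_000_000_000
    )

print(is_finetunable({"text-generation", "transformers"}, 7_000_000_000))                  # True
print(is_finetunable({"text-generation", "transformers", "custom_code"}, 1_000_000_000))   # False
print(is_finetunable({"text-generation", "transformers"}, 13_000_000_000))                 # False
```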