

This page covers some popular parameters you can configure for fine-tuning. Some are available directly in the llm.finetune function call, while others can be set by modifying the Ludwig config (see example).
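As an illustrative sketch only (exact keys can vary across Ludwig versions, and the base model shown is a placeholder), the parameters described below map onto a Ludwig config roughly like this:

```yaml
model_type: llm
base_model: meta-llama/Llama-2-7b-hf   # placeholder base model
adapter:
  type: lora
  r: 8              # rank of the low-rank matrices
  alpha: 16         # defaults to 2 * r
  dropout: 0.05     # dropout probability for LoRA layers
trainer:
  type: finetune
  epochs: 3
  learning_rate: 0.0002
```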

Epochs integer

The number of full passes over the training dataset that occur during training of the model. Increase the number of epochs when you want the model to more closely follow its training dataset; too many epochs, however, can lead to overfitting.

Learning Rate float

Controls how much to change the model in response to the estimated error each time the model weights are updated. Higher values update the weights more aggressively and can destabilize training; lower values are more stable but slower to converge.
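A minimal sketch of how the learning rate scales each weight update, using plain-Python gradient descent on the toy objective f(w) = (w - 3)^2 (not Predibase or Ludwig code):

```python
# Toy gradient descent: the learning rate multiplies each weight update.

def grad(w: float) -> float:
    return 2.0 * (w - 3.0)      # derivative of (w - 3)^2


def train(w: float, learning_rate: float, epochs: int) -> float:
    for _ in range(epochs):                  # one update per "epoch" here
        w = w - learning_rate * grad(w)      # update scaled by learning rate
    return w


# A moderate learning rate converges toward the minimum at w = 3 ...
print(train(0.0, learning_rate=0.1, epochs=50))
# ... while one that is too large overshoots and diverges.
print(train(0.0, learning_rate=1.1, epochs=50))
```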

R (Rank) integer

The rank of the low-rank matrices learned during fine-tuning. A smaller R produces lower-rank matrices with fewer parameters to learn. This can speed up training and reduce computational resources, but potentially at the expense of capturing task-specific information.
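To make the parameter-count trade-off concrete, here is a quick back-of-the-envelope calculation. For a frozen weight matrix of shape (d_in, d_out), LoRA learns two low-rank matrices of shapes (d_in, R) and (R, d_out), so it adds R * (d_in + d_out) trainable parameters per adapted matrix (the 4096x4096 shape is just an illustrative example):

```python
# How many trainable parameters LoRA adds to one weight matrix at rank R.

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """LoRA adds an (d_in x r) and an (r x d_out) matrix per adapted layer."""
    return r * (d_in + d_out)


# Example: a 4096x4096 projection matrix.
full = 4096 * 4096  # 16,777,216 frozen parameters
for r in (8, 16, 64):
    added = lora_params(4096, 4096, r)
    print(f"R={r:3d}: {added:,} trainable params "
          f"({100 * added / full:.2f}% of the full matrix)")
```

Even at R = 64, the adapter is a small fraction of the frozen matrix, which is why lowering R mainly trades expressiveness for speed rather than producing dramatic size savings on its own.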

Other parameters


While we provide a method for you to modify Ludwig configs directly in Predibase, we recommend doing so only if you're an advanced user. Modifying parameters other than the ones mentioned above may cause your model not to train properly.

Dropout float

The dropout probability for LoRA layers. Defaults to 0.05. Recommended range: 0.05 to 0.3.
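For intuition, here is a minimal sketch of inverted dropout as it is commonly applied to layer activations during training (an illustration, not Ludwig's implementation): each value is zeroed with probability p, and survivors are scaled by 1 / (1 - p) so the expected activation is unchanged.

```python
import random


def dropout(values: list[float], p: float) -> list[float]:
    """Inverted dropout: zero each value with probability p, scale the rest."""
    if p <= 0.0:
        return list(values)            # p = 0 disables dropout entirely
    keep = 1.0 - p
    return [v / keep if random.random() < keep else 0.0 for v in values]


random.seed(0)
# Some entries are zeroed; the rest are scaled up by 1 / 0.75.
print(dropout([1.0, 2.0, 3.0, 4.0], p=0.25))
```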

Alpha integer

The alpha parameter for LoRA scaling. Alpha controls the effect of the LoRA weights on the final output prediction. Generally, we recommend not tuning alpha unless absolutely necessary. Defaults to 2 * R.
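The scaling itself is simple arithmetic: in the standard LoRA formulation, the learned low-rank update is multiplied by alpha / R before being added to the frozen weights' output. This sketch shows why the default alpha = 2 * R keeps that multiplier constant:

```python
# LoRA scaling factor: the low-rank update is multiplied by alpha / r.

def lora_scaling(alpha: int, r: int) -> float:
    return alpha / r


# With the default alpha = 2 * R, the multiplier stays fixed at 2.0
# no matter which rank you choose:
for r in (8, 16, 64):
    print(f"r={r:3d}, alpha={2 * r:3d} -> scaling {lora_scaling(2 * r, r)}")
```

Because the default pins the scaling at a constant 2.0 across ranks, tuning R alone does not change the relative strength of the adapter's contribution, which is one reason alpha rarely needs tuning.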

Full List

To see the full list of configurable parameters during training, you can visit the Ludwig Docs and see a corresponding example here.