Predibase supports fine-tuning LLMs for classification tasks.
This adds LoRA weights as well as a new classification head, both of which are optimized during training. This is especially useful when you know your model should always predict from a fixed set of predefined labels. Inference will be faster, regardless of the number of tokens per label. The model will never hallucinate classes. Finally, accuracy will be higher compared to SFT for classification tasks.
```python
from predibase import Predibase, ClassificationConfig

pb = Predibase(api_token="<your-api-token>")

adapter = pb.adapters.create(
    config=ClassificationConfig(
        base_model="qwen3-8b",
    ),
    dataset="imdb_sentiment_analysis",
)
```
For classification training, there is no option to automatically apply a chat template. If you want to use one, apply it to the text before uploading the dataset. Furthermore, the turbo and turbo_lora adapter types are not applicable to classification.
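As an illustration, here is a minimal sketch of pre-applying a chat template to your examples before uploading the dataset. The template string, column names, and sample data below are assumptions for illustration; for a real base model, you would typically use the tokenizer's `apply_chat_template()` from Hugging Face transformers rather than a hand-written template.

```python
# Minimal sketch: apply a chat-style template to each example's text before
# uploading the dataset. The template below is a hypothetical ChatML-style
# format; the correct template depends on your base model, and in practice
# you would prefer tokenizer.apply_chat_template() from transformers.

TEMPLATE = (
    "<|im_start|>user\n"
    "Classify the sentiment of this review:\n{text}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

def apply_template(rows):
    # Wrap the "text" field of each example; labels pass through unchanged.
    return [{**row, "text": TEMPLATE.format(text=row["text"])} for row in rows]

# Hypothetical examples with the assumed "text" and "label" columns.
rows = [
    {"text": "A moving, beautifully shot film.", "label": "positive"},
    {"text": "Two hours I will never get back.", "label": "negative"},
]

formatted = apply_template(rows)
print(formatted[0]["text"])
```

After formatting, write the rows back out (e.g. to CSV) and upload that file as your Predibase dataset in place of the raw text.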