Predibase supports fine-tuning LLMs for classification tasks. Classification fine-tuning adds LoRA weights along with a new classification head, both of which are optimized during training. This is especially useful when you want your model to always predict from a fixed set of predefined labels:

  • Inference is faster, since the model predicts a class in a single forward pass, regardless of the number of tokens in each label.
  • The model can never hallucinate classes outside the label set.
  • Accuracy is higher compared to SFT for classification tasks.
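Conceptually, the trained artifact resembles the following rough PyTorch sketch. This is illustrative only, not Predibase internals: the module name, pooling strategy, and argument names are assumptions.

import torch.nn as nn

class ClassifierWithLoRA(nn.Module):
    """Illustrative sketch: frozen base LLM + LoRA weights + a new head."""

    def __init__(self, base_model: nn.Module, hidden_size: int, num_labels: int):
        super().__init__()
        self.base_model = base_model  # frozen weights, with trainable LoRA layers injected
        self.head = nn.Linear(hidden_size, num_labels)  # new, trainable classification head

    def forward(self, input_ids, attention_mask):
        hidden = self.base_model(input_ids, attention_mask).last_hidden_state
        pooled = hidden[:, -1, :]  # e.g. pool the last token's hidden state
        return self.head(pooled)  # logits over the fixed label set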

Ideal Use Cases

Routing and Orchestration

  • Training a model router.
  • Product classification.

Sentiment & Feedback

  • Sentiment analysis.
  • Customer feedback labeling.

Safety & Guardrails

  • Guardrails models.
  • Toxicity/PII detection.

Quick Start

To get started with classification fine-tuning:
  1. Prepare a dataset with a text field and a label field (see the example below).
  2. Kick off a training job using a ClassificationConfig.
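For step 1, a minimal dataset might look like this (the rows are hypothetical; the field names match the text and label fields above):

text,label
"A moving, beautifully shot film.",positive
"Two hours of my life I will never get back.",negative

You can then upload it, for example with pb.datasets.from_file (using the pb client initialized in the snippet below):

pb.datasets.from_file("imdb_sentiment_analysis.csv", name="imdb_sentiment_analysis")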
With the dataset uploaded, kick off training:

from predibase import Predibase, ClassificationConfig

# Initialize the Predibase client with your API token.
pb = Predibase(api_token="<PREDIBASE_API_TOKEN>")

adapter = pb.adapters.create(
    config=ClassificationConfig(
        base_model="qwen3-8b",
    ),
    dataset="imdb_sentiment_analysis",
)
For classification training, there is no option to automatically apply a chat template. If you want to use one, apply it to the text before uploading the dataset. Furthermore, turbo and turbo_lora are not applicable adapter types.
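If you do want chat-formatted inputs, one way to pre-apply a template is with the transformers tokenizer before upload. This is a minimal sketch assuming a pandas CSV workflow; the file names are placeholders:

import pandas as pd
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

df = pd.read_csv("imdb_sentiment_analysis.csv")  # columns: text, label

# Wrap each example's text in the model's chat template before uploading.
df["text"] = df["text"].apply(
    lambda t: tokenizer.apply_chat_template(
        [{"role": "user", "content": t}],
        tokenize=False,
    )
)

df.to_csv("imdb_sentiment_analysis_templated.csv", index=False)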

Next Steps