- See if your adapter’s performance improves with further training on the same data
- Fine-tune on a similar dataset without starting over
- Resume training from a specific checkpoint
Starting a Continued Training Run
To continue training from an existing adapter, provide the adapter ID in your configuration:
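A minimal sketch of what such a configuration might look like is shown below, written as a plain Python dictionary. The field names (`base_model`, `training_data`, `continued_from_adapter`) and the IDs are placeholders chosen for illustration, not the platform's actual schema; consult the API reference for the exact request shape.

```python
# Hypothetical continued-training configuration. All field names and IDs are
# illustrative placeholders, not the platform's actual schema.
continued_run_config = {
    "base_model": "my-base-model",               # same base model as the original run
    "training_data": "my-dataset-id",            # dataset to continue training on
    "continued_from_adapter": "adapter-abc123",  # ID of the existing adapter
    "epochs": 2,                                 # additional epochs to run
    "enable_early_stopping": True,
}
print(continued_run_config)
```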
Configuration Options
When continuing training, only two parameters can be modified:
- `epochs` or `train_steps`: number of additional epochs or training steps to run; only one of the two can be set.
- `enable_early_stopping`: whether to enable early stopping.
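To make the restriction concrete, the sketch below expresses it as a small Python check. The parameter names mirror the ones documented above; the function itself is illustrative and not part of the platform.

```python
# Illustrative sketch of the rule above: only these overrides are accepted when
# continuing a run, and epochs/train_steps are mutually exclusive.
ALLOWED_OVERRIDES = {"epochs", "train_steps", "enable_early_stopping"}

def validate_overrides(overrides: dict) -> None:
    unknown = set(overrides) - ALLOWED_OVERRIDES
    if unknown:
        raise ValueError(f"cannot modify {sorted(unknown)} when continuing training")
    if "epochs" in overrides and "train_steps" in overrides:
        raise ValueError("set either epochs or train_steps, not both")

validate_overrides({"train_steps": 500, "enable_early_stopping": False})  # passes
```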
Training Progress
Training on the Same Dataset
When continuing training on the same dataset:
- Training progress from the previous run is preserved
- Checkpoints and metrics are maintained
- Training picks up exactly where it left off
- Optimizer, learning rate scheduler, and RNG state are restored
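As an illustration of what restoring this state involves, the sketch below shows a generic PyTorch-style checkpoint resume. The platform performs the equivalent internally; none of the names here are taken from its API.

```python
# Illustrative only: a generic PyTorch-style resume showing the kind of state
# that "picking up exactly where training left off" implies.
import random

import numpy as np
import torch

def resume_from_checkpoint(path, model, optimizer, scheduler):
    ckpt = torch.load(path)
    model.load_state_dict(ckpt["model"])          # adapter (LoRA) weights
    optimizer.load_state_dict(ckpt["optimizer"])  # optimizer moments
    scheduler.load_state_dict(ckpt["scheduler"])  # learning-rate schedule position
    torch.set_rng_state(ckpt["torch_rng"])        # RNG state for reproducible data order
    np.random.set_state(ckpt["numpy_rng"])
    random.setstate(ckpt["python_rng"])
    return ckpt["global_step"]                    # step counter to resume from
```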
Training on a Different Dataset
You can also use an existing adapter as the starting point for fine-tuning on a new dataset:
- Training starts fresh but uses the LoRA weights from the final checkpoint of the base run as initialization
- Optimizer, learning rate scheduler, and RNG are re-initialized
- Only `epochs`/`train_steps` and `enable_early_stopping` are configurable
- Training progress from the base run is not preserved
- A small warmup ratio is used to prevent catastrophic forgetting in the early stages of training
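To illustrate what a small warmup ratio means in practice, the toy sketch below ramps the learning rate linearly over the first few percent of steps. The 3% ratio and the constant schedule after warmup are arbitrary choices for illustration, not the platform's defaults.

```python
# Toy warmup schedule: the learning rate ramps linearly from near zero to its
# peak over the first warmup_ratio fraction of steps, keeping early updates
# small so the adapter's existing weights are not overwritten abruptly.
def lr_with_warmup(step: int, total_steps: int, peak_lr: float, warmup_ratio: float = 0.03) -> float:
    warmup_steps = max(1, int(total_steps * warmup_ratio))
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps  # linear ramp
    return peak_lr                                  # main schedule (constant here for simplicity)

# With 1,000 steps and a 3% warmup ratio, the first 30 steps ramp up.
print([lr_with_warmup(s, 1000, 1e-4) for s in (0, 14, 29, 999)])
```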
Next Steps
- Explore different adapter types
- Start evaluating your models