Parameter-efficient fine-tuning with adapters
Use the `apply_chat_template` flag to automatically format your prompts with the base model's chat template. This is particularly useful when fine-tuning instruction-tuned models. `apply_chat_template` is supported in the `SFTConfig`.
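As a rough sketch, enabling the flag might look like the following. The surrounding field names here are placeholders for illustration, not the SDK's actual signature; check your SDK reference for the exact `SFTConfig` fields:

```python
# Hypothetical configuration sketch; every field other than
# apply_chat_template is a placeholder, not a confirmed SDK field.
config = SFTConfig(
    base_model="my-base-model",   # placeholder base model name
    apply_chat_template=True,     # format each training sample with the chat template
)
```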
When set to `True`, each training sample in the dataset will automatically have the model's chat template applied to it. Note that this parameter is only supported for instruction and chat fine-tuning, not continued pretraining.
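To illustrate what "applying a chat template" means, here is a toy formatter that renders role/content messages into a single training string. The control tags used below are purely illustrative; each base model defines its own template, and the real formatting is handled for you when the flag is enabled:

```python
def apply_toy_chat_template(messages):
    """Render a list of {role, content} messages into one prompt string,
    in the style of a chat template. The <|...|> tags are illustrative
    only; each base model defines its own template."""
    parts = [f"<|{m['role']}|>\n{m['content']}" for m in messages]
    # Append the assistant tag so the model generates the reply next.
    parts.append("<|assistant|>")
    return "\n".join(parts)

sample = [{"role": "user", "content": "What is 2 + 2?"}]
print(apply_toy_chat_template(sample))
```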
When `apply_chat_template` is set to `True`, please use only the OpenAI-compatible method to query the model, because the chat template will automatically be applied to your inputs. You can see sample code in the Python SDK example.
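The key point is to send plain role/content messages rather than a pre-formatted prompt string, since the template is applied server-side. A minimal sketch of building such a request (the model name is a placeholder):

```python
def build_chat_request(user_content, model):
    """Build an OpenAI-compatible chat completion request body.
    Send role/content messages as-is; because the chat template is
    applied server-side, do not pre-format the prompt yourself."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }

request = build_chat_request("Hello!", "my-fine-tuned-adapter")
print(request)
```

With the `openai` Python package, this body maps directly onto `client.chat.completions.create(**request)` against your provider's OpenAI-compatible endpoint.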