
pb.adapters.create

Create a new adapter by starting a blocking fine-tuning job

Parameters:

   config: FinetuningConfig, default None
The fine-tuning configuration, including the base model and any training hyperparameters

   dataset: str, default None
The dataset to use for fine-tuning

   continue_from_version: str, default None
The adapter version to continue training from

   repo: str, default None
Name of the adapter repo in which to store the newly created adapter

   description: str, default None
Description for the adapter

   show_tensorboard: bool, default False
If true, launch a tensorboard instance to view training logs

Returns:

   Adapter

Examples:

Example 1: Create an adapter with defaults

# Assumes the Predibase client has been initialized, e.g.:
# from predibase import Predibase, FinetuningConfig
# pb = Predibase(api_token="<PREDIBASE_API_TOKEN>")

# Create an adapter repository
repo = pb.repos.create(name="news-summarizer-model", description="TLDR News Summarizer Experiments", exists_ok=True)

# Start a fine-tuning job; blocks until training is finished
adapter = pb.adapters.create(
    config=FinetuningConfig(
        base_model="mistral-7b"
    ),
    dataset="tldr_news",
    repo=repo,
    description="initial model with defaults"
)

Example 2: Create a new adapter with customized parameters

# Create an adapter repository
repo = pb.repos.create(name="news-summarizer-model", description="TLDR News Summarizer Experiments", exists_ok=True)

# Start a fine-tuning job with custom parameters; blocks until training is finished
adapter = pb.adapters.create(
    config=FinetuningConfig(
        base_model="mistral-7b",
        task="instruction_tuning",
        epochs=1,  # default: 3
        rank=8,  # default: 16
        learning_rate=0.0001,  # default: 0.0002
        target_modules=["q_proj", "v_proj", "k_proj"],  # default: None (infers [q_proj, v_proj] for mistral-7b)
    ),
    dataset="tldr_news",
    repo=repo,
    description="changing epochs, rank, and learning rate"
)

Example 3: Create a new adapter by continuing to train on top of an existing adapter version

adapter = pb.adapters.create(
    # Note: only `epochs` and `enable_early_stopping` are available parameters in this case.
    config=FinetuningConfig(
        epochs=3,  # The maximum number of ADDITIONAL epochs to train for
        enable_early_stopping=False,
    ),
    continue_from_version="myrepo/3",  # The adapter version to resume training from
    dataset="mydataset",
    repo="myrepo"
)

Example 4: Create a new adapter by continuing to train on top of a specific checkpoint of an existing adapter version

adapter = pb.adapters.create(
    # Note: only `epochs` and `enable_early_stopping` are available parameters in this case.
    config=FinetuningConfig(
        epochs=3,  # The maximum number of ADDITIONAL epochs to train for
        enable_early_stopping=False,
    ),
    continue_from_version="myrepo/3@11",  # Resumes from checkpoint 11 of `myrepo/3`.
    dataset="mydataset",
    repo="myrepo"
)
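
The show_tensorboard flag documented above can be combined with any of these calls. A minimal sketch, reusing the illustrative repo and dataset from Example 1:

# Train with defaults and launch a TensorBoard instance to follow training logs
adapter = pb.adapters.create(
    config=FinetuningConfig(
        base_model="mistral-7b"
    ),
    dataset="tldr_news",
    repo=repo,
    description="defaults with tensorboard enabled",
    show_tensorboard=True,  # launches a tensorboard instance to view training logs
)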

Async fine-tuning job

pb.finetuning.jobs.create

Start a non-blocking fine-tuning job

Parameters:

   config: FinetuningConfig, default None
The fine-tuning configuration, including the base model and any training hyperparameters

   dataset: str, default None
The dataset to use for fine-tuning

   repo: str, default None
Name of the adapter repo in which to store the newly created adapter

   description: str, default None
Description for the adapter

   watch: bool, default False
Defines whether to block until the fine-tuning job finishes

Returns:

   FinetuningJob

# Start a non-blocking fine-tuning job; `dataset` and `repo` are created as in the examples above
job: FinetuningJob = pb.finetuning.jobs.create(
    config=FinetuningConfig(
        base_model="mistral-7b",
        task="instruction_tuning",
        epochs=1,  # default: 3
        rank=8,  # default: 16
        learning_rate=0.0001,  # default: 0.0002
        target_modules=["q_proj", "v_proj", "k_proj"],  # default: None (infers [q_proj, v_proj] for mistral-7b)
    ),
    dataset=dataset,
    repo=repo,
    description="changing epochs, rank, and learning rate",
    watch=False,  # return immediately instead of blocking until the job finishes
)
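
With watch=False the call returns as soon as the job is queued, so the trained adapter is not available from this call itself. A hedged follow-up sketch, assuming the finished adapter is later fetched by its "repo/version" name through the client's adapters accessor (the version number below is illustrative; see the adapters reference for the exact call):

# Later, once the job has completed, fetch the resulting adapter by "repo/version"
adapter = pb.adapters.get("news-summarizer-model/1")  # illustrative version number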