Model Export

A trained model can be downloaded to your local machine through the Predibase UI. This is useful when you have your own serving stack that you'd like to plug Predibase models into.

Downloading a Model

On the details page for a trained model, click the button with the Download icon and select one of the supported output frameworks:

The download will start automatically once the model has been successfully packaged for export.

Note: Exporting a model for download requires an active engine. If your selected engine isn't active, it may take some time for the engine to spin up.

Supported Frameworks

The model can be exported to the following formats:

  • ludwig: a zip file containing the native Ludwig model files.
  • torchscript: a single file containing an intermediate representation of a PyTorch model that can be run in a high-performance environment.
  • triton: a tarfile containing a compiled TorchScript model and the configuration needed to deploy it with the open-source Triton Inference Server.
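
A model exported as torchscript can be loaded with torch.jit.load and run without Ludwig installed. The sketch below uses a toy scripted module in place of a real export (a real Predibase TorchScript model's input signature depends on its feature configuration):

```python
import torch

# Toy module standing in for a real exported model; the real export's
# forward() signature is determined by the model's features.
class Toy(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2

# Script and save the module, mimicking a TorchScript export.
torch.jit.script(Toy()).save("toy_model.pt")

# Loading and running the saved file requires only PyTorch.
loaded = torch.jit.load("toy_model.pt")
out = loaded(torch.tensor([1.0, 2.0]))  # tensor([2., 4.])
```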


Currently, certain models cannot be exported to TorchScript due to limitations in preprocessing support. Specifically, models with the following feature types may not export to TorchScript successfully:

  • Vector
  • Audio
  • Date
  • H3

Support for exporting models with these feature types is a work in progress.

Exporting to External Datastores

Predibase supports exporting trained models to external datastores (e.g., S3) via the EXPORT MODEL PQL statement.

Using a Downloaded Model

Install dependencies

pip install ludwig

Load (and preprocess) data

You can pass the same raw data that you used to train the model directly to model.predict(); Ludwig takes care of the necessary preprocessing steps.

import pandas as pd

# Raw input data in the same format used for training.
input_data = {
    "Pclass": ["3"],
    "Sex": ["male"],
    "Age": [34.5],
    "SibSp": [0.0],
    "Parch": [0.0],
    "Fare": [7.8292],
    "Embarked": ["Q"],
    "Cabin": ["C85"],
}
df = pd.DataFrame(input_data)

Load and use the model

Assuming the model was downloaded in Ludwig format and unzipped to a local directory, e.g. /path/to/model (replace this with the actual path):

from ludwig.api import LudwigModel

model = LudwigModel.load("/path/to/model")

preds, _ = model.predict(df)


This will output something like:

    Survived_probabilities                     Survived_predictions  Survived_probabilities_False  Survived_probabilities_True  Survived_probability
0   [0.8492070138454437, 0.15079298615455627]  False                 0.849207                      0.150793                     0.849207
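
The predictions come back as a regular pandas DataFrame, so individual values can be pulled out by column. A minimal sketch using the column names from the example output above (the actual names vary with the output feature name):

```python
import pandas as pd

# Stand-in for the `preds` DataFrame returned by model.predict();
# column names follow the example output above.
preds = pd.DataFrame({
    "Survived_predictions": [False],
    "Survived_probability": [0.849207],
})

survived = bool(preds.loc[0, "Survived_predictions"])     # False
confidence = float(preds.loc[0, "Survived_probability"])  # 0.849207
```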