Predibase allows you to seamlessly train, iterate on, and deploy your models using an intuitive, declarative configuration. Under the hood, Predibase leverages Ludwig, the Declarative Deep Learning Framework, to power model training.
Predibase takes a GitOps-style approach to model building. This is evident in concepts like model repositories, version lineage, version diffs, and forking that appear throughout the platform.
Models as Configuration
All models in Predibase are powered by Ludwig and can be expressed in a declarative fashion. Ludwig models follow an architecture called Encoder-Combiner-Decoder (ECD), which is:
- composable: plug and play different cutting-edge models
- flexible: support for linear, tree-based, and deep learning models
- multi-modal: the data-driven architecture adapts to your data and handles both structured and unstructured inputs
- versatile: a single framework can support text classification, regression, time-series forecasting, image captioning, and much more
To read more about ECD, visit this page in the Ludwig Docs.
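As an illustration, here is a minimal Ludwig-style ECD configuration: encoders process each input feature, a combiner merges their outputs, and a decoder produces the prediction. The feature names below are hypothetical, and the exact keys may vary across Ludwig versions.

```yaml
input_features:
  - name: review_text        # hypothetical text column, processed by an encoder
    type: text
    encoder: parallel_cnn
  - name: product_category   # hypothetical categorical column
    type: category

combiner:
  type: concat               # merges all encoder outputs into one representation

output_features:
  - name: sentiment          # decoded into class probabilities
    type: category
```

Swapping the encoder, combiner, or output feature type changes the model architecture without requiring any training code.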
A model repository is the source of truth for model versions for a given use case. A repository helps you version, iterate on, and track different models over time and across collaborators. It also enables visualization, comparison, and deployment of all your model versions.
Model repositories are visible and editable by anyone in your organization.
A model repository maintains a record of all model versions trained within it. Moreover, Predibase displays a lineage of those model versions over time, showing the order in which models were trained and who trained each version. This lets you understand the lifecycle of experiments in the repository at a glance.
Since all models in Predibase can be expressed as declarative configurations, it's simple to diff two model versions to understand the delta between them and how it might affect model performance.
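For example, a diff between two versions might reduce to a single changed training parameter (the section name and values below are hypothetical):

```yaml
# version 1
trainer:
  learning_rate: 0.001
  epochs: 10

# version 2 — only the learning rate changed
trainer:
  learning_rate: 0.0001
  epochs: 10
```

Because the entire model is captured in configuration, a diff like this is the complete description of what changed between experiments.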