LiteLLM 🚅
Integrate Predibase with LiteLLM
LiteLLM handles load balancing, fallbacks, and spend tracking across 100+ LLMs in the OpenAI format. The Predibase integration provides access to base open-source LLMs and highly efficient fine-tuned adapters.
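A minimal sketch of routing a request to a Predibase-hosted model through LiteLLM's OpenAI-format `completion` call. The model slug, environment variable names, and prompt are illustrative assumptions; substitute the deployment and credentials from your own Predibase account.

```python
import os

# Hypothetical model slug: the "predibase/" prefix tells LiteLLM to route the
# request to its Predibase provider.
request = {
    "model": "predibase/llama-3-8b-instruct",
    "messages": [{"role": "user", "content": "What is Predibase?"}],
}

if os.environ.get("PREDIBASE_API_KEY"):
    # Only call out to the API when credentials are configured.
    from litellm import completion  # pip install litellm

    response = completion(
        api_key=os.environ["PREDIBASE_API_KEY"],
        tenant_id=os.environ.get("PREDIBASE_TENANT_ID"),
        **request,
    )
    print(response.choices[0].message.content)
```

Because requests use the OpenAI message format, the same `request` payload works unchanged with LiteLLM's other providers; only the `model` prefix changes. Fine-tuned adapters are addressed by their adapter identifier rather than a separate endpoint; see the linked documentation for the exact parameter.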
See the LiteLLM documentation for full details on the integration.