🚀 What is inference-models?

inference-models is Roboflow's library for running predictions with computer vision models. It is designed to be fast, reliable, and user-friendly, and it offers:

  • Multi-Backend Support: Run models with PyTorch, ONNX, TensorRT, or Hugging Face backends
  • Automatic Model Loading: Smart model resolution and backend selection
  • Minimal Dependencies: Composable extras system for installing only what you need
  • Behavior-Based Interfaces: Models with similar behavior share consistent APIs; custom models can define their own
  • Full Roboflow Platform Support: Run any model trained on Roboflow

Roadmap for inference-models

We are still making changes to the API and adding new features. The API should already be fairly stable, but if you use it in production we advise pinning to a specific version and reviewing our roadmap.
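
For example, you can pin the release you have tested in your deployment environment (the version below is a placeholder, not a recommendation):

pip install "inference-models==<version>"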

💻 Installation

CPU installation with uv:

uv pip install inference-models

or with pip:

pip install inference-models

inference-models can also be installed with CUDA and TensorRT support - see the Installation Guide for more options.
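
The composable extras system mentioned above lets you install only the backend you need, using standard pip extras syntax. The extra name below is an illustrative placeholder; check the Installation Guide for the exact names:

pip install "inference-models[<backend-extra>]"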

🏃‍➡️ Usage

Load and run a pretrained model:

import cv2
import supervision as sv
from inference_models import AutoModel

# Load pretrained model from Roboflow
model = AutoModel.from_pretrained("rfdetr-base")

# Run inference (works with numpy arrays or torch.Tensor)
image = cv2.imread("<path-to-your-image>")
predictions = model(image)

# Use with supervision
annotator = sv.BoxAnnotator()
annotated = annotator.annotate(image, predictions[0].to_supervision())
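
To inspect the result, you can save or display the annotated image with standard OpenCV and supervision calls (nothing specific to inference-models here):

# Save the annotated image, or plot it in a notebook
cv2.imwrite("annotated.jpg", annotated)
sv.plot_image(annotated)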

Load and run models trained on the Roboflow platform:

import cv2
import supervision as sv
from inference_models import AutoModel

# Load your custom model from Roboflow
model = AutoModel.from_pretrained(
    "<your-project>/<version>",
    api_key="<your-api-key>"  # model access secured with API key
)

# Run inference (works with numpy arrays or torch.Tensor)
image = cv2.imread("<path-to-your-image>")
predictions = model(image)

# Use with supervision
annotator = sv.BoxAnnotator()
annotated = annotator.annotate(image, predictions[0].to_supervision())
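
Since to_supervision() feeds a supervision annotator above, it is assumed here to return a standard sv.Detections object, so you can also read boxes, confidence scores, and class ids from it directly:

detections = predictions[0].to_supervision()

# Inspect raw boxes, scores, and class ids (standard supervision fields)
for xyxy, confidence, class_id in zip(
    detections.xyxy, detections.confidence, detections.class_id
):
    print(class_id, confidence, xyxy)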

🧠 Supported Model Architectures

  • RFDetr
  • SAM models family
  • Vision-Language Models (Florence, PaliGemma, Qwen, SmolVLM, Moondream)
  • OCR (DocTR, EasyOCR, TrOCR)
  • YOLO
  • and many more

For detailed model documentation, see Supported Models.

🔧 Run your local models

Load your own model implementations from a local directory, including architectures that are not part of the main inference-models package. This is especially valuable for deploying custom models to production.

from inference_models import AutoModel

model = AutoModel.from_pretrained(
    "/path/to/my_custom_model",
    allow_local_code_packages=True
)

See Load Models from Local Packages for complete details on creating custom model packages.

📄 License

The inference-models package is licensed under Apache 2.0. Individual models may have different licenses; see Supported Models for details.


Ready to get started? Head to the Quick Overview