🐳 Docker Environments¶
Documentation for using inference-models in Docker containers.
Recommended for Jetson
Docker is the recommended installation method for NVIDIA Jetson devices. See Hardware Compatibility for details.
Work in Progress
Docker builds for inference-models are currently in progress. Some builds are automated, while others are manual. We encourage you to test these images and raise issues if you encounter any problems.
📦 Available Docker Images¶
Pre-built experimental Docker images are available on Docker Hub under the roboflow/inference-exp repository.
x86_64 / AMD64 Images¶
| Image Tag | Base | CUDA Version | Status |
|---|---|---|---|
| `roboflow/inference-exp:cpu-latest`<br>`roboflow/inference-exp:cpu-<version>` | Ubuntu 22.04 | N/A (CPU only) | 🤖 Automated |
| `roboflow/inference-exp:cu118-latest`<br>`roboflow/inference-exp:cu118-<version>` | Ubuntu 22.04 | CUDA 11.8 | 🤖 Automated |
| `roboflow/inference-exp:cu124-latest`<br>`roboflow/inference-exp:cu124-<version>` | Ubuntu 22.04 | CUDA 12.4 | 🤖 Automated |
| `roboflow/inference-exp:cu126-latest`<br>`roboflow/inference-exp:cu126-<version>` | Ubuntu 22.04 | CUDA 12.6 | 🤖 Automated |
| `roboflow/inference-exp:cu128-latest`<br>`roboflow/inference-exp:cu128-<version>` | Ubuntu 22.04 | CUDA 12.8 | 🤖 Automated |
Image Tags

- `-latest` tags point to the most recent release
- `-<version>` tags (e.g., `cpu-0.17.3`) pin to a specific version for reproducibility
Jetson Images¶
| Image Tag | JetPack Version | Status |
|---|---|---|
| `roboflow/roboflow-inference-server-jetson-5.1.1:0.62.5-experimental` | JetPack 5.1 (L4T 35.2.1) | ✋ Manual (experimental) |
| `roboflow/inference-exp:jp61-*` | JetPack 6.1 (L4T 36.x) | 🚧 In development |
Image Status
- 🤖 Automated - Built automatically on releases and main branch pushes
- ✋ Manual - Built manually; not part of automated pipeline
- 🚧 In development - Coming soon
🚀 Quick Start¶
CPU-only¶
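A minimal way to start the CPU image and drop into a shell (the `bash` entrypoint mirrors the usage-tips example below; your workflow may invoke a different command):

```shell
# Pull and run the latest CPU-only image with an interactive shell
docker run -it \
  roboflow/inference-exp:cpu-latest \
  bash
```

For reproducible deployments, substitute a pinned `cpu-<version>` tag for `cpu-latest`.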
GPU (CUDA 12.8)¶
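A sketch of running the CUDA 12.8 image with GPU access (requires the NVIDIA Container Toolkit on the host; the `bash` entrypoint is an assumption):

```shell
# Expose all host GPUs to the container via the NVIDIA runtime
docker run -it \
  --gpus all \
  roboflow/inference-exp:cu128-latest \
  bash
```

Pick the image tag matching the CUDA version supported by your host driver (see the table above).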
Jetson (JetPack 5.1 - Experimental)¶
```shell
docker run -it \
  --runtime nvidia \
  roboflow/roboflow-inference-server-jetson-5.1.1:0.62.5-experimental \
  python3
```
Jetson 5.1 Experimental Build
This is an experimental build that works but is not part of the automated pipeline. Updates are published manually and may lag behind the automated images.
🔨 Building Custom Images¶
You can build your own Docker images using the Dockerfiles in the inference_models/dockerfiles/ directory:
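For example, a custom CPU image could be built like this (the Dockerfile name `Dockerfile.cpu` and the tag are illustrative assumptions; check the `inference_models/dockerfiles/` directory for the actual filenames):

```shell
# Build a custom image from the repository root,
# pointing -f at the Dockerfile variant you want
docker build \
  -f inference_models/dockerfiles/Dockerfile.cpu \
  -t my-inference-exp:cpu \
  .
```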
💡 Usage Tips¶
Mounting Volumes¶
Mount your model cache so downloaded models persist across container runs:
```shell
docker run -it \
  -v ~/.cache/inference:/root/.cache/inference \
  roboflow/inference-exp:cpu-latest \
  bash
```
🐛 Reporting Issues¶
These Docker images are experimental. If you encounter issues:
- Check the Hardware Compatibility guide
- Verify your Docker and NVIDIA runtime setup
- Open an issue with:
- Image tag used
- Error message
- Steps to reproduce
🚀 Next Steps¶
- Installation Guide - Local installation options
- Hardware Compatibility - Platform-specific requirements
- Core Concepts - Understand the architecture