
๐Ÿ–ฅ๏ธ Hardware Compatibility

Platform-specific compatibility and testing status for inference-models.

🧪 Testing Coverage & Support Status

The following table shows the current testing and support status for different platforms:

| Platform | OS/Distribution | Installation Method | Support Status |
|----------|-----------------|---------------------|----------------|
| CPU (x86_64) | Linux (general) | Bare-metal, Docker | ✅ Stable |
| CPU (x86_64) | macOS | Bare-metal, Docker | ✅ Stable |
| CPU (Apple Silicon) | macOS | Bare-metal, Docker | ✅ Stable |
| NVIDIA GPU | Ubuntu 22.04 LTS | Bare-metal, Docker | ✅ Stable |
| NVIDIA GPU | Ubuntu 24.04 LTS | Bare-metal, Docker | ✅ Stable |
| NVIDIA GPU | Other Linux distros | Bare-metal, Docker | ⚠️ Requires verification |
| Jetson (JetPack 6.1) | Ubuntu 22.04 (Jetson) | Docker | ✅ Stable |
| Jetson (JetPack 6.1) | Ubuntu 22.04 (Jetson) | Bare-metal | ⚠️ Experimental |
| Jetson (JetPack 5.1) | Ubuntu 20.04 (Jetson) | Docker (custom build) | ✅ Stable |
| Jetson (JetPack 5.1) | Ubuntu 20.04 (Jetson) | Bare-metal | ❌ Not possible |
| Windows | Windows 10/11 | Any | ❓ Not tested |
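
The CPU/OS rows of this table can be looked up programmatically. A minimal sketch keyed on the values returned by Python's standard `platform` module — the `support_status` helper and `SUPPORT` mapping are hypothetical illustrations, not part of inference-models, and they cover only the OS/architecture dimension (not GPU or Jetson specifics):

```python
import platform

# Hypothetical lookup mirroring the support table above (not an
# inference-models API). Keys are (platform.system(), platform.machine()).
SUPPORT = {
    ("Linux", "x86_64"): "stable",
    ("Darwin", "x86_64"): "stable",   # Intel macOS
    ("Darwin", "arm64"): "stable",    # Apple Silicon macOS
    ("Windows", "AMD64"): "not tested",
}

def support_status(system=None, machine=None):
    """Return the support tier for an (OS, architecture) pair."""
    system = system or platform.system()
    machine = machine or platform.machine()
    return SUPPORT.get((system, machine), "requires verification")
```

Anything outside the table (other distros, other architectures) falls through to "requires verification", matching the table's default stance.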

Windows Support

Windows has not been tested at all. The package may install and run on Windows, but we have not verified it on this platform. Use at your own risk.

💻 CPU Support

x86_64 / AMD64

Supported platforms:

  • Linux - General distribution support
  • macOS - Intel and Apple Silicon

Installation methods:

  • ✅ Bare-metal - Direct pip/uv installation
  • ✅ Docker - Containerized deployment

What works:

  • All CPU-based models
  • PyTorch CPU backend
  • ONNX Runtime CPU backend
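
If it is unclear which of these backends is present in a given environment, importability can be probed with the standard library. A sketch — the `available_backends` helper is illustrative, not an inference-models function:

```python
from importlib.util import find_spec

def available_backends(candidates=("torch", "onnxruntime")):
    """Return the candidate backend packages that are importable here."""
    return [name for name in candidates if find_spec(name) is not None]
```

Calling `available_backends()` reports which of the PyTorch and ONNX Runtime backends the current interpreter can import, without actually importing them.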

Apple Silicon (M1/M2/M3/M4)

Supported platforms:

  • macOS with Apple Silicon processors

Installation methods:

  • ✅ Bare-metal - Direct pip/uv installation
  • ✅ Docker - Containerized deployment

MPS (Metal Performance Shaders) Support:

  • โš ๏ธ Experimental - MPS GPU acceleration available for select models only
  • Supported models: RFDetr and other compatible architectures
  • Limitations: Not all models support MPS; most run on CPU

```shell
# Install on Apple Silicon
pip install inference-models
```
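
Because MPS coverage is partial, model code typically selects MPS with a CPU fallback at runtime. A sketch using PyTorch's public `torch.backends.mps.is_available()` check — the `select_device` helper is illustrative, not an inference-models API:

```python
def select_device():
    """Prefer Apple's MPS backend when PyTorch reports it usable, else CPU."""
    try:
        import torch  # optional here; no PyTorch means CPU-only
    except ImportError:
        return "cpu"
    mps = getattr(torch.backends, "mps", None)  # absent on older PyTorch
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"
```

On Apple Silicon with a recent PyTorch this returns `"mps"`; everywhere else it degrades gracefully to `"cpu"`.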

🎮 NVIDIA GPU Support

Tested Distributions

Stable support:

  • ✅ Ubuntu 22.04 LTS - Fully tested and recommended
  • ✅ Ubuntu 24.04 LTS - Fully tested and recommended

Other distributions:

  • โš ๏ธ Requires verification - Other Linux distributions (Debian, RHEL, CentOS, etc.) should work but require testing in your specific environment

Installation methods:

  • ✅ Bare-metal - Direct pip/uv installation
  • ✅ Docker - Containerized deployment

Requirements:

  • NVIDIA GPU with CUDA 11.8 or 12.x support
  • Appropriate NVIDIA drivers installed
  • For Docker: NVIDIA Container Toolkit
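
A quick best-effort preflight for these requirements is possible from the standard library alone. A sketch — the `check_gpu_prereqs` helper is hypothetical; finding `nvidia-smi` on PATH implies a driver installation, and this does not verify the NVIDIA Container Toolkit itself:

```python
import shutil

def check_gpu_prereqs():
    """Best-effort presence checks for the requirements listed above."""
    return {
        # nvidia-smi ships with the NVIDIA driver
        "nvidia_driver": shutil.which("nvidia-smi") is not None,
        # Docker is a precondition for the NVIDIA Container Toolkit
        "docker": shutil.which("docker") is not None,
    }
```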

🤖 NVIDIA Jetson Support

JetPack 6.1

Supported devices:

  • Jetson Orin AGX
  • Jetson Orin NX
  • Jetson Orin Nano

Installation methods:

| Method | Status | Description |
|--------|--------|-------------|
| Docker | ✅ Stable | Recommended for production use |
| Bare-metal | ⚠️ Experimental | pip install works but not extensively tested; use at your own risk |

Docker installation (recommended):

See Docker Environments for pre-built Jetson images.

Bare-metal installation (experimental):

```shell
# Use at your own risk - not extensively tested
uv pip install "inference-models[torch-jp6-cu126,onnx-jp6-cu126]"
```

Bare-metal on Jetson

Bare-metal installation on Jetson devices is experimental. While pip install should work, it has not been extensively tested. We recommend using Docker for production deployments.

JetPack 5.1 (Legacy)

Supported devices:

  • Jetson Orin AGX
  • Jetson Orin NX
  • Jetson Orin Nano

Installation methods:

| Method | Status | Description |
|--------|--------|-------------|
| Docker (custom build) | ✅ Stable | Custom Docker build required; see below |
| Bare-metal | ❌ Not possible | Verified to not work due to dependency conflicts |

Docker installation:

JetPack 5.1 requires a custom Docker build. See the Roboflow inference repository for Jetson 5.1 Dockerfiles.

No Bare-metal Support for JetPack 5.1

We have verified that bare-metal installation on JetPack 5.1 is not possible due to incompatible system dependencies and library conflicts. You must use the custom Docker build.

TensorRT on Jetson

Use JetPack TensorRT

Jetson devices come with TensorRT pre-installed as part of JetPack. Do not install the trt10 extra on Jetson platforms.

  • Use the system TensorRT: /usr/lib/aarch64-linux-gnu/libnvinfer.so
  • Installing PyPI TensorRT packages will cause conflicts
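
Two quick checks follow from the points above: whether the JetPack system library is present, and whether a conflicting PyPI `tensorrt` package has slipped into the environment. Both helpers (`has_system_tensorrt`, `pypi_tensorrt_installed`) are illustrative sketches, not inference-models APIs:

```python
from importlib.util import find_spec
from pathlib import Path

# JetPack's system TensorRT location, as given above
JETPACK_NVINFER = "/usr/lib/aarch64-linux-gnu/libnvinfer.so"

def has_system_tensorrt(path=JETPACK_NVINFER):
    """True when the JetPack-provided TensorRT library exists on disk."""
    return Path(path).exists()

def pypi_tensorrt_installed():
    """True when a pip-installed tensorrt package is importable (conflict risk)."""
    return find_spec("tensorrt") is not None
```

On a healthy Jetson setup the first returns True and the second False; the reverse suggests the conflicting PyPI packages warned about above.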

🚀 Next Steps