Development Environment¶
This guide walks you through setting up your development environment for contributing to inference-models.
Prerequisites¶
- uv - Fast Python package installer and resolver
- Git
- (Optional) CUDA-capable GPU for GPU backend development
Python Version
uv will automatically install Python 3.12 when creating the virtual environment. No need to install Python separately.
Setting Up the Environment¶
1. Install uv¶
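This step's command block appears to be missing; uv's official standalone installer for macOS and Linux is fetched with curl (see the uv documentation for alternatives such as pip, pipx, or Homebrew, and for the Windows PowerShell installer):

```shell
# macOS / Linux — official uv standalone installer
curl -LsSf https://astral.sh/uv/install.sh | sh

# Verify the installation
uv --version
```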
2. Clone the Repository¶
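The clone command itself is not shown here. Because the working-directory notes later in this guide reference `inference/inference_models`, the repository is assumed to be Roboflow's `inference` repo; adjust the URL if you are working from a fork:

```shell
# Repository URL is an assumption based on the inference/inference_models path
git clone https://github.com/roboflow/inference.git
cd inference/inference_models
```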
3. Sync Dependencies¶
uv will automatically create a virtual environment and install dependencies:
# Install base package with test dependencies
uv sync --extra test
# Install with specific backends for CPU development
uv sync --extra test --extra torch-cpu --extra onnx-cpu
# Install with GPU backends (CUDA 12.8)
uv sync --extra test --extra torch-cu128 --extra onnx-cu12 --extra trt10
# Install with model-specific dependencies (e.g., MediaPipe)
uv sync --extra test --extra mediapipe
uv sync vs uv pip install
uv sync is the recommended way to install dependencies as it:
- Creates a virtual environment automatically
- Installs the package in editable mode
- Locks dependencies for reproducibility
- Is significantly faster than pip
Running Tests¶
Working Directory
All commands below assume you're in the inference/inference_models directory.
Run All Tests¶
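Assuming the same uv workflow used for the suite-specific commands below, running the full test suite is a single pytest invocation:

```shell
# From inference/inference_models directory — run every test pytest discovers
uv run pytest
```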
Run Specific Test Suites¶
# Unit tests
uv run pytest tests/unit_tests/
# Integration tests
uv run pytest tests/integration_tests/
Skip Slow Tests¶
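If slow tests are tagged with a pytest marker (commonly named `slow` — the marker name here is an assumption, so check the project's pytest configuration), they can be deselected with `-m`:

```shell
# From inference/inference_models directory — deselect tests marked "slow"
uv run pytest -m "not slow"
```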
Code Quality¶
Working Directory
All commands below assume you're in the inference/inference_models directory.
Format Code¶
# From inference/inference_models directory
black inference_models tests
isort --profile black inference_models tests
Check Code Quality¶
# From inference/inference_models directory
black --check inference_models tests
isort --profile black --check inference_models tests
Contribution Idea
We'd love to have a unified code quality tool or script! If you're interested in improving the developer experience, consider creating a simple script or Makefile to run these commands together.
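One possible shape for such a helper, sketched here purely as a starting point (the `scripts/quality.sh` path and the `format`/`check` interface are illustrative, not part of the repo):

```shell
#!/usr/bin/env bash
# scripts/quality.sh — hypothetical helper wrapping the commands above.
# Usage: ./scripts/quality.sh          (format in place)
#        ./scripts/quality.sh check    (verify only, fail on violations)
set -euo pipefail

MODE="${1:-format}"

if [ "$MODE" = "check" ]; then
    black --check inference_models tests
    isort --profile black --check inference_models tests
else
    black inference_models tests
    isort --profile black inference_models tests
fi
```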
Verifying Your Setup¶
Working Directory
All commands below assume you're in the inference/inference_models directory.
Test that everything works:
# From inference/inference_models directory
uv run python -c "from inference_models import AutoModel; print('✅ Import successful')"
Run a quick model-loading smoke test:
# From inference/inference_models directory
uv run python -c "
from inference_models import AutoModel
model = AutoModel.from_pretrained('yolov8n-640')
print('✅ Model loaded successfully')
"
Next Steps¶
- Core Architecture - Understand the codebase structure
- Adding a Model - Learn how to add a new model
- Writing Tests - Best practices for testing