get_selected_onnx_execution_providers
inference_models.developer_tools.get_selected_onnx_execution_providers (cached)
Get the list of ONNX execution providers that are both requested and available.
Checks which ONNX Runtime execution providers are available on the system and
filters them against the requested providers from the ONNXRUNTIME_EXECUTION_PROVIDERS
environment variable. This is used internally by ONNX-based models to determine
which execution providers to use.
The function is cached, so subsequent calls return the same result without re-checking the environment.
Returns:
List[str]: List of execution provider names that are both requested (via the environment variable) and available on the system. Returns an empty list if ONNX Runtime is not installed.
Environment Variables
ONNXRUNTIME_EXECUTION_PROVIDERS: Comma-separated list of requested execution providers. Example: "CUDAExecutionProvider,CPUExecutionProvider"
Examples:
Check available ONNX execution providers:
>>> from inference_models.developer_tools import get_selected_onnx_execution_providers
>>>
>>> providers = get_selected_onnx_execution_providers()
>>> print(f"Available providers: {providers}")
Available providers: ['CUDAExecutionProvider', 'CPUExecutionProvider']
Use in custom ONNX model:
>>> from inference_models.developer_tools import get_selected_onnx_execution_providers
>>> import onnxruntime as ort
>>>
>>> providers = get_selected_onnx_execution_providers()
>>> if not providers:
... raise RuntimeError("No ONNX execution providers available")
>>>
>>> session = ort.InferenceSession("model.onnx", providers=providers)
Note
- Common execution providers: "CUDAExecutionProvider", "CPUExecutionProvider", "TensorrtExecutionProvider", "OpenVINOExecutionProvider"
- The function only returns providers that are both requested AND available
- If ONNX Runtime is not installed, returns an empty list
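Because the result is cached, the environment variable must be set before the first call; later changes to it are ignored. A minimal sketch of this caching semantics using `functools.cache` (an assumption about the caching mechanism; `_cached_selection_sketch` is a hypothetical stand-in, not the library function):

```python
import functools
import os

@functools.cache  # mirrors the documented "cached" behaviour
def _cached_selection_sketch():
    # Read the requested providers once; subsequent calls return this result.
    raw = os.environ.get("ONNXRUNTIME_EXECUTION_PROVIDERS", "")
    return tuple(p.strip() for p in raw.split(",") if p.strip())

os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = "CPUExecutionProvider"
first = _cached_selection_sketch()

os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = "CUDAExecutionProvider"
second = _cached_selection_sketch()  # cached: the environment change is ignored

assert first == second == ("CPUExecutionProvider",)
```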
See Also
x_ray_runtime_environment(): Get comprehensive runtime information including all available ONNX execution providers