get_trt_engine_inputs_and_outputs
inference_models.models.common.trt.get_trt_engine_inputs_and_outputs
Extract input and output tensor names from a TensorRT engine.
Inspects a TensorRT engine to determine which tensors are inputs and which are outputs. This is useful for setting up inference execution contexts and understanding the engine's interface.
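Under TensorRT 10.x, a helper like this can be built on the engine's tensor-introspection API (num_io_tensors, get_tensor_name, get_tensor_mode). A minimal sketch of the approach; the helper name split_io_tensor_names is illustrative, and this is not necessarily the library's exact implementation:

import tensorrt as trt

def split_io_tensor_names(engine):
    # Walk every I/O tensor the engine exposes (TensorRT 10.x API)
    # and bucket it by its I/O mode.
    input_names, output_names = [], []
    for i in range(engine.num_io_tensors):
        name = engine.get_tensor_name(i)
        if engine.get_tensor_mode(name) == trt.TensorIOMode.INPUT:
            input_names.append(name)
        else:
            output_names.append(name)
    return input_names, output_names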
Parameters:
- engine (ICudaEngine) – TensorRT CUDA engine to inspect.
Returns:
- Tuple[List[str], List[str]] – Tuple of (input_names, output_names) where:
  - input_names: list of input tensor names
  - output_names: list of output tensor names
Examples:
Inspect a TensorRT engine:
>>> from inference_models.developer_tools import (
... load_trt_model,
... get_trt_engine_inputs_and_outputs
... )
>>>
>>> engine = load_trt_model("model.plan")
>>> inputs, outputs = get_trt_engine_inputs_and_outputs(engine)
>>>
>>> print(f"Inputs: {inputs}") # ['images']
>>> print(f"Outputs: {outputs}") # ['output0', 'output1']
Use for setting up inference:
>>> inputs, outputs = get_trt_engine_inputs_and_outputs(engine)
>>> context = engine.create_execution_context()
>>>
>>> # Set input tensor
>>> input_name = inputs[0]
>>> context.set_input_shape(input_name, (1, 3, 640, 640))
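From here, the remaining setup is to bind device buffers by tensor name and enqueue execution. A hedged continuation of the example, assuming PyTorch is used for CUDA allocation and that all outputs are float32 (real code should query engine.get_tensor_dtype() per tensor):
>>> import torch
>>>
>>> # Bind the input buffer by name (TensorRT 10.x tensor-address API)
>>> image = torch.zeros((1, 3, 640, 640), dtype=torch.float32, device="cuda")
>>> context.set_tensor_address(input_name, image.data_ptr())
>>>
>>> # Allocate and bind one buffer per output, with shapes resolved by the context
>>> out_buffers = {}
>>> for name in outputs:
...     shape = tuple(context.get_tensor_shape(name))
...     out_buffers[name] = torch.empty(shape, dtype=torch.float32, device="cuda")
...     context.set_tensor_address(name, out_buffers[name].data_ptr())
>>>
>>> # Enqueue inference on the current CUDA stream and wait for results
>>> context.execute_async_v3(torch.cuda.current_stream().cuda_stream)
>>> torch.cuda.synchronize()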
Note
- Requires TensorRT to be installed
- Works with TensorRT 10.x engines
- Tensor names are defined during engine building/export
See Also
- load_trt_model(): Load a TensorRT engine from file
- infer_from_trt_engine(): Run inference with a TensorRT engine