Hello everyone,
I have a YOLOv8 .pt model that I attempted to convert to ONNX using the following command:
python3 export_yoloV8.py -w yolov8m.pt -s 320 --opset 15 --simplify --dynamic
Exporting with the --batch 1 parameter works fine for different models, but the --dynamic export does not.
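For comparison, this is the static-batch variant of the export that does work for me (a sketch assuming the same script and flags, with only the batch option changed):

```shell
# Static-batch export (works): fixed batch of 1 instead of dynamic axes
python3 export_yoloV8.py -w yolov8m.pt -s 320 --opset 15 --simplify --batch 1
```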
However, when trying to build the TensorRT engine from the exported ONNX model, I encountered the following error:
ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3352 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"
Could not parse the ONNX file
Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get CUDA engine from custom library API
0:00:10.620280761 33 0x2ad55d90 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:10.621458082 33 0x2ad55d90 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:10.621520166 33 0x2ad55d90 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:10.621594752 33 0x2ad55d90 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:10.621625377 33 0x2ad55d90 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /root/DeepStream-Yolo/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
ERROR: main:707: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /root/DeepStream-Yolo/config_infer_primary_yoloV8.txt
, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed.
As far as I know, my configuration file and DeepStream app settings are correct, so I suspect the issue lies in the ONNX model export or in TensorRT's handling of dynamic inputs. The assertion suggests that the dynamic export produces a Range node with an INT64 input, which this version of TensorRT does not accept for dynamic shapes.
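For reference, this is roughly what I expect the relevant part of my nvinfer config to look like (a sketch based on the standard DeepStream-Yolo config; the file names and values here are from my setup and may need adjusting):

```
[property]
onnx-file=yolov8m.onnx
model-engine-file=model_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
```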
Could anyone help me resolve this issue? Any suggestions or guidance would be greatly appreciated.