DeepStream testing: YOLOv8 models for multiple streams

Hello everyone,

I have a YOLOv8 .pt model, which I attempted to convert into an ONNX model using the following command:
python3 export_yoloV8.py -w yolov8m.pt -s 320 --opset 15 --simplify --dynamic
Exporting with the --batch 1 parameter works fine for different models, but the --dynamic export does not.

However, when trying to build the TensorRT engine from the exported ONNX model, I encountered the following error:

ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3352 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"

Could not parse the ONNX file

Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get CUDA engine from custom library API
0:00:10.620280761 33 0x2ad55d90 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:10.621458082 33 0x2ad55d90 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:10.621520166 33 0x2ad55d90 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:10.621594752 33 0x2ad55d90 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:10.621625377 33 0x2ad55d90 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /root/DeepStream-Yolo/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

ERROR: main:707: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /root/DeepStream-Yolo/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed.

As far as I know, my configuration file and DeepStream app settings are correct. However, I suspect the issue might be related to the ONNX model export or TensorRT’s handling of dynamic inputs.

Could anyone help me resolve this issue? Any suggestions or guidance would be greatly appreciated!

Hello,

Try this repository: GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 7.1 / 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type (questions, new requirements, bugs)

• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)

• Requirement details (For new requirements: include the module name, i.e. which plugin or which sample application, and the function description)

• Hardware Platform (Jetson / GPU) - Jetson Nano Developer Kit

• DeepStream Version - 6.0.1

• JetPack Version (valid for Jetson only) - JetPack 4.6.1

• TensorRT Version - 8.2.1.9

I'm doing the DeepStream testing on the Jetson Nano; the spec is given above.

I'm converting the model on a separate server; its details are given below.

The Python version and the parameters I'm using to convert the .pt model to an ONNX model are:

Python 3.8.20 (default, Oct 3 2024, 15:24:27)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.

from ultralytics import YOLO
WARNING ⚠️ torchvision==0.15 is incompatible with torch==2.3.
Run 'pip install torchvision==0.18' to fix torchvision or 'pip install -U torch torchvision' to update both.
For a full compatibility table see GitHub - pytorch/vision: Datasets, Transforms and Models specific to Computer Vision
model = YOLO("yolov8s.pt")

python3 export_yoloV8.py -w yolov8n.pt -s 320 --opset 12 --dynamic

Using these parameters, I'm facing the error above while building the TensorRT engine.

We have attached the DeepStream configuration files here:

DeepStream leverages TensorRT to generate the engine, so this seems to be a TensorRT issue. Please refer to this topic.