"Internal data stream error" with tee and nvstreammux

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
535.113.01
• Issue Type( questions, new requirements, bugs)
Questions

Hi there, I am trying to understand some behavior I'm seeing when using tee and nvstreammux. I want to run the following pipeline (simplified):

gst-launch-1.0 -v filesrc location=/home/DockerVolumeMount/mp4s/20231019-164703-593_2310191643_SK5SS6UL_S.mp4 \
! decodebin ! nvvideoconvert ! m.sink_0 nvstreammux name=m batch_size=1 width=1920 height=1080 \
! tee name=t \
t. ! queue ! nvvideoconvert ! nvinfer config-file-path=/home/DockerVolumeMount/vi_deepstream/viexample_configs/vi_pgie_angle_right_config.yml \
t. ! queue ! nvvideoconvert ! nvinfer config-file-path=/home/DockerVolumeMount/vi_deepstream/viexample_configs/vi_pgie_angle_left_config.yml

So: one source file, passed through nvstreammux, then tee'd off into two branches for separate PGIEs with nvinfer. When I run this I get the following error:

ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer1: Internal data stream error.

Something in my pipeline must be wrong, but I'm not sure whether my misunderstanding is of the GStreamer elements or of DeepStream/nvstreammux. I'm running the branches in parallel rather than in serial because of latency constraints, and because I will need to run other non-DeepStream branches as well. I've searched online and in these forums without finding an answer, though I suspect it is something simple. Thank you for the help!

tee (gstreamer.freedesktop.org)
The tee plugin clones the GstBuffer for each branch, so the GstBuffers on the different tee src pads share the same content. If you change the GstBuffer content in one branch, the other branches see the change too, including the metadata.

nvstreammux is the plugin that generates batched data; nvstreamdemux is the plugin that separates the individual streams from the batch again.
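For illustration, a minimal sketch of the mux/demux pairing described above (the file paths and pgie.yml config are placeholders, not from this thread): two sources are batched by nvstreammux, one nvinfer runs on the batch, and nvstreamdemux splits the batch back into per-stream branches.

```shell
gst-launch-1.0 \
  uridecodebin uri=file:///path/to/a.mp4 ! m.sink_0 \
  uridecodebin uri=file:///path/to/b.mp4 ! m.sink_1 \
  nvstreammux name=m batch-size=2 width=1920 height=1080 \
  ! nvinfer config-file-path=pgie.yml \
  ! nvstreamdemux name=d \
  d.src_0 ! queue ! fakesink \
  d.src_1 ! queue ! fakesink
```

This is the inverse of the tee approach: instead of duplicating buffers, the batch is inferred once and then routed back out per stream.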

That is why we involve nvmetamux for such cases. Please refer to NVIDIA-AI-IOT/deepstream_parallel_inference_app: A project demonstrating how to use nvmetamux to run multiple models in parallel. (github.com)

What if there is only one source, i.e. one input video? From that app it seems there are multiple sources, and those sources are separated before being passed into new, separate nvstreammux elements, at which point the inference modules run on those separate sources/streams.

Thanks for the help.

After some more investigation, the error ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer1: Internal data stream error. looks to be due to there being no sink after the nvinfer elements. Running the following pipeline works (a fakesink added after each inference element):

gst-launch-1.0 -v filesrc location=/home/DockerVolumeMount/mp4s/20231019-164703-593_2310191643_SK5SS6UL_S.mp4 \
! decodebin ! nvvideoconvert ! m.sink_0 nvstreammux name=m batch_size=1 width=1920 height=1080 \
! tee name=t \
t. ! queue ! nvvideoconvert ! nvinfer config-file-path=/home/DockerVolumeMount/vi_deepstream/viexample_configs/vi_pgie_angle_right_config.yml ! fakesink \
t. ! queue ! nvvideoconvert ! nvinfer config-file-path=/home/DockerVolumeMount/vi_deepstream/viexample_configs/vi_pgie_angle_left_config.yml ! fakesink

While trimming my pipeline down I got rid of the sinks, not thinking it would matter.
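My understanding of the underlying rule (this is my reading of GStreamer semantics, not something from NVIDIA's docs): every tee branch must terminate in a sink. An unlinked src pad makes the element pushing into it fail with GST_FLOW_NOT_LINKED, which gst-launch surfaces as "Internal data stream error". A minimal sketch that should reproduce it without any DeepStream elements:

```shell
# Typically fails with "Internal data stream error." (reason not-linked),
# because the second branch's queue has nothing downstream:
gst-launch-1.0 videotestsrc num-buffers=100 ! tee name=t \
    t. ! queue ! fakesink \
    t. ! queue

# Works: every branch ends in a sink.
gst-launch-1.0 videotestsrc num-buffers=100 ! tee name=t \
    t. ! queue ! fakesink \
    t. ! queue ! fakesink
```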

I'm going to leave this thread open a little while longer, since I may still have a related question about my original issue, which I will try to figure out before posting anything more.

Just posting an addendum here in case someone finds it useful in the future. I was getting the error ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstQTDemux:qtdemux0: Internal data stream error. with the following pipeline:

gst-launch-1.0 -v filesrc location=/home/root/Data/mp4s/20231019-164703-593_2310191643_SK5SS6UL_S.mp4 \
! decodebin ! tee name=t \
          t. ! queue ! nvvideoconvert ! nvvidconv ! 'video/x-raw,width=240,height=136,format=GRAY8' ! exposure \
          t. ! queue ! nvvideoconvert ! nvvidconv ! 'video/x-raw,width=1920,height=1080,format=RGBA' ! zoom  \
          ! nvvidconv ! nvdrmvideosink conn-id=0 sync=false

where zoom and exposure are custom plugins. For whatever reason, the pipeline needs to be:

gst-launch-1.0 -v filesrc location=/home/root/Data/mp4s/20231019-164703-593_2310191643_SK5SS6UL_S.mp4 \
! decodebin ! tee name=t \
          t. ! queue ! nvvideoconvert ! 'video/x-raw,width=240,height=136,format=GRAY8' ! exposure \
          t. ! queue ! nvvideoconvert ! 'video/x-raw,width=1920,height=1080,format=RGBA' ! zoom  \
          ! nvvidconv ! nvdrmvideosink conn-id=0 sync=false

i.e. WITHOUT the nvvidconv elements in the middle, at which point it works. I'm not entirely sure why this is the case, as I've used nvvidconv in similar ways with these plugins at other times. Regardless, this fixes it, and maybe someone who sees this in the future will find it useful.

Closing the thread now.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.