DeepStream Python code - Failed in mem copy (Segmentation fault)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson Orin NX
• DeepStream Version
7.1.0
• JetPack Version (valid for Jetson only)
6.2
• TensorRT Version
10.3.0.30-1+cuda12.5
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
bugs
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)

I used the deepstream-python-apps test3 example and customized test3.py to track people using 4 USB 2.0 cameras (640×480, MJPEG).
My pipeline:
v4l2src → capsfilter → jpegdec → videoconvert → nvvideoconvert → capsfilter → nvstreammux → queue1 → nvinfer (pgie) → queue2 → nvtracker (IOU_tracker) → queue3 → tiler → queue4 → nvvidconv → queue5 → nvosd

My program fails with the errors below, and the time at which they occur varies from run to run.
If you need more information, please let me know.
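For context, here is a rough sketch of how one per-camera source branch of the pipeline above could be expressed as a launch-style description. The device paths, caps string, and helper name are illustrative assumptions, not the exact application code:

```python
def camera_branch(device: str, width: int = 640, height: int = 480, fps: int = 30) -> str:
    """Build a gst-launch-style description of one USB camera source branch.

    Mirrors the front of the pipeline described above: v4l2src -> MJPEG caps
    -> jpegdec -> videoconvert -> nvvideoconvert -> NVMM caps, ready to feed
    a request pad of nvstreammux. Caps values are assumptions for 640x480 MJPEG.
    """
    return (
        f"v4l2src device={device} ! "
        f"image/jpeg,width={width},height={height},framerate={fps}/1 ! "
        "jpegdec ! videoconvert ! nvvideoconvert ! "
        "video/x-raw(memory:NVMM),format=NV12"
    )

# One branch per camera; /dev/video0,2,4,6 are hypothetical device nodes.
branches = [camera_branch(f"/dev/video{i}") for i in (0, 2, 4, 6)]
```

Each branch would then be linked to its own sink pad on nvstreammux before the inference and tracking stages.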

The following is the error:

0:03:55.528418999 195820 0xaaaaf3210180 ERROR nvinfer gstnvinfer.cpp:1279:get_converted_buffer: cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:03:55.528546780 195820 0xaaaaf3210180 WARN nvinfer gstnvinfer.cpp:1576:gst_nvinfer_process_full_frame: error: Buffer conversion failed
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

Error: gst-stream-error-quark: Buffer conversion failed (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1576): gst_nvinfer_process_full_frame (): /GstPipeline:pipeline0/GstNvInfer:primary-inference
Exiting app

ERROR: [TRT]: IExecutionContext::enqueueV3: Error Code 1: Cask (Cask convolution execution)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:03:55.530251251 195820 0xaaaaf3210120 WARN nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

nvstreammux: Successfully handled EOS for source_id=0
nvstreammux: Successfully handled EOS for source_id=3
Unable to set device in gst_nvstreammux_src_collect_buffers
Unable to set device in gst_nvstreammux_src_collect_buffers
Segmentation fault

• Requirement details (This is for new requirements. Include the module name - for which plugin or for which sample application - and the function description)

1. Refer to the FAQ below and use gst-launch-1.0 to test the v4l2src pipeline first.

2. We have only tested DS 7.1 on JetPack 6.1, so please try re-flashing JetPack 6.1.
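To follow step 1, each camera can be tested in isolation before involving DeepStream at all. A minimal sketch that composes the standalone gst-launch-1.0 command (the device path, caps, and fakesink choice are assumptions; swap fakesink for a display sink such as nv3dsink to actually view frames on the Jetson):

```python
def v4l2_test_cmd(device: str = "/dev/video0", width: int = 640,
                  height: int = 480, fps: int = 30) -> str:
    """Compose a gst-launch-1.0 command that exercises one MJPEG USB camera.

    Decoding to fakesink keeps the test independent of any display, so it
    only verifies capture + JPEG decode throughput for the given caps.
    """
    return (
        f"gst-launch-1.0 v4l2src device={device} ! "
        f"image/jpeg,width={width},height={height},framerate={fps}/1 ! "
        "jpegdec ! videoconvert ! fakesink sync=false"
    )

print(v4l2_test_cmd())
```

Running this for all four cameras at once (one process each) would show whether the USB bus sustains the combined bandwidth before adding inference.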

  1. My cameras appear to function correctly initially. In fact, our customized program runs successfully (including streaming, inference, and tracking) for periods ranging from tens of seconds to several minutes before failing, and with fewer cameras it seems to run longer before failing. Does increasing the number of cameras proportionally increase resource consumption?

  2. Given the significant advantages offered by the super mode in JP 6.2, I would prefer not to downgrade to JP 6.1. Could you please recommend debugging methods or tools for JP 6.2 that would help identify the root cause of our issue?

You can refer to this topic; I'm not sure whether this issue is related to USB bus bandwidth.

If you can confirm that this is a problem caused by USB hardware limitations, then staying on the same JetPack major version usually does not cause problems.

Hi Junshengy,

In our application on the Orin NX, we currently have 4 USB 2.0 cameras connected to the two separate USB hubs on the Development Kit. Using v4l2-ctl, we were able to get all 4 cameras streaming simultaneously at about 14-17 fps (optimal case).

We were informed that with DS 7.1 we should be able to get better performance out of the device.

As for the post you suggested, I'm not sure how relevant their situation is to ours.

Could we get an explanation of what is causing the following issue, and perhaps how we could tune things to avoid it? Thank you!

0:03:55.528418999 195820 0xaaaaf3210180 ERROR nvinfer gstnvinfer.cpp:1279:get_converted_buffer: cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:03:55.528546780 195820 0xaaaaf3210180 WARN nvinfer gstnvinfer.cpp:1576:gst_nvinfer_process_full_frame: error: Buffer conversion failed
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

Error: gst-stream-error-quark: Buffer conversion failed (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1576): gst_nvinfer_process_full_frame (): /GstPipeline:pipeline0/GstNvInfer:primary-inference
Exiting app

ERROR: [TRT]: IExecutionContext::enqueueV3: Error Code 1: Cask (Cask convolution execution)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:03:55.530251251 195820 0xaaaaf3210120 WARN nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

nvstreammux: Successfully handled EOS for source_id=0
nvstreammux: Successfully handled EOS for source_id=3
Unable to set device in gst_nvstreammux_src_collect_buffers
Unable to set device in gst_nvstreammux_src_collect_buffers
Segmentation fault

On JP6.2, using nvvideoconvert to copy data from the GPU to the CPU causes a similar problem, but I'm not sure whether it is the same as yours. Please provide your pipeline. In addition, please open a new topic to discuss your problem.

The following topic provides a workaround.

Hi junshengy,

The crash issue was likely solved by adding the following two lines to my code; at least, the program now runs much longer before failing:

nvvidconv.set_property("compute-hw", 1)
nvvidconv.set_property("nvbuf-memory-type", 4)
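For anyone hitting the same crash: the workaround can be applied to every nvvideoconvert instance in the pipeline, not just one. A minimal sketch (the property values follow the lines above; as I understand the DeepStream plugin documentation, on Jetson compute-hw=1 selects the GPU and nvbuf-memory-type=4 selects NVMM surface-array memory, but verify against your docs):

```python
def apply_workaround(elements):
    """Set compute-hw=1 (GPU) and nvbuf-memory-type=4 (surface-array)
    on each nvvideoconvert-like element passed in.

    `elements` is any iterable of objects exposing GStreamer's
    set_property(name, value) interface.
    """
    for elem in elements:
        elem.set_property("compute-hw", 1)
        elem.set_property("nvbuf-memory-type", 4)
```

In the test3-style app this would be called once, after creating the nvvideoconvert elements and before setting the pipeline to PLAYING.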


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.