Segmentation fault (core dumped) when using yolo-11-L-seg

• Hardware Platform (GPU): NVIDIA RTX 4000 Ada Generation Laptop GPU

• DeepStream Version: 7.1

• NVIDIA GPU Driver Version: 575.51.03

• Issue Type (questions, new requirements, bugs): bugs

I am trying to run YOLOv11-L-seg in DeepStream, but I am getting a segmentation fault. I have tried different model sizes (L, n, s) and all of them give the same result.
These are the last logs:


(deepstream-app:2056): GStreamer-WARNING **: 10:25:33.011: ../gst/gstpad.c:4416:gst_pad_chain_data_unchecked:<nvv4l2decoder0:sink> Got data flow before segment event
0:00:00.217439738  2056 0x71fb74002760 WARN            videodecoder gstvideodecoder.c:2816:gst_video_decoder_chain:<nvv4l2decoder0> Received buffer without a new-segment. Assuming timestamps start from 0.
0:00:00.322570696  2056 0x71fb74002760 WARN            v4l2videodec gstv4l2videodec.c:2297:gst_v4l2_video_dec_decide_allocation:<nvv4l2decoder0> Duration invalid, not setting latency
** INFO: <bus_callback:277>: Pipeline running

0:00:00.323280297  2056 0x71fb74002760 WARN          v4l2bufferpool gstv4l2bufferpool.c:1130:gst_v4l2_buffer_pool_start:<nvv4l2decoder0:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:00.325600649  2056 0x71fb740032e0 WARN          v4l2bufferpool gstv4l2bufferpool.c:1607:gst_v4l2_buffer_pool_dqbuf:<nvv4l2decoder0:pool:src> Driver should never set v4l2_buffer.field to ANY

(deepstream-app:2056): GStreamer-WARNING **: 10:25:33.122: ../gst/gstpad.c:4416:gst_pad_chain_data_unchecked:<nvv4l2decoder0:sink> Got data flow before segment event

(deepstream-app:2056): GStreamer-WARNING **: 10:25:33.171: ../gst/gstpad.c:4416:gst_pad_chain_data_unchecked:<nvv4l2decoder0:sink> Got data flow before segment event

(deepstream-app:2056): GStreamer-WARNING **: 10:25:33.172: ../gst/gstpad.c:4416:gst_pad_chain_data_unchecked:<nvv4l2decoder0:sink> Got data flow before segment event
0:00:00.382790335  2056 0x71fb74000d80 WARN          v4l2bufferpool gstv4l2bufferpool.c:1130:gst_v4l2_buffer_pool_start:<sink_sub_bin_encoder1:pool:src> Uncertain or not enough buffers, enabling copy threshold
Segmentation fault (core dumped)

And this is my yolo_inf_config_file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
#model-engine-file=/conv_seg_v4.onnx_b1_gpu0_fp16.engine
model-engine-file=/conv_seg_v4.onnx_b1_gpu0_fp32.engine
#model-engine-file=/conv_seg_v4.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-7.1/sources/my_data/conv/seg_label.txt
#labelfile-path=/opt/nvidia/deepstream/deepstream-7.1/sources/my_data/conv/yolo_labels.txt
onnx-file=/conv_seg_v4.onnx
#onnx-file=/yolo11n-seg.onnx
batch-size=1
input-dims=3;640;640;1
network-mode=0
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=3
cluster-mode=4
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-instance-mask-func-name=NvDsInferParseYoloSeg
custom-lib-path=/opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-YOLOv11/libs/nvdsinfer_customparser_yolo_seg/libnvdsinfer_custom_impl_yolo_seg.so
output-instance-mask=1
segmentation-threshold=0.5

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=100
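
For reference, in a Python pipeline (like the one used later in this thread) a config file such as this is normally handed to the nvinfer element through its config-file-path property. A minimal sketch, with a placeholder path for the config file:

# Minimal sketch: attach the above nvinfer config from Python.
# The config path is a placeholder; point it at the actual file location.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

seg_gie = Gst.ElementFactory.make("nvinfer", "seg-inference")
if not seg_gie:
    raise RuntimeError("Unable to create nvinfer element")

# nvinfer reads the [property] and [class-attrs-all] sections from this file.
seg_gie.set_property("config-file-path", "/path/to/yolo_inf_config_file.txt")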

No useful information can be gathered from these logs and this configuration file.

  1. Make sure your yolov11-seg.onnx model is correctly exported from the .pt model first.
  2. Then use gdb to view the call stack (run the app under gdb and print the backtrace with bt after the crash). If the ONNX model is correct, this might be related to the custom post-processing library.
gdb --args your_app xxx

If you are only interested in instance segmentation, you can refer to this example.

I’ve fixed the segmentation fault issue — it turned out to be a problem with the model itself.

Now I’m trying to access the segmentation data from my Python code. I’m running two models: one for detection and the other for segmentation. The segmentation output shows correctly in the output window, so the model seems to be working fine.

However, I can’t access the segmentation data programmatically in my code. Here’s the function I’m using:

import ctypes

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

import pyds


def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # Retrieve the batch metadata attached upstream by nvstreammux/nvinfer.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Walk the objects detected in this frame.
        l_obj = frame_meta.obj_meta_list
        while l_obj:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)

            # Look for segmentation metadata attached to this object.
            l_user = obj_meta.obj_user_meta_list
            while l_user:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_SEGMENTATION_META:
                    segmeta_ptr = ctypes.cast(user_meta.user_meta_data,
                                              ctypes.POINTER(pyds.NvDsInferSegmentationMeta))
                    segmeta = segmeta_ptr.contents
                    print(f"[Frame {frame_meta.frame_num}] Instance mask size: "
                          f"{segmeta.width}x{segmeta.height}")
                l_user = l_user.next
            l_obj = l_obj.next

        l_frame = l_frame.next

    return Gst.PadProbeReturn.OK

Any idea what might be going wrong or what I should check to retrieve the segmentation metadata properly?

Thanks!

Please refer to this sample code in pyds. Since some models are no longer supported in DS-7.1, this code cannot be run directly for the time being, but it can still be used as a reference.
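
As a rough illustration of the direction those samples take (an assumption about your setup, not a verified fix): with a bbox-instance-mask parser such as NvDsInferParseYoloSeg and output-instance-mask=1, the per-object mask is normally attached to obj_meta.mask_params (which is what nvdsosd draws), whereas NVDSINFER_SEGMENTATION_META is produced by semantic-segmentation models (network-type=2). A sketch of a probe that reads mask_params; the get_mask_array() call follows the pyds instance-segmentation sample and may differ in your pyds version:

# Hedged sketch, not verified against this exact pipeline/pyds build:
# read per-object instance masks from obj_meta.mask_params inside a pad probe.
import numpy as np
import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


def seg_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            mask_params = obj_meta.mask_params
            # mask_params.size is 0 when no mask was attached to this object.
            if mask_params.size > 0:
                # get_mask_array() is assumed here based on the pyds
                # instance-segmentation samples; it returns the raw float mask.
                mask = mask_params.get_mask_array().reshape(
                    mask_params.height, mask_params.width)
                binary_mask = (mask > mask_params.threshold).astype(np.uint8)
                print(f"[Frame {frame_meta.frame_num}] object {obj_meta.object_id}: "
                      f"mask {mask_params.width}x{mask_params.height}, "
                      f"pixels above threshold: {int(binary_mask.sum())}")
            l_obj = l_obj.next
        l_frame = l_frame.next

    return Gst.PadProbeReturn.OK

If mask_params.size is always 0 in such a probe, the problem would more likely be in the parser/config (output-instance-mask, the custom library) than in the probe code itself.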