Correct inference on frames, but saved frames have artifacts

• Hardware Platform: Jetson
• DeepStream Version: 6.3
• JetPack Version: 5.1
• TensorRT Version: 8.5
• Issue Type: Question

Hi, my DeepStream application (YOLO as the pgie, LPRNet as the sgie) performs inference on frames from an RTSP stream, and under certain conditions those frames are uploaded to S3. Sometimes the inference on a frame is correct, but the uploaded image has artifacts that make the text unreadable. Do you have any idea what the problem could be and how to debug it?

Code snippet for saving image:

    import io

    import numpy as np
    import pyds
    from PIL import Image

    # Retrieve the frame surface (RGBA) as a NumPy array and copy it out of the mapped buffer
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), batch_id)
    frame_copy = np.array(n_frame, copy=True, order='C')
    # On Jetson, since the buffer is mapped to CPU for retrieval, it must also be unmapped
    pyds.unmap_nvds_buf_surface(hash(gst_buffer), batch_id)

    # Convert RGBA -> RGB and encode as JPEG in memory
    out = Image.fromarray(frame_copy)
    rgb_im = out.convert('RGB')
    buffer = io.BytesIO()
    rgb_im.save(buffer, 'JPEG')
    buffer.seek(0)
    put_object_to_s3(buffer, img_key)  # call boto3 client to upload the image

This code is called from a probe on the sink pad of the nvdsosd element. The full pipeline is: source → nvstreammux → pgie → tracker → sgie → nvvideoconvert → nvdsosd → sink
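For reference, a minimal sketch of how a probe like this is attached (names here are illustrative, not my exact code):

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    def osd_sink_pad_buffer_probe(pad, info, u_data):
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            return Gst.PadProbeReturn.OK
        # ... iterate the batch meta and run the saving/upload code above ...
        return Gst.PadProbeReturn.OK

    osd_sink_pad = nvdsosd.get_static_pad("sink")
    osd_sink_pad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)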

Hi,

Could you upload an example image showing the artifacts so we can better understand the issue?

A few possible causes come to mind:

  • Encoder errors
  • GStreamer freeing the buffer before it’s converted
  • Color space conversion issues

One potential solution is to use GStreamer to save the image locally, and then have a separate job handle uploading to S3. For example, the pipeline could look like this:

source → nvstreammux → pgie → tracker → sgie → nvvideoconvert → nvdsosd → nvvideoconvert → jpegenc → multifilesink
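If it helps, here is a rough Python sketch of that new tail, assuming your existing pipeline and nvdsosd elements (the capsfilter forcing system-memory I420 is my assumption, since the software jpegenc needs CPU buffers):

    # Suggested tail: nvdsosd -> nvvideoconvert -> capsfilter -> jpegenc -> multifilesink
    conv_out = Gst.ElementFactory.make("nvvideoconvert", "jpeg-convert")
    caps = Gst.ElementFactory.make("capsfilter", "jpeg-caps")
    caps.set_property("caps", Gst.Caps.from_string("video/x-raw, format=I420"))
    jpegenc = Gst.ElementFactory.make("jpegenc", "jpeg-encoder")
    filesink = Gst.ElementFactory.make("multifilesink", "jpeg-sink")
    filesink.set_property("location", "frames/frame_%05d.jpg")

    for element in (conv_out, caps, jpegenc, filesink):
        pipeline.add(element)
    nvdsosd.link(conv_out)
    conv_out.link(caps)
    caps.link(jpegenc)
    jpegenc.link(filesink)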

Hi, here’s the image with artifacts.


Do you have any idea what it could be?

Please refer to Solution 11 in our DS_troubleshooting guide.

Those definitely look like artifacts from dropped packets or insufficient bitrate in the stream.

So I added this code to the child-added callback of the nvurisrcbin:

    if "source" in name:
        source_element = child_proxy.get_by_name("source")
        if source_element.find_property('drop-on-latency') is not None:
            Object.set_property("drop-on-latency", False)  # set to False to avoid image artifacts

There are still artifacts; however, this time the inference wasn’t correct:


Do you think this is the same problem, or a different one?

Could you refer to our FAQ to get the graph of your pipeline? That will let us check your configured parameters.
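(For reference, the usual approach is to set GST_DEBUG_DUMP_DOT_DIR before initializing GStreamer and dump the graph once the pipeline is running; the paths and names below are just examples.)

    import os
    os.environ["GST_DEBUG_DUMP_DOT_DIR"] = "/tmp"  # must be set before Gst.init()

    # ... once the pipeline has reached PLAYING:
    Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "pipeline")
    # convert the dump with: dot -Tpng /tmp/pipeline.dot -o pipeline.png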

Here’s the graph

From your graph, there are no parameters shown for the source bin, and the sink plugin is fakesink.
What specific plugins are you using for your source and sink in the previous pipeline?

For the source I’m using nvurisrcbin with nvstreammux:

    if is_live:
        logger.info("At least one of the sources is live")
        streammux.set_property('live-source', 1)

    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', len(cameras))
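    # batched-push-timeout is in microseconds; 33000 ≈ one frame interval at 30 fps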
    streammux.set_property('batched-push-timeout', 33000)
    nbin = Gst.Bin.new(bin_name)
    if not nbin:
        logger.error("Unable to create source bin")

    # Source element for reading from the URI.
    # We use nvurisrcbin and let it figure out the container format of the
    # stream and the codec, then plug the appropriate demux and decode plugins.
    uri_decode_bin = _call_element_factory("nvurisrcbin", "uri-decode-bin")
    
    uri_decode_bin.set_property("rtsp-reconnect-interval", 80)
    uri_decode_bin.set_property("cudadec-memtype", 0)
    uri_decode_bin.set_property("latency", 2000)
    uri_decode_bin.set_property("select-rtp-protocol", 4)
    if loop_file:
        uri_decode_bin.set_property("file-loop", 1)
    
    # We set the input uri to the source element
    uri_decode_bin.set_property("uri", uri)
    # Connect to the "pad-added" signal of the decodebin which generates a
    # callback once a new pad for raw data has been created by the decodebin
    uri_decode_bin.connect("pad-added", _cb_newpad, nbin)
    uri_decode_bin.connect("child-added", _decodebin_child_added, nbin)

    # We need to create a ghost pad for the source bin which will act as a proxy
    # for the video decoder src pad. The ghost pad will not have a target right
    # now. Once the decode bin creates the video decoder and generates the
    # cb_newpad callback, we will set the ghost pad target to the video decoder
    # src pad.
    Gst.Bin.add(nbin, uri_decode_bin)
    bin_pad = nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
    if not bin_pad:
        logger.error(" Failed to add ghost pad in source bin \n")
        return None
    
    return nbin

As for fakesink, these are the only parameters:

        sink.set_property('enable-last-sample', 0)  # don't keep a reference to the last buffer
        sink.set_property('sync', 0)                # don't sync rendering to the clock

Can you try performing JPEG encoding using GStreamer? Just add nvvideoconvert and jpegenc before the fakesink, and move the probe accordingly.

Also, try saving the buffer you’re receiving before the nvstreammux to see if it already has the artifacts.

@miguel.taylor’s analysis is also a possibility. Thanks.
@nikola2, could you attach the code that saves the images and the resolution of your RTSP source?

I already shared the code that copies the image from the buffer.

And I call the boto3 client: put_object(Body=buffer, Key=img_key)
The resolution of the RTSP source is 1920x1080, H.265, at 30 fps.

@miguel.taylor OK, I’ll try adding multifilesink.

We did experiment with recording source video using gst-launch-1.0 in parallel with the application running, and the recorded video had no artifacts.
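The recording experiment was roughly equivalent to this (shown here with Gst.parse_launch; the URL is a placeholder, and the stream is remuxed without re-encoding):

    record = Gst.parse_launch(
        "rtspsrc location=rtsp://CAMERA_URI protocols=tcp latency=2000 "
        "! rtph265depay ! h265parse ! matroskamux ! filesink location=record.mkv"
    )
    record.set_state(Gst.State.PLAYING)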

Based on the above two points, we can conclude that the issue lies in your image-encoding code. Could you try the method we provided in our sample deepstream_imagedata-multistream.py?
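The saving logic in that sample looks roughly like this (the output path here is illustrative):

    import cv2
    import numpy as np
    import pyds

    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    frame_copy = np.array(n_frame, copy=True, order='C')
    # the surface is RGBA; convert explicitly before handing it to OpenCV
    frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
    pyds.unmap_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    cv2.imwrite(f"frames/stream_{frame_meta.pad_index}_frame_{frame_meta.frame_num}.jpg", frame_copy)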

So I added this line to my previous code just before uploading the image; is this what you meant?

    rgb_im.save(f"data/cache/af-test/{id}.jpg", image_format)

I used PIL instead of cv2, and I don’t have a tiler in my pipeline.

There was one case with artifacts, and the image saved on disk is identical:

That example merely shows how to use OpenCV to save your images. Even if you don’t have a tiler, you can attach that probe to other plugins.
Based on the comparisons you made earlier, the most likely cause is a problem with the PIL encoding, so you can try using OpenCV to save the images to verify this.

As you can see, the image that was saved using PIL

is the same as the one that was saved using cv2

OK, let’s run our sample directly to narrow down the scope. Could you use our deepstream_imagedata-multistream.py as-is to save images from your rtspsrc?

If the artifacts are still there, please provide us with your stream and we can try it on our side.