• Hardware Platform: Jetson
• DeepStream Version: 6.3
• JetPack Version: 5.1
• TensorRT Version: 8.5
• Issue Type: questions
Hi, my DeepStream application (YOLO as pgie, LPRNet as sgie) performs inference on frames from an RTSP stream, and under certain conditions the frames are uploaded. Sometimes the inference on a frame is correct, but the uploaded image contains artifacts that make the text unreadable. Do you have any idea what the problem could be and how to debug this issue?
Code snippet for saving image:
import io

import numpy as np
import pyds
from PIL import Image

# Retrieve the frame from the batched buffer; on Jetson this maps the surface to CPU memory
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), batch_id)
frame_copy = np.array(n_frame, copy=True, order='C')
# On Jetson, since the buffer is mapped to CPU for retrieval, it must also be unmapped
pyds.unmap_nvds_buf_surface(hash(gst_buffer), batch_id)
# The surface is RGBA; drop the alpha channel before JPEG encoding
out = Image.fromarray(frame_copy)
rgb_im = out.convert('RGB')
buffer = io.BytesIO()
rgb_im.save(buffer, 'JPEG')
buffer.seek(0)
put_object_to_s3(buffer, img_key)  # call boto3 client to upload image
This code is called from a probe on the sink pad of the nvdsosd element; the full pipeline is source → nvstreammux → pgie → tracker → sgie → nvvideoconvert → nvdsosd → sink.
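For reference, the probe is attached roughly like this, following the standard DeepStream Python pattern (the names osd_sink_pad_buffer_probe and nvdsosd_element are illustrative):

import pyds
from gi.repository import Gst

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        batch_id = frame_meta.batch_id
        # ... run the saving snippet above for this frame ...
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

osdsinkpad = nvdsosd_element.get_static_pad("sink")
osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)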
Could you upload an example image showing the artifacts so we can better understand the issue?
A few possible causes come to mind:
• Encoder errors
• GStreamer freeing the buffer before it’s converted
• Color space conversion issues
One potential solution is to use GStreamer to save the image locally, and then have a separate job handle uploading to S3. For example, the saving branch of the pipeline could look like this:
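A minimal sketch of such a branch (the element choices are suggestions; on Jetson, jpegenc could be replaced with nvjpegenc):

... → nvvideoconvert → capsfilter (video/x-raw, format=I420) → jpegenc → multifilesink (location=frame_%05d.jpg)

A separate job can then watch the output directory and upload new files to S3.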
if "source" in name:
source_element = child_proxy.get_by_name("source")
if source_element.find_property('drop-on-latency') is not None:
Object.set_property("drop-on-latency", False) # set to False to avoid image artifacts
There are still artifacts; however, this time the inference wasn’t correct:
From your graph, there are no parameters shown for the source bin, and the sink plugin is fakesink.
What specific plugins are you using for your source plugin and sink plugin in the previous pipeline?
For the source I’m using nvstreammux with nvurisrcbin:
if is_live:
    logger.info("At least one of the sources is live")
    streammux.set_property('live-source', 1)
streammux.set_property('width', 1920)
streammux.set_property('height', 1080)
streammux.set_property('batch-size', len(cameras))
streammux.set_property('batched-push-timeout', 33000)
nbin = Gst.Bin.new(bin_name)
if not nbin:
    logger.error("Unable to create source bin")
    return None
# Source element for reading from the uri.
# We will use decodebin and let it figure out the container format of the
# stream and the codec and plug the appropriate demux and decode plugins.
uri_decode_bin = _call_element_factory("nvurisrcbin", "uri-decode-bin")
uri_decode_bin.set_property("rtsp-reconnect-interval", 80)
uri_decode_bin.set_property("cudadec-memtype", 0)
uri_decode_bin.set_property("latency", 2000)
uri_decode_bin.set_property("select-rtp-protocol", 4)
if loop_file:
    uri_decode_bin.set_property("file-loop", 1)
# We set the input uri to the source element
uri_decode_bin.set_property("uri", uri)
# Connect to the "pad-added" signal of the decodebin which generates a
# callback once a new pad for raw data has been created by the decodebin
uri_decode_bin.connect("pad-added", _cb_newpad, nbin)
uri_decode_bin.connect("child-added", _decodebin_child_added, nbin)
# We need to create a ghost pad for the source bin which will act as a proxy
# for the video decoder src pad. The ghost pad will not have a target right
# now. Once the decode bin creates the video decoder and generates the
# cb_newpad callback, we will set the ghost pad target to the video decoder
# src pad.
Gst.Bin.add(nbin, uri_decode_bin)
bin_pad = nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
if not bin_pad:
    logger.error("Failed to add ghost pad in source bin")
    return None
return nbin
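The _cb_newpad callback follows the standard DeepStream Python pattern; roughly (this sketch may differ in minor details from the actual code):

def _cb_newpad(decodebin, decoder_src_pad, data):
    caps = decoder_src_pad.get_current_caps()
    if not caps:
        caps = decoder_src_pad.query_caps()
    gstname = caps.get_structure(0).get_name()
    features = caps.get_features(0)
    source_bin = data
    # Set the ghost pad target only for the decoded video pad, and only if the
    # decoder outputs NVMM memory (i.e. a hardware decoder was selected)
    if gstname.find("video") != -1:
        if features.contains("memory:NVMM"):
            bin_ghost_pad = source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                logger.error("Failed to link decoder src pad to source bin ghost pad")
        else:
            logger.error("Decodebin did not pick an NVIDIA decoder plugin")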
@miguel.taylor’s analysis is also plausible. Thanks @nikola2, could you attach the code you use for saving images and the resolution of your RTSP source?
Based on the above two points, it can be confirmed that the issue lies in your image encoding. Could you try the method we provided in our sample deepstream_imagedata-multistream.py?
This example merely shows how to use OpenCV to save your images. Even if you don’t have a tiler, you can add that probe to other plugins.
Based on the comparisons you made earlier, the most likely cause is a problem in the PIL encoding path, so you can try using OpenCV to save the images to verify this.
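A minimal sketch of the OpenCV-based save inside the probe, following the pattern in deepstream_imagedata-multistream.py (the output path and frame_number variable are illustrative; gst_buffer and batch_id come from the probe context as in your snippet):

import cv2
import numpy as np
import pyds

n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), batch_id)
frame_copy = np.array(n_frame, copy=True, order='C')
pyds.unmap_nvds_buf_surface(hash(gst_buffer), batch_id)
# get_nvds_buf_surface returns RGBA; convert to BGRA for OpenCV
frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
cv2.imwrite("/tmp/frame_{}.jpg".format(frame_number), frame_copy)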
OK. Let’s run our sample directly to narrow down the scope. Could you use our deepstream_imagedata-multistream.py as-is to save images from your RTSP source?
If the artifacts are still there, please provide us your stream and we can try it on our side.