• Hardware Platform - Jetson
• DeepStream Version - 6.0
• JetPack Version (valid for Jetson only) - 5.1.1
• Requirement - I am seeking guidance on the step-by-step process to deploy my anomaly detection model, which is stored in an HDF5 file, onto a Jetson Nano using DeepStream. I would also like to understand how to integrate this network file into the DeepStream pipeline. The input and output share the same format: my network produces reconstructed images as output, and my ultimate goal is to do further processing to evaluate the reconstruction error between the input image and the reconstructed image. The image resolution is 1920x1080. Could you provide detailed instructions for this deployment and integration process?
Just to confirm, is your module a Jetson Nano or a Jetson Orin Nano? Since you are using JetPack 5.1.1, I suppose it is a Jetson Orin Nano?
Yes, Jetson Orin Nano.
Where and how did you get the HDF5 file?
DeepStream can support the following types of models:
- Caffe Model and Caffe Prototxt
- ONNX
- UFF file
- TAO Encoded Model and Key
You need to convert the HDF5 model to a type DeepStream supports first.
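If the HDF5 file is a Keras model (an assumption; see the question above about where it came from), one possible conversion path is the tf2onnx package. A minimal sketch, with the file names, input shape, and opset chosen purely for illustration:

```python
# Minimal sketch: convert a Keras HDF5 model to ONNX with tf2onnx.
# "model.h5", the 1x1080x1920x3 input shape, and opset 13 are assumptions;
# adjust them to your actual model.
import tensorflow as tf
import tf2onnx

model = tf.keras.models.load_model("model.h5")
spec = (tf.TensorSpec((1, 1080, 1920, 3), tf.float32, name="input"),)
tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="model.onnx"
)
```

The resulting model.onnx can then be referenced by the onnx-file property in the nvinfer configuration file; DeepStream builds the TensorRT engine from it on first run.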
Have you read the User Manual (Welcome to the DeepStream Documentation — DeepStream 6.0 Release documentation) and tried the basic sample applications (C/C++ Sample Apps Source Details — DeepStream 6.0 Release documentation)?
With DeepStream, we use gst-nvinfer (Gst-nvinfer — DeepStream 6.0 Release documentation) or gst-nvinferserver (Gst-nvinferserver — DeepStream 6.0 Release documentation). The pipeline is based on GStreamer (https://gstreamer.freedesktop.org/). Please try the basic deepstream-test1 sample and read the code to understand how DeepStream constructs the application.
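For orientation only, here is a minimal gst-python sketch in the spirit of deepstream-test1; the input file name and the nvinfer config path are placeholders, and the nvegltransform/nveglglessink tail assumes a Jetson display sink:

```python
# Minimal DeepStream pipeline sketch (deepstream-test1 style, Jetson sink).
# File names and the nvinfer config path are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=sample_1080p.h264 ! h264parse ! nvv4l2decoder "
    "! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 "
    "! nvinfer name=pgie config-file-path=config_infer.txt "
    "! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink"
)
pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```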
For your anomaly detection model, the output is not supported by the default gst-nvinfer postprocessing, so you need to customize the output postprocessing to get the reconstructed image. The suggestion is to set the "output-tensor-meta=1" and "network-type=100" parameters of gst-nvinfer and handle the model output in a probe function on the nvinfer element's src pad in the application. There is an output tensor parsing sample for your reference: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-infer-tensor-meta-test. The input image is also available through NvBufSurface. DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums
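A rough Python sketch of such a src-pad probe, patterned on the pyds tensor-output samples; the single output layer and the 1080x1920x3 reshape are assumptions that depend on your model:

```python
# Sketch of an nvinfer src-pad probe that reads the raw output tensor.
# Assumes output-tensor-meta=1 and network-type=100 in the nvinfer config,
# and a single float output layer of shape 1080x1920x3 (model-specific).
import ctypes
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import numpy as np
import pyds

def _next(l):
    # pyds list iteration pattern: .next raises StopIteration at the end.
    try:
        return l.next
    except StopIteration:
        return None

def infer_src_pad_probe(pad, info, user_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == \
                    pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)  # first output layer
                ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                  ctypes.POINTER(ctypes.c_float))
                reconstructed = np.ctypeslib.as_array(ptr, shape=(1080, 1920, 3))
                # ... compute the reconstruction error against the input here ...
            l_user = _next(l_user)
        l_frame = _next(l_frame)
    return Gst.PadProbeReturn.OK
```

It would be attached with something like pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, infer_src_pad_probe, None), where pgie is the nvinfer element.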
Do you mean you want to display the pixel-by-pixel reconstruction errors?
After getting the reconstructed image, I want to process it to obtain a residual image (the background will be black and the anomaly will be white).
You need to implement this yourself. I've told you how to get the original image and the output image; please calculate the reconstruction errors and convert the error to a black-and-white image yourself.
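For illustration, a minimal numpy sketch of that step; the mean-absolute-error metric and the threshold are assumptions to tune on your data:

```python
# Sketch: per-pixel reconstruction error turned into a black/white residual
# image. The threshold of 25 (on a 0-255 scale) is an illustrative assumption.
import numpy as np

def residual_mask(original, reconstructed, threshold=25.0):
    # Absolute per-pixel error, averaged over the channel axis.
    err = np.abs(original.astype(np.float32)
                 - reconstructed.astype(np.float32)).mean(axis=-1)
    # Anomalous pixels become white (255), everything else black (0).
    return np.where(err > threshold, 255, 0).astype(np.uint8)
```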
And then you can replace the frame content in the GstBuffer by overwriting the NvBufSurface. Please refer to DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums
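In the Python bindings, the decoded frame can be mapped as a numpy array with pyds.get_nvds_buf_surface (as in the deepstream-imagedata-multistream sample) and overwritten in place. A sketch, assuming the stream was converted to RGBA upstream:

```python
# Sketch: overwrite the displayed frame in place with the residual mask.
# Assumes an upstream nvvideoconvert with an RGBA capsfilter
# ("video/x-raw(memory:NVMM), format=RGBA") so the buffer can be mapped.
import pyds

def write_mask_to_frame(gst_buffer, frame_meta, mask):
    # mask: HxW uint8, 0 = background, 255 = anomaly (see residual_mask above).
    frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    frame[:, :, 0] = mask  # R
    frame[:, :, 1] = mask  # G
    frame[:, :, 2] = mask  # B
    frame[:, :, 3] = 255   # opaque alpha
```

Depending on the bindings version, a matching pyds.unmap_nvds_buf_surface call may also be required after writing on Jetson.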
I am fairly new to this.
Could you please tell me where exactly, that is, in which function, I will get my output image?
And in which file will I have to make changes to calculate the reconstruction error?
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
DeepStream is an SDK which provides the APIs, sample libraries, and sample applications for users to construct video/audio/other data inferencing and analysis applications. I don't know how you will write your application, so it is impossible to tell you which line should be added to which function.
You can't skip to customization without knowing the basic pipeline, the element features, and the code. In particular, you need to be familiar with the gst-nvinfer source code for your customization. Please start from the beginning:
- GStreamer basic knowledge and coding skills.
- gst-python basic usage. Python GStreamer Tutorial (brettviren.github.io)
- DeepStream documents and samples: Welcome to the DeepStream Documentation — DeepStream documentation 6.4 documentation
- gst-nvinfer source code: /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer and /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer, DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums
- pyds and the samples: NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications (github.com)
You can also contact the NVIDIA sales and marketing teams to get more direct services.