Error loading model TAO 4.0 and DeepStream Python Apps

Please provide the following information when requesting support.
• Hardware : A100
• Network Type: Detectnet_v2
• TLT Version: TAO 4.0 Docker (x86)
• Training spec file, DeepStream config, and TAO results:
• DeepStream: NVIDIA DeepStream Docker
everything.zip (48.9 MB)
• How to reproduce the issue?: I am working with the deepstream-imagedata-multistream Python app and cannot use a new model trained on a custom dataset in TAO 4.0. The app fails after this command:

python3.8 deepstream_imagedata-multistream.py file:///share_data_deepstream/tao/WH_TAOtest.h264 frame

This is the result:

Frames will be saved in  frame
Creating Pipeline

Creating streamux

Creating source_bin  0

Creating source bin
source-bin-00
Creating Pgie

Creating nvvidconv1

Creating filter1

Creating tiler

Creating nvvidconv

Creating nvosd

Creating EGLSink

Adding elements to Pipeline

Linking elements in the Pipeline

Now playing...
1 :  file:///share_data_deepstream/tao/WH_TAOtest.h264
Starting pipeline

ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::40] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 213, Serialized Engine Version: 232)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1528 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine
0:00:01.974624358  1010      0x208bd30 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine failed
0:00:02.069613931  1010      0x208bd30 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine failed, try rebuild
0:00:02.070005395  1010      0x208bd30 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:130 Cannot access prototxt file '/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/../../../../samples/models/tao_model/resnet18_detector.prototxt'
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:966 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:00:03.017040852  1010      0x208bd30 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:03.111856367  1010      0x208bd30 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:03.111910479  1010      0x208bd30 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:03.111932240  1010      0x208bd30 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:03.111940255  1010      0x208bd30 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

The baseline test works fine when using “fakesink”, but every time I swap in a new model, this error pops up.
Any help is welcome.
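
The first [TRT] error in the log (stdVersionRead == serializationVersion failed) means the prebuilt resnet18_detector.trt.int8.engine was serialized with a different TensorRT version than the one shipped in the DeepStream 6.1 container, so it cannot be deserialized there. One way around this is to stop pointing nvinfer at the foreign engine and let it rebuild one locally from the exported TAO model. Below is a minimal sketch of the relevant [property] keys in dstest_imagedata_config.txt, assuming the model was exported from TAO as an .etlt; the file names, key, input dims, and class count are placeholders, not taken from this thread:

[property]
# Build the engine on this machine instead of deserializing one built elsewhere;
# remove or comment out any stale model-engine-file line first.
tlt-encoded-model=/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.etlt
tlt-model-key=nvidia_tlt
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/calibration.bin
labelfile-path=/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/labels.txt
# Standard output blob names for a DetectNet_v2 (ResNet-18) model
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
uff-input-blob-name=input_1
infer-dims=3;384;1248          # must match the training resolution (placeholder values)
network-mode=1                 # 0=FP32, 1=INT8, 2=FP16
num-detected-classes=3         # placeholder; set to your dataset's class count

On the first run, nvinfer builds and caches a new engine with the container's own TensorRT, which avoids the version-tag mismatch.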

Have you checked if this file exists?
'/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/../../../../samples/models/tao_model/resnet18_detector.prototxt' ?
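
For reference, the ../../../.. segments in that message resolve to the samples directory, so the check from inside the DeepStream container is simply:

ls -l /opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.prototxt

A prototxt belongs to a Caffe model; the error suggests the config still uses the Caffe-style model-file/proto-file keys from the original sample, which is likely why nvinfer looks for a prototxt at all.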

Officially, for running TAO models with DeepStream, we provide GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream.
For questions about deepstream_python_apps, please create a topic in the DeepStream forum.
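
As a rough sketch, getting those reference apps running inside the DeepStream container looks like this (script and variable names per the repo README; the CUDA version must match the container's):

git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps
./download_models.sh    # fetch the pretrained TAO models used by the sample configs
export CUDA_VER=11.6    # CUDA version inside the DeepStream 6.1 container
make                    # build the apps and the TAO post-processing plugins

The sample nvinfer configs in that repo are also a useful template to diff against dstest_imagedata_config.txt when swapping in a custom DetectNet_v2 model.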

Unfortunately, I couldn’t find a way to create that file using TAO, nor did I find a way to create it manually.
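
For context: DetectNet_v2 training in TAO never produces a Caffe prototxt, which is why one cannot be created; the deployable artifact is the encrypted .etlt from the export step (plus a calibration cache for INT8). A sketch of that step along the lines of the TAO 4.0 notebooks, with placeholder paths and key:

tao detectnet_v2 export \
  -m $USER_EXPERIMENT_DIR/weights/resnet18_detector.tlt \
  -k $KEY \
  -o $USER_EXPERIMENT_DIR/export/resnet18_detector.etlt \
  -e $SPECS_DIR/detectnet_v2_train_resnet18_kitti.txt

The resulting .etlt is what the tlt-encoded-model key in the nvinfer config (sketched above) should point to; a .trt engine generated on the TAO side is only portable to machines with exactly the same TensorRT version.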

@ganmobar
Please follow the above-mentioned GitHub repository to run TAO models with DeepStream. Thanks.
