Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 550
• Issue Type (questions, new requirements, bugs) questions
I run the DeepStream 5.1 Docker container on Ubuntu 20.04.6 LTS with Docker Engine (nvidia-container-toolkit installed) and NVIDIA driver 550.
My Dockerfile:
FROM nvcr.io/nvidia/deepstream:5.1-21.02-triton
RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y python3-pip python3-dev
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3.6 20
RUN update-alternatives --install /usr/bin/python python /usr/bin/python2.7 10
RUN python --version
RUN pip --version
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y vlc
RUN sed -i 's/geteuid/getppid/' /usr/bin/vlc
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y wget curl vim traceroute \
net-tools htop iftop iotop gcc g++ build-essential unzip libgirepository1.0-dev \
apt-transport-https ca-certificates python-gi-dev
RUN pip install --upgrade pip setuptools wheel
RUN pip install opencv-python --verbose
RUN pip install PyGObject pycairo common --verbose
## Install deepstream-python-app
WORKDIR /opt/nvidia/deepstream/deepstream/sources/
RUN wget https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/archive/refs/tags/v1.0.3.tar.gz
RUN tar zxvf v1.0.3.tar.gz
RUN mv deepstream_python_apps-1.0.3 deepstream_python_apps
My docker-compose.yml:
version: '3'
services:
  deepstream-lpr-python:
    build: .
    container_name: deepstream-lpr-python
    shm_size: 8gb
    runtime: nvidia
    restart: always
    privileged: true
    network_mode: host
    environment:
      - TZ=Asia/Bangkok
      - NVIDIA_VISIBLE_DEVICES=all
      - DISPLAY=$DISPLAY
      - PYTHONPATH=/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - .:/deepstream-lpr-python
    tty: true
    working_dir: /deepstream-lpr-python
After the Docker Compose build succeeds, I start the container and use the docker-compose exec command to get a shell inside it.
(base) devteam@devteam-pc:/ssd-disk3-data/devteam/deepstream-lpr-python-version$ xhost +
access control disabled, clients can connect from any host
(base) devteam@devteam-pc:/ssd-disk3-data/devteam/deepstream-lpr-python-version$ docker-compose up -d
[+] Running 1/1
✔ Container deepstream-lpr-python Started 0.4s
(base) devteam@devteam-pc:/ssd-disk3-data/devteam/deepstream-lpr-python-version$ docker-compose exec deepstream-lpr-python bash
Then I run deepstream_test_3.py with the sample mp4 file.
root@devteam-pc:/deepstream-lpr-python# cd /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test3
root@devteam-pc:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test3# python deepstream_test_3.py file:////opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
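Side note: the URI I pass above has four slashes after file:, though the pipeline still starts with it. A well-formed file URI has three slashes; it can be generated with Python's pathlib instead of typed by hand (a minimal sketch using the same sample path):

```python
from pathlib import Path

# as_uri() on an absolute POSIX path always yields the three-slash form: file:///...
sample = Path("/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4")
uri = sample.as_uri()
print(uri)  # file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
```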
It then shows the error output below. Please advise me how to solve this problem. Thank you.
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating tiler
Creating nvvidconv
Creating nvosd
Creating EGLSink
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing...
1 : file:////opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
Starting pipeline
libEGL warning: DRI2: failed to authenticate
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test3/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:01.809922198 122 0x26e84f0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test3/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:01.809943344 122 0x26e84f0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test3/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:01.809952318 122 0x26e84f0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:06.739322773 122 0x26e84f0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:06.743585593 122 0x26e84f0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: qtdemux0
Decodebin child added: multiqueue0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: aacparse0
Decodebin child added: avdec_aac0
Decodebin child added: nvv4l2decoder0
In cb_newpad
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f32b3c6d228 (GstCapsFeatures at 0x1f9cd40)>
In cb_newpad
gstname= audio/x-raw
Frame Number= 0 Number of Objects= 6 Vehicle_count= 4 Person_count= 2
Frame Number= 1 Number of Objects= 5 Vehicle_count= 3 Person_count= 2
Frame Number= 2 Number of Objects= 5 Vehicle_count= 3 Person_count= 2
Frame Number= 3 Number of Objects= 6 Vehicle_count= 4 Person_count= 2
cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1
Frame Number= 4 Number of Objects= 7 Vehicle_count= 4 Person_count= 3
Frame Number= 5 Number of Objects= 6 Vehicle_count= 4 Person_count= 2
0:00:06.985440530 122 0x24ecf70 WARN nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:06.985456791 122 0x24ecf70 WARN nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(1984): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Exiting app
Frame Number= 6 Number of Objects= 5 Vehicle_count= 3 Person_count= 2
Frame Number= 7 Number of Objects= 4 Vehicle_count= 2 Person_count= 2
Frame Number= 8 Number of Objects= 5 Vehicle_count= 3 Person_count= 2
Frame Number= 9 Number of Objects= 6 Vehicle_count= 4 Person_count= 2
root@devteam-pc:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test3#