As I understand it, this error happens when the cuDNN and CUDA versions do not match. But that seems unreasonable: I installed JetPack 4.6 via the SD card image, which already bundles CUDA 10.2 and cuDNN 8.2.1, so why does the error above still occur?
Source: note.txt (7.1 KB)
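For reference, one way to confirm which CUDA/cuDNN combination the application actually links against is to query both at runtime. This is only a minimal sketch; the build flags and library paths below are assumptions and may differ on your Jetson.

/* check_versions.c - print the CUDA and cuDNN versions visible at runtime.
 * Build (paths may vary on Jetson):
 *   gcc check_versions.c -I/usr/local/cuda/include \
 *       -L/usr/local/cuda/lib64 -lcudart -lcudnn -o check_versions */
#include <stdio.h>
#include <cuda_runtime_api.h>
#include <cudnn.h>

int main(void) {
  int runtime = 0, driver = 0;
  cudaRuntimeGetVersion(&runtime);   /* e.g. 10020 for CUDA 10.2 */
  cudaDriverGetVersion(&driver);
  printf("CUDA runtime: %d, driver supports: %d\n", runtime, driver);
  printf("cuDNN (compiled against): %d.%d.%d\n",
         CUDNN_MAJOR, CUDNN_MINOR, CUDNN_PATCHLEVEL);
  printf("cuDNN (loaded at runtime): %zu\n", cudnnGetVersion());
  return 0;
}

If the version printed at runtime differs from the one the engine was built against, that mismatch is a likely cause of the CUDNN_STATUS_EXECUTION_FAILED error.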
This was the error I hit before the cuDNN error above. I was trying to create an engine class to make inference easier for my use case.
Error log, in function NvDsInferContext_QueueInputBatch:
ERROR: [TRT]: 1: [convolutionRunner.cpp::executeConv::458] Error Code 1: Cudnn (CUDNN_STATUS_EXECUTION_FAILED)
ERROR: Failed to enqueue trt inference batch
Thanks for the reply. I resolved the problem using the sample code in gst-dsexample in gst-plugins. On Jetson, the way to get data for inference differs from x86; it is handled under #ifdef __aarch64__:
/* To use the converted buffer in CUDA, create an EGLImage and then use
 * the CUDA-EGL interop APIs. */
if (USE_EGLIMAGE) {
  if (NvBufSurfaceMapEglImage (dsexample->inter_buf, 0) != 0) {
    goto error;
  }
  /* The EGLImage is now available at
   * dsexample->inter_buf->surfaceList[0].mappedAddr.eglImage.
   * Use the interop APIs cuGraphicsEGLRegisterImage and
   * cuGraphicsResourceGetMappedEglFrame to access the buffer in CUDA. */

  /* Destroy the EGLImage */
  NvBufSurfaceUnMapEglImage (dsexample->inter_buf, 0);
}
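For anyone following along, the interop step that the comment above refers to looks roughly like this. It is a minimal sketch, assuming a CUDA context is already current and that surface is the NvBufSurface previously mapped with NvBufSurfaceMapEglImage; the helper name and error handling are mine, not part of the plugin.

#include <cuda.h>
#include <cudaEGL.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include "nvbufsurface.h"

/* Hypothetical helper: register the mapped EGLImage of plane 0 and fetch a
 * CUDA-accessible frame. Assumes NvBufSurfaceMapEglImage() already succeeded. */
static int get_cuda_egl_frame (NvBufSurface *surface, CUgraphicsResource *res,
                               CUeglFrame *frame)
{
  EGLImageKHR egl_image = surface->surfaceList[0].mappedAddr.eglImage;

  if (cuGraphicsEGLRegisterImage (res, egl_image,
          CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE) != CUDA_SUCCESS)
    return -1;

  /* frame->frame.pPitch[0] is then a device pointer usable by CUDA kernels */
  if (cuGraphicsResourceGetMappedEglFrame (frame, *res, 0, 0) != CUDA_SUCCESS) {
    cuGraphicsUnregisterResource (*res);
    return -1;
  }
  return 0;
}

/* After the inference/processing that reads frame.frame.pPitch[0]: */
/*   cuCtxSynchronize ();                     */
/*   cuGraphicsUnregisterResource (res);      */
/*   NvBufSurfaceUnMapEglImage (surface, 0);  */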