Error generated while running the code after connecting the camera

Hi,
You can implement the function in CUDA code. With the NvBufSurface APIs, you can access and modify the data of each pixel. There is a CUDA sample for the GStreamer nvivafilter plugin:
Nvcompositor plugs alpha is not work and alpha plugs no work - #14 by DaneLLL

The sample demonstrates applying an alpha effect to each pixel. It is not exactly identical to your use case, but it can serve as a reference.
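For illustration, per-pixel processing of a mapped RGBA frame could look like this minimal CUDA sketch. It assumes the frame has already been mapped to a CUeglFrame as in the linked sample; the kernel name and the fixed alpha value are illustrative only, not the sample's actual code:

#include <cuda_runtime.h>
#include <cudaEGL.h>

// Illustrative kernel: write a constant alpha into every RGBA pixel.
__global__ void setAlphaKernel(unsigned char *frame, int width, int height,
                               int pitch, unsigned char alpha)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;
    unsigned char *pixel = frame + y * pitch + x * 4; // 4 bytes per RGBA pixel
    pixel[3] = alpha;                                 // modify alpha, keep RGB
}

void applyAlpha(CUeglFrame &eglFrame, int width, int height, unsigned char alpha)
{
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    setAlphaKernel<<<grid, block>>>((unsigned char *)eglFrame.frame.pPitch[0],
                                    width, height, eglFrame.pitch, alpha);
    cudaDeviceSynchronize(); // finish before the frame is unmapped
}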

Many thanks for your reply, Dane.

Is it possible to change the pipeline to get the RGB frame directly?
launch_stream
<< "nvarguscamerasrc name=mysource ! "
<< "video/x-raw(memory:NVMM),width=" << w << ",height=" << h << ",framerate=30/1,format=NV12 ! "
<< "nvvidconv name=myconv ! "
<< "video/x-raw(memory:NVMM),format=RGBA ! "
<< "fakesink";
This is based on the NVMM sample code.
I have tried with BGRx and it works fine, but when I use BGR instead of BGRx things go wrong.
How can I request RGB instead of RGBA?
@DaveYYY @DaneLLL

Hi,
24-bit formats like RGB or BGR are not supported. Please convert to RGBA and re-sample to BGR.
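For example, with the OpenCV CUDA module the two steps could look like this (a minimal sketch; d_rgba is assumed to wrap the mapped RGBA frame, and the names are illustrative):

#include <opencv2/cudaimgproc.hpp>

// d_rgba wraps the RGBA NVMM frame (CV_8UC4, width x height).
cv::cuda::GpuMat d_rgba(height, width, CV_8UC4, mappedFramePtr);
cv::cuda::GpuMat d_bgr;
cv::cuda::cvtColor(d_rgba, d_bgr, cv::COLOR_RGBA2BGR); // 4-channel -> 3-channel on the GPU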

Can we do it within the pipeline, without using the CUDA conversion? I have tried this change but
end up with errors.

launch_stream << "nvarguscamerasrc name=mysource ! "
    << "video/x-raw(memory:NVMM),width=" << w << ",height=" << h << ",framerate=30/1,format=NV12 ! "
    << "nvvidconv  ! "
    << "video/x-raw(memory:NVMM),format=RGBA ! "
    << "videoconvert name=myconv ! "
    << "video/x-raw(memory:NVMM),format=BGR ! "
    << "fakesink";


    launch_string = launch_stream.str();

    g_print("Using launch string: %s\n", launch_string.c_str());

    GError *error = nullptr;
    gst_pipeline = (GstPipeline *)gst_parse_launch(launch_string.c_str(), &error);

    if (gst_pipeline == nullptr) {
        g_print("Failed to parse launch: %s\n", error->message);
        return -1;
    }
    if (error)
        g_error_free(error);
    GstElement *conv = gst_bin_get_by_name(GST_BIN(gst_pipeline), "myconv");
    GstPad *sink_pad = gst_element_get_static_pad(conv, "sink");
    gst_pad_add_probe(sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
                      conv_src_pad_buffer_probe, NULL, NULL);

    gst_element_set_state((GstElement *)gst_pipeline, GST_STATE_PLAYING);
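For reference, the probe callback has this general shape (a placeholder sketch of the signature only; my actual per-frame processing is omitted):

static GstPadProbeReturn
conv_src_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstBuffer *buffer = GST_PAD_PROBE_INFO_BUFFER(info);
    if (buffer == NULL)
        return GST_PAD_PROBE_OK;
    // per-frame processing goes here
    return GST_PAD_PROBE_OK;
}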

If I go for the CUDA conversion I need to spend around 4-5 ms on that operation, but right now it is happening in <=1 ms.

@DaneLLL @DaveYYY

Hi,
Please try

launch_stream << "nvarguscamerasrc name=mysource ! "
    << "video/x-raw(memory:NVMM),width=" << w << ",height=" << h << ",framerate=30/1,format=NV12 ! "
    << "nvvidconv  ! "
    << "video/x-raw,format=RGBA ! "
    << "videoconvert name=myconv ! "
    << "video/x-raw,format=BGR ! "
    << "fakesink";

This way you can use the software videoconvert plugin to get BGR data in a CPU buffer; videoconvert only operates on CPU memory, so the caps after nvvidconv must drop memory:NVMM. It is a quick solution but not optimal. For optimal performance, we suggest implementing the conversion in CUDA code.
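With this pipeline, a probe placed after videoconvert sees plain CPU memory. A minimal sketch of reading it as BGR (assuming the negotiated caps are video/x-raw,format=BGR at w x h, as above):

static GstPadProbeReturn
cpu_bgr_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstBuffer *buffer = GST_PAD_PROBE_INFO_BUFFER(info);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        cv::Mat bgr(h, w, CV_8UC3, map.data); // no copy; valid only while mapped
        // ... process bgr here ...
        gst_buffer_unmap(buffer, &map);
    }
    return GST_PAD_PROBE_OK;
}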

Thanks for your reply, DaneLLL.
The videoconvert plugin works on the CPU; how will we extract the NVMM buffer from it then?
@DaneLLL @DaveYYY

Hi,
The plugin supports only CPU buffers. For an optimal solution, you would need to implement the functions through CUDA programming.

Thanks for the update. As per your suggestion I have written the CUDA code for processing:

cv::cuda::GpuMat d_mat(h, w, CV_8UC4, eglFrame.frame.pPitch[0]);
cv::cuda::cvtColor(d_mat, img_RGB, cv::COLOR_RGBA2BGR);
const auto objects = yolo.detectObjects(img_RGB);
yolo.drawTrans(objects, img_RGB);

Indeed, I have set the pipeline as:
launch_stream << "nvarguscamerasrc name=mysource ! "
<< "video/x-raw(memory:NVMM),width=" << w << ",height=" << h << ",framerate=30/1,format=NV12 ! "
<< "nvvidconv name=myconv ! "
<< "video/x-raw(memory:NVMM),format=RGBA ! "
<< "nv3dsink";

In the output there is only the plain video; no detections are present. I also tried converting my BGR frame back to RGBA, with no luck. Can you please give me some suggestions?
In the drawTrans method I am placing an RGBA image over the video, but only plain video comes out.
@DaneLLL @DaveYYY
Thanks in advance

Hi,
For using nv3dsink, you would need to convert the frame data in img_RGB back into d_mat for rendering.

Another approach is to run like:

launch_stream << "nvarguscamerasrc name=mysource ! "
<< "video/x-raw(memory:NVMM),width=" << w << ",height=" << h << ",framerate=30/1,format=NV12 ! "
<< "nvvidconv name=myconv ! "
<< "video/x-raw(memory:NVMM),format=RGBA ! "
<< "appsink";

Then call the OpenCV functions in appsink and render img_RGB through OpenCV APIs.
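A minimal sketch of pulling frames from appsink (assuming the sink is named, e.g. appsink name=mysink, and fetched with gst_bin_get_by_name() like the other elements in this thread):

#include <gst/app/gstappsink.h>
#include "nvbufsurface.h"

GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(appsink)); // blocks until a frame arrives
if (sample) {
    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        // With memory:NVMM caps, map.data points to an NvBufSurface;
        // the pixels are then accessed through the NvBufSurface APIs
        // as in the code later in this thread.
        NvBufSurface *surface = (NvBufSurface *)map.data;
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);
}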

Thanks for your valuable time, DaneLLL.
From the d_mat variable (which is a 4-channel RGBA GpuMat) I convert to BGR, then do my OpenCV operations, including blending.
As per my understanding of the NVMM code, we are making in-place changes in the buffer and sending it to nv3dsink.

My operations are:

cv::cuda::GpuMat d_mat(h, w, CV_8UC4, eglFrame.frame.pPitch[0]);
cv::cuda::cvtColor(d_mat, img_RGB, cv::COLOR_RGBA2BGR);
const auto objects = yolo.detectObjects(img_RGB);
yolo.drawTrans(objects, img_RGB);

In the demo code given, a Sobel filter is applied to the d_mat variable, and the resulting changes to the image are displayed through nv3dsink.
But when I make my changes, they are not reflected.
What I do is: RGBA -----> BGR -----> OpenCV operations.
If I download the GpuMat and show it with OpenCV imshow it works fine, but when it is passed to nv3dsink no changes show up.
I have also converted the BGR back to RGBA, but that didn't help.
Thanks in advance.

Hi,
Looks like after the yolo functions, you have to call

            cv::cuda::cvtColor(img_RGB, d_mat, cv::COLOR_BGR2RGBA);

d_mat holds the frame data that is sent to nv3dsink.
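One thing worth checking (an assumption on our side, not verified in this thread): cv::cuda::cvtColor() re-allocates the destination GpuMat when its size or type does not match the result, which would silently detach d_mat from the EGL-mapped memory that nv3dsink displays. A quick sanity check:

#include <cassert>

void *before = d_mat.data;
cv::cuda::cvtColor(img_RGB, d_mat, cv::COLOR_BGR2RGBA);
assert(d_mat.data == before);  // if this fails, d_mat was re-allocated
cudaDeviceSynchronize();       // let the conversion finish before unmapping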

Hi,
I have tried the same, but it is not working:

cv::cuda::cvtColor(d_mat, img_RGB, cv::COLOR_RGBA2BGR);
const auto objects = yoloV8.detectObjects(img_RGB);
yoloV8.drawObjectLabels(objects, img_RGB);
cv::cuda::cvtColor(img_RGB, d_mat, cv::COLOR_BGR2RGBA);



The pipeline is:

launch_stream << "nvarguscamerasrc name=mysource ! "
    << "video/x-raw(memory:NVMM),width=" << w << ",height=" << h << ",framerate=30/1,format=NV12 ! "
    << "nvvidconv name=myconv ! "
    << "video/x-raw(memory:NVMM),format=RGBA ! "
    << "nv3dsink";

@DaneLLL @DaveYYY

Hi,
Please save img_RGB to a file and check. Maybe the yolo functions do not detect any object.
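For example (a quick sketch; the file name is arbitrary):

cv::Mat debugFrame;
img_RGB.download(debugFrame);               // GPU -> CPU copy
cv::imwrite("debug_frame.png", debugFrame); // imwrite expects BGR, which img_RGB is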

I have placed a fakesink instead of nv3dsink, downloaded img_RGB to a CPU Mat, and displayed it using imshow; that works fine.

Hi,
So it looks like converting BGR back to RGBA through cvtColor() fails. Maybe there is a constraint in cvtColor().

Your solution should work. Run this pipeline:

launch_stream << "nvarguscamerasrc name=mysource ! "
<< "video/x-raw(memory:NVMM),width=" << w << ",height=" << h << ",framerate=30/1,format=NV12 ! "
<< "nvvidconv name=myconv ! "
<< "video/x-raw(memory:NVMM),format=RGBA ! "
<< "fakesink";

And render img_RGB through imshow().

imshow and downloading to a CPU Mat take a good amount of time; that is why I used this approach.
Can you please suggest another alternative where I can display this using an NVIDIA plugin?
I have also noticed that after converting back to RGBA, the blended output sometimes shows up fully and sometimes only partially when I visualize it.
I am not getting the exact idea.

Thanks in advance…
@DaveYYY @DaneLLL

Hi,
As we have suggested in previous comments, for an optimal solution it is better to implement CUDA code to directly access and process the pixels in the NvBufSurface.

auto start_time = std::chrono::high_resolution_clock::now();

NvBufSurface *surface = frameQueue.pop();
NvBufSurfaceMapEglImage(surface, 0);

CUresult status;
CUeglFrame eglFrame;
CUgraphicsResource pResource = NULL;

cudaFree(0); // establish the CUDA context
status = cuGraphicsEGLRegisterImage(&pResource,
                                    surface->surfaceList[0].mappedAddr.eglImage,
                                    CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
if (status != CUDA_SUCCESS) {
    printf("cuGraphicsEGLRegisterImage failed: %d \n", status);
}
status = cuGraphicsResourceGetMappedEglFrame(&eglFrame, pResource, 0, 0);
status = cuCtxSynchronize();

// d_mat is assumed to wrap eglFrame.frame.pPitch[0] as in the earlier snippets
cv::cuda::cvtColor(img_RGB, d_mat, cv::COLOR_BGR2RGBA);

// cv::Mat DisplayFrame;
// img_RGB.download(DisplayFrame);
// cv::imshow("GPU Video", DisplayFrame);
// if (cv::waitKey(30) == 27) break; // exit on 'Esc'

status = cuCtxSynchronize();
status = cuGraphicsUnregisterResource(pResource);
NvBufSurfaceUnMapEglImage(surface, 0);

auto end_time = std::chrono::high_resolution_clock::now();
auto elapsed_time = std::chrono::duration_cast<std::chrono::milliseconds>(end_time - start_time);
std::cout << "Elapsed time for capturing and inferencing: " << elapsed_time.count() << " milliseconds" << std::endl;

All the operations are done in CUDA itself; I am extracting the data from the NvBufSurface pointer returned by frameQueue.pop().

Thanks in advance @DaneLLL @DaveYYY

Hi,
If your use case is to run a YOLO model, you may try this:
Deploy YOLOv8 with TensorRT and DeepStream SDK | Seeed Studio Wiki

DeepStream SDK is an optimal solution for running deep learning inference. We would suggest using it if you would like to run YOLOv8.

Thanks for your suggestion,
but I don't have the provision to use DeepStream.
I have one doubt: when I print the surface pointer, I can see that 4 addresses are printed repeatedly.
In my case, I push the pointers to a queue; if the size exceeds a maximum of 20, a newer entry replaces the oldest one.
Those addresses are then used by another thread, where the pop happens.
But if only 4 addresses are coming, then the queue fills up with these 4 addresses and I am losing frames. Is that correct?

So I set output-buffers=20 in nvvidconv, and now it prints 20 unique addresses.
So the images are present at 20 different places in memory? Am I right, Dane?
@DaveYYY @DaneLLL Thanks in advance.
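For reference, here is a minimal sketch of the bounded queue I described above (simplified; the types and names are not my exact code):

#include <deque>
#include <mutex>
#include "nvbufsurface.h"

// Bounded, thread-safe queue of surface pointers: when full, pushing
// a new frame evicts the oldest one, so the consumer always sees the
// most recent frames.
class SurfaceQueue {
public:
    explicit SurfaceQueue(size_t maxSize) : maxSize_(maxSize) {}

    void push(NvBufSurface *s) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.size() >= maxSize_)
            queue_.pop_front(); // drop the oldest frame
        queue_.push_back(s);
    }

    NvBufSurface *pop() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty())
            return nullptr;
        NvBufSurface *s = queue_.front();
        queue_.pop_front();
        return s;
    }

private:
    std::deque<NvBufSurface *> queue_;
    std::mutex mutex_;
    size_t maxSize_;
};

Note that queuing raw pointers only helps if each pointer refers to a distinct buffer: with only 4 buffers in the pool, the producer overwrites frames the consumer has not processed yet, which matches the frame loss described above.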