Is it possible to change the pipeline to get the RGB frame directly?
launch_stream
<< "nvarguscamerasrc name=mysource ! "
<< "video/x-raw(memory:NVMM),width=" << w << ",height=" << h << ",framerate=30/1,format=NV12 ! "
<< "nvvidconv name=myconv ! "
<< "video/x-raw(memory:NVMM),format=RGBA ! "
<< "fakesink";
This is from the NVMM sample code. I have tried with BGRx and it works fine, but when I use BGR instead of BGRx things go wrong.
How can I get RGB instead of RGBA? @DaveYYY @DaneLLL
You can use the software videoconvert plugin to get BGR data in a CPU buffer. This is a quick solution but not optimal. For optimal performance, we suggest implementing CUDA code for the functions.
Indeed, I have set the pipeline as
launch_stream << "nvarguscamerasrc name=mysource ! "
<< "video/x-raw(memory:NVMM),width=" << w << ",height=" << h << ",framerate=30/1,format=NV12 ! "
<< "nvvidconv name=myconv ! "
<< "video/x-raw(memory:NVMM),format=RGBA ! "
<< "nv3dsink";
In the output there is only the plain video; no detections are present. I also tried converting my BGR frame back to RGBA, but it did not help. Can you please give me some suggestions?
In the drawTrans method I am placing an RGBA image over the video,
but in the output only the plain video is coming. @DaneLLL @DaveYYY
Thanks in advance
Thanks for your valuable time, DaneLLL.
Starting from the d_mat variable (which is a 4-channel RGBA GpuMat),
I am converting it to BGR and then doing my OpenCV operations, including blending. As per my understanding of the NVMM sample code, we are making in-place changes in the buffer and sending it to nv3dsink.
In the demo code,
a Sobel filter is applied to the d_mat variable, and the resulting change to the image is displayed via nv3dsink.
But when I make my changes, they are not reflected.
What I did:
RGBA -----> BGR -----> OpenCV operations. If I download the GpuMat and show it with OpenCV imshow, it works fine, but when passed to nv3dsink no changes show up. I converted the BGR result back to RGBA, but it still did not work.
Thanks in advance.
Downloading to a CPU Mat and using imshow takes a significant amount of time, which is why I used this approach.
Can you please suggest any alternative where I can view this using an NVIDIA plugin? I have also noticed that after converting back to RGBA, the blended output sometimes appears and sometimes does not, or appears only partially.
I am not getting the exact idea.
Thanks in advance. @DaveYYY @DaneLLL
Hi,
As we have suggested in previous comments, for an optimal solution it is better to implement CUDA code to directly access/process the pixels in NvBufSurface.
Thanks for your suggestion,
but I don't have provision to use DeepStream.
I have one doubt: when I print the surface pointer, I can see that 4 addresses are printed repeatedly.
In my case I push the pointers into a queue; if the size exceeds a maximum of 20, the newer entry replaces the oldest one.
Those addresses are then used by another thread,
which pops from the queue.
But if only 4 addresses are coming, then the queue is filled with these 4 addresses and I am losing frames. Is that correct?
So I set output-buffers=20 on nvvidconv, and now it prints 20 unique addresses.
So the images are present at 20 different places in memory? Am I right, Dane? @DaveYYY @DaneLLL Thanks in advance.