Best Practice for Stereo Camera with C++


UMNWalterLab


Hello,

 

We need to grab the dual-camera images and simply show them on the Vive Pro, and later possibly add some NPR filters. We got it to work, partially, with the SRWorks SDK, OpenCV, and OpenVR, but we are wondering if there's a simpler solution to show the images right away after grabbing the frames from the SDK.

 

Our solution so far is to grab the OpenCV Mats, convert them to OpenGL textures, and show them in OpenVR. It's very slow, and to be honest the result doesn't feel real. If we could eliminate OpenVR and OpenGL, we might get better performance. By the way, the official DLL version 0.7.5.0 was a nightmare; we had to debug our code for three days to find out it won't work simultaneously with OpenGL. The post on this forum with the attached 0.7.5.1 DLLs helped and works perfectly.
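
Roughly, our per-frame step looks like this (a minimal sketch, assuming a live GL context, an initialized OpenVR session, and a texture already allocated with glTexImage2D; the helper names are just ours):

```cpp
#include <opencv2/core.hpp>
#include <GL/glew.h>
#include <openvr.h>

// Upload a BGR cv::Mat into an existing GL texture.
void uploadMatToTexture(const cv::Mat& frame, GLuint tex) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    // glTexSubImage2D reuses the existing storage; recreating the texture
    // with glTexImage2D every frame is a common cause of slowness.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame.cols, frame.rows,
                    GL_BGR, GL_UNSIGNED_BYTE, frame.data);
}

// Hand the texture to the OpenVR compositor for one eye.
void submitEye(GLuint tex, vr::EVREye eye) {
    vr::Texture_t vrTex{reinterpret_cast<void*>(static_cast<uintptr_t>(tex)),
                        vr::TextureType_OpenGL, vr::ColorSpace_Gamma};
    vr::VRCompositor()->Submit(eye, &vrTex);
}
```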

 

So, my questions essentially are:

 

1- Is there any way to just show a texture/image/picture directly on the Vive Pro screens?

 

2- Is there a one-to-one mapping between the front cameras and the Vive Pro screens? I am assuming not, because of the asymmetric nature of the Vive lenses. You cannot simply swap the rendering buffer with textures filled by the left and right cameras' distorted images in an OpenGL application, right? I've read somewhere that the optical center of each eye leans inward, so are the cameras aligned the same way?

 

3- What happens when you change the IPD on the Vive? Since the front cameras are fixed, is the picture shifted by software automatically before rendering, or does the user need to take action and shift the images?

 

Thank you for any feedback.


A time-space warping technique is applied in SRWorks, so the perceived latency for a mostly static scene is mitigated to a minimum when you move your head quickly with the HMD.

 

I guess you may be concerned about latency with moving objects, but the programming language is not a key factor contributing to latency. It mostly depends on how your pipeline responds to your use case. The fastest approach we have tested is to render your camera texture to an overlay using OpenVR directly.
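
Roughly, that overlay route looks like this (a minimal sketch, assuming an initialized OpenVR session and a camera frame already in a GL texture; the overlay key/name strings are placeholders):

```cpp
#include <GL/glew.h>
#include <openvr.h>

// Create an overlay pinned a fixed distance in front of the HMD.
vr::VROverlayHandle_t createCameraOverlay() {
    vr::VROverlayHandle_t handle = vr::k_ulOverlayHandleInvalid;
    vr::VROverlay()->CreateOverlay("camera.passthrough", "Camera Passthrough",
                                   &handle);
    vr::VROverlay()->SetOverlayWidthInMeters(handle, 2.0f);

    // Identity rotation, translated 1.5 m forward (-z) of the headset,
    // so the overlay follows head motion.
    vr::HmdMatrix34_t xform = {{
        {1.0f, 0.0f, 0.0f,  0.0f},
        {0.0f, 1.0f, 0.0f,  0.0f},
        {0.0f, 0.0f, 1.0f, -1.5f}
    }};
    vr::VROverlay()->SetOverlayTransformTrackedDeviceRelative(
        handle, vr::k_unTrackedDeviceIndex_Hmd, &xform);
    vr::VROverlay()->ShowOverlay(handle);
    return handle;
}

// Each frame, point the overlay at the latest camera texture.
void updateOverlay(vr::VROverlayHandle_t handle, GLuint cameraTex) {
    vr::Texture_t tex{reinterpret_cast<void*>(static_cast<uintptr_t>(cameraTex)),
                      vr::TextureType_OpenGL, vr::ColorSpace_Gamma};
    vr::VROverlay()->SetOverlayTexture(handle, &tex);
}
```

Note that an overlay is a single quad shown to both eyes by default; openvr.h also exposes side-by-side stereo overlay flags (VROverlayFlags_SideBySide_Parallel) if you need distinct per-eye content.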

 


Thank you, DanY. So, by your last paragraph, do you mean we can get the camera feeds directly inside OpenVR? Because right now we are using OpenVR, but in conjunction with SRWorks: SRWorks > Distorted Images > Convert to Texture > Show with OpenVR.

 



On 1: Let me know if you find this out; it would be great to know.

On 2: The closest I've been able to get to one-to-one is using the undistorted images, with VRTextureBounds of (Left: 0.2, 0.0, 0.78, 1.0 | Right: 0.22, 0.0, 0.8, 1.0); see the sketch below. This is not ideal: as with the Unity and Unreal samples, you should be projecting the image further away from the eyes and accounting for the motion of the user's HMD between capture frames.

On 3: I noticed no discernible difference across IPD settings while using my HMD in pure VR, AR, or passthrough. I'm not sure if there's an issue with my HMD, but I would presume that the IPD setting does typically affect the view as a whole, since it should be a universal change across all applications regardless of the application's intent.
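
For reference, here is roughly how those bounds plug into the compositor (a minimal sketch, assuming the undistorted frames are already uploaded as GL textures):

```cpp
#include <GL/glew.h>
#include <openvr.h>

// Submit each eye's texture with the per-eye crop given above;
// the VRTextureBounds_t fields are uMin, vMin, uMax, vMax.
void submitWithBounds(GLuint leftTex, GLuint rightTex) {
    vr::VRTextureBounds_t leftBounds{0.20f, 0.0f, 0.78f, 1.0f};
    vr::VRTextureBounds_t rightBounds{0.22f, 0.0f, 0.80f, 1.0f};

    vr::Texture_t left{reinterpret_cast<void*>(static_cast<uintptr_t>(leftTex)),
                       vr::TextureType_OpenGL, vr::ColorSpace_Gamma};
    vr::Texture_t right{reinterpret_cast<void*>(static_cast<uintptr_t>(rightTex)),
                        vr::TextureType_OpenGL, vr::ColorSpace_Gamma};

    // Depending on the camera image orientation you may need to flip v
    // (swap vMin and vMax), since OpenGL textures use a bottom-left origin.
    vr::VRCompositor()->Submit(vr::Eye_Left, &left, &leftBounds);
    vr::VRCompositor()->Submit(vr::Eye_Right, &right, &rightBounds);
}
```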

