
Depth Image usage


buffalovision


What are the plans for the depth image provided by the SDK?

 

I see that CameraDepthMaskMaterial gets updated by ViveSR_DualCameraImageRender, but I don't see it used anywhere (see top of attached picture). Looking at the shader, it appears to do what you'd expect: a mask based on depth. Are there any examples of using this in a scene, or perhaps examples coming in an update? It would be great to see this used on the video feed, or as a replacement for the video feed combined with a generic environment mesh.
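(For anyone reading along, the effect I'm imagining is roughly the following, shown as a CPU-side Python/NumPy sketch rather than the actual shader; the depth units and the near/far values are just assumptions on my part.)

```python
import numpy as np

def depth_mask_composite(video_rgb, depth_m, near=0.3, far=1.5):
    """Pass video pixels through only where depth falls inside [near, far] metres.

    video_rgb : (H, W, 3) uint8 frame from the camera feed
    depth_m   : (H, W) float32 depth in metres (0 = no measurement)
    """
    keep = (depth_m > near) & (depth_m < far)   # boolean mask derived from depth
    out = np.zeros_like(video_rgb)
    out[keep] = video_rgb[keep]                 # everything else stays black
    return out
```

Presumably the shader does the equivalent test per fragment against the depth texture.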

 

Also, is there additional tuning happening on the depth image, or additional configuration that will be exposed? Using the depth example, there is a lot of noise, and the near field doesn't just saturate to red; it drops out almost completely to black. See the attached screenshot: with my left forearm across the screen towards my laptop, the very near field seems to get lost, and my hand is pretty much indecipherable.

 

Any tips or roadmap info appreciated!

 

 

 

[Attached screenshot: ColorDepthMask_DepthImage.jpg]


After you have imported the SR Experience package included in the v0.6.0.0 SDK (Unity\Experience\Vive-SRWorks-0.6.0.0-Unity-Experience.unitypackage), there is a sample scene, Scenes\Sample3_DynamicMesh.unity, which uses the depth frame to create a mesh on the fly.
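The core idea behind building geometry from a depth frame is back-projecting each valid pixel into camera space with the pinhole model. Here is a minimal Python/NumPy sketch of just that step; the intrinsics are placeholders rather than values from the SDK, and the actual sample's meshing is more involved than this.

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth image to camera-space 3D points (pinhole model).

    depth_m        : (H, W) float32 depth in metres (0 = invalid pixel)
    fx, fy, cx, cy : camera intrinsics in pixels
    Returns an (N, 3) array of XYZ points for the valid pixels.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids
    valid = depth_m > 0
    z = depth_m[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```

Neighbouring valid pixels can then be connected into triangles to get a rough mesh.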

 

The current depth mode's nearest-distance limit is 30 cm. Next, it will provide a hand mode which supports a nearest distance of 15 cm.
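(As a side note for experimenting: readings outside the supported range can be discarded explicitly before visualising or thresholding them. A small sketch; the 30 cm figure mirrors the limit stated above, while the far limit is just a placeholder, not a documented value.)

```python
import numpy as np

MIN_DEPTH_M = 0.30   # stated nearest supported distance for the current mode
MAX_DEPTH_M = 5.0    # placeholder far limit, not a documented value

def clamp_to_supported_range(depth_m, near=MIN_DEPTH_M, far=MAX_DEPTH_M):
    """Mark readings outside the trusted range as invalid (0)."""
    depth = depth_m.copy()
    depth[(depth < near) | (depth > far)] = 0.0
    return depth
```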

 

 

 


Thanks for the info. Hand mode sounds great; reducing the distance as much as possible would be ideal, since losing hands at close range feels like it would be very immersion-breaking! Even 10 cm would be fantastic...


My goal is ideally to segment the video feed so that only the hands are rendered, although it is looking like the depth image may be too noisy to achieve this. Another approach would be creating a stylization, perhaps by analyzing the dynamic mesh calculated in the example you referenced. Based on ongoing SDK work, is there an approach you'd recommend?
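For reference, the depth-based segmentation I have in mind is roughly the following Python/OpenCV sketch; it assumes the depth image is registered to the colour image, and the distance band and kernel size are just guesses, not SDK values.

```python
import cv2
import numpy as np

def segment_hands(color_bgr, depth_m, near=0.15, far=0.60):
    """Keep only colour pixels whose depth falls inside a 'hand' distance band.

    color_bgr : (H, W, 3) uint8 colour frame
    depth_m   : (H, W) float32 depth in metres, registered to the colour frame
    """
    mask = ((depth_m > near) & (depth_m < far)).astype(np.uint8) * 255
    # Morphological open/close to knock out speckle noise and fill small holes
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return cv2.bitwise_and(color_bgr, color_bgr, mask=mask)
```

The morphology cleans up isolated speckle, but at very close range the dropout I'm seeing may be too severe for this alone.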

 

I have more experimenting to do and am still new to OpenCV, but I'm wondering whether results could be improved with tuning (or configuration options like window size, etc.) based on the depth ranges my app's use cases expect. Thanks in advance for any insight!
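The kind of tuning experiment I mean, as a sketch (the window sizes and distance band are just values to try; nothing here is an SDK option):

```python
import cv2
import numpy as np

def denoise_depth(depth_m, ksize=5):
    """Median-filter the depth image; ksize is the window size being tuned.

    Note: OpenCV's medianBlur only accepts float32 input for ksize 3 or 5;
    larger windows would need an 8-bit conversion first.
    """
    return cv2.medianBlur(depth_m.astype(np.float32), ksize)

def sweep_window_sizes(depth_m, sizes=(3, 5), near=0.15, far=0.60):
    """Purely diagnostic: how many pixels survive in a 'hand' distance band."""
    for k in sizes:
        filtered = denoise_depth(depth_m, k)
        kept = np.count_nonzero((filtered > near) & (filtered < far))
        print(f"ksize={k}: {kept} pixels in [{near}, {far}] m")
```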

 


For now, with OpenCV you could apply contour filters to a region of interest (ROI) based on the depth area of interest, so yes, that could be one approach (or, vice versa, use depth afterwards to cut out from the filtered contour). Then you can try locating fingertips, either for detecting gestures or for mapping them to a rigged model. Technically you could try it without depth info and compute the disparity yourself from the stereo views to help finalize orientation, if that's necessary.
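A rough Python/OpenCV sketch of that pipeline, assuming you already have a binary hand mask from a depth threshold; the hull-distance step is just one common way to pick fingertip candidates, not the only one.

```python
import cv2
import numpy as np

def fingertip_candidates(hand_mask, min_area=2000):
    """Take the largest contour in a binary hand mask and return convex-hull
    points that stick out from the centroid (crude fingertip candidates).

    hand_mask : (H, W) uint8 image, 255 where the depth threshold says 'hand'
    """
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    contour = max(contours, key=cv2.contourArea)
    if cv2.contourArea(contour) < min_area:
        return []                                   # too small to be a hand
    hull = cv2.convexHull(contour).reshape(-1, 2)   # outline around the hand
    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    dists = np.hypot(hull[:, 0] - cx, hull[:, 1] - cy)
    return hull[dists > 0.7 * dists.max()].tolist()
```

For the stereo-only route, something like cv2.StereoSGBM_create on the rectified left/right views would give a disparity map to use in place of the SDK depth.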


Thanks! To be clear, it appears that this approach would mean not using ViveSR, correct? I'd need to expose the functionality I need from OpenCV to C#, or write a native plugin that talks to OpenCV directly.

 

Is there any plan to open source ViveSR or allow this developer community to use it and submit pull requests? It feels like use cases for this hardware may be quite varied, yet not being able to use the official plugin could be problematic in terms of maintenance and updates.

 


Yes, correct; that suggestion was for if you wanted to implement it yourself. As indicated, we will be including support in the SDK, but it may take a bit more time.

 

I will pass on the recommendation about open-sourcing some, if not all, of the SDK to make it easier to accommodate varied use cases.

 

 


