pablocael Posted July 5, 2018

Hi! I'm trying to use the Depth Image Texture to discard pixels of virtual objects whose depth is behind real objects. For instance, if my hand passes in front of a virtual cube, it should occlude it. The first naive idea is to use a shader to directly compare depth values from the virtual scene (using the Depth Buffer) and the real world (using the HTC Depth Texture). However, those textures do not have the same proportions. Is there a way of doing this using resources from SRWorks?

Thinking about it more deeply, I believe I would need to sample world points and generate a depth texture for each eye (since occlusion occurs differently for each eye). However, I think I would need some kind of "raycast" to generate the depth values.

Thanks!
Ethan Lin Posted July 6, 2018

Hi pablocael,

Yes, you can do this in SRWorks now. Please try the following steps:

[in Attached Pic 1]
1. Place a quad under "[ViveSR] -> DualCamera (head) -> TrackedCamera (Left) -> Anchor (Left)" (as a sibling of ImagePlane-left; also check that its layer is still "Default").
2. Set the quad's local position to (0, 0, 2) and its scale to (4.57, 3.42, 1). (I believe this addresses the "do not have the same proportions" issue you mentioned.)
3. Use CameraDepthMaskMaterial with Color Write set to "None" (set it to "All" for visualization).

[in Attached Pic 2]
4. When running with depth processing enabled, check the "Update Depth Material" toggle box.
5. You can also try toggling the other settings in the blue squares.
6. There you go :) !

Cheers
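The editor steps above (1–3) can also be sketched in code. This is only an illustration, not part of the SRWorks API: the hierarchy path and the material reference follow the post above, but the exact object names depend on your scene, so verify them against your own hierarchy before relying on this.

```csharp
// Sketch only: creates and configures the depth-mask quad from steps 1-3
// at runtime instead of in the editor. Paths and names are assumptions
// taken from the post above; adjust them to match your project.
using UnityEngine;

public class DepthMaskQuadSetup : MonoBehaviour
{
    // Assign CameraDepthMaskMaterial (with Color Write = "None") in the Inspector.
    public Material depthMaskMaterial;

    void Start()
    {
        // Hypothetical hierarchy path; verify it in your own scene.
        Transform anchor = GameObject.Find("[ViveSR]").transform
            .Find("DualCamera (head)/TrackedCamera (Left)/Anchor (Left)");

        GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        quad.name = "DepthMaskQuad";
        quad.layer = LayerMask.NameToLayer("Default");   // keep layer "Default" (step 1)
        quad.transform.SetParent(anchor, false);
        quad.transform.localPosition = new Vector3(0f, 0f, 2f);      // step 2
        quad.transform.localScale = new Vector3(4.57f, 3.42f, 1f);   // step 2
        quad.GetComponent<MeshRenderer>().sharedMaterial = depthMaskMaterial; // step 3
    }
}
```

The quad renders nothing visible (Color Write = None) but still writes real-world depth, so virtual geometry behind the real scene fails the depth test and is occluded.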
pablocael (Author) Posted July 6, 2018

Thanks! It worked!!
vladstorm Posted July 7, 2018

Wait, does hand tracking already work, or is that just the hands' depth in the photo?
dario Posted July 7, 2018

Yes and no. If you check "Run Depth Mesh Collider", you will be able to collide the dynamic meshes (which can include your hands) with virtual objects, but not your hands exclusively. We'll be sharing code to detect hands soon enough, but depending on your use case you can start with this solution.
sjobom Posted July 11, 2018

Is there a way to do something similar in Unreal?
dario Posted July 11, 2018

There should be, since the underlying native APIs are the same, so it depends on whether you're comfortable reading and calling native code. We will look into providing similar solutions for UE4 in future updates.
DaKenpachi Posted July 18, 2018

Is there a way to enable depth processing and its settings by default (meaning not during runtime), so I don't have to enable them again every time I start?
dario Posted July 18, 2018

ViveSR_DualCameraImageCapture.EnableDepthProcess(true);
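A minimal sketch of where that call could live, so depth processing is switched on once at startup rather than via the runtime UI. Only `ViveSR_DualCameraImageCapture.EnableDepthProcess` comes from the post above; the namespace and the timing (calling it in `Start`, i.e. after the SRWorks framework has initialized) are assumptions you may need to adjust.

```csharp
// Sketch only: enables depth processing on startup so it does not need to be
// toggled in the UI each run. The namespace below is an assumption about the
// SRWorks Unity plugin; check your plugin sources for the actual one.
using UnityEngine;
using Vive.Plugin.SR; // assumed SRWorks plugin namespace

public class EnableDepthOnStartup : MonoBehaviour
{
    void Start()
    {
        // Same call as in the post above; may need to be deferred until
        // the SRWorks framework has finished initializing.
        ViveSR_DualCameraImageCapture.EnableDepthProcess(true);
    }
}
```

Attach this component to any object in the scene that exists when SRWorks starts.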
dario Posted August 3, 2018

For an example that shows how to set these settings in code, check this hand interaction and occlusion example: https://github.com/ViveSoftware/ViveSRWorksHand
This topic is now archived and is closed to further replies.