
VIVE XR Elite AR Usage / Passthrough


Recommended Posts

Hello,

I'm looking for more information about how I can access the front-facing color cameras on the standalone VIVE XR Elite Developer Kit. From my understanding, this feature was removed from the Wave XR SDK due to privacy concerns. However, since this headset is positioned equally for AR and VR applications and boasts high-definition color passthrough, it seems there should be some way, now or in the near future, to access the raw camera input for developing computer vision applications. I have found working examples of Passthrough and Underlay on the headset; however, these do not provide the raw color frames themselves.

I suspect there must be some way for developers to perform tasks such as frame shading or planar image tracking for MR and AR applications. Is this not possible on the XR Elite through Wave XR?

 

 


Hi @Identical Josh,

Thanks for your inquiry. You're right that we disabled direct access to the raw camera image data due to our privacy policy. However, we understand that many applications require the camera image for further development, so we provide alternative ways for developers to achieve similar goals and results.

For example, we provide several passthrough methods in the latest Wave 5.1.0 release, and we have also added a Scene SDK session for processing planes, anchors, meshes, etc.
https://hub.vive.com/storage/docs/en-us/ReleaseNote.html#release-5-1-0

If you have any specific goals that are difficult to reach through the latest SDK, we'd like to learn about your use cases and discuss how to support these scenarios.

Thanks.


Thanks @Tony PH Lin

 

I appreciate your quick response. I am a Ph.D. researcher at Indiana University studying Interactive and Intelligent Systems and Cognitive Science. As part of my thesis, I intend to study AR and VR applications as a method of investigating human visual learning. My specific use case is to apply a shader I have written to the camera feed, the purpose of which is to study the visual processing performed by the retina.

I have several applications that use this shader in strictly VR environments on this headset; however, given the headset's full-color capability, it would be simpler and more realistic to apply this retinal shader effect to real-world images rather than building multiple simulations of, for instance, a game of ping pong or an object-recognition task. Additionally, the study would undergo IRB approval before any application using the front-facing cameras is deployed, and participants would use it only in a controlled experimental setup.
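As a purely illustrative aside (this is not the author's actual shader), the classic computational model of this kind of early retinal processing is a center-surround receptive field, often approximated as a difference of Gaussians. A minimal NumPy/SciPy sketch of applying such a filter to a single camera frame:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(frame, sigma_center=1.0, sigma_surround=3.0):
    """Difference-of-Gaussians approximation of an ON-center
    retinal receptive field: a narrow excitatory center minus
    a broad inhibitory surround."""
    f = frame.astype(np.float64)
    center = gaussian_filter(f, sigma_center)
    surround = gaussian_filter(f, sigma_surround)
    # Positive where the center is brighter than its surround,
    # i.e. the filter responds to local contrast, not raw luminance.
    return center - surround
```

In a real pipeline this would run per frame (and per color channel) on the passthrough image, which is exactly why access to the raw frames, rather than a display-only passthrough layer, is needed.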

I would be very interested in following up with any members of your staff or development team who might be able to facilitate this research scenario. 

 

Thank you for your time,

Joshua McGraw


Hello, 

I have the exact same use case, but for private research and on-site experiences. My team is trying to use a headset like the VIVE XR Elite with passthrough to get the video feed so that we can run computer vision algorithms on top of the image. We currently use the HoloLens for this through its Research Mode feature, which gives access to many of these streams; however, the HoloLens is far more expensive than the VIVE XR Elite.

It would be great if the VIVE XR Elite had a Research Mode feature like the HoloLens: one that the developer has to explicitly enable, perhaps through the desktop application, and that requires accepting an agreement before use. This would allow developers to do real-time computer vision research while no end users are put at risk.

We have also tried attaching a ZED Mini camera to the headset, but that passthrough wouldn't integrate with the headset's other features, such as boundaries and depth correction, which we would have to build ourselves, significantly slowing down the research. On-site experiences would also benefit massively: physical objects in the environment could be detected through computer vision and added to the XR experience.
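As a toy illustration of the kind of per-frame processing this would enable (pure NumPy standing in for a real computer vision pipeline, since the actual passthrough frames are not exposed), here is a minimal frame-differencing detector that flags pixels which changed between two consecutive camera frames:

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Toy change detector: flags pixels whose intensity changed
    by more than `threshold` between consecutive 8-bit frames.
    A real pipeline would feed such frames to tracking or
    object-detection models instead."""
    # Widen to int16 so the subtraction of uint8 values cannot wrap.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold
```

Even this trivial example requires two raw frames as input, which is why a display-only passthrough layer is not sufficient for this class of research.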

I urge you to consider this for developers without requiring some sort of business partnership: making this feature available to average developers, as the HoloLens does, allows them to prototype ideas that can later become business investment in the platform.

Please look into adding this, and thank you.


On 1/10/2023 at 6:10 AM, Tony PH Lin said:

Hi @Identical Josh,

Thanks for your inquiry. You're right that we disabled direct access to the raw camera image data due to our privacy policy. However, we understand that many applications require the camera image for further development, so we provide alternative ways for developers to achieve similar goals and results.

For example, we provide several passthrough methods in the latest Wave 5.1.0 release, and we have also added a Scene SDK session for processing planes, anchors, meshes, etc.
https://hub.vive.com/storage/docs/en-us/ReleaseNote.html#release-5-1-0

If you have any specific goals that are difficult to reach through the latest SDK, we'd like to learn about your use cases and discuss how to support these scenarios.

Thanks.

Hello Tony,

I hope you are enjoying the New Year holiday. I wanted to follow up on this thread to ask whether this request could be forwarded to a team who might be able to assist with this use-case scenario, or whether we could arrange a meeting with your team to discuss its academic applications.

 

Best,

Joshua

