Create my own pass through video stream


Lukk

Hello
I want to create a high-quality dual-camera setup which I then want to stream to VR goggles, something like the pass-through on the Vive Pro, but from a remote location (cable-connected), with higher quality and decently low latency. To begin with, I don't want to interfere with the video stream in any way, just stream it directly.
What tools can I use to create such an app? I'm a programmer, but I've never done anything related to VR.

Thanks!

Link to comment
Share on other sites

@Lukk Depends on whether you're using a prebuilt camera like the Zed camera or rolling your own solution. The Zed/Zed Mini is the best-supported of the aftermarket MR pass-through cameras. If you're using a productized solution like a Zed camera, it will have an SDK, and that SDK will determine your limitations if you can't get direct hardware access. I'd imagine you could set up a networked multiplayer session between the two sessions in a game engine, or figure out how to network with the camera more directly.

If you're rolling your own camera solution, each video stream can be networked in whichever way you find produces the least latency, and then you can render each feed to its respective eye.
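As a concrete toy version of "network each stream however produces the least latency", here is a minimal Python sketch that chunks one encoded frame into UDP datagrams and reassembles it on the receiver. Everything here (header layout, chunk size, no retransmission) is an illustrative assumption, not part of any Vive or Zed API:

```python
import socket
import struct
import threading

CHUNK = 1400                    # payload size kept under a typical Ethernet MTU
HEADER = struct.Struct("!HHI")  # frame id, chunk index, total frame length

def send_frame(sock, addr, frame_id, frame):
    """Split one encoded frame into UDP datagrams. No retransmission:
    for live pass-through, a late frame is usually worse than a lost one."""
    for i in range(0, len(frame), CHUNK):
        header = HEADER.pack(frame_id & 0xFFFF, i // CHUNK, len(frame))
        sock.sendto(header + frame[i:i + CHUNK], addr)

def recv_frame(sock):
    """Collect chunks until the advertised frame length is reached
    (naive single-frame reassembly, fine for a localhost demo)."""
    chunks, total = {}, None
    while total is None or sum(len(c) for c in chunks.values()) < total:
        data, _ = sock.recvfrom(HEADER.size + CHUNK)
        _, idx, total = HEADER.unpack(data[:HEADER.size])
        chunks[idx] = data[HEADER.size:]
    return b"".join(chunks[i] for i in sorted(chunks))

# Localhost demo: one "camera" thread streams a ~10 KB stand-in frame.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame = bytes(range(256)) * 40
threading.Thread(target=send_frame, args=(tx, rx.getsockname(), 1, frame)).start()
received = recv_frame(rx)
print(len(received) == len(frame))  # True if every datagram arrived
```

In practice you would encode each camera frame (e.g. as JPEG) before sending and run one such stream per eye; a real implementation would also have to handle packet loss and frame boundaries.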


Thanks @VibrantNebula for the answer.

I'd rather use my own solution, mostly for scalability. I'm thinking about some kind of network stream, and perhaps I wasn't clear with my question: I want to know what tools I could use on the VR goggles side of the system. Does e.g. the SRWorks SDK have the ability to pass a video stream directly to the Vive screens? I don't want to render a virtual screen or anything in VR, just pass two ~180-degree videos through directly.


@Lukk - SRWorks SDK is specifically for the built-in cameras; it cannot be used for custom solutions. We don't have any SDK that would enable you to do this; you'd need to configure a custom solution within a game engine.

 

To render the images, you'd generally do something like translating the video feeds into a render texture and then rendering each individual texture to its respective eye. You can use something like Photon Engine to drive the networking of that video texture, and render textures can be driven by video. Distortion would probably be your biggest enemy here: the camera FOV is a really sensitive subject, and you'd need to account for it if using 180-degree cameras. I'm sure that solution would get really hardware-specific really quickly.
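The per-eye split itself is just buffer arithmetic. As a language-neutral Python sketch (the function name and the side-by-side packing are assumptions for illustration, not a Unity API), splitting one row-major side-by-side stereo frame into left-eye and right-eye textures looks like:

```python
def split_stereo(frame, width, height, bpp=3):
    """Split a side-by-side stereo frame (row-major, `bpp` bytes per pixel)
    into separate left-eye and right-eye pixel buffers."""
    row = width * bpp   # bytes per full stereo row
    half = row // 2     # bytes per eye per row
    left, right = bytearray(), bytearray()
    for y in range(height):
        start = y * row
        left += frame[start:start + half]
        right += frame[start + half:start + row]
    return bytes(left), bytes(right)

# Tiny 4x2 test frame: left-half pixels are 0x11 bytes, right-half 0x22.
w, h = 4, 2
frame = (b"\x11" * 6 + b"\x22" * 6) * h
l, r = split_stereo(frame, w, h)
print(l == b"\x11" * 12, r == b"\x22" * 12)  # True True
```

In a game engine you would upload each buffer to its own texture and assign one to a left-eye camera and one to a right-eye camera, as the steps below describe.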

Here is a Unity example solution I found on the SteamVR forums:

  1. Add the objects that represent a stereo pair to different layers
  2. Add SteamVR CameraRig (using plugin v2.2.0 here)
  3. Add a second camera next to the existing camera in the CameraRig
  4. Set one camera's Target Eye to Left and remove the right layer from the culling mask
  5. Set the other camera's Target Eye to Right and remove the left layer from the culling mask

@Lukk The cameras on the Vive/Vive Pro are ~95.1 degrees horizontal by 78.8 degrees vertical as seen by the game engines, and that hardware selection corresponds to the visual FOV of the headset. If you try to take a 180-degree feed and project it on a headset with a ~110-degree FOV, you're going to have major distortion and perception issues if you don't crop or correct for it within your implementation.
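A quick sanity check on the crop arithmetic, using the numbers above: if the feed uses an equidistant (f-theta) fisheye projection, angle maps roughly linearly to pixels, so keeping the central 110 degrees of a 180-degree feed keeps 110/180 of the width. A hedged Python sketch (both projection models here are simplifications of real lenses):

```python
import math

def crop_width_equidistant(src_width_px, src_fov_deg, target_fov_deg):
    """Pixels to keep from the centre of an equidistant (f-theta) fisheye
    feed when cropping from src_fov_deg down to target_fov_deg. Assumes
    angle maps linearly to image distance, an approximation for real
    180-degree lenses."""
    return round(src_width_px * target_fov_deg / src_fov_deg)

def crop_width_rectilinear(src_width_px, src_fov_deg, target_fov_deg):
    """Same, but for a rectilinear (pinhole-style) projection, where image
    distance goes with tan(angle / 2); undefined at 180 degrees."""
    t = math.tan(math.radians(target_fov_deg) / 2)
    s = math.tan(math.radians(src_fov_deg) / 2)
    return round(src_width_px * t / s)

# Cropping a 2880-px-wide 180-degree equidistant feed down to ~110 degrees:
print(crop_width_equidistant(2880, 180, 110))  # 1760
```

The two models give noticeably different crop widths for the same FOV pair, which is one reason the lens's actual projection has to be measured or taken from its datasheet rather than guessed.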


@VibrantNebula thank you very much for your answer; it helps a lot and quite possibly saved me a lot of wasted time 🙂 I'll look into the solutions you mentioned.
I hadn't thought the 180-degree FOV through; I'll use something smaller, like 110-120 degrees, and crop the overflow.

Thank you once again 🙂

 


@Lukk What engine have you settled on? I looked into this a bit more, and if you're using Unity, it looks like Photon Engine and their BOLT subsystem might be an appropriate networking route.

Another idea: this has mostly been done by robotics researchers and students, so you could do a quick search for teams that have accomplished this and send them a note asking if they can share any lessons learned, as this is definitely a niche use case.


@VibrantNebula I'm still waiting for my cameras; I haven't started the software part yet (it's more of a side project for me, not something for my daily work).
The idea is indeed to use it in some robotics-like system, and I did some research about it. It's true that this is a niche case; I only found one similar project, from back in 2015. Maybe I should really write to them as you suggested 🙂
Thanks!

