
nightcat33

Verified Members
  • Posts: 4
  • Joined
  • Last visited

Reputation: 0 Neutral


  1. The following is my proposed solution for this problem, and I would like to know whether it is correct.

     In Unity's virtual scene there are two virtual cameras, one for each eye, rendering different views. Two separate images are therefore generated at the same time, one for the left eye and one for the right eye, corresponding to the left and right displays of the VR headset (HTC VIVE Focus Vision). The images are transmitted over the DisplayPort interface. My questions are: How should the rendered results be transmitted? Should the left and right camera renders be sent alternately over DisplayPort, or should they be combined into a single image for transmission and then split on the VR side? Can Unity control the rendering order of the virtual cameras? Can Unity directly or indirectly control the data sent to DisplayPort? And how should the VR headset client handle the data it receives from DisplayPort?

     Transmission options:

     Alternating transmission of the left and right camera renders: the rendered results for the left and right cameras are transmitted alternately over DisplayPort. The advantage of this approach is its relatively simple logic and a steady transmission flow: Unity renders the images for the left and right eyes sequentially and sends them in turn. It is essential to ensure that the DisplayPort link has enough bandwidth for the required frame rate and resolution.

     Merging both camera renders into one image: the rendered results of the left and right cameras are combined into a single large image (for example, side by side) and transmitted in one go over DisplayPort. The benefit is reduced transmission complexity, since only one transfer per frame is needed, and the headset splits the combined image back into individual left-eye and right-eye images. This method may give better frame synchronization, but the headset has to handle the decoding and splitting of the data.

     Can Unity control the rendering order of virtual cameras? Unity lets you control the rendering order of cameras through scripts and custom rendering pipelines, for example with the OnRenderImage or Camera.Render methods. The Camera.depth property specifies each camera's rendering priority, ensuring they render in the desired order, and a Scriptable Render Pipeline (SRP) allows further customization, giving full control over the camera rendering sequence.

     Can Unity control the data sent to DisplayPort? Unity has no direct control over DisplayPort transmission. It can influence it only indirectly, for example through custom native plugins or external libraries that talk to the display hardware. Unity is mainly responsible for generating the rendered results, while the operating system and graphics drivers handle the DisplayPort output.

     How should the VR headset client handle the data received from DisplayPort? For the HTC VIVE Focus Vision, the headset processes the received images according to its internal rendering pipeline. With the combined-image approach, the client must split the large image back into individual left-eye and right-eye images and display them on the respective screens, so the headset's API or drivers need to be able to handle combined images and show the left- and right-eye content correctly.
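     To make the rendering-order part of this proposal concrete, below is a rough, untested Unity (C#) sketch of what I have in mind: the two eye cameras are rendered manually in a fixed order and the results are packed side by side into one texture. The camera references, the per-eye resolution, and the side-by-side layout are my own assumptions for illustration; none of this comes from the VIVE SDK.

     ```csharp
     using UnityEngine;

     // Sketch only: render the left/right eye cameras in an explicit order
     // and pack the two results side by side into a single texture.
     public class StereoPackedRenderer : MonoBehaviour
     {
         public Camera leftEyeCamera;   // assumed: one camera per eye in the scene
         public Camera rightEyeCamera;
         public int eyeWidth = 1920;    // assumed per-eye resolution
         public int eyeHeight = 1920;

         RenderTexture leftRT, rightRT, packedRT;

         void Start()
         {
             leftRT   = new RenderTexture(eyeWidth, eyeHeight, 24);
             rightRT  = new RenderTexture(eyeWidth, eyeHeight, 24);
             packedRT = new RenderTexture(eyeWidth * 2, eyeHeight, 0); // both eyes, side by side
             packedRT.Create();

             leftEyeCamera.targetTexture  = leftRT;
             rightEyeCamera.targetTexture = rightRT;

             // Disable automatic rendering; Render() is called manually below,
             // which fixes the order without relying on Camera.depth.
             leftEyeCamera.enabled  = false;
             rightEyeCamera.enabled = false;
         }

         void LateUpdate()
         {
             // Explicit render order: left eye first, then right eye.
             leftEyeCamera.Render();
             rightEyeCamera.Render();

             // Copy the left render into the left half and the right render into the right half.
             Graphics.CopyTexture(leftRT,  0, 0, 0, 0, eyeWidth, eyeHeight, packedRT, 0, 0, 0, 0);
             Graphics.CopyTexture(rightRT, 0, 0, 0, 0, eyeWidth, eyeHeight, packedRT, 0, 0, eyeWidth, 0);

             // packedRT now holds the combined frame; whatever happens to it afterwards
             // (encoding, scan-out over DisplayPort, etc.) is outside Unity's control.
         }
     }
     ```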
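     And for the receiving side of the combined-image option, a minimal sketch of the split step, assuming the packed frame arrives as a double-width texture (the type and method names are again just placeholders):

     ```csharp
     using UnityEngine;

     // Sketch only: split a side-by-side packed frame back into separate
     // left-eye and right-eye textures on the client.
     public static class StereoUnpacker
     {
         // 'packed' is assumed to be exactly twice as wide as one eye image.
         public static void Split(Texture packed, RenderTexture leftEye, RenderTexture rightEye)
         {
             int eyeWidth  = packed.width / 2;
             int eyeHeight = packed.height;

             // Left half of the packed frame -> left eye.
             Graphics.CopyTexture(packed, 0, 0, 0, 0, eyeWidth, eyeHeight, leftEye, 0, 0, 0, 0);
             // Right half of the packed frame -> right eye.
             Graphics.CopyTexture(packed, 0, 0, eyeWidth, 0, eyeWidth, eyeHeight, rightEye, 0, 0, 0, 0);
         }
     }
     ```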
  2. Hi there, I came across a similar question recently and wanted to ask for some advice. I have set up my PC as a video streaming server and plan to connect it to the HTC Vive Focus 3 via a USB cable. I'm interested in developing an Android application on the Focus 3 side that can access and stream content sent from the PC over USB. While I know that wireless data transfer is possible, I'd like to explore this USB-based approach. Do you have any insights or suggestions on how to build this setup? Thanks in advance for your help!
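     In case it helps frame the question: the approach I am currently leaning toward is to tunnel an ordinary TCP connection over the USB cable with `adb reverse tcp:5555 tcp:5555`, so the app on the headset simply connects to localhost and the traffic is carried over USB to the server on the PC. Below is a rough, untested C# sketch of that client side; the port number and the 4-byte length prefix are conventions I made up for illustration, not part of any SDK.

     ```csharp
     using System.Net.Sockets;
     using UnityEngine;

     // Sketch only: receive length-prefixed frames from the PC over a TCP
     // connection that adb reverse tunnels through the USB cable.
     public class UsbFrameReceiver : MonoBehaviour
     {
         TcpClient client;
         NetworkStream stream;

         void Start()
         {
             // localhost:5555 on the headset is forwarded to the PC by adb reverse.
             client = new TcpClient("127.0.0.1", 5555);
             stream = client.GetStream();
             // In a real app this reading would run on a background thread.
         }

         // Reads one length-prefixed frame (e.g. one encoded video packet).
         byte[] ReadFrame()
         {
             byte[] header = ReadExactly(4);
             int length = System.BitConverter.ToInt32(header, 0);
             return ReadExactly(length);
         }

         byte[] ReadExactly(int count)
         {
             byte[] buffer = new byte[count];
             int offset = 0;
             while (offset < count)
             {
                 int read = stream.Read(buffer, offset, count - offset);
                 if (read <= 0) throw new System.IO.IOException("Connection closed");
                 offset += read;
             }
             return buffer;
         }

         void OnDestroy()
         {
             stream?.Dispose();
             client?.Close();
         }
     }
     ```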
  3. Hi, I’m new to VR development and have recently purchased a VIVE Focus 3. I’m setting up a camera on a robot and plan to have a PC act as a server to receive live video feeds. I want to develop an Android application for the Focus 3 that will act as a client to display these videos. The Focus 3 and PC will be connected via a Type-C USB cable, with the PC sending encoded video data frames to the Focus 3 for rendering. If I choose to use the Wave Unity SDK for developing the client-side app, how should I access the USB data input source on the Focus 3? Additionally, on the PC side, what method should I use to transmit the data? Can I use a USB library, and if so, which one would you recommend? I’m planning to use Linux Ubuntu OS. Also, what video encoding format would be suitable for this setup? Thank you!
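     For what it is worth, the direction I have been sketching avoids a raw USB library entirely: run a plain TCP server on the PC and let adb carry the connection over the cable (`adb reverse tcp:5555 tcp:5555`), and encode the frames as H.264, which Android's MediaCodec API can typically decode in hardware. Below is a rough, untested .NET sketch of the PC-side sender loop; the port, the 4-byte length prefix, and the GetNextEncodedFrame() placeholder are my own assumptions, not an established API.

     ```csharp
     using System;
     using System.Net;
     using System.Net.Sockets;

     // Sketch only: PC-side server that sends length-prefixed encoded video
     // frames to the headset through an adb-reverse TCP tunnel over USB.
     class FrameSender
     {
         static void Main()
         {
             // adb reverse delivers the headset's connection to localhost:5555 here.
             var listener = new TcpListener(IPAddress.Loopback, 5555);
             listener.Start();
             Console.WriteLine("Waiting for the headset to connect...");
             using var client = listener.AcceptTcpClient();
             using var stream = client.GetStream();

             while (true)
             {
                 // Placeholder: a real pipeline would pull frames from the
                 // robot camera and an H.264 encoder here.
                 byte[] encodedFrame = GetNextEncodedFrame();

                 stream.Write(BitConverter.GetBytes(encodedFrame.Length), 0, 4);
                 stream.Write(encodedFrame, 0, encodedFrame.Length);
             }
         }

         static byte[] GetNextEncodedFrame() => Array.Empty<byte>();
     }
     ```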