Everything posted by simmb91

  1. Thank you for the tip @Daniel_Y. I wasn't sure the method gives the right information, according to this issue: https://github.com/ValveSoftware/openvr/issues/1100

     For now, when I acquire the TrackedCamera framebuffer, I simply use it as a texture on a shader applied to a surface:

     using UnityEngine;
     using Valve.VR;

     public class SteamVR_CameraBackground : MonoBehaviour
     {
         public MeshRenderer TargetRenderer;
         public bool Undistorted = false;

         Texture2D m_videoTex;

         private void OnEnable()
         {
             // Acquire the tracked camera video stream (distorted or undistorted).
             SteamVR_TrackedCamera.VideoStreamTexture videoSource = SteamVR_TrackedCamera.Source(Undistorted);
             videoSource.Acquire();
             if (!videoSource.hasCamera)
                 enabled = false;
         }

         private void OnDisable()
         {
             SteamVR_TrackedCamera.VideoStreamTexture videoSource = SteamVR_TrackedCamera.Source(Undistorted);
             videoSource.Release();
         }

         private void Update()
         {
             // Push the latest camera frame to the target material every frame.
             SteamVR_TrackedCamera.VideoStreamTexture videoSource = SteamVR_TrackedCamera.Source(Undistorted);
             m_videoTex = videoSource.texture;
             TargetRenderer.material.mainTexture = m_videoTex;
         }
     }

     The texture layout looks like this: (image: the left and right camera views stacked top/bottom, upside down)

     As I work in single-pass stereo mode, here is how I re-project the image:

     Shader "Unlit/StereoDistort"
     {
         Properties
         {
             _MainTex("Texture", 2D) = "white" {}
         }
         SubShader
         {
             Tags { "RenderType" = "Opaque" }

             Pass
             {
                 CGPROGRAM
                 #pragma vertex vert
                 #pragma fragment frag
                 #include "UnityCG.cginc"

                 struct appdata_t
                 {
                     float4 vertex : POSITION;
                     float2 uv : TEXCOORD0;
                     UNITY_VERTEX_INPUT_INSTANCE_ID
                 };

                 struct v2f
                 {
                     float4 vertex : SV_POSITION;
                     float2 uv : TEXCOORD0;
                     UNITY_VERTEX_INPUT_INSTANCE_ID
                     UNITY_VERTEX_OUTPUT_STEREO
                 };

                 uniform sampler2D _MainTex;
                 uniform float4 _MainTex_ST;

                 v2f vert(appdata_t v)
                 {
                     v2f o;
                     UNITY_SETUP_INSTANCE_ID(v);
                     UNITY_INITIALIZE_OUTPUT(v2f, o);
                     UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
                     o.vertex = UnityObjectToClipPos(v.vertex);
                     o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                     return o;
                 }

                 fixed4 frag(v2f i) : SV_Target
                 {
                     UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
                     // Each eye samples its own half of the top/bottom image;
                     // switch the 0.5 and 0 values to switch the left and right eyes.
                     float offset = lerp(0.5, 0, unity_StereoEyeIndex);
                     // The stream arrives upside down, so flip v while selecting the half.
                     float vCoord = offset + (1. - i.uv.y) * .5;
                     fixed4 col = tex2D(_MainTex, float2(i.uv.x, vCoord));
                     return col;
                 }
                 ENDCG
             }
         }
     }

     Result: (image)

     In your ViveSR Unity package, you use two surfaces, one for each eye, whereas I use only one surface. I don't notice an offset problem between the eyes, though. The main issue is still the mismatch when I rotate my head or move forward: the 3D scene doesn't stay in the same place relative to the camera framebuffer. I am pretty sure I am missing a step to make it work, but I can't figure out which one (a sketch of the camera extrinsics query I suspect is part of that step follows below). Thank you again.
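     For reference, here is an untested sketch of that suspected missing step: reading the camera-to-head extrinsics so the background surface can be attached to the camera pose instead of the raw HMD pose. I am assuming here that the runtime fills Prop_CameraToHeadTransform_Matrix34 for the HMD; there also seems to be a per-camera array property for the stereo pair, which I have not tried.

     using UnityEngine;
     using Valve.VR;

     // Untested sketch: read the camera-to-head extrinsics from OpenVR.
     // Assumes the runtime fills Prop_CameraToHeadTransform_Matrix34 for the HMD.
     public class TrackedCameraExtrinsics : MonoBehaviour
     {
         void Start()
         {
             ETrackedPropertyError error = ETrackedPropertyError.TrackedProp_Success;
             HmdMatrix34_t matrix = OpenVR.System.GetMatrix34TrackedDeviceProperty(
                 OpenVR.k_unTrackedDeviceIndex_Hmd,
                 ETrackedDeviceProperty.Prop_CameraToHeadTransform_Matrix34,
                 ref error);

             if (error == ETrackedPropertyError.TrackedProp_Success)
             {
                 // RigidTransform converts the row-major OpenVR matrix
                 // into a Unity position/rotation pair.
                 var cameraToHead = new SteamVR_Utils.RigidTransform(matrix);
                 Debug.Log("camera-to-head pos: " + cameraToHead.pos
                     + " rot: " + cameraToHead.rot.eulerAngles);
             }
         }
     }

     Another thing I noticed in SteamVR_TrackedCamera.cs is that the video frame header carries the device pose at capture time, so the surface should probably follow that pose rather than the pose at render time, since the video lags behind the tracking.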
  2. Hello, I was wondering if anyone could share the algorithm or workflow used, for example by SRWorks, to make a see-through mode.

     First, is there any way to get each camera's intrinsics from OpenVR? Or do I need to call another API for that?

     In OpenVR I am able to acquire the current front-facing camera framebuffer, which is upside down and contains the left and right camera images stacked top/bottom in the same texture (I actually use the distorted framebuffer). The problem comes after that: what can I do with this image?

     In Unity, I apply the image to a mesh material as a texture. It's a particular mesh: an anamorphic plane (96° horizontally, 80° vertically). The mesh is transformed by the HMD matrices and offset a few centimeters along the forward axis. Then I scale the mesh up so that it ends up very far away without changing anything in the field of view (see the sizing sketch below).

     If I test it this way, everything seems promising, but as soon as I add 3D objects to the scene, we can clearly see it is not working: the rates of rotation/translation don't match. Any idea? A sketch of the intrinsics query I have in mind also follows below. Thank you.

     @Daniel_Y @reneeclchen @Andy.YC_Wang @Jad
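     In case it helps, this is the kind of intrinsics query I have in mind (untested sketch; I am assuming an openvr_api.cs recent enough that GetCameraIntrinsics takes a camera index for the stereo pair, since older bindings omit that parameter):

     using UnityEngine;
     using Valve.VR;

     // Untested sketch: query per-camera intrinsics from OpenVR.
     // Assumes a binding where GetCameraIntrinsics takes a camera index
     // (0 = left, 1 = right); older openvr_api.cs versions omit it.
     public class TrackedCameraIntrinsics : MonoBehaviour
     {
         void Start()
         {
             for (uint cameraIndex = 0; cameraIndex < 2; cameraIndex++)
             {
                 HmdVector2_t focalLength = new HmdVector2_t();
                 HmdVector2_t center = new HmdVector2_t();

                 EVRTrackedCameraError error = OpenVR.TrackedCamera.GetCameraIntrinsics(
                     OpenVR.k_unTrackedDeviceIndex_Hmd,
                     cameraIndex,
                     EVRTrackedCameraFrameType.Distorted,
                     ref focalLength,
                     ref center);

                 if (error == EVRTrackedCameraError.None)
                 {
                     // Focal length and principal point are in pixels.
                     Debug.Log(string.Format("camera {0}: fx={1} fy={2} cx={3} cy={4}",
                         cameraIndex, focalLength.v0, focalLength.v1, center.v0, center.v1));
                 }
             }
         }
     }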
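     And for the plane sizing: I use an anamorphic plane, but for a flat quad the equivalent math is basic trigonometry. At distance d, a camera with horizontal FOV covers a width of 2 * d * tan(fov / 2), and likewise for the height. A minimal sketch (the 96°/80° values are from my setup, and it assumes a 1x1 unit quad mesh):

     using UnityEngine;

     // Untested sketch of the plane placement described above: a quad kept
     // 'Distance' meters in front of the HMD and scaled so it exactly fills
     // a 96 x 80 degree field of view, whatever the distance.
     public class SeeThroughQuad : MonoBehaviour
     {
         public Transform Hmd;                 // tracked head transform
         public float HorizontalFovDeg = 96f;
         public float VerticalFovDeg = 80f;
         public float Distance = 10f;          // pushed far to limit parallax error

         void LateUpdate()
         {
             // Size subtended by the FOV at the chosen distance.
             float width = 2f * Distance * Mathf.Tan(0.5f * HorizontalFovDeg * Mathf.Deg2Rad);
             float height = 2f * Distance * Mathf.Tan(0.5f * VerticalFovDeg * Mathf.Deg2Rad);

             transform.position = Hmd.position + Hmd.forward * Distance;
             transform.rotation = Hmd.rotation;
             transform.localScale = new Vector3(width, height, 1f);
         }
     }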