Thank you for the tip @Daniel_Y. I wasn't sure the method returned the right information, according to this issue: https://github.com/ValveSoftware/openvr/issues/1100. For now, when I acquire the TrackedCamera framebuffer, I simply use it as a texture in a shader applied to a surface:
using UnityEngine;
using Valve.VR;

public class SteamVR_CameraBackground : MonoBehaviour
{
    public MeshRenderer TargetRenderer;
    public bool Undistorted = false;

    Texture2D m_videoTex;

    private void OnEnable()
    {
        SteamVR_TrackedCamera.VideoStreamTexture videoSource = SteamVR_TrackedCamera.Source(Undistorted);
        videoSource.Acquire();
        if (!videoSource.hasCamera)
        {
            enabled = false;
        }
    }

    private void OnDisable()
    {
        SteamVR_TrackedCamera.VideoStreamTexture videoSource = SteamVR_TrackedCamera.Source(Undistorted);
        videoSource.Release();
    }

    private void Update()
    {
        SteamVR_TrackedCamera.VideoStreamTexture videoSource = SteamVR_TrackedCamera.Source(Undistorted);
        m_videoTex = videoSource.texture;
        TargetRenderer.material.mainTexture = m_videoTex;
    }
}
The texture layout looks like this:
Since I work in single-pass stereo mode, here's how I re-project the image:
Shader "Unlit/StereoDistort"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata_t
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
                UNITY_VERTEX_INPUT_INSTANCE_ID
                UNITY_VERTEX_OUTPUT_STEREO
            };

            uniform sampler2D _MainTex;
            uniform float4 _MainTex_ST;

            v2f vert(appdata_t v)
            {
                v2f o;
                UNITY_SETUP_INSTANCE_ID(v);
                UNITY_INITIALIZE_OUTPUT(v2f, o);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
                // Pick this eye's half of the stacked frame;
                // swap the 0.5 and 0 values to swap the left and right eyes.
                float offset = lerp(0.5, 0, unity_StereoEyeIndex);
                float vCoord = offset + (1. - i.uv.y) * .5;
                fixed4 col = tex2D(_MainTex, float2(i.uv.x, vCoord));
                return col;
            }
            ENDCG
        }
    }
}
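To double-check which half of the stacked frame each eye samples, the fragment shader's V remapping reduces to a plain function. Here's a standalone C# sketch with no Unity dependency (`StereoUvRemap` and `RemapV` are names I made up; the top-half/bottom-half assignment is the same assumption as in the shader's lerp):

```csharp
using System;

static class StereoUvRemap
{
    // Maps a surface UV v-coordinate (0..1) into the stacked camera frame,
    // assuming eye 0 reads the top half (v in 0.5..1) and eye 1 the bottom
    // half (v in 0..0.5), with the image flipped vertically as in the shader.
    public static float RemapV(float uvY, int stereoEyeIndex)
    {
        float offset = stereoEyeIndex == 0 ? 0.5f : 0f; // lerp(0.5, 0, eyeIndex)
        return offset + (1f - uvY) * 0.5f;
    }
}
```

With eye index 0 this maps v = 0..1 onto 1.0..0.5 (top half, vertically flipped), and with eye index 1 onto 0.5..0.0, which is why swapping the 0.5 and 0 constants swaps the eyes.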
Result: In your ViveSR Unity package, you use two surfaces, one for each eye, whereas I use only one surface. I don't notice an offset problem between the eyes, though. The main issue is still the mismatch when I rotate my head or move forward: the 3D scene doesn't stay in the same place relative to the camera framebuffer. I'm pretty sure I'm missing a step to make it work, but I can't figure out what.
Thank you again.