
Getting Screen Position of the Gaze in Unity



I'm trying to display a UI element in the position of the gaze collision. I found out that just using:

Camera.main.WorldToScreenPoint(collision.position, Camera.MonoOrStereoscopicEye.Left);

does not work correctly for the "LeftEye" display: there seems to be some screen cropping going on (about 15% on each side).

Does anybody know a reliable way to obtain a screen point from a world point in SteamVR in Unity, please?

@Daniel_Y @Corvus
 



Hi @Corvus

I am trying to save eye position in screen coordinates (where the observer is looking in the scene in X,Y,Z pixels) for an analysis. For now I've been trying to raycast to get a hit point with world geometry, and to then convert that hit point to screen coordinates, but the values I'm getting seem off. 
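Roughly, my attempt looks like this (just a sketch; the GetGazeRay usage is from the SRanipal samples as I remember them, so treat the exact signature as an assumption):

```csharp
using UnityEngine;
using ViveSR.anipal.Eye; // SRanipal eye-tracking SDK

public class GazeScreenRecorder : MonoBehaviour
{
    public Camera vrCamera;

    private void Update()
    {
        // GetGazeRay gives the combined gaze ray in local (camera) space;
        // signature assumed from the SRanipal sample scenes.
        if (SRanipal_Eye.GetGazeRay(GazeIndex.COMBINE, out Vector3 origin, out Vector3 direction))
        {
            // Transform the gaze into world space and raycast against scene geometry.
            Vector3 worldOrigin = vrCamera.transform.TransformPoint(origin);
            Vector3 worldDirection = vrCamera.transform.TransformDirection(direction);
            if (Physics.Raycast(worldOrigin, worldDirection, out RaycastHit hit))
            {
                // Convert the world-space hit point to screen pixels.
                Vector3 screenPoint = vrCamera.WorldToScreenPoint(hit.point);
                Debug.Log(screenPoint);
            }
        }
    }
}
```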

When I went over the documentation, I didn't think focusInfo was quite what I needed. I looked again and I'm still unsure; can you clarify what focusInfo.point is?

Thanks so much for your help!

Avi 

 



I did not figure it out. Would you please post details, @Av?

I know where the issue lies, though: it's the difference between the view provided by "Left Eye" and the full image rendered to the headset.

I have made a simple example, where the camera coordinates of an object are printed directly on screen. The project is attached (EyeTrackingIssue.zip), but the full code is the following:
 

using UnityEngine;
using UnityEngine.UI;
using UnityEngine.XR;

public class DisplaySSCoordinates : MonoBehaviour
{
    public Transform debugSphere;   // object whose position we project
    public Text displayText;        // UI text showing the result
    public Camera vrCamera;         // the VR camera

    private void Update()
    {
        // Project the sphere's world position into the left eye's screen space.
        var coords = vrCamera.WorldToScreenPoint(debugSphere.position, Camera.MonoOrStereoscopicEye.Left);
        // Normalize by the per-eye render texture size to get [0,1] units.
        var eyeTextureSize = new Vector2(XRSettings.eyeTextureWidth, XRSettings.eyeTextureHeight);
        var units = new Vector2(coords.x / eyeTextureSize.x, coords.y / eyeTextureSize.y);
        displayText.text = $"({units.x:F2},{units.y:F2})";
    }
}


You can see that in the bottom-left and top-right corners, the coordinates are not what I would expect. For the center it is correct:

[Screenshots attached: Center, TopRight, BottomLeft]

However, when I switch from "LeftEye" to "OcclusionMesh", the coordinates are correct:

[Screenshot attached: OcclusionMesh]

This is obviously not only an SRanipal issue; it will probably also happen with other eye-tracking systems. However, I think this is the prescribed way of getting the screen-space position (though I'm not sure, and I'm happy to use a different workflow).

I noticed that the error changes with the aspect ratio, which makes sense, as the screen crop of the "LeftEye" view changes too.

I'd need to know how the "EyeTexture" is cropped into the game view in order to calculate the position correctly. Any recommendations?
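One direction I'm considering (an untested sketch; it assumes Camera.GetStereoProjectionMatrix and Camera.GetStereoViewMatrix return the per-eye matrices actually used for rendering) is to project the world point manually, which would yield normalized eye-texture coordinates independent of however the game view crops the eye texture:

```csharp
using UnityEngine;

public static class EyeTextureProjection
{
    // Projects a world point into normalized [0,1] coordinates of the left
    // eye's render texture, bypassing WorldToScreenPoint and the game view.
    public static Vector2 WorldToLeftEyeUV(Camera cam, Vector3 worldPoint)
    {
        Matrix4x4 proj = cam.GetStereoProjectionMatrix(Camera.StereoscopicEye.Left);
        Matrix4x4 view = cam.GetStereoViewMatrix(Camera.StereoscopicEye.Left);

        Vector4 clip = proj * view * new Vector4(worldPoint.x, worldPoint.y, worldPoint.z, 1f);
        // Perspective divide: clip space -> normalized device coordinates [-1, 1].
        Vector3 ndc = new Vector3(clip.x, clip.y, clip.z) / clip.w;
        // NDC -> [0, 1] texture coordinates.
        return new Vector2(ndc.x * 0.5f + 0.5f, ndc.y * 0.5f + 0.5f);
    }
}
```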



 

 

EyeTrackingIssue.zip



Hi @Adam Streck

Apologies for only now getting back to you, I missed your message. I don't have my HMD (it's in my research lab and it's tough to get to with COVID-19 restrictions) so it's hard for me to test your code and troubleshoot. What I can say is that we ended up using a different workflow that with some tweaking seems to work for our needs. I'm posting the steps we took below. Let me know if you have any questions.

We started by setting up a raycast with the camera as the origin and the gaze as the direction. We took the hit point we got from the raycast and used WorldToScreenPoint to convert it into pixels.

There's an obvious limitation: for the raycast to work, there needs to be a game object for the ray to collide with. Your method avoids that limitation, which is awesome!
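The steps above, as a minimal sketch (vrCamera and gazeDirection stand in for your own camera reference and the world-space gaze direction from your eye tracker):

```csharp
using UnityEngine;

public class GazeToScreenPoint : MonoBehaviour
{
    public Camera vrCamera; // the HMD camera

    // gazeDirection is assumed to already be in world space.
    public Vector3 ScreenPointFromGaze(Vector3 gazeDirection)
    {
        // 1. Raycast from the camera along the gaze direction.
        var ray = new Ray(vrCamera.transform.position, gazeDirection);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            // 2. Convert the world-space hit point to screen pixels.
            return vrCamera.WorldToScreenPoint(hit.point);
        }
        return Vector3.zero; // gaze ray hit no collider
    }
}
```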

Next time I'm able to access the HMD, I'll give your code a try and see what I can find. Sorry I can't be more helpful; if you do decide to go with our solution and have questions, let me know.

Avi 

 

 



Thanks for the reply either way. A raycast is not an option for us, but I also don't see how it would help: I'd need to do the inverse projection-matrix calculation, and I don't know the projection matrix for the VR view.

It seems that this whole issue is actually a Unity bug / behaviour, so I'm trying to resolve it with them now; see http://fogbugz.unity3d.com/default.asp?1304082_gsk09e85lf3m2k7i

Edited by DrAdamStreck
