Sorry, this took much longer than an hour 😛 https://github.com/unity-sXR/sXR
It's not well documented yet, but I think all the main kinks have been worked out, and it should be getting a lot of updates in the next week. You should be able to download the GitHub repository as a zip file, unzip it, and point Unity Hub at the project.
It was built in Unity v2023.1.0a14, so I'd recommend going with that version or newer. It also requires Microsoft's .NET 2.1, Vive Console (through Steam), and SteamVR. All the required packages (including the SRanipal SDK) should already be incorporated into the project.
The functionality you're looking for is in Assets/sxr/Backend/Singletons/GazeHandler.cs.
To use SRanipal, click the sxr tab on Unity's menu bar, open the settings menu, and check "Use SRanipal". It will work right now regardless, since the conditional check is commented out, but I plan on having this work for all devices, so in a soon-to-be update you'll have to have that option checked to use SRanipal instead of OpenXR. There's also an autosave option which is pretty nice. The package can also handle a VR GUI (for displaying task instructions), writing data, full-screen shaders, camera tracking, sounds, controllers, and a bunch of other nifty stuff. Right now it's kind of spread out, but in the next week there will be simple one-line commands for everything, all within the main sxr class, e.g.:
if (sxr.ControllerButton(sxr.Buttons.RightTrigger)) {
    sxr.ShowImageGUI("instructions");                               // show a task-instructions image on the VR GUI
    sxr.StartTimer("instructionTimer");                             // start a named timer
    sxr.PlaySound("instructionsSound");                             // play the instructions audio clip
    sxr.WriteToTaggedFile("subjectEyeData", sxr.GetFullEyeData());  // append full eye data to a tagged file
}
Right now, you have to access the GazeHandler info by declaring the verboseData variable as public and then using GazeHandler.Instance.verboseData (there's a quick sketch of this at the bottom of this message). If you type that into your IDE, it will show you all the SRanipal data you can access, like pupil size, how open the eyes are, etc. I should have all of those implemented as sxr functions tomorrow (but I also thought this would take an hour to get to you, so maybe not :P). In the near future it's going to have more features (like the ability to replay trials with the user's gaze highlighted), plus Google Colab notebooks for visualizing paths, analyzing eye info, etc. I should be putting out a paper on it soon and would love feedback/feature requests. If you end up using it and run into questions, shoot me an email: justin_kasowski@ucsb.edu.
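Here's a minimal sketch of what that access looks like from a MonoBehaviour (assuming you've already changed verboseData to public in GazeHandler.cs; the EyeDataLogger name is just for illustration, and the field names follow SRanipal's VerboseData struct, so they may differ slightly between SDK versions):

using UnityEngine;

public class EyeDataLogger : MonoBehaviour {
    void Update() {
        // Requires verboseData to be declared public in GazeHandler.cs
        var data = GazeHandler.Instance.verboseData;

        // Field names follow SRanipal's VerboseData/SingleEyeData structs;
        // check your SDK version if these don't resolve in your IDE
        Debug.Log("Left pupil (mm): " + data.left.pupil_diameter_mm +
                  ", left eye openness: " + data.left.eye_openness);
    }
}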
Good luck!