Hi @user-ff2367! Yes, exactly: every mount geometry needs to be calibrated once to determine where it places the module within the headset. The resulting calibration data needs to be placed in the config.json file of the app. No further calibration should be necessary on a per-user or per-session basis. The calibration process is documented here:
https://docs.pupil-labs.com/neon/neon-xr/build-your-own-mount/
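Purely as an illustration of what "calibration data in config.json" means, a sketch like the following; every field name here is hypothetical, and the actual schema is described in the linked docs:
```json
{
    "_note": "hypothetical sketch only; see the linked docs for the real schema",
    "mountOffset": {
        "position": [0.0, 0.03, 0.07],
        "rotation": [0.0, 0.0, 0.0, 1.0]
    }
}
```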
Hello @marc! Thank you very much for the information
Hello! I'm currently exploring the Neon XR Core package with the XR Interaction Toolkit in Unity and I'm curious about its compatibility, specifically regarding the XRGazeInteractor and its support for OpenXR bindings. Does it support OpenXR bindings, or can it be bridged with the XRGazeInteractor?
I would appreciate any experiences or advice on using Neon XR with the XRGazeInteractor/XR Interaction Toolkit!
Hi @user-435838! Our developer has prepared a sample for this use case here, along with some instructions, as you can see below:
Based on the Unity3D docs, it is possible to use the XRGazeInteractor even without OpenXR bindings: "If a device that supports eye tracking does not support these bindings, you will need to retrieve the gaze Pose data from the device and update the XRGazeInteractor Transform with an additional provider."
So, basically you need to:
1. Create a new project.
2. Import the XR Interaction Toolkit with Starter Assets and XR Device Simulator.
3. Import Neon XR as mentioned here.
4. Add the XRGazeInteractorBridge component with the Gaze Interactor as target.
5. Disable input tracking on the XR Controller component which is attached to the Gaze Interactor gameObject.
6. Add a callback for the Gaze Data Ready event, which is the OnGazeDataReady method from XRGazeInteractorBridge.cs (XRGazeInteractorBridge is the implementation of the additional provider mentioned above; see the sketch below).
7. Add the XR Device Simulator prefab into the scene.
Note: The scene has some simulators you may want to disable when building your application.
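For reference, a minimal sketch of what such an additional provider can look like, assuming gaze data arrives as a ray origin and direction in world space; the sample's own XRGazeInteractorBridge.cs is the authoritative version, and the callback signature below is illustrative:
```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Illustrative sketch of an "additional provider" in the sense of the
// Unity docs quoted above: it drives the XRGazeInteractor's Transform
// from externally supplied gaze data instead of OpenXR bindings.
// The (origin, direction) callback signature is an assumption here.
public class GazeInteractorBridgeSketch : MonoBehaviour
{
    // The Gaze Interactor whose Transform is driven by incoming gaze data.
    [SerializeField] private XRGazeInteractor gazeInteractor;

    // Hook this up to the Gaze Data Ready event (step 6 above).
    public void OnGazeDataReady(Vector3 origin, Vector3 direction)
    {
        gazeInteractor.transform.SetPositionAndRotation(
            origin,                              // where the gaze ray starts
            Quaternion.LookRotation(direction)); // where it points
    }
}
```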
When using Neon XR, are the gaze overlay and the VR scene automatically recorded together, as depicted in the Microsoft Flight Simulator demonstration? I ask because this wasn't the case with the Core device, which required a rather convoluted method of capturing both the VR scene and gaze overlay.
Hi @user-057596! The video demo requires a couple of things:
Essentially, it is similar to what is done here.
In this case, Flight Simulator allows you to show both views (HMD and PC mirrored), so recording the latter is easier and usually less distorted. But recording directly from SteamVR should also be easy; there is even a plugin for that.
Then you need to translate the gaze coordinates. Neon outputs them in scene camera space, so you just need to find the relationship to the content you are seeing.
If you look at some targets and you have the gaze position and the pixel coordinates of those targets, it is a simple correlation, kinda like a traditional calibration in eye tracking. But since the relationship between the scene camera and the content is "not" going to change, and NeonNet takes care of the relationship between your eyes and the module, you only need to do it once.
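As a rough sketch of that one-time correlation, assuming a per-axis linear mapping pixel = a * gaze + b is good enough for your setup (a homography would be the more general solution):
```csharp
using System;
using System.Linq;

// Illustrative one-time fit from scene-camera gaze coordinates to content
// pixel coordinates, using a few samples where the user looks at targets
// with known pixel positions. The linear per-axis model is an assumption.
static class GazeToContentMapping
{
    // Ordinary least-squares fit of a and b for one axis.
    public static (double a, double b) Fit(double[] gaze, double[] pixel)
    {
        double mg = gaze.Average(), mp = pixel.Average();
        double cov = gaze.Zip(pixel, (g, p) => (g - mg) * (p - mp)).Sum();
        double sxx = gaze.Sum(g => (g - mg) * (g - mg));
        double a = cov / sxx;
        return (a, mp - a * mg); // pixel ≈ a * gaze + b
    }
}
```
Fit x and y separately from the target samples; since the scene-camera-to-content relationship doesn't change, the resulting (a, b) pairs can be stored and reused.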
PS. If you want the best temporal sync, have a look here
Thanks @user-d407c1 that was really helpful 💪🏻
Hello Miguel, thank you so much!! I'll try it out today.