Hi @user-8779ef! We have added a new channel for Neon XR here, so we don't mix discussions with the Core VR/AR system. I'll respond here to your message!
I am not completely sure I understand all the details of your concern yet. But let me provide some more details on how the mapping of gaze data into the virtual world works. Maybe this will clear some things up!
First, note that Neon’s eye tracking is slippage invariant, so movement of the Neon Module on the face of a wearer is fully compensated in the gaze estimates.
Neon’s gaze estimates are initially given as gaze directions originating from the physical scene camera. The Neon XR Core package includes functionality to convert them into gaze directions in the virtual world. So when a raw gaze direction R is received in Unity, it is not used directly as a virtual gaze direction, but first undergoes a transformation into virtual coordinates.
This works as follows:
1. The raw gaze direction R is given in the coordinate system of the Neon Module’s physical scene camera.
2. Using the pose of the Neon Module relative to the virtual camera (determined via the hardware calibration of the mount), R is transformed into the virtual camera’s coordinate system. This yields the virtual gaze direction.
3. Optionally, the virtual gaze direction can be intersected with the scene geometry to obtain a 3D gaze point in the virtual world.
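To make these steps concrete, here is a minimal Unity C# sketch of the mapping under the assumptions above: the calibrated pose of the Neon Module relative to the virtual camera is represented as a rotation and an offset, and the intersection with geometry is a simple physics raycast. All class, field, and method names are illustrative placeholders, not the actual Neon XR Core package API.

```csharp
using UnityEngine;

// Illustrative sketch only: names below are made up for this example and are
// NOT the actual Neon XR Core package API.
public class GazeMappingSketch : MonoBehaviour
{
    // Pose of the Neon Module's scene camera relative to the virtual (XR) camera,
    // as it would be estimated once during the hardware calibration of the mount.
    public Quaternion moduleToCameraRotation = Quaternion.identity;
    public Vector3 moduleToCameraOffset = Vector3.zero;

    [SerializeField] private Camera xrCamera;

    // Steps 1 + 2: take a raw gaze direction R (scene-camera coordinates) and
    // express it in world space via the virtual camera's transform.
    public Vector3 ToWorldGazeDirection(Vector3 rawGazeDirection)
    {
        // Rotate the raw direction from scene-camera coordinates into the
        // virtual camera's local coordinate system using the calibrated pose.
        Vector3 directionInCameraSpace = moduleToCameraRotation * rawGazeDirection;

        // Express it in world coordinates so it can be used in the scene.
        return xrCamera.transform.TransformDirection(directionInCameraSpace).normalized;
    }

    // Step 3 (optional): intersect the virtual gaze direction with the scene
    // geometry to obtain a 3D gaze point.
    public bool TryGetGazePoint(Vector3 rawGazeDirection, out Vector3 gazePoint)
    {
        // Ray origin: the calibrated position of the scene camera in world space.
        Vector3 origin = xrCamera.transform.TransformPoint(moduleToCameraOffset);
        Ray gazeRay = new Ray(origin, ToWorldGazeDirection(rawGazeDirection));

        if (Physics.Raycast(gazeRay, out RaycastHit hit, 100f))
        {
            gazePoint = hit.point;
            return true;
        }

        // No geometry hit: fall back to a point far along the gaze ray.
        gazePoint = origin + gazeRay.direction * 100f;
        return false;
    }
}
```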
So the gaze estimates implicitly take into account the eyes’ pose with respect to the VR optics/display, because the eyes’ pose with respect to the Module is respected in the estimates. Knowledge of what is shown on the screen is only used during calibration. After that it is not used, except possibly in the last step, when the gaze direction may be intersected with the geometry that is shown on the screen.
I hope this makes the inner workings of the gaze mapping clearer. I’m not sure this addresses your concern though. Let us know if you still see an issue and I’d be happy to investigate further!
Hello @marc, thank you for this detailed explanation! If I understand correctly, to use Neon XR without the Pupil-provided mount, I need to perform the hardware calibration myself (to estimate the Neon Module’s pose in relation to the virtual camera), right?
@marc ...one more related question. Does the Neon draw upon the virtual camera's projection matrix when producing its estimate of gaze direction in VR?
No, currently it does not. The gaze data in the virtual space consists of 3D directions and (if you intersect them with the geometry) 3D points. This data is not explicitly projected onto the virtual camera to obtain 2D gaze points.
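If 2D gaze coordinates are needed, such a projection could be done manually on top of the 3D data. The sketch below uses standard Unity camera methods to illustrate what that would involve; it is not functionality of the Neon XR Core package, and the names are illustrative.

```csharp
using UnityEngine;

// Sketch: manually projecting a 3D gaze point onto the virtual camera to get
// 2D coordinates. The Neon XR package does not do this itself.
public static class GazeProjectionSketch
{
    // Returns the gaze point in screen pixels of the virtual camera.
    public static Vector2 ProjectToScreen(Camera xrCamera, Vector3 worldGazePoint)
    {
        // WorldToScreenPoint applies the camera's view and projection matrices;
        // x/y are pixel coordinates, z is the distance from the camera.
        Vector3 screenPoint = xrCamera.WorldToScreenPoint(worldGazePoint);
        return new Vector2(screenPoint.x, screenPoint.y);
    }

    // Returns the gaze point in normalized viewport coordinates ([0, 1] on screen).
    public static Vector2 ProjectToViewport(Camera xrCamera, Vector3 worldGazePoint)
    {
        Vector3 viewportPoint = xrCamera.WorldToViewportPoint(worldGazePoint);
        return new Vector2(viewportPoint.x, viewportPoint.y);
    }
}
```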