hi, i'm trying to use the hmd choreography for calibration and i'm following the example on the hmd_eyes repository here: https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/hmd_calibration_client.py. i'm confused about what exactly the reference data i'm sending should be. in the demo it seems like you send two identical positions that exactly correspond to the "scene" coordinates of the calibration target. why are there two copies? and should they be the same? thanks!
Hi @user-691c90 , just to be sure I understand, are you trying to use the HMD choreography with a VR headset or on a flat monitor?
with a VR headset
Ok, and are you using Unity? May I ask if the standard calibration scene did not work? Because if you use that scene, then you don't need to write your own routines in Python.
No, i'm trying to add eye tracking to an existing python project, but this document may be helpful, thanks. It suggests that pupil capture expects the reference data in local/camera space?
I see. In that case, yes, the reference targets are sent to Pupil Capture in the local coordinate system of the VR camera.
When you ask why the two copies of each reference are sent, you are referring to these lines?
Yes those lines!
I will double check with the team in the morning, but this is done because Pupil Core measures each eye independently, determining each eye's relationship, in terms of gaze, to each reference target. It then later integrates these into a single binocular gaze estimate.
Pupil Capture's Gazer classes match the incoming pupil data with each reference target based on timestamp, as described here for the HMD reference client. In the HMD context, you can simply send each reference target twice with the same timestamp and the Gazer class will match them with the appropriate subsequent pupil datum. The 3D HMD calibration routines then determine the relationship between pupils and 3D gaze rays in the VR coordinate system.
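To make that concrete, here is a condensed Python sketch of that pattern, modeled on the linked hmd_calibration_client.py. The notification subject (`calibration.add_ref_data`), the `mm_pos`/`timestamp` fields, and the default Pupil Remote port are taken from that example; the start/stop notifications the reference client also sends are omitted, so treat this as a sketch to check against the reference client rather than a drop-in replacement.
```python
# Condensed sketch based on hmd_calibration_client.py from the hmd-eyes repo.
# Assumes Pupil Remote is reachable on its default port 50020.
import time

import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://localhost:50020")


def notify(notification):
    """Send a notification to Pupil Capture and return its reply."""
    pupil_remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
    pupil_remote.send(msgpack.dumps(notification, use_bin_type=True))
    return pupil_remote.recv_string()


def pupil_time():
    """Query Pupil Capture's clock so reference and pupil data share a timebase."""
    pupil_remote.send_string("t")
    return float(pupil_remote.recv_string())


# Target positions in the VR camera's local coordinate system (here in mm).
targets = [(0.0, 0.0, 600.0), (100.0, 100.0, 600.0), (-100.0, -100.0, 600.0)]

ref_data = []
for pos in targets:
    # ...drive your own target animation for `pos` here...
    for _ in range(40):  # multiple samples per target, as in the Unity scene
        t = pupil_time()
        # Two copies per sample, same position and timestamp: one is matched
        # against each eye's pupil data by the Gazer class.
        ref_data.append({"mm_pos": pos, "timestamp": t})
        ref_data.append({"mm_pos": pos, "timestamp": t})
        time.sleep(1 / 60.0)

# The reference client also sends calibration.should_start beforehand and
# calibration.should_stop afterwards; only the ref-data step is shown here.
notify({"subject": "calibration.add_ref_data", "ref_data": ref_data})
```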
You can see, though, that in the case of the standard Unity calibration scene, 40 samples are taken per target, rather than 2.
When implementing your own calibration routine, you should also give the standard Unity calibration scene a try. It will make it clearer how the targets should be presented and how head movement is accounted for.
thanks, so maybe it's not necessary to duplicate each sample. just send multiple samples for each target at different timepoints.
do you know in what coordinate system pupil labs expects the position data?
It might be worth following the method in that reference implementation first, to be sure you replicate a known working example, before making modifications. Adding an extra data point per sample, even when it's 20 or 40 samples per target, will add minimal load.
As mentioned above, the position data should be in the local 3D coordinate system of the VR camera. In Unity, the units are simulated as millimeters and Pupil Capture's HMD calibration works under this assumption, but it should be capable of handling arbitrary units. The important thing is to remain consistent.
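If it helps, here is a generic illustration (not tied to any particular engine) of what "local coordinate system of the VR camera" means in practice: a world-space target position is transformed by the inverse of the camera's pose before being sent. The pose matrix and numbers below are made up for the example.
```python
# Illustration only: express a world-space target in the VR camera's local frame.
import numpy as np


def world_to_camera_local(target_world, camera_pose_world):
    """Transform a world-space point into camera-local coordinates.

    target_world: (3,) point in world space (use the same units you send to Capture).
    camera_pose_world: 4x4 camera-to-world transform (rotation + translation).
    """
    world_to_camera = np.linalg.inv(camera_pose_world)
    p = np.append(np.asarray(target_world, dtype=float), 1.0)  # homogeneous coords
    return (world_to_camera @ p)[:3]


# Camera sitting at (0, 1600, 0) mm with identity rotation; a target 600 mm
# in front of it along +Z ends up at (0, 0, 600) in the camera's local frame.
pose = np.eye(4)
pose[:3, 3] = [0.0, 1600.0, 0.0]
print(world_to_camera_local([0.0, 1600.0, 600.0], pose))  # -> [0. 0. 600.]
```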
Running the standard Unity calibration scene should help clarify some of that, if this is unclear.
do you know why the y coordinate is normalized by aspect ratio in the unity calibration controller? https://github.com/pupil-labs/hmd-eyes/blob/b6d3579c3cb2bcac47f8e67a46b4abe4c6217bf6/plugin/Scripts/CalibrationController.cs#L257
I do not. I will check with my colleagues in the morning about that.
Hi @user-691c90 , I conferred with my colleagues and took a look through the git history. To answer both of your questions:
As described here (https://discord.com/channels/285728493612957698/285728635267186688/1384941939081613514), the hmd-eyes calibration routine is written with the expectation that you send the data for each reference target twice, as shown in those lines, so I would advise doing the same in your code.
The y-coordinate of the calibration target is scaled by the aspect ratio just before it is sent to Pupil Capture's calibration process. This is done as part of transforming Unity's scaling of the calibration targets into the scale and conventions that Pupil Capture expects.
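Purely as an illustration of where that adjustment would sit in a Python client like yours (the helper name is made up, and the exact operation, multiply versus divide by the aspect ratio, should be taken from the linked CalibrationController.cs line):
```python
def make_ref_datum(local_pos_mm, timestamp, aspect_ratio):
    """Hypothetical helper: package a reference datum with the y adjustment applied."""
    x, y, z = local_pos_mm
    # CalibrationController.cs adjusts y by the camera's aspect ratio at the
    # linked line; mirror its exact operation (multiply vs. divide) here.
    y = y / aspect_ratio
    return {"mm_pos": (x, y, z), "timestamp": timestamp}
```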
hi! wondering if you used the diy approach and, if you did, what cameras you were able to use that fit inside the vr headset?
i'm using the Core XR add-ons for HTC Vive
it seems like even with the method of decasing the cameras, their specs would be too big to fit inside the headset