🥽 core-xr

user-691c90 17 June, 2025, 17:54:58

hi, i'm trying to use the HMD choreography for calibration and i'm following the example in the hmd-eyes repository here: https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/hmd_calibration_client.py. i'm confused about what exactly the reference data i'm sending should be. in the demo it seems like you send two identical positions that exactly correspond to the "scene" coordinates of the calibration target. why are there two copies? and should they be the same? thanks!

user-f43a29 18 June, 2025, 09:15:14

Hi @user-691c90, just to be sure I understand, are you trying to use the HMD choreography with a VR headset or on a flat monitor?

user-691c90 18 June, 2025, 15:24:03

with a VR headset

user-f43a29 18 June, 2025, 15:25:33

Ok, and are you using Unity? May I ask whether the standard calibration scene did not work for you? If you use that scene, you don't need to write your own routines in Python.

user-691c90 18 June, 2025, 15:35:35

No, i'm trying to add eye tracking to an existing Python project, but this document may be helpful, thanks. It suggests that Pupil Capture expects the reference data in local/camera space?

user-f43a29 18 June, 2025, 16:18:11

I see. In that case, yes, the reference targets are sent to Pupil Capture in the local coordinate system of the VR camera.

When you ask why two copies of each reference are sent, are you referring to these lines?

user-691c90 18 June, 2025, 16:20:52

Yes those lines!

user-f43a29 18 June, 2025, 17:04:49

I will double-check with the team in the morning, but this is done because Pupil Core measures each eye independently, determining each eye's relationship, in terms of gaze, to every reference target. It later integrates these into a single binocular gaze estimate.

Pupil Capture's Gazer classes match the incoming pupil data with each reference target based on timestamp, as described here for the HMD reference client. In the HMD context, you can simply send each reference target twice with the same timestamp, and the Gazer class will match each copy with the appropriate pupil datum from the corresponding eye. The 3D HMD calibration routine then determines the relationship between the pupils and 3D gaze rays in the VR coordinate system.
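To make that concrete, here is a condensed sketch of the reference-data loop, modeled on the linked hmd_calibration client. The mm_pos and timestamp field names follow the 3D HMD example (double-check against the version you use), and get_pupil_timestamp is a placeholder for however you query Pupil Capture's clock:

```python
import time

def get_pupil_timestamp():
    # Placeholder: in practice, query Pupil Capture's clock (e.g. via
    # the "t" request on the Pupil Remote socket) so that reference and
    # pupil timestamps share a single timebase.
    return time.monotonic()

# Target positions in the local coordinate system of the VR camera
# (millimeters, matching the Unity integration).
positions = [(0.0, 0.0, 600.0), (0.0, 180.0, 600.0), (180.0, 0.0, 600.0)]

ref_data = []
for pos in positions:
    for _ in range(60):  # ~1 s of samples per target at 60 Hz
        t = get_pupil_timestamp()
        # Two copies per sample, one per eye, with identical timestamps;
        # the Gazer matches each eye's pupil datum to the temporally
        # closest reference datum.
        ref_data.append({"mm_pos": pos, "timestamp": t})
        ref_data.append({"mm_pos": pos, "timestamp": t})
        time.sleep(1 / 60.0)
```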

You can see, though, that the standard Unity calibration scene takes 40 samples per target, rather than 2.

When implementing your own calibration routine, you should also give the standard Unity calibration scene a try. It will make clearer how the targets should be presented and how head movement is accounted for.

user-691c90 18 June, 2025, 17:22:31

thanks, so maybe it's not necessary to duplicate each sample. just send multiple samples for each target at different timepoints.

do you know in what coordinate system pupil labs expects the position data?

user-f43a29 18 June, 2025, 17:27:05

It might be worth following the method in that reference implementation first, to be sure you replicate a known working example before making modifications. Adding an extra data point per sample, even when it's 20 or 40 samples per target, adds minimal load.

As mentioned above, the position data should be in the local 3D coordinate system of the VR camera. In Unity, the units are treated as millimeters and the Pupil Capture HMD calibration works under this assumption, but it should be capable of handling arbitrary units. The important thing is to remain consistent.
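If it helps, this is roughly the plumbing the reference client uses to start the 3D HMD choreography over Pupil Remote. The notification subjects and field names follow the linked example (do verify them against the Pupil Capture version you run); the eye translations, i.e. the eye-camera positions relative to the scene camera, use the same units as the reference targets, millimeters here:

```python
import zmq
import msgpack

ctx = zmq.Context()
requester = ctx.socket(zmq.REQ)
requester.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

def notify(notification):
    """Send a notification dict to Pupil Capture via Pupil Remote."""
    topic = "notify." + notification["subject"]
    payload = msgpack.dumps(notification, use_bin_type=True)
    requester.send_string(topic, flags=zmq.SNDMORE)
    requester.send(payload)
    return requester.recv_string()

# Select the 3D HMD choreography, then start sampling pupil data.
notify({"subject": "start_plugin", "name": "HMD3DChoreographyPlugin", "args": {}})
notify({
    "subject": "calibration.should_start",
    "translation_eye0": [34.75, 0.0, 0.0],   # eye-camera offsets in mm,
    "translation_eye1": [-34.75, 0.0, 0.0],  # consistent with the target units
    "record": True,
})

# ... present targets and collect ref_data as in the loop above, then:
# notify({"subject": "calibration.add_ref_data", "ref_data": ref_data})
# notify({"subject": "calibration.should_stop"})
```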

Running the standard Unity calibration scene should help clarify some of that, if this is unclear.

user-691c90 18 June, 2025, 18:09:36

do you know why the y coordinate is normalized by aspect ratio in the unity calibration controller? https://github.com/pupil-labs/hmd-eyes/blob/b6d3579c3cb2bcac47f8e67a46b4abe4c6217bf6/plugin/Scripts/CalibrationController.cs#L257

user-f43a29 18 June, 2025, 18:16:34

I do not. I will check with my colleagues in the morning about that.

user-f43a29 20 June, 2025, 08:10:24

Hi @user-691c90, I conferred with my colleagues and took a look through the git history. To answer both of your questions:

user-e52730 20 June, 2025, 22:10:21

hi! wondering if you used the DIY approach, and if so, what cameras were you able to use that fit inside the VR headset?

user-e52730 20 June, 2025, 22:11:13

it seems like even decased cameras would be too big to fit inside the headset

user-691c90 23 June, 2025, 17:56:23

i'm using the Core XR add-on for the HTC Vive

End of June archive