Hi everyone, I want to calculate the vergence from Pupil Core data. I have these parameters and computations:

```python
import math
import numpy as np

# Target and gaze position in the scene video
target_x = 643
target_y = 315
gaze_pos_vid_x = 649.946106
gaze_pos_vid_y = 302.057208
distance_to_target = 869.268085
pup_diam_l = None
pup_diam_r = 3.94412

# Optical axis left (gaze_normal1)
gaze_normal1_x = 0.196103
gaze_normal1_y = -0.204533
gaze_normal1_z = 0.959015

# Eyeball center left
eye_center1_3d_x = -40.468474
eye_center1_3d_y = 17.110107
eye_center1_3d_z = -14.915746

# Optical axis right (gaze_normal0)
gaze_normal0_x = -0.067185
gaze_normal0_y = -0.151333
gaze_normal0_z = 0.986197

# Eyeball center right
eye_center0_3d_x = 20.005274
eye_center0_3d_y = 14.670975
eye_center0_3d_z = -20.640095

# 3D gaze point
gaze_point_3d_x = 4.37738
gaze_point_3d_y = -24.66304
gaze_point_3d_z = 203.919871

# 1. Vergence from eye centers and distance to target
eye_center_left = np.array([eye_center1_3d_x, eye_center1_3d_y, eye_center1_3d_z])
eye_center_right = np.array([eye_center0_3d_x, eye_center0_3d_y, eye_center0_3d_z])

interocular_distance = np.linalg.norm(eye_center_right - eye_center_left)
vergence_rad_geom = 2 * np.arcsin(interocular_distance / (2 * distance_to_target))
vergence_deg_geom = math.degrees(vergence_rad_geom)

print(f"Vergence (distance-based): {vergence_deg_geom:.2f}°")  # 4.01°

# 2. Vergence from optical axis vectors
gaze_dir_left = np.array([gaze_normal1_x, gaze_normal1_y, gaze_normal1_z])
gaze_dir_right = np.array([gaze_normal0_x, gaze_normal0_y, gaze_normal0_z])

dot_product = np.dot(gaze_dir_left, gaze_dir_right)
vergence_rad = np.arccos(dot_product)
vergence_deg = math.degrees(vergence_rad)

print(f"Vergence (optical-axis-based): {vergence_deg:.2f}°")  # 15.52°
```
I know that the participant is looking at the target, and I'm confident that distance_to_target is accurate. But I'm getting a large difference between the vergence from distance (4.01°) and from optical axes (15.52°). What could explain this discrepancy?
Hi @user-94183b !
Note that distance computation from gaze normals intersection is a bit tricky, as they rarely intersect cleanly.
That said, if you know the depth is correctly measured, and the participant was looking at that distance, I would suggest first checking that the pye3d model is well fitted, that the IPD you measured resembles the one used in the fitting process (bundle adjustment), and plotting the vectors to get a better idea of what's happening.
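For example, here is a quick sketch of those two checks using the values you posted (matplotlib is assumed for the plot):

```python
import numpy as np
import matplotlib.pyplot as plt

# Values from the message above (mm, eye-camera/world 3D coordinates)
left = np.array([-40.468474, 17.110107, -14.915746])
right = np.array([20.005274, 14.670975, -20.640095])
n_left = np.array([0.196103, -0.204533, 0.959015])
n_right = np.array([-0.067185, -0.151333, 0.986197])

# Check 1: distance between the modelled eyeball centers vs. the participant's measured IPD
model_ipd = np.linalg.norm(right - left)
print(f"Model eyeball-center distance: {model_ipd:.1f} mm")  # compare with the measured IPD

# Check 2: plot eyeball centers and optical axes to see where (if at all) they converge
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
length = 900  # mm, roughly the expected viewing distance
for center, normal, label in [(left, n_left, "left"), (right, n_right, "right")]:
    line = np.vstack([center, center + length * normal])
    ax.plot(line[:, 0], line[:, 1], line[:, 2], label=label)
    ax.scatter(*center)
ax.set_xlabel("x (mm)")
ax.set_ylabel("y (mm)")
ax.set_zlabel("z (mm)")
ax.legend()
plt.show()
```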
Hi @user-d407c1
Thank you for the suggestions! I followed your advice to check the model fit and found something interesting.
What's working: The 2D gaze mapping appears accurate - my target was at (643, 315) and Pupil Core's gaze_pos_vid shows (649, 302), so the participant was clearly looking at the correct location.
The problem: The depth estimation is completely wrong. My participant used a chinrest with the target at 800-900mm distance, but gaze_point_3d_z shows only 203.9mm (~20cm). This is a 4x underestimation!
My questions: How is gaze_point_3d_z calculated in the 3D gaze mapping process? Since the 2D accuracy looks good, what specific pye3d model quality metrics should I check to diagnose this depth estimation problem?
How it is computed is described in this post by my colleague @nmt (https://discord.com/channels/285728493612957698/285728493612957698/1098535377029038090). He also commented on why it might be inaccurate and shared some ideas on exploring alternatives for estimating depth: https://discord.com/channels/285728493612957698/285728493612957698/1100125103754321951
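For intuition, a common geometric approach (not necessarily Pupil Capture's exact implementation) is to try to intersect the two gaze rays and, since they rarely meet exactly, take the midpoint of the shortest segment between them:

```python
import numpy as np

def nearest_point_between_rays(o0, d0, o1, d1):
    """Midpoint of the shortest segment between two rays.

    o0, o1: ray origins (eyeball centers); d0, d1: unit direction vectors.
    Returns the midpoint and the gap between the two closest points.
    """
    w0 = o0 - o1
    a, b, c = d0 @ d0, d0 @ d1, d1 @ d1
    d, e = d0 @ w0, d1 @ w0
    denom = a * c - b * b          # ~0 if the rays are (nearly) parallel
    s = (b * e - c * d) / denom    # parameter along ray 0
    t = (a * e - b * d) / denom    # parameter along ray 1
    p0 = o0 + s * d0
    p1 = o1 + t * d1
    return (p0 + p1) / 2, np.linalg.norm(p0 - p1)

# Values from the first message in this thread
left = np.array([-40.468474, 17.110107, -14.915746])
right = np.array([20.005274, 14.670975, -20.640095])
n_left = np.array([0.196103, -0.204533, 0.959015])
n_right = np.array([-0.067185, -0.151333, 0.986197])

midpoint, gap = nearest_point_between_rays(left, n_left, right, n_right)
print(midpoint, gap)  # the z component is the implied viewing depth
```

With the posted normals, the rays converge at a depth of roughly 200 mm, i.e. near the reported gaze_point_3d_z, which illustrates why small errors in the optical axes translate into large depth errors.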
As for the model itself, I'd recommend opening the debug view on the eye cameras to qualitatively assess tracking and fit.
Just to clarify my understanding: given the inaccuracy in viewing depth estimation that @nmt has discussed, would reliable vergence measurements not be feasible with Pupil Core? And since Pupil Neon only provides optical axis data, would the attached formula (from https://arxiv.org/pdf/2311.09242) not be applicable, making Neon unsuitable for vergence studies as well?
It's important to distinguish between vergence measurement and depth estimation.
While vergence can be characterized, it changes very little at farther distances, making it unreliable for estimating depth. This is due to the non-linear relationship between vergence angle and depth: as distance increases, small angular changes correspond to large differences in depth. There are also individual characteristics, as those axes may not even intersect.
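To make the non-linearity concrete, here is a small sketch assuming a nominal 63 mm IPD and a fixation point straight ahead:

```python
import numpy as np

# Geometric vergence for a fixation point straight ahead at depth d:
# vergence = 2 * arctan(IPD / (2 * d))
ipd = 63.0  # mm, assumed nominal interpupillary distance
for d_mm in [250, 500, 1000, 2000, 4000]:
    vergence = np.degrees(2 * np.arctan(ipd / (2 * d_mm)))
    print(f"depth {d_mm:>5} mm -> vergence {vergence:5.2f}°")
```

Going from 25 cm to 50 cm changes the vergence angle by roughly 7°, while going from 1 m to 2 m changes it by less than 2°, so beyond about a metre, noise in the estimated optical axes quickly dominates the signal.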
Also worth noting, the paper you referenced compared vergence angles at different known distances, but at a quick glance, it did not attempt to estimate depth from those measurements, which is a different objective, IMO.
Finally, Neon provides the optical axis of each eye as part of a 3D informed model that is robust to slippage. Relative changes in the optical axes can shed light on vergence behaviour.
Is it Neon or Pupil Core? Asking because the plot says "Pupil Neon". Regardless, could you share an example recording with data@pupil-labs.com so we can have a look at the recording's data quality as a first step?
Sorry, I uploaded the wrong plot. This should be the correct one. Please ignore the Neon data (as its vergence is based on the optical axes). Unfortunately, I cannot share the participants' gaze data. What data quality problems can we have here?
The main concern is the pye3d model that models the eyeball https://docs.pupil-labs.com/core/best-practices/#pye3d-model
I did this step and for all the participants I had "a stable circle that surrounds the modelled eyeball, and this should be of an equivalent size to the respective eyeball." However, is there a way to check the model fit on the recorded data?
Lemme check and get back to you
In the meantime, since you can't share the recordings, perhaps you can outline the steps and how you compute it such that we can investigate further. Do you use that formula?
(Attached image: vergence angle formula)
Hello, I would like to know what the average end-to-end delay is for the pupil data that is sent over the network. Has it been estimated? With my empirical method of estimating delay (60 fps camera footage, stepping through frame by frame), I was able to measure around 100 ms of delay between the eye movement and the corresponding data being received by Unity. I get around 100 fps in the world camera, 110 fps on each eye camera, and well above that in the Unity scene.
@user-e16f11 What kind of network are you using to send the data? Also, just to clarify, the FPS of the cameras and the latency of the data transmission are two independent processes.
Hi. I'm facing an issue with the Pupil Core. I'm trying to calibrate (using Screen Marker), but I always get these error messages after trying to do the calibration. Even though both eyes' conf. = 1.
Hi @user-a190ae , could you share a screenshot of what the eye images look like when you get that message? You can send the screenshot via DM if you prefer and then we can continue the conversation here.
Now I see this. I guess that means it has calibrated, right?
Hi @user-a190ae , yes. And you should have briefly seen a Calibration Accuracy display after finishing the calibration. You can then proceed to validate the calibration with the T button, if you would like.
Hi, how can I detect fixations and saccades? Is there a tutorial or GitHub repo for that?
I have extracted the LSL data into separate CSVs and can get gaze coordinates; based on those we got our Region of Interest, but fixations and saccades are kind of new to me.
Hi @user-e83888 , great to hear about your progress!
Then, saccades are conceptually the gaze traces between two consecutive fixations, assuming that your experiment did not elicit smooth pursuit eye movements.
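If you end up with a table of detected fixations exported to CSV (e.g. from Pupil Player's fixation detector), a rough sketch of deriving saccade candidates from the gaps between consecutive fixations could look like the following; the column names and units are assumptions, so adjust them to your export:

```python
import pandas as pd

# Treat the gaps between consecutive detected fixations as saccade candidates.
# Assumed columns: start_timestamp in seconds, duration in milliseconds.
fixations = pd.read_csv("fixations.csv")
fixations = fixations.sort_values("start_timestamp").reset_index(drop=True)

fix_end = fixations["start_timestamp"] + fixations["duration"] / 1000.0
saccades = pd.DataFrame({
    "start": fix_end[:-1].to_numpy(),
    "end": fixations["start_timestamp"][1:].to_numpy(),
})
saccades["duration_ms"] = (saccades["end"] - saccades["start"]) * 1000.0

# Very long gaps are more likely blinks or tracking loss than saccades; tune these bounds.
saccades = saccades[saccades["duration_ms"].between(10, 150)]
print(saccades.head())
```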
Hello @user-f43a29 , Our lab is planning to purchase a Pupil Labs eye-tracking device. After some research, I found the Surface Tracker plugin in Pupil Capture. Our goal is to automatically classify participants' eye fixations to a grid on a shelf in a visual world paradigm experiment, and to calculate the proportion of fixations per grid area. In this experiment, the participant, sitting on one side of the shelf, is talking to another person on the other side of the shelf and is following instructions from the other person such as "move the big funnel."
I'm wondering whether this is achievable using Surface Tracker. For example, we are considering placing AprilTag markers on the corners of the shelf, then using the x_norm and y_norm values in gaze_positions_on_surface.csv to determine which grid cell the participant is fixating on (as described in this tutorial).
In previous studies, we manually coded the fixated objects (from EyeLink II) in Adobe Premiere. We want to avoid such labor.
Hi @user-9cc9a4 , we also received your email and responded there. Briefly:
And yes, you can use that tutorial.
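As a rough sketch of that binning step (the grid size, filename, and on_surf filter below are assumptions to adapt to your export, and do check which corner of the surface (0, 0) corresponds to in your surface definition):

```python
import numpy as np
import pandas as pd

ROWS, COLS = 4, 5  # shelf grid layout (assumed)

gaze = pd.read_csv("gaze_positions_on_surface.csv")
if "on_surf" in gaze.columns:
    # keep only samples that actually fall on the surface
    gaze = gaze[gaze["on_surf"].astype(str) == "True"]

# x_norm / y_norm run from 0 to 1 across the surface; clip to stay inside the grid
col = np.clip((gaze["x_norm"] * COLS).astype(int), 0, COLS - 1)
row = np.clip((gaze["y_norm"] * ROWS).astype(int), 0, ROWS - 1)

counts = pd.crosstab(row, col).reindex(index=range(ROWS), columns=range(COLS), fill_value=0)
proportions = counts / counts.to_numpy().sum()
print(proportions)
```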
Hey @user-f43a29 We have the Pupil Core running and we looked up, down, right, left. This figure is the LSL data and we're trying to understand it. We have only 1 eye camera working. We are confident about the confidence data points but are not sure about the other values coming from the LSL matrix. The number displayed is the number of rows from where we took the data. Can you assist us in understanding what each of these values is, and how we can merge them with the head movements we have in the Motive system (OptiTrack)? We are doing everything in MATLAB, so if there's any code you know of that might help us, please share. Will the microphone data (IMU) be synchronized with LSL with Neon? Is there some type of information being transferred? This will help reassure us about purchasing the Neon.
Hi @user-7c5b51 , the format of LSL's gaze metadata is here. In MATLAB, you can also find the channel layout in the following field:
xdf_data.info.desc.channels.channel
The meaning of each of these channels in the context of Pupil Core is described here and you might find the description of Pupil Core's coordinate systems helpful in interpreting those values.
With respect to Neon, just to clarify, I assume you meant "movement data", when referring to the IMU, and not "microphone data"? In either event, while the IMU & microphone data are not currently streamed to LSL, you can still post-hoc synchronize them with little effort.
This is because Neon's Events are also streamed/synced over LSL. If you send an Event, you will find it in Neon's timeline and in LSL's timeline and can use that as an anchor point to align all of Neon's other datastreams. This is possible because all of Neon's data are timestamped by the same high-precision clock. If you would like to see IMU data explicitly streamed over LSL, then feel free to upvote this feature request: https://discord.com/channels/285728493612957698/1323065816501194793/1323065816501194793
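As a minimal sketch of that anchoring step (all values below are placeholders): once you have the same event's timestamp on both clocks, the offset lets you map any Neon timestamp into LSL time:

```python
# Post-hoc alignment via a shared event (placeholder values).
# t_event_neon: the event's timestamp in the Neon recording's clock (seconds)
# t_event_lsl:  the same event's timestamp in the LSL/XDF recording (seconds)
t_event_neon = 1234.567
t_event_lsl = 98.765

offset = t_event_lsl - t_event_neon

def neon_to_lsl(t_neon):
    """Map a timestamp from the Neon clock into LSL time using the shared event."""
    return t_neon + offset

# e.g. align IMU samples (timestamped on the Neon clock) with other LSL streams
imu_ts_lsl = [neon_to_lsl(t) for t in (1235.0, 1235.1, 1235.2)]
print(imu_ts_lsl)
```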
When it comes to integration with a motion capture system, then regardless of whether you are using Neon or Pupil Core, you need to determine the pose of the headset in the motion capture coordinate system. This requires a calibration process. Do I understand that you have already attached motion capture markers to your Pupil Core? As that will also be necessary. You can check this thread for details on the motion capture calibration process: https://discord.com/channels/285728493612957698/1352339286502150205/1352432635812515991
Hello guys, I am currently using Core to complete a series of testing tasks, and I have encountered some issues. I urgently need to test the degree of eyelid opening and closing, the data recovery time after gaze-tracking loss, the average angular error of the 3D estimation, and other key technical indicators. Does Pupil provide testing of these technical indicators?