๐Ÿ‘ core


user-26cf70 01 February, 2025, 15:59:22

Hi, I have received a folder from my colleague that contains the files shown in image 01.png. I wish to visualize the gaze on the world.mp4 video. I have checked your documentation, but there the gaze data is available in a .csv file with a world index; in my case that isn't available. Could you guide me on how I can get the gaze into the video?

Chat image

user-d407c1 03 February, 2025, 07:58:54

Hi @user-26cf70 ! That screenshot looks like the raw data. You would want to open it with Pupil Player, where you can already visualize the gaze on top of the video or export the data to convenient CSV files.

user-26cf70 04 February, 2025, 08:20:10

hi @user-d407c1 Thanks for your reply. I am working on a project that requires visualizing gaze data over a video for further processing based on specific project requirements. To achieve this, I need the world index to properly align the gaze data with the video frames. However, the data I received is in its current format without a readily available world index. Is there a method or script to convert this data into a CSV file for easier processing?

user-b0328a 03 February, 2025, 13:42:02

Hello everyone,

I'm currently working on a project that involves head pose estimation using AprilTags, and I have a question regarding the Euler angle convention used in this context. Could someone please clarify which convention is applied in Pupil Labs for the yaw, pitch, and roll angles? Any details about the coordinate system or the specific order of rotations would be greatly appreciated.

Thank you in advance for your help!

user-cdcab0 03 February, 2025, 13:54:14

If you're using rotation_x|y|z, these are actually the components of a Rodrigues rotation vector. If you're using pitch|yaw|roll, then the coordinate system is relative to your origin marker.
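
For reference, a minimal sketch of converting such a Rodrigues rotation vector into a rotation matrix and Euler angles with SciPy; the example values and the chosen "XYZ" convention are assumptions, not necessarily the convention behind the exported pitch|yaw|roll columns:

import numpy as np
from scipy.spatial.transform import Rotation

# rotation_x, rotation_y, rotation_z from the Head Pose Tracker export form a
# Rodrigues (axis-angle) rotation vector; the values below are placeholders.
rvec = np.array([0.10, -0.30, 0.05])

rot = Rotation.from_rotvec(rvec)
R = rot.as_matrix()                        # 3x3 rotation matrix
euler = rot.as_euler("XYZ", degrees=True)  # intrinsic x-y-z; pick the convention you need
print(R)
print(euler)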

user-b0328a 04 February, 2025, 08:46:37

Thank you for your answer

user-d407c1 04 February, 2025, 08:44:01

One way to achieve this is by loading that folder into Pupil Player and exporting the data. This will generate several CSV files, including gaze_positions.csv in the export folder.

With that, you could follow this tutorial for frame extraction.
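
If it helps, here is a rough sketch of drawing the exported gaze onto the exported world video, assuming the standard gaze_positions.csv columns (world_index, norm_pos_x, norm_pos_y, with a bottom-left origin) and that the export folder contains world.mp4; adjust the paths to your recording:

import cv2
import pandas as pd

# Paths inside a Pupil Player export folder (placeholders; adjust as needed).
gaze = pd.read_csv("exports/000/gaze_positions.csv")
video = cv2.VideoCapture("exports/000/world.mp4")

width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = video.get(cv2.CAP_PROP_FPS)
writer = cv2.VideoWriter(
    "world_with_gaze.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height)
)

# Group gaze samples by the world frame they were matched to.
by_frame = gaze.groupby("world_index")

frame_idx = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_idx in by_frame.groups:
        for _, row in by_frame.get_group(frame_idx).iterrows():
            # norm_pos has its origin at the bottom-left, so flip y for pixels.
            x = int(row["norm_pos_x"] * width)
            y = int((1 - row["norm_pos_y"]) * height)
            cv2.circle(frame, (x, y), 20, (0, 0, 255), 2)
    writer.write(frame)
    frame_idx += 1

video.release()
writer.release()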

user-26cf70 04 February, 2025, 09:03:32

Thank you, will do as you said 🙂

user-94ac96 06 February, 2025, 13:10:36

Hello, I am working on a project using Pupil Labs, where we track participants' gaze as they complete a browser-based maze game. I have a question about calibration. Previously I have been incorporating (a) one large calibration before the experiment, consisting of around 30 calibration markers which I display on the screen covering the area of the maze in detail, plus (b) 3-4 markers before each trial to maintain calibration. I have then been using 2D gaze mapping post recording in Pupil Player.

Due to time limits, we now need to get rid of the calibration markers before each trial. However, I would ideally like to keep using 2D gaze mapping, because 3D has too low an accuracy for what we want to do. Thus I am planning to do (a) the same large initial calibration and (b) a slippage test before each trial (comparing live data to a location on the screen). I think this means I now have to do the gaze mapping in real time instead of post recording, in order to do the slippage test. Thus, I have been trying to do a calibration using Pupil Core which involves many markers on the screen (the 5-marker default is not fine-grained enough). Do you have any advice on how to do this?

I have managed to find a way to calibrate to different locations on the screen but not to detect markers. I have attached my code to do this.

Thank you!

pupil_core.py

user-d407c1 06 February, 2025, 14:00:59

Hi @user-94ac96 ! 👋 Thanks for sharing details about your experiment.

A 30-marker calibration sounds quite extensive. It's up to you, of course, but this approach could be time-consuming and challenging for participants, potentially leading to fatigue or loss of attention. A 9-point calibration using, for example, this choreography might offer a good balance of efficiency and accuracy. Feel free to try it out and share your thoughts!

From there, you could perform a validation every N trials (e.g., every 5 trials) to check for slippage and ensure accuracy remains reliable. Some experiments use a 5-point validation, or, if you need something quicker, you can present a single-point validation, though that would only verify accuracy at that specific point.

For gaze mapping, yes, I'd definitely recommend using the 2D gaze mapping pipeline for screen-based tasks, as it provides the best accuracy.

To minimize slippage, you might consider using a chinrest if you're not already; it can help stabilize head movements and improve consistency.

For marker detection, you can find the marker detector implementation here:
🔗 Circle Detector Module

Let me know if you have any questions, and I hope your participants find the exit! 😄

user-15edb3 07 February, 2025, 10:40:41

Hi so in earlier version of neon player, the downloaded CSV files world_index

user-f43a29 07 February, 2025, 11:04:52

Hi @user-15edb3 , it seems your message got cut off. Was there more text you wanted to send?

user-94ac96 07 February, 2025, 12:39:19

Thank you! This is very helpful! Is there any way to trigger the validation choreography through Pupil Remote (analogous to socket.send_string("C") but for validation rather than calibration)?

user-d407c1 07 February, 2025, 12:56:19

Yes, but the notification subject has to be validation.should_start; see this gist as an example.
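
For context, sending notifications over Pupil Remote follows the usual Network API pattern; a minimal sketch, assuming the default port 50020 and the validation.should_start subject mentioned above:

import zmq
import msgpack

# Connect to Pupil Remote (default port 50020; adjust if you changed it).
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

def notify(notification):
    """Send a notification dict to Pupil Capture and return its reply string."""
    topic = "notify." + notification["subject"]
    payload = msgpack.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()

# Trigger the validation choreography (analogous to send_string("C") for calibration).
print(notify({"subject": "validation.should_start"}))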

user-94ac96 07 February, 2025, 13:10:32

Great thank you! Just one more question, how can I return the recorded accuracy from that validation?

user-d407c1 07 February, 2025, 15:02:02

I don't have examples at hand, but my colleague @user-f43a29 pointed out that it should be in the validation.data notification.

user-94ac96 07 February, 2025, 15:55:09

Thank you again. I can't seem to find the angular accuracy in validation.data (mine is returning a dict with keys subject, gazer_class_name, gazer_params, pupil_list, ref_list, timestamp, record, and topic, and I can't seem to find it in any of these).

user-d407c1 07 February, 2025, 16:15:08

You are almost there. Have a look at this part of the code, where that dict is used to compute it.

user-c68c98 10 February, 2025, 14:34:22

Hello, what are the optimum working distances for the ocular camera? I thought I read something in the docs but I can't find it.

user-f43a29 11 February, 2025, 10:57:32

Hi @user-c68c98 , the Pupil Core headset typically places the eye cameras about ~30 mm from the pupils (see here: https://discord.com/channels/285728493612957698/285728493612957698/1052156091628265533). May I ask what your use case is? The optimal distance varies a bit from person to person. Generally, as long as the pupils are visible in the eye image and pupil detection confidence is high (ideally >0.9), then you are good to go.

user-0f7e53 10 February, 2025, 16:05:17

Hi, what is the best way to calculate glance duration from a dataset where I have labeled gaze directions or glance proportions?

user-f43a29 10 February, 2025, 17:34:59

Hi @user-0f7e53 , by "glance duration", do you mean "fixation duration"? If so, do you have the timestamps of the gaze data that contributed to the fixations?
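
If the dataset is per-sample gaze labels with timestamps, one generic way to get glance durations is to group consecutive samples that share a label and take the time span of each group; the file and column names below are assumptions:

import pandas as pd

# Hypothetical columns: "timestamp" (seconds) and "label" (e.g. "road", "mirror").
df = pd.read_csv("labeled_gaze.csv").sort_values("timestamp")

# A new glance starts whenever the label changes between consecutive samples.
df["glance_id"] = (df["label"] != df["label"].shift()).cumsum()

glances = df.groupby(["glance_id", "label"])["timestamp"].agg(["min", "max"])
glances["duration_s"] = glances["max"] - glances["min"]
print(glances.reset_index()[["label", "duration_s"]])

Note that gaps in sampling (e.g. during blinks) will shorten the measured spans, so you may want to inspect the time deltas within each group as well.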

user-af671d 10 February, 2025, 16:38:42

Hello! I am currently attempting to use the LSL plugin with Pupil Capture (Pupil Core); however, I am not able to. Specifically, this is what the command prompt is giving as output. Thanks

Chat image

user-f43a29 10 February, 2025, 17:33:49

Hi @user-af671d , you want to install version 1.16.2 of pylsl.

user-45f4b0 13 February, 2025, 23:33:47

Hi, we want to have a world camera recording without distortion, so that we can get more accurate surface gaze point mapping and position estimation in Pupil Player. We are using a Pupil Cam1 ID2 with a resolution of 1280x720. Could someone help?

user-f43a29 14 February, 2025, 15:31:47

Hi @user-45f4b0 , the Surface Tracker internally does the mapping with undistorted coordinates.

If you want to represent the positions of the surfaces in undistorted camera space, then you might find this tutorial helpful.

May I ask if you are finding discrepancies in your surface mapped gaze data?
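
As a generic illustration of what "undistorted coordinates" means here, you can undistort pixel positions with OpenCV once the scene camera intrinsics are known; the matrix and distortion values below are placeholders, load the real ones from your recording (e.g. as in the linked tutorial):

import numpy as np
import cv2

# Placeholder scene-camera intrinsics; replace with your recording's values.
camera_matrix = np.array([[794.0, 0.0, 640.0],
                          [0.0, 794.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coefs = np.array([-0.43, 0.23, 0.0, 0.0, -0.06])

# Gaze points in distorted pixel coordinates (x, y).
points = np.array([[[640.0, 360.0]], [[100.0, 80.0]]], dtype=np.float32)

# Undistort and re-project into the pixel space of an ideal (distortion-free) camera.
undistorted = cv2.undistortPoints(points, camera_matrix, dist_coefs, P=camera_matrix)
print(undistorted.reshape(-1, 2))

The same intrinsics can be used with cv2.undistort to produce undistorted world frames for presentation.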

user-45f4b0 14 February, 2025, 17:00:41

Hi @user-f43a29 , thanks for your information. Because the recorded world camera video is distorted, I don't know whether the position is accurate or not.

We are now trying to use OBS Studio to record the screen, and use the surface gaze data to draw each gaze point on the OBS recording. But we are worried that the distortion shown in the Pupil Player video will impact the surface gaze estimation, because the AprilTag markers are also distorted and we cannot make a flat surface.

user-f43a29 14 February, 2025, 17:13:06

Hi @user-45f4b0 , to clarify, the surface definition is the pink boundary shown in Pupil Capture. It will be taken as flat, and as mentioned, the AprilTag detection and subsequent mapping of gaze to surface are done internally in undistorted coordinates. So, the tags are not distorted when doing the mapping.

One tip is positioning the corners of the pink surface to be on the corners of the AprilTags or the corners of your surface of interest.

If you follow the tutorial that I listed above, then you can overlay the undistorted surfaces on the undistorted video as confirmation of your setup. OBS should not generally be necessary.

You can also follow this tutorial to see how the mapped fixations fall on the undistorted surface. You can also run a validation setup with that, as extra confirmation.

user-45f4b0 14 February, 2025, 17:13:03

We tried using "Camera intrinsics estimation -- show undistorted image" in Pupil Capture, which undistorted the camera view very well. But the recorded video shown in Pupil Player is still distorted. We also want the exported world camera video to be undistorted, with the surface and gaze mapping, so we can present it. Is this possible? Thanks!

user-45f4b0 14 February, 2025, 17:17:31

Hi @user-f43a29, as you can see in the figure, the screen cannot be fully captured by the pink boundary due to distortion. Since I need to map gaze to a surface that is basically the full screen, it's better to have more accurate gaze estimation. Is there any solution? Thanks

Chat image

user-d71076 17 February, 2025, 15:10:04

Hello! I have recorded a video of left and right eye frames using Neon glasses. I have a set of frames of size 192 x 384. I then take only the left eye frames, so I have frames of shape 192 x 192. I want to detect the pupil in 2D (pixel coordinates) and in 3D (in the camera reference frame), so I take pupil_2D = result_3d["ellipse"]["center"] and pupil_3D = result_3d["circle_3d"]["center"],

where I get result_3d as follows:

grayscale_array = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # eye frame to grayscale
result_2d = detector_2d.detect(grayscale_array)            # 2D pupil detection
result_2d["timestamp"] = timestamp                         # pye3d needs a timestamp
result_3d = detector_3d.update_and_detect(result_2d, grayscale_array)  # feed 2D result to pye3d

When I draw the first point on the image it fits the pupil, but when I draw the projected pupil_3D (using camera_matrix) the result is not precise or stable. On the attached image the green point is pupil_2D and the blue point is the projected pupil_3D.

Q1: Do you have a video that shows that more accurate 3D pupil tracking is possible with these detectors?

from pupil_detectors import Detector2D
from pye3d.detector_3d import CameraModel, Detector3D, DetectorMode

Q2: Do you have a minimal piece of code which demonstrates how I can get 3D pupil coordinates, so that I could try it on my data?

Chat image
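
A minimal sketch along the lines of the pye3d example, assuming a left-eye crop video and placeholder camera parameters (the focal length must be the real one, in pixels, for your eye camera, otherwise circle_3d will be off):

import cv2
from pupil_detectors import Detector2D
from pye3d.detector_3d import CameraModel, Detector3D, DetectorMode

# Camera parameters are placeholders: use the focal length (in pixels) and
# resolution of *your* eye camera crop.
camera = CameraModel(focal_length=140.0, resolution=[192, 192])
detector_2d = Detector2D()
detector_3d = Detector3D(camera=camera, long_term_mode=DetectorMode.blocking)

eye_video = cv2.VideoCapture("left_eye.mp4")  # hypothetical left-eye crop video
fps = eye_video.get(cv2.CAP_PROP_FPS)

frame_index = 0
while True:
    ok, frame = eye_video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    result_2d = detector_2d.detect(gray)
    result_2d["timestamp"] = frame_index / fps  # pye3d expects monotonic timestamps
    result_3d = detector_3d.update_and_detect(result_2d, gray)

    pupil_2d = result_3d["ellipse"]["center"]    # pixels in the eye image
    pupil_3d = result_3d["circle_3d"]["center"]  # mm in the eye-camera frame
    confidence = result_3d["confidence"]
    frame_index += 1

Note that pye3d's eye model needs to be fit from a range of eye poses before circle_3d stabilizes, which may explain jittery 3D projections early in a recording.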

user-39a67c 18 February, 2025, 02:43:07

Greetings. I'm assisting a patient with multiple sclerosis complicated by a series of strokes.

I've tried off-the-shelf eye tracking hardware. Unfortunately, monitor-mounted eye trackers are problematic for those with limited mobility who are wheelchair-bound. From a positional standpoint there's a lot of variability relative to a monitor:

  • Muscle fatigue causing posture changes such as head droop or leaning torso

  • A person's position in the chair such as tilt or recline

  • Chair alignment to the monitor

  • Prescription glasses can interfere with tracking

With this constellation of variables, it's hard to provide reliable eye tracking.

Accessibility is so important for those that don't have much. I'm trying to do my part to bring a piece of the world to a few people in my life.

This brings me to Pupil Labs. So far the stack seems the most open compared to other products. I would appreciate insights into eye tracking hardware and software that could be leveraged for computer control.

nmt 19 February, 2025, 02:34:39

Hi @user-39a67c! Thank you for your message. I'm sorry to hear you haven't been able to find a workable solution with monitor-mounted eye tracking. Indeed, the problem itself is highly nuanced and incredibly variable, given the wide range of assistive needs and challenges faced.

Wearable eye tracking, which is our area of expertise, can certainly overcome some challenges but also introduces new ones.

May I ask what sort of monitor-mounted eye tracking you have already tried? It's my understanding that assistive eye tracking systems work around some of what you describe through customised positioning of the monitor/eye tracker for users.

user-131620 19 February, 2025, 02:12:29

Hi,

I am trying to do custom automated data loading and processing from recordings of Pupil Core (gaze.pldata and fixation.pldata). Specifically, I am borrowing the function load_pldata_file() from this module: https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_modules.

I have observed the following problems:

  1. The recorded data does not alternate between left eye and right eye (judged from the topics data field, with the left eye marked as id 0 and the right eye as id 1)
  2. There is an uneven number of data samples from the left eye and right eye, so the total number of samples recorded is not always an even number
  3. The timestamps of individual data entries don't make sense: (a) the left eye timestamps might start at 4.03 while the right eye timestamps start at 0.08; (b) some timestamps are not monotonically increasing, e.g. they jump from 4.87 to -0.05 and then to 0.03...

I have the following questions:

  1. Are these expected behaviors, or could they originate from software or hardware defects?
  2. Assuming these are expected behaviors and the cleaning is integrated on the software side, is that limited to using the Raw Data Exporter plugin in Pupil Player? Is there any suggestion for preserving these data cleaning steps without having to manually click the Raw Data Exporter plugin, so that I can achieve custom automated data export?
  3. Regarding the misalignment in timestamps: I use the Pupil real-time API to set the timestamp to start at 0 before starting the recording. Could this be the cause of the misalignment? It seems that at least the right eye camera follows the instruction and has timestamps starting around 0.

Thank you.
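
For reference, a minimal sketch of using that helper, assuming it still returns a (data, timestamps, topics) namedtuple as in the current file_methods.py; the paths are placeholders:

import sys

# Assumes the Pupil source's shared_modules directory is importable.
sys.path.append("pupil/pupil_src/shared_modules")
from file_methods import load_pldata_file

# Load gaze data from a recording folder; .data holds msgpack-backed dicts,
# aligned with .timestamps and .topics.
recording = "/path/to/recording"
gaze = load_pldata_file(recording, "gaze")

for datum, ts, topic in zip(gaze.data, gaze.timestamps, gaze.topics):
    print(topic, ts, datum["norm_pos"], datum["confidence"])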

nmt 19 February, 2025, 02:41:40

Hi @user-131620! Thanks for your description - it's helpful. The first thing to note is that Pupil Core's eye cameras are free-running. So the sequencing of your data, i.e. left/right datums not perfectly alternating, is expected, as is the fact that you can get different numbers of samples overall. You can read more about how we handle free-running cameras in this section of the docs: https://docs.pupil-labs.com/core/developer/#pupil-data-matching

The second thing to note is that setting the timestamps to start at 0 could lead to unexpected timestamps. Is there a specific reason why you are trying to do this? It's usually not needed, because at recording start, the current Pupil Time is written into the info.player.json file of the recording under the start_time_synced_s key.

user-131620 19 February, 2025, 02:21:47

Chat image

user-131620 19 February, 2025, 02:23:09

Chat image

user-131620 19 February, 2025, 03:10:54

Hi Neil, thanks for the speedy response. You are correct, the actual timestamp is not needed as long as the relative ordering within each eye is preserved.

user-131620 19 February, 2025, 03:13:28

A potential implementation for my case would be to create separate lists for the left eye and right eye, by reading the eye id to distinguish between the two, and then truncate them to obtain the same number of data points. However, there is an important underlying assumption: the relative order is correct and preserved from the .pldata files, and the corresponding rows from the two tables happen simultaneously in time (within an acceptable range).
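
As an alternative to truncating, here is a generic sketch of pairing the two streams by nearest timestamp; it mirrors the spirit of the matching described in the docs link above but is not the exact algorithm, and both streams are assumed to be sorted by timestamp first:

import numpy as np

def match_by_nearest_timestamp(left_ts, right_ts, max_dt=1 / 120):
    """Pair each left-eye timestamp with the nearest right-eye timestamp.

    Returns index pairs (i_left, i_right); pairs further apart than max_dt
    seconds are dropped. Both arrays must be sorted ascending.
    """
    left_ts = np.asarray(left_ts)
    right_ts = np.asarray(right_ts)
    idx = np.searchsorted(right_ts, left_ts)
    idx = np.clip(idx, 1, len(right_ts) - 1)
    # Choose whichever neighbour (idx-1 or idx) is closer in time.
    prev_closer = np.abs(left_ts - right_ts[idx - 1]) < np.abs(left_ts - right_ts[idx])
    idx = idx - prev_closer
    keep = np.abs(left_ts - right_ts[idx]) <= max_dt
    return np.flatnonzero(keep), idx[keep]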

user-131620 19 February, 2025, 03:13:48

Do you think this assumption is plausible? Thanks

user-dbab7a 19 February, 2025, 07:53:14

Hello, I would like to know the intrinsic matrix of the eye camera in Pupil Core.

user-cdcab0 19 February, 2025, 11:30:27

Hi, @user-dbab7a - you can find those in the Pupil Core source code

user-dbab7a 20 February, 2025, 06:36:27

thank you very much

user-bd106e 20 February, 2025, 04:02:05

Hello, I would like to ask why the scene camera (the camera that captures the world) does not perform camera distortion correction? Wouldn't not correcting it make the gaze error larger?

user-f43a29 24 February, 2025, 18:25:59

Hi @user-bd106e , may I ask how this was determined? Internally, the calibration & gaze mapping pipelines are working in undistorted coordinates.

user-94ac96 22 February, 2025, 09:43:29

Hi, I am running an experiment where I calibrate at the start, and then check accuracy periodically throughout. I am using the 2D gaze mapper. At the moment it is set up so that if accuracy > 2 (or, more generally, a threshold), calibration is triggered at that point in the experiment. Unfortunately, in piloting this has led to participants needing to calibrate a lot, often back to back. However, if I increase the threshold, I find that accuracy can be poor in the data due to slippage. Is there a way to automatically shift the calibration based on accuracy data throughout, so I can do this post hoc instead of the participant needing to recalibrate throughout? (Would it be accurate to do this -- is the calibration mapping linear?)

user-f43a29 24 February, 2025, 16:22:15

Hi @user-94ac96 , could you clarify what you mean by "automatically shifting calibration based on accuracy data"?

If you can share a recording folder with data@pupil-labs.com (e.g., via Google Drive), then we can take a closer look and provide better feedback.

user-11d287 22 February, 2025, 16:20:31

Hi, I am using Pupil Core hardware with the pupil-labs GitHub source code to run the SDK, and I am trying to communicate through another Python script to get gaze coordinates. Right now the problem I am facing is that the world camera feed is unable to open, as the uvc backend utility is throwing an init error. I posted this issue on GitHub one month back but am not getting any help. Can someone help me out of this problem?

user-14d19c 24 February, 2025, 07:29:52

Hi there, I have completed my eye-tracking experiment using the Pupil Core and want to obtain time-to-first-fixation (TTFF) data. What are the possible methods to achieve this? Thank you~

user-480f4c 24 February, 2025, 07:32:09

Hi @user-be0bae! I moved your message to the 👁 core channel since you're using Pupil Core. Could you maybe share more details on your setup? Have you defined areas of interest and would like to get the TTFF data for a specific area of interest?

user-dbab7a 25 February, 2025, 03:16:47

Hello, I would like to know the rotation and translation matrices between the eye camera and the scene camera of the Pupil Core product. Where can I find these parameters?

user-f43a29 25 February, 2025, 15:42:39

Hi @user-dbab7a , may I ask what your ultimate goal is? You would need to modify the Pupil source code to obtain such values.

user-11d287 25 February, 2025, 03:16:52

Hi community, I am using a Logitech C615 webcam as my world feed, but Pupil Capture is throwing an initialisation error in the log. What should I do? Does the Logitech C615 even support UVC? In the Windows default camera app the feed comes through, but not in Pupil Capture. Can anyone help me out?

user-f43a29 25 February, 2025, 15:44:51

@user-11d287 , this could be an indication that the camera is not UVC compatible, but you may want to check with Logitech about that.

user-11d287 28 February, 2025, 03:04:47

Can you provide links to any budget webcams the community has used with Pupil Capture?

user-11d287 28 February, 2025, 03:02:22

I have compiled and found that the Logitech C615 is compatible with the UVC library, yet it cannot receive video frames during capture.

user-94ac96 26 February, 2025, 11:36:42

Hi Rob, I mean to measure the accuracy periodically to see how much the calibration has shifted, and then use this to update the calibration mapping, re-shifting the whole mapping through a translation. So: do a full calibration at the start, assume the only source of noise is translational drift, measure accuracy periodically to see how much translational drift there is, then shift the pre-existing calibration to account for this drift. I will try to share a folder with you asap.

user-f43a29 26 February, 2025, 15:53:52

Hi @user-94ac96 , regarding "shifting" the calibration by translating it, such an approach doesn't sound feasible. The calibration determines the transformation from pupil positions in the eye camera frames to gaze positions in the world camera frames. As a result, the difference between two calibrations is significantly more than a translation in general.

Once you've shared a recording for review, including the calibration process, then we are in a better position to provide feedback about how to improve calibration accuracy in your scenario.

Also, if I remember correctly, you compute validation accuracy from Network API notifications with custom code?

user-94ac96 26 February, 2025, 11:43:48

furthermore, is there any way I can access the homography matrix used for the gaze mapping?

user-f43a29 26 February, 2025, 15:59:30

Hi @user-94ac96 , do you mean the matrix that is used by the Surface Tracker Plugin to map gaze to a screen, for example?

user-131620 26 February, 2025, 19:35:41

Hi, I have received the replacement part for the left eye camera; it is connecting to the computer and data is coming through. However, I am running into a weird problem: a consistent offset in the norm_pos of the left and right pupil.

user-d407c1 27 February, 2025, 07:57:37

Hi @user-131620 👋 ! That is actually expected and normal. Firstly, from your snippet, you are subscribed to pupil (pupil.), not to gaze (gaze.). Have a look at how the messages are organized here.

Then, you have to consider that one of the eye cameras in Pupil Core is upside down. You do not see it because the eye image is flipped in the software, mainly for aesthetics and ease of setting up the camera, but you can turn this off in the eye camera settings.

However, we do not flip the metrics themselves, as the flip does not affect gaze calculations or any other final metric, and doing so would increase the CPU load without any clear benefit.

In this case, you are plotting the pupil norm position, namely the position of the pupil in the eye camera image in normalized coordinates. Since one camera is upside down, you can clearly see that one pupil is around 0.8 and the other around 0.2, which is the inverse.
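
If you do want the raw pupil norm_pos from both eyes in a common orientation, a small sketch of mirroring the values from the flipped camera; which eye id corresponds to the physically flipped camera is an assumption you should verify on your headset:

def unflip_norm_pos(norm_pos, camera_is_upside_down):
    """Mirror a pupil norm_pos for an eye camera mounted upside down.

    A 180-degree rotation of the image flips both axes in normalized
    coordinates. Only apply this for the camera that is physically flipped
    (check the eye window / camera settings on your headset).
    """
    x, y = norm_pos
    if camera_is_upside_down:
        return (1.0 - x, 1.0 - y)
    return (x, y)

# Example: the datum's eye id tells you which camera it came from.
# Whether id 1 is the flipped camera is an assumption to verify.
datum = {"id": 1, "norm_pos": (0.45, 0.81)}
print(unflip_norm_pos(datum["norm_pos"], camera_is_upside_down=(datum["id"] == 1)))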

user-131620 26 February, 2025, 19:36:03

Chat image

user-131620 26 February, 2025, 19:36:12

In this task, we are measuring the x and y position using the norm_pos attribute from the IPC backbone. The participant is instructed to stare at the middle of the screen, which we can guarantee, but as you can see, there is a consistent offset in the x and y positions, and it differs between the right and left camera. We make sure to perform a calibration before each collection. Here is a snippet of the code we use to generate this result:

user-131620 26 February, 2025, 19:36:21
user-131620 26 February, 2025, 19:36:31

We tested across different participants, each going through calibration, and the error persists. Could you help us resolve this issue?

user-138bf5 27 February, 2025, 08:55:25

Hello! I was wondering what your intended way is to remove blinks from the gaze data. I figured not every blink timestamp actually has a corresponding timestamp in the gaze data, probably due to the irregular sampling we observed, which makes it difficult to remove blinks. How would you suggest tackling that problem? Also, we were discussing thresholds for blinks and were wondering if you'd suggest keeping the defaults in the blink detection plugin for a design where we mainly want to observe micro-movements of the eyes and expect no larger eye movements during the experiment.

nmt 27 February, 2025, 10:21:29

Hi @user-138bf5! Pupil Core uses a data matching algorithm to handle free-running eye cameras. So it's expected that gaze datum timestamps don't perfectly correspond with other timestamps, e.g. of blinks. Finding the closest match would be a valid approach. You can read more about the data matching algorithm here: https://docs.pupil-labs.com/core/developer/#pupil-data-matching.

As for blink detection thresholds, when pupil confidence is good throughout the recording, the default thresholds tend to work well. But you can also tweak them on a per-recording basis if you notice missed classifications. You can quality control the classifications by watching the eye video overlay. Read more about the blink detector here: https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector
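
For the blink removal itself, a rough sketch that drops gaze samples falling inside exported blink intervals, assuming the usual gaze_positions.csv and blinks.csv column names from the Raw Data Exporter (adjust paths and padding to your data):

import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
blinks = pd.read_csv("exports/000/blinks.csv")

# Optionally pad each blink interval to also drop samples at its edges.
pad = 0.05  # seconds; tune for your data
starts = blinks["start_timestamp"].to_numpy() - pad
ends = blinks["end_timestamp"].to_numpy() + pad

ts = gaze["gaze_timestamp"].to_numpy()
# A gaze sample is inside a blink if any interval contains its timestamp.
inside_blink = np.zeros(len(ts), dtype=bool)
for s, e in zip(starts, ends):
    inside_blink |= (ts >= s) & (ts <= e)

gaze_clean = gaze[~inside_blink]
print(f"Removed {inside_blink.sum()} of {len(gaze)} gaze samples during blinks.")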

user-7daa32 27 February, 2025, 17:17:21

Hello. Please, what is the minimum XY deviation possible after calibration and validation using the 3D pipeline? The best I got today was 0.092 after calibration and 1.282 after validation.

user-7daa32 27 February, 2025, 21:10:33

Hello, one of the eye cameras is not showing the eye image. I am getting the error: USB device not recognized.

user-7daa32 27 February, 2025, 21:17:47

It is showing a USB malfunction. Only the world window and eye 1 are working.

End of February archive