Is it generally good practice, as part of data pre-processing, to exclude fixations that appear near blinks (e.g., 100-150ms before and after the blinks)? If yes, what is the easiest way to do that? Also, is it necessary to apply this? What are the drawbacks of not doing it? The only thing I am interested in for my analyses is fixation duration within AOIs, and I am using a Windows system. Lastly, are there any common good practices that will guide me in preparing the datasets for the analyses?
Thank you very much, Panos
@user-412dbc , you may also find this tutorial about merging fixation and blink IDs into the gaze DataFrame useful.
Hi @user-412dbc , apologies that your earlier message had gotten lost.
The pandas library can be quite helpful for this.
Thank you very much Rob for your help, I deeply appreciate your and your colleagues' efforts.
I already took a look at some of these papers and noticed that none of the ones I have read so far explicitly stated that they filtered their data for blink artifacts. Nonetheless, I will try to work that out in Python. Alternatively, I am thinking of raising the minimum fixation duration to 130-150ms, which would also act as a protective factor against blink impact. From your answer I suspect that there aren't any programs that can load Pupil Core's datasets and remove the blink artifacts automatically, right?
@user-412dbc no problem. We are happy to help!
No, we do not have a tool that automatically filters the data during a blink. There is some variability in how different research groups handle this, and leaving it open gives you the flexibility to try the approaches and paradigms that best fit your use case.
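If it helps, here is a minimal sketch of the kind of filtering you describe, using pandas on the fixations.csv and blinks.csv files exported by Pupil Player. The column names, units, and the 150ms margin are assumptions you should check against your own export.

```python
import pandas as pd

MARGIN = 0.150  # seconds excluded before/after each blink (assumed margin)

fixations = pd.read_csv("exports/000/fixations.csv")
blinks = pd.read_csv("exports/000/blinks.csv")

# Fixation duration is typically exported in milliseconds; timestamps are in
# seconds of Pupil Time. Verify against your own export.
fixations["end_timestamp"] = fixations["start_timestamp"] + fixations["duration"] / 1000.0

def overlaps_blink(fix):
    # True if this fixation overlaps any blink window expanded by MARGIN
    return bool(((fix["start_timestamp"] <= blinks["end_timestamp"] + MARGIN)
                 & (fix["end_timestamp"] >= blinks["start_timestamp"] - MARGIN)).any())

clean = fixations[~fixations.apply(overlaps_blink, axis=1)]
clean.to_csv("fixations_blink_filtered.csv", index=False)
```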
Hello team, we have a question. We're analyzing an experiment with the help of an engineer who's processing the video images to determine where the participants are looking at each moment and to categorize what they're looking at (whether it's the target, a distractor, etc.). However, he has some doubts about which is the best file to analyze. He's using 'world.mp4' to project and extract image data, and 'world_timestamps.npy' to retrieve the correspondence between the timestamps and the video frames. With the 'fixations_timestamps.npy' and 'fixations.pldata' files, a function has been used to extract the information and combine it into a single data structure. From these files located in the main folder, we primarily get the image coordinates. But there are more files within the 'offline_data' folder related to fixations. We want to make sure we're extracting data from the correct files. Thank you very much!
Another question is that the fixations have thresholds to determine whether something is considered a fixation or not (based on angle and time). My concern is that during execution, the gaze might move faster and not consider certain areas as fixations when, in fact, they have been viewed. Should we adjust the fixation thresholds to make the detection more accurate? Or instead of using fixations, should we rely directly on gaze data or another less processed file? Thank you!
Hi @user-4514c3! I have some follow-up questions to help me better understand what you're trying to achieve and how I can help:
1. What exactly are you doing with the scene video frames that requires you to consume real-time fixation data stored in binary format (.pldata)?
2. Regarding gaze vs fixations - what sort of task were your participants doing?
We have used the files that we found most convenient to work with since we don't have much experience or proficiency with the device. That’s why we wanted to ask in case we were doing something wrong. Thank you very much. The participants' task is simply to pick up a geometric piece of a certain color from a board.
Hello. I installed Pupil Core software v3.5 on Ubuntu 24.04.1 LTS, but Pupil Capture, Player, and Service are crashing and not starting. I installed pupil_v3.5-8-g0c019f6_linux_x64. The terminal says cysignals failed. How can I get it to work on my system?
Hi @user-f7a2f7 , can you try the following:
Yes, I did use the mouse to click on the App icon; it crashed and I got the attached error. I don't see "launch with dedicated GPU" with right click. Is it missing any dependency? I see only "New Window" and "Open in Terminal" options with right click.
@user-f7a2f7 Just so you know, I moved our conversation to the 👁 core channel.
@user-f7a2f7 Are you on a system with an Nvidia graphics card?
My laptop has an AMD Radeon graphics card and the Pupil software crashes right away. I am running Pupil Capture on another computer that has an NVIDIA GeForce RTX™ 4090. On that computer it starts and I can see the two eye cameras, but it crashes when I hit record and nothing is saved.
On the system with the AMD card, the error message suggests that Ubuntu is using the default Wayland compositor. Wayland is in principle still a rather new technology that is not fully compatible with all software yet and this could be the cause. I recommend trying to start an X11/Xorg session and see if Pupil Capture starts without issues then. See here for a way to do that.
Your Nvidia machine will default to X11 if you installed the Nvidia drivers. Regarding the crash on that computer, can you send a copy of the Pupil Capture logs? You will find these in the pupil_capture_settings folder in your home directory.
There's nothing inherently wrong with consuming those files directly. However, they are an intermediate format, saved by Capture and intended to be loaded by Pupil Player. You can then export data into .csv files as needed. If you haven't already, read the Player docs.
Both workflows have been employed by our users. However, if you want to tweak the fixation detector thresholds, you'll need to use Pupil Player. Choosing the right thresholds for your task can be worthwhile. For an overview, read this section of the documentation.
Another reason to use Pupil Player is Surface Tracking. You have Surface Tracking markers visible in the screenshot you've shared. Do you intend to create AOIs and map your fixations to them?
Thank you. Yes, we are taking the results through 'exports'. We are using the pldata because we believe that, even after post-calibration, this data is updated by Pupil Player. We found the function to parse these files in your code (https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py) (the load_pldata_file function). Since we have it integrated this way, it is easier for us to maintain it as such. If you think this could be a problem, we can switch to the CSV files. But if there’s no issue, using pldata should be fine for us.
What we have detected is that it seems some fixations in the data have the same timestamp. Is this correct? We're not sure if it's due to rounding issues in the float when encoded into the file or something else. In any case, it’s just a small amount (in the example we're analyzing, it's 178 out of 14,553 fixations)
I am currently using the Pupil Core eye-tracking device. The setup includes only one pupil detection camera, on the right side. I have been trying to perform calibration, but I consistently receive the following message:
“gaze_mapping.utils: An unexpectedly large amount of pupil data (> 20%) was dismissed due to low confidence. Please check the pupil detection.” “world - [INFO] accuracy_visualizer: Angular accuracy: 3.308 degrees” “world - [INFO] accuracy_visualizer: Angular precision: 0.123 degrees”
I’ve also noticed that when I keep my head still and look towards the upper-left corner, the device has difficulty detecting my pupil. Attached is an image showing my pupil while looking in the upper-left corner. Could this issue be caused by my eyelashes, and is that why there are inaccuracies in the tracking data?
Additionally, I tried adjusting the Outlier Threshold in the Accuracy Visualizer from the default 5.0 degrees to 1.0 degree, but the error did not decrease. Could you provide advice on how to further improve the tracking accuracy?
Thank you for the help. 😉
Hi @user-471c66 👋. The message that over 20% of pupil data was dismissed due to low confidence suggests that pupil detection can be significantly improved. Good pupil detection is the foundation of the entire Pupil Core gaze pipeline, so we should start there. Please share a recording so we can provide concrete feedback:
1. Restart Pupil Capture with default settings.
2. Start a recording in Capture.
3. Set up the eye camera as usual, ensuring the pupil is centred in the camera's field of view.
4. Ensure the eye model fits well. To do this, slowly move the eyes around (e.g., by rolling them) until the eye model (blue circle) adjusts to fit the eyeball and the red ellipse robustly overlays the pupil, as shown in this video.
5. Perform a calibration routine.
6. Perform a validation routine.
7. Stop the recording and share the folder, e.g., 001, with [email removed]
@user-471c66 On top of @nmt's answer: given that in one of the images you shared, it seems like the eyebrow is confounding the pupil detector, you may want to try setting a region of interest to delimit the area where the pupil can be found. More info and a video in this message: https://discord.com/channels/285728493612957698/285728493612957698/1214561339947876392
Hi there, my lab has been using the raw MP4 videos from the Core to study blink amplitude. I was wondering if it is possible to set the frame rate of the eye cameras to a fixed one, as the variable one means I cannot determine the blink duration outside of Pupil Player.
Hi @user-ac0587 , Pupil Core's cameras are free-running. This means that it is not possible to set them to a fixed frame rate when using them with Pupil Capture.
When you export the data from Pupil Player, you can also export the timestamps (in Pupil Time) to numpy and CSV files for the world video and eye videos:
eyeX_timestamps.csv and eyeX_timestamps.npy
world_timestamps.csv and world_timestamps.npy
You can then close Pupil Player and load those files into Python, MATLAB, etc to calculate the duration between two eye video frames. The variable frame rate does not prevent such calculations.
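For example, here is a minimal sketch of that calculation in Python, assuming an exported eye0_timestamps.npy; adjust the path and file name to your own recording/export folder.

```python
import numpy as np

# Timestamps exported by Pupil Player, in Pupil Time (seconds)
ts = np.load("exports/000/eye0_timestamps.npy")

# Duration between consecutive eye video frames, in seconds
frame_durations = np.diff(ts)

print(f"mean frame duration: {frame_durations.mean() * 1000:.2f} ms")
print(f"effective sampling rate: {1.0 / frame_durations.mean():.1f} Hz")
```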
@nmt Hi Neil, thank you for the response. I have recorded a clip and followed your instructions. The folder is about 36MB; I have uploaded it to the cloud: https://drive.google.com/file/d/1PsAMMwtR_TOAn7cIkg2zB11C9DoIy4_n/view?usp=sharing Thank you for the help.
@user-d407c1 Hi Miguel, thank you. I tried different angles, but it seems like it always detects my eyebrow. I saw a video on the official website showing that Core can use an extender? Might that help? I checked my box, but it does not include that item.
Hi @user-471c66! I do not think the extenders would help in this case, but setting the Region of Interest in the eye window definitely would.
Hi again, I have been trying to find out how the Blink Detector estimates blink duration. Compared to Tobii eye-trackers and our own algorithm, Pupil Labs' blink detector reports significantly longer blink durations, and I wanted to know if there is some coefficient that is applied, seeing as the Blink Detector uses the pupil signal to determine blink onset and offset?
@user-ac0587 , for this question, I recommend checking the Blink Detector documentation.
There is no coefficient applied. Blink onset/offset are computed directly from the 2D pupil detection confidence, and the associated timestamps are used as-is.
Note that you can change the default Blink Detector Threshold.
If it helps, the code for the Blink Detector is available on Github here.
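For a quick sanity check on your side, here is a minimal sketch that recomputes blink durations from the exported blinks.csv; the column names (start_timestamp, end_timestamp, duration) and their units are assumptions based on a typical Pupil Player export, so verify them against your own file.

```python
import pandas as pd

blinks = pd.read_csv("exports/000/blinks.csv")

# Blink duration from onset/offset timestamps (Pupil Time, assumed in seconds)
computed = blinks["end_timestamp"] - blinks["start_timestamp"]

# Compare against the duration column written by the exporter
print((computed - blinks["duration"]).abs().max())
```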
Hi Rob, thank you so much for getting back to me and for your response. I will look into the documentation and the timestamps folders.
Hello, I was doing a recording when the pupil capture software crashed with no error message. I now have a recording folder but if I try to open it in the pupil player program I get "player - [ERROR] launchables.player: InvalidRecordingException: There is no info file in the target directory.". Is it possible to recover the data in any way?
Also, while watching the following recordings I get the on-screen error "PLAYER: Error dc\PLAYER: error y=97 x=100" and the pupil data disappears. Any idea if this is recoverable as well?
Hi @user-4c48eb , yes, if Pupil Capture crashed during the recording, then the files in the folder will be in an incomplete state. Depending on the exact state, it can however be possible to recover some of the data. See here: https://discord.com/channels/285728493612957698/285728493612957698/1265299802208338000 and the linked message here: https://discord.com/channels/285728493612957698/285728493612957698/1097858677207212144
When you say "the following recordings", which recordings are you referring to?
Hello everyone, if I want to add an appearance-based deep learning gaze estimation method to the Pupil Core code, that is, perform gaze estimation from the binocular images, is there a convenient way to do this, such as developing a plug-in? I have read the Plugin API tutorial on the official website, but I still don't fully understand it. Is there a faster way to learn it?
Hi @user-fb8431 , do you mean that you have developed your own deep learning gaze estimation model?
If so and you want to use it with Pupil Core, then yes, developing a plug-in is the fastest way.
What part of the process is not clear?
If it helps, you can also refer to plug-ins that others have made. The Gazers plug-ins might be of interest to you.
Sorry, after it crashed the first time I launched other recordings. That's what I meant by "the following recordings".
@user-4c48eb Could you try the following:
If you still see the error, then please send a copy of the logs in the pupil_player_settings folder. This folder is found in your home directory.
Hello! We are currently analyzing data for a poster, but we’ve noticed that the fixations are not being marked correctly. We also have a few questions regarding this:
I’m attaching a picture in case it’s unclear. When I open Pupil Player, it shows that the subject is looking at one figure, but when analyzing the data with the files we’re using, it indicates they’re looking at a different figure, and at the same time, some fixations are lost.
Hello! We want to run only the code for gaze mapping (producing and mapping the red circle). Which code do we have to run from GitHub?
Hi @user-3187b1 , are you trying to map gaze to a surface that is defined by AprilTags?
Hi, we are a group of robotics uni students and we are currently working on using the Pupil Labs Core for some form of control interface. I am wondering if there is a way to access the live camera feed from the Core's cameras outside of Pupil Capture; I haven't been able to find a solution for that, so I thought I'd ask here. Could there also be some compatibility issues between Pupil Capture and OpenCV?
Hi @user-11b3f8! The easiest way to do this is with our real-time API - this example script shows you how: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames_with_visualization.py
You might also be interested in our video backend: https://github.com/pupil-labs/pyuvc?tab=readme-ov-file#pyuvc
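For a rough idea of what the linked helper script does, here is a minimal sketch of subscribing to world frames over Pupil Capture's Network API. It assumes Pupil Remote is on its default port 50020 and that the Frame Publisher plugin is enabled in Capture with BGR format; the message layout mirrors the helper script, so treat this as a starting point rather than a drop-in solution.

```python
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the SUB port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to world camera frames (requires the Frame Publisher plugin)
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

while True:
    topic, payload, *raw = sub.recv_multipart()
    msg = msgpack.unpackb(payload)
    # With BGR format, the first extra message part is the raw image buffer
    img = np.frombuffer(raw[0], dtype=np.uint8).reshape(msg["height"], msg["width"], 3)
    # img is now a regular numpy array you can hand to OpenCV
```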
ok thanks i will try it.
Hello! We are using the Pupil Labs Core products and they are working well. However, we have a question. When looking at the CSV file, it seems that gaze_normal0 (x, y, z) and gaze_normal1 (x, y, z) are used to infer norm_pos. We're curious about the procedure for how this happens. Could you point us to which part of the source code we should look at? I think it might be in the gaze mapping section of the shared modules, but it's more difficult to analyze than expected, and it seems challenging to run only that specific part of the code. Also, are there any examples in Pupil Helpers or Pupil Tutorials related to the content mentioned above? As a graduate student in the related field, I have gained significant help from your products, and I look forward to your response. Thank you.
Hello @user-fce73e, yes, for the 3D calibration pipeline, the gaze_normal_0/1 values are relevant for determining norm_pos_x/y.
The relevant part of the gaze mapping (i.e., the one based on the default 3D calibration pipeline) is in gazer_headset.py. It contains methods for monocular and binocular gaze mapping. In particular, check the predict and _predict_single methods. For example, the lines that transform gaze_normal to norm_pos are here.
While there are no specific Helpers or Tutorials for the transformation, some details for the 3D calibration pipeline are contained in the description of pye3d and in the associated publications linked at the bottom of that page. In particular, the publications contain descriptions of the gaze mapping procedure.
It is true that these classes/methods depend on other parts of the Pupil software, so you would need to extract a fair amount to run it separately. May I ask why you need to extract the code rather than use the standard Pupil software or the Network API?
@user-3187b1, for a saved recording, the position of the red circle in pixels (i.e., in 2D world camera coordinates) is essentially provided by Pupil Player. You load the recording into Pupil Player, enable the Raw Data Exporter plugin, and scale the resulting norm_pos_x/y values in the exported gaze_positions.csv file. The exact scaling factor depends on the resolution setting you have chosen for the world camera.
You can also obtain norm_pos_x/y in real-time via the Network API.
Gaze position in 3D real-world coordinates is more involved. While the Raw Data Exporter plugin also exports gaze_point_3d_x/y/z, the Z component of this value can be unreliable beyond about 1 meter distance.
However, it is of course possible to determine how gaze maps to 3D world coordinates. One approach is to use computer vision algorithms. I would recommend checking our publication list to see how others have approached this problem with Pupil Core.
Gaze Plugin
Hello - is there a product suitable for urban outdoor research? Is there a special offer for a university?
Hi @user-76014e, our latest eye tracker, Neon, is mobile and designed to be used outdoors and indoors. To be sure I can give you the best answer, may I ask exactly what kind of research you are doing?
And yes, we offer academic discounts: 700 EUR off the cost of Neon bundles, for example.
If you'd like, you can schedule a 30-minute Demo and Q&A call to learn more. Just send an email to info@pupil-labs.com
Hi everyone! We have just purchased Pupil Labs' Core for our lab, as I expected I could still use the Android app to use the Core on the go. Does this mean I can only use the Core attached to a laptop/desktop?
Hi @user-a3c818 , do you mean the Pupil Invisible or the Neon Companion App? Just to clarify, those apps are not compatible with Pupil Core.
Pupil Core can also be used when connected to a single board computer, but the preferred solution for mobile eyetracking is Neon. We can provide a Demo and Q&A call about it, if you'd like.
Hi all, can someone help me? One of the wires on my bare metal neon's PCB got disconnected. Can I resolder that or do the PCB and wire need to be replaced?
Hi @user-24f54b , we received your email and will continue communication with you there.
Hi, Has anyone added a second scene camera to the core?
Hi @user-335ee0! May I ask what the use case is? Physically, it is of course possible to add a second scene camera to Core's frame, although our software doesn't support capturing two scene cameras concurrently.
Hello everyone, I have a question. My custom deep learning gaze estimation plug-in inherits 'PupilDetectorPlugin'. I want to get the images of the left and right eyes at the same time, but 'frame = event.get("frame")' only returns data from the left eye or the right eye, seemingly at random. Is there a way to obtain data from the left and right eyes at the same time?
Hi, @user-fb8431 - let me see if I can help
I want to get the images of the left and right eyes at the same time
This is fundamentally impossible. The left and right eye cameras are two separate sensors on two separate devices. They will basically never capture images at the exact same moment in time - and if they ever do, it's a coincidence. You will need to capture one frame and save it in a variable to be processed with a frame from the other camera once that one arrives.
...only The data of the left eye or the right eye can be obtained randomly
Well, it's not really random. The cameras operate at their configured sampling rate and, as soon as a sensor transmits an image to the PC, you'll see an event for it in your plugin. The frame rates aren't exact though, so you'll see some fluctuations in the sampling rate, but that's really as random as it gets.
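As a rough sketch of the buffering approach described above, here is one way to keep the most recent frame from each eye and process them as a pair whenever both are available. The details (an eye id and a frame object arriving per event) are assumptions and will depend on how your plugin receives its data.

```python
class EyeFramePairer:
    """Buffer the latest frame per eye and emit a pair when both are present."""

    def __init__(self):
        self.latest = {0: None, 1: None}  # eye id -> most recent frame

    def on_frame(self, eye_id, frame):
        self.latest[eye_id] = frame
        if all(f is not None for f in self.latest.values()):
            pair = (self.latest[0], self.latest[1])
            # Clear the buffers so each frame is used at most once
            self.latest = {0: None, 1: None}
            return pair  # hand this to your binocular gaze estimator
        return None
```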
Hello everyone, I was wondering.. to use the surface tracker plug-in it is necessary to use “apriltag” markers or you can also use custom markers to define the surface of interest (for example: colored squares) ?
Hi, @user-a4aa71 - you do need to use AprilTag markers
Hello!
I have installed the Pupil repo on Ubuntu and was able to run python main.py capture.
I want a minimalistic run without world camera and other things.
I just want to get 3D coordinates of the pupils in real time. Is it possible?
Do you have examples of code for similar tasks where I could learn?
Hi @user-d71076. I've moved your message to the appropriate channel. What you suggest is possible. There are a few ways you could achieve this. The easiest might be to run Pupil Service, which is a minimalistic capture software: https://docs.pupil-labs.com/core/software/pupil-service/#pupil-service
Would that meet your requirements?
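If you stay with Pupil Capture/Service, a minimal sketch of pulling 3D pupil data over the Network API could look like the following; it assumes Pupil Remote on its default port 50020 and that you are interested in the circle_3d/sphere fields of the pupil datum.

```python
import zmq
import msgpack

ctx = zmq.Context()

# Query Pupil Remote for the subscription port
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

# Subscribe to pupil data from both eye processes
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "pupil.")

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.unpackb(payload)
    if "circle_3d" in datum:  # 3D detector output (pye3d)
        print(datum["id"], datum["timestamp"], datum["circle_3d"]["center"])
```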
Thank you.
If I run python main.py service from this repo: https://github.com/pupil-labs/pupil
I see 2 eye images with tracked pupils colored in red.
I would like to have 3d coordinates of pupils or a full gaze message with frequency 200 fps.
Ideally I would prefer to get it in Python. For example, I tried to use pupil_labs.realtime_api.simple and the following code:
from pupil_labs.realtime_api.simple import discover_one_device
device = discover_one_device()
gaze_sample = device.receive_gaze_datum()
left = (gaze_sample.eyeball_center_left_x, gaze_sample.eyeball_center_left_y, gaze_sample.eyeball_center_left_z)
right = (gaze_sample.eyeball_center_right_x, gaze_sample.eyeball_center_right_y, gaze_sample.eyeball_center_right_z)
but it requires a phone with the Neon Companion app installed. I tried to find a way to avoid using the phone and connect the glasses directly to my PC, both to simplify my setup and to receive the gaze datum as fast as possible (without going over the network).
Is it possible to connect the glasses to a PC (Ubuntu) directly and receive gaze data in Python code at 200 fps?
@user-d71076, so you're actually using Neon? I was thrown off the scent given that you were talking about Pupil Capture, which is predominantly Pupil Core software.
Hey Andre! I am also an enthusiast of directly using sensors and not using a phone in between, for simplicity 😉
Last week I started writing my own plugins for the capture software. https://github.com/paulijosey/pupillabs_capture_plugins I tried to document the code with everything I figured out (you can also check the "plugin.py" file in the pupil_src/shared_modules dir to get more info).
Honestly, I am not sure about the rate (Hz) you can get the gaze data at, but it should be easy to log with those pointers :). Have fun and keep me posted.
(P.S.: if you figure out a way to get IMU data with the Capture software I'd be grateful 😉)
It's technically possible to run Pupil Capture/Service software with the Neon system. But note that it is experimental. Let me take a step back - can you elaborate on your end goal with all of this, what's your research question/application?
Hey everyone! 🍀
We're working on a new research project that involves tracking the eye movements of both a child (aged 3-5) and their parent while they watch cartoons together. We’re using two computers, each connected to a separate eye tracker. We’re doing calibration one after the other—first for the parent, then for the child. So far, everything works smoothly for individual recordings, but we’re about to start testing with dyads, and we want to make sure everything is set up correctly before we dive in 💪 .
I have a few questions and would really appreciate any advice:
Syncing two trackers: Do we need any additional tools to ensure proper synchronization between the two eye trackers running on separate computers? Is there a recommended way to align the data from two separate devices?
Security concerns: We’re using university computers, so we want to make sure that our experimental setup isn’t accidentally tampered with by others. Any tips on how to lock down our setup or prevent accidental interference?
Calibration tips: Do you have any suggestions on how to handle calibration for both participants (child and parent) to get the most accurate data? Should we be doing anything differently for dyads?
Additional tools or plugins: Are there any extra plugins, software, or tweaks we should consider for running two eye trackers simultaneously? Anything that might help us with data collection or processing?
We’re writing everything in Python using PsychoPy, and it’s been great so far for individual tests. Just want to make sure we’re covering all the bases as we move to working with parent-child pairs.
Thanks so much in advance for your help! We’d be super grateful for any insights or recommendations 😊
Hi @user-cc6071 👋 Thanks for sharing the details of your research project!
Yes, it is possible to synchronize two Pupil Core devices. To start/stop recordings simultaneously on two different computers running Pupil Capture, use the Pupil Groups plugin. For temporal synchronization between the two Pupil Capture instances, I recommend the Time Sync plugin.
As for securing your setup, Pupil Capture does not have built-in features for locking down the experimental environment. However, it keeps any enabled plugins active for future sessions and has a folder structure that prevents accidental overwriting of recordings through progressive numbering. Since you're sharing computers at the university, I strongly recommend backing up both your recordings and pupil_capture_settings folders after each session. This will help ensure that your data and settings are safely stored, preventing unintentional loss. If any settings in Pupil Capture are changed, you can easily restore them by replacing the current settings folder with your backup. Additionally, consider backing up the recordings folder after using Pupil Player to safely store the export folders generated during data export.
Hi, I have a question about intrinsic/extrinsic parameters. I have learned that intrinsic parameters can be obtained from camera_models.py. After performing calibration, is there a way to export the extrinsic parameters separately? Thank you
also interested here 👆
Hi @user-3187b1 and @user-b96a7d, I assume you are looking for the rotation and translation components of the extrinsics (aka the rvecs and tvecs)? When you run a Camera Intrinsics Estimation for your Pupil Core cameras, then that is provided:
You will need to run Pupil Capture from source and modify the code to output those values.
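For orientation, this is roughly where such values come from: OpenCV's calibrateCamera returns per-image rvecs and tvecs alongside the camera matrix. The sketch below uses a generic checkerboard setup and hypothetical image paths; it is not the exact pattern or code Pupil Capture uses internally.

```python
import cv2
import numpy as np

# Checkerboard geometry (inner corners); adjust to your calibration target
pattern_size = (9, 6)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in ["calib_01.png", "calib_02.png"]:  # your captured frames
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# rvecs/tvecs are the per-image extrinsics, returned alongside the intrinsics
rms, camera_matrix, dist_coefs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None
)
```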
May I ask what you will be doing with these values?
For the calibration, there aren’t any specific tips tailored for dyadic screen-based experiments. However, ensure that the pupil is centered in the camera's field of view when positioning the eye cameras, and that the 3D model is properly fitted around the eye. I also recommend reviewing our best practices for conducting experiments with Pupil Core to ensure accurate data collection.
Regarding data processing, could you elaborate on how you intend to work with your data? This will enable me to provide specific recommendations.
Hey all, I would like to use Core on a participant and an external camera to film him while wearing it. Is there an already-developed way to synchronise both recordings? Or is it possible to get the clock value (or something similar) from Pupil Capture? That way I can left-join the clock values from both and then sync them.
Hi @user-3b5a61 , you may find this message about syncing with other cameras helpful: https://discord.com/channels/285728493612957698/446977689690177536/903634566345015377
To get the clock value (i.e., timestamps) for data (in a CSV file), you can load a recording into Pupil Player and export the Raw Data. The timestamps will be in Pupil Time, which can be converted to System Time using the code here.
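As a minimal sketch of that conversion, assuming the start_time_system_s and start_time_synced_s fields in the recording's info.player.json (check your own recording, as the exact file and keys can vary with software version):

```python
import json
import pandas as pd

with open("recording/info.player.json") as f:
    info = json.load(f)

# Offset between Unix epoch time and Pupil Time at recording start
offset = info["start_time_system_s"] - info["start_time_synced_s"]

gaze = pd.read_csv("recording/exports/000/gaze_positions.csv")
gaze["system_timestamp"] = gaze["gaze_timestamp"] + offset

# system_timestamp is now in Unix epoch seconds, comparable to the external camera's clock
```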
The connector and wiring of the eye camera have come off. Is it possible to reconnect them?
@user-eac531, can you share a photo so we can better understand the situation and provide feedback?
Hi. I have a question regarding calibration. In Core, there are three calibration methods: (1) using a screen marker, (2) using a single marker, and (3) using natural features. When I tried methods (2) and (3), they didn't work well, and it displayed that the confidence was below 20%. Is there any reference video that shows how to actually perform calibration?
Hi @user-76ebbf! This confidence parameter arises from the confidence of pupil detection. How do your eye camera images look? Have you adjusted the cameras and built a model before calibrating? https://docs.pupil-labs.com/core/getting-started/#_3-check-pupil-detection
Hi @user-d407c1, thank you for your response. The eye camera's video is working without any issues, and I am able to capture the pupil and eyeball. Calibration using the screen marker works fine, with a confidence level of almost 100%. I've checked the documentation about calibration on the website, but it's possible that I might be misunderstanding it. Therefore, I would appreciate it if you could provide a detailed explanation or any reference videos regarding methods (2) using a single marker and (3) using natural features.
Thanks for confirming! For the single marker, you can use either a printed or screen-based marker, and there are a couple of ways to perform the calibration choreography:
For the natural features calibration, the process can be trickier, as it requires clear communication to direct the subject to look at and select specific points.
I suggest that you share a recording with us [email removed] so we can review it and provide more tailored feedback.
If you haven't already, I'd also recommend checking out the best practices guide for additional tips.