Hello, I have a question about a weird gaze offset between what I am looking at and what the Pupil Core DIY thinks I am looking at.
For example, in this picture I am looking at the cursor here, but the gaze dot is always a bit offset; this was a problem throughout the session. I'm not sure exactly what the cause is (world camera focus, how far I should be from the screen, etc.), but I would like some general pointers on how to debug this.
Hi @user-4a1dfb , is this happening for all recordings or just this one?
Hello, is it possible to get saccade information in Core? If yes, how do I set it up?
Hi @user-e6fb99. The Core system does not output saccades by default, but there are some community-contributed tools that compute saccades. You can find more in the community repo.
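For anyone searching later, here is a minimal sketch of the velocity-threshold idea that most of those community tools build on. It is not an official Pupil Labs tool; it assumes a standard Pupil Player gaze_positions.csv export, works in normalized scene-camera coordinates, and the threshold value is purely illustrative.

```python
# Minimal sketch: flag saccade-candidate samples from gaze_positions.csv
# using a simple velocity threshold. Column names follow the standard
# Pupil Player export; adjust if yours differ.
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")
t = gaze["gaze_timestamp"].to_numpy()
x = gaze["norm_pos_x"].to_numpy()
y = gaze["norm_pos_y"].to_numpy()

dt = np.diff(t)
speed = np.hypot(np.diff(x), np.diff(y)) / dt  # normalized units per second

threshold = 1.5  # illustrative only; tune for your setup or convert to deg/s
is_saccade = speed > threshold
print(f"{is_saccade.sum()} saccade-candidate samples out of {len(speed)}")
```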
Hi everyone, I'm analyzing Pupil Core recordings offline and need guidance on two points: 1. Time to First Fixation (TTFF): what's the recommended way to compute TTFF on a surface/AOI using Pupil exports? 2. Aggregated heatmap across participants: what is the best practice to generate a grand heatmap on a static stimulus?
Hi @user-74b1c6 👋
@user-f43a29 Thanks, Rob! Yes, I want to compute TTFF with respect to the start of the stimulus presentation, not the recording start.
Ok! Do you have Annotations already saved in your recording for those moments?
I didn't use the Annotations during recording, but I designed the task in PsychoPy, so I have the exact stimulus onset timestamps for each trial.
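In that case, a rough sketch of the TTFF computation could look like the following. It assumes you have already converted your PsychoPy onset times into Pupil time (for example, via a recorded sync offset), and that you are working with a surface-mapped fixation export from Pupil Player; the file name and the onset values below are hypothetical.

```python
# Sketch: Time To First Fixation on a surface, per trial.
# Assumes stimulus onsets are already expressed in Pupil time (seconds).
import pandas as pd

# Surface export from Pupil Player; file/column names follow the standard
# export but double-check against your own files.
fix = pd.read_csv("fixations_on_surface_Screen.csv")
fix = fix[fix["on_surf"].astype(str) == "True"].sort_values("start_timestamp")

stimulus_onsets = [1234.56, 1301.02]  # hypothetical onsets, in Pupil time (s)

for trial, onset in enumerate(stimulus_onsets):
    after = fix[fix["start_timestamp"] >= onset]
    if after.empty:
        print(f"trial {trial}: no fixation on surface after onset")
    else:
        ttff = after.iloc[0]["start_timestamp"] - onset
        print(f"trial {trial}: TTFF = {ttff:.3f} s")
```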
Hi everyone, I defined a Surface manually in one recording, and I'd like to use exactly the same surface coordinates and size in all my other recordings (same monitor setup, same AprilTags). What should I do?
Hi @user-74b1c6 , please try my colleague @user-4c21e5's tip here and let us know how it goes: https://discord.com/channels/285728493612957698/285728493612957698/1305841351916523561
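In case the linked tip is not obvious from the post alone: the usual approach is to copy the surface definition file from the recording where you defined the surface into the other recording folders before opening them in Pupil Player. The exact file name (something like surface_definitions_v01) can vary by Pupil version, so check your template recording folder first. A minimal sketch, with hypothetical paths:

```python
# Sketch: copy the surface definition file from a "template" recording into
# other recordings so Pupil Player reuses the same surfaces.
import shutil
from pathlib import Path

template = Path("recordings/000/surface_definitions_v01")  # verify the name
targets = [Path("recordings/001"), Path("recordings/002")]

for rec in targets:
    shutil.copy(template, rec / template.name)
    print(f"copied surface definitions into {rec}")
```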
Hi Rob, that was helpful, thanks a lot. I have another question. As the picture shows, sometimes the software doesn't detect any fixations, even though everything seems correct (surface tracking, calibration, etc.). Why might that be?
Hello, for the Pupil Core eye cameras, are there any "ideal" settings that should be used, or any best practices on what to change when certain things happen (e.g., if the model is not sticking well to the pupil, should you increase the brightness of the IR lights? What if someone is wearing glasses while using the device?)? Thank you.
Hi @user-4a1dfb , Pupil Capture does not provide a setting to control the intensity of the IR illuminators and while others have made it work, using Pupil Core with glasses can be tricky.
Otherwise, the default settings are usually sufficient for most use cases. With respect to the model not sticking well, be sure to first have the pupils well-centered in the eye camera images.
Hi @user-74b1c6 , you are welcome. Would you be open to sharing the recording with us for closer inspection? If so, you can share it with data@pupil-labs.com via Google Drive, for example.
OK
Hi everyone. I have connected the eye tracker to my laptop and am using Pupil Capture. The eye0 and eye1 cameras are working, but I am not able to see the world camera view. I have unplugged and reconnected it and restarted many times, but it's still not working. What should I do?
Hi @user-45048d 👋! Is it the first time that you connect Pupil Core to that laptop? Could you share whether you are using Windows, Mac or Linux?
Hello! We now want to use four eye cameras in Pupil Capture to obtain pupil positions and record videos. Is this possible? I only found a demo for capturing video frames in the GitHub repository. Many thanks!
Hi @user-b02f36 , you could write a Pupil Capture plugin for that, or use Annotations to synchronise the different cameras. Lab Streaming Layer would also be an option. See here for more details on some of these options.
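If you go the plugin route, a minimal skeleton looks roughly like the following. This is only a sketch based on the public plugin API (see the pupil-labs/pupil documentation for the authoritative interface), and the class name is hypothetical; it does not by itself add support for extra eye cameras.

```python
# Minimal Pupil Capture plugin skeleton. Place the file in
# ~/pupil_capture_settings/plugins/ and enable it in the Plugin Manager.
from plugin import Plugin


class ExtraCameraLogger(Plugin):  # hypothetical name
    uniqueness = "by_class"

    def __init__(self, g_pool):
        super().__init__(g_pool)

    def recent_events(self, events):
        # `events` carries the latest pupil/gaze data made available to
        # plugins on every iteration of the world process.
        for datum in events.get("pupil", []):
            pass  # handle or forward data from additional cameras here
```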
Hi, Rob. I tried to write a Pupil Capture plugin for using four eye cameras. However, something goes wrong when starting my plugin. At line 386 of capture.log there is an error: 2025-10-24 16:59:50,247 - world - [ERROR] launchables.world: Attempt to load unknown plugin: 2. I also found that this error is related to ~/pupil_capture_settings/user_settings_world. In addition, capture.log suggests that the plugin was successfully scanned. Are there any possible ways to solve this problem? I am attaching my capture.log and plugin code. I use Pupil Capture 3.5.1, and I wrote my plugin with Python 3.11.14 in miniconda. Many thanks!
I see. Thank you, Rob! btw, is it possible to run pupil capture from source on Windows? I only tried to run it from source on Ubuntu 20.04.
Yes, running from source on Windows, Linux, and Mac is possible.
I see. Is it possible to run from source using miniconda? Thank you, Neil!
I believe so. Best to give it a try!
Hello, what operations should be performed for dynamic areas of interest (tracking moving pedestrians)? We have built a scenario in Unity where pedestrians are crossing the road, and participants are required to wear eye-tracking devices to see whether they can notice these pedestrians.
Hi @user-d9be4a , are you using Unity for VR stimuli or are you simply presenting stimuli on a computer monitor?
Hi, Rob. Stimuli are presented on the computer monitor, such as the pedestrians crossing the street in the figure below. Thank you!
Hi @user-b02f36 , I am not able to do code reviews or provide detailed support for adding custom DIY cameras to Pupil Core. Upon a quick inspection of the code, I do not immediately see anything erroneous. Perhaps try trimming down the Plugin or progressively commenting out sections until you find the troublesome bit of code.
Also, do you mean you are running Pupil Capture from within a miniconda installation? You may want to rather just use plain Python, installed from the Python Org website, with a fresh virtual environment. Sometimes conda can inadvertently introduce issues.
Otherwise, perhaps members of the community who have tried similar can share their experiences.
OK Rob, I will try to simplify my code first. If I still have questions or find a solution, I will reply to you in the channel. Thanks sincerely!
Hi @user-d9be4a , did you also use AprilTags with Pupil Core's Surface Tracking capabilities?
Yes, before using the Pupil Core, I set up an AprilTag in the corner of the computer monitor. Since we need to observe the entire process of pedestrians crossing the street, I'm not entirely sure if this was the correct approach.
Hi there, I'm running a research data collection using Pupil Core, but some of my participants have prescription glasses. I saw another thread where it was explained that this will be challenging, but I was wondering if there is a way to still process the eye video data offline to get better gaze detection results despite the prescription glasses. I could not put the eye cameras under the person's glasses, so I kept them on top. Can I use the egocentric video and eye videos to generate the gaze tracking post hoc? Thank you in advance!
Hi @user-0b1050 , you can simply start by trying the standard procedure of loading the recordings into Pupil Player and seeing how that looks.
Hi @user-d9be4a , using AprilTags is correct, but if you only used one tag, then the Surface Tracking plugin will not work correctly or robustly. As an alternative, if you only need to know if gaze was on/off of pedestrians, then you could try an object detection routine, such as those provided with YOLO to detect the pedestrians in Pupil Core's world camera video and then check if the gaze data overlap with the segmentation.
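To make the YOLO idea concrete, here is a rough sketch. It assumes the ultralytics package, the exported world video and gaze_positions.csv from Pupil Player (paths are hypothetical), and the usual Pupil convention that norm_pos has its origin at the bottom-left; verify all of this against your own data.

```python
# Sketch: check whether gaze falls inside YOLO "person" detections in the
# world video, frame by frame.
import cv2
import pandas as pd
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # COCO model; class 0 is "person"
gaze = pd.read_csv("exports/000/gaze_positions.csv")
video = cv2.VideoCapture("world.mp4")

frame_idx = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    samples = gaze[gaze["world_index"] == frame_idx]  # gaze for this frame
    boxes = model(frame, verbose=False)[0].boxes
    for _, s in samples.iterrows():
        gx = s["norm_pos_x"] * w
        gy = (1 - s["norm_pos_y"]) * h  # flip y: norm_pos origin is bottom-left
        for box in boxes:
            if int(box.cls) != 0:  # keep only "person" detections
                continue
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            if x1 <= gx <= x2 and y1 <= gy <= y2:
                print(f"frame {frame_idx}: gaze on a pedestrian")
    frame_idx += 1
```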
Thank you for your answer. Is it feasible to post-hoc define areas of interest with Pupil Player and make the AOIs follow the moving pedestrians?
Thanks @user-f43a29, I forgot to mention that I am not making the recordings with Pupil Capture. I am using a custom system that integrates PupilLabs.PupilFacade for online gaze detection, and the saved data is not in the format that Pupil Player expects. I also don't have the calibration data saved. Would it still be possible to do any post-processing of the videos with this type of approach?
@user-0b1050 may I first ask what PupilLabs.PupilFacade is?
Yes. Defining AOIs with Pupil Player requires use of the Surface Tracking plugin, which works best with at least three AprilTags, preferably more.
I will try placing a few more AprilTags! Thanks
Hello, my goal is to calculate pupil angular velocity and detect microsaccades using Pupil Core. May I ask the following questions:
Which columns should I use? Should I use theta and phi from pupil_positions.csv, or compute theta and phi from gaze_positions.csv, or use other fields?
Is a sampling rate ≥ 200 Hz required? How do I set that sampling rate? If the actual sampling rate is lower than 200 Hz, is upsampling (interpolation) acceptable?
Thanks for any guidance!
Hi @user-ee0a9a , you can of course ask questions.
If you want pupil angular velocity, which is essentially the speed at which the eyeball is rotating, then you want to work with pupil_positions.csv. However, if your interest is microsaccades, then you would typically look at gaze_positions.csv. It comes down to whether you are interested in the optical axes or the visual axes.
Pupil Core does not record faster than 200 Hz.
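For the angular velocity part, here is a minimal sketch of one way to compute it from pupil_positions.csv, using the 3d eye model's optical-axis direction (the circle_3d_normal_* columns) rather than theta/phi directly. It assumes the pye3d/3d-detector export; check that these columns exist in your file and consider filtering by confidence as well.

```python
# Sketch: eyeball angular velocity (deg/s) from pupil_positions.csv.
import numpy as np
import pandas as pd

df = pd.read_csv("pupil_positions.csv")
df = df[df["method"].str.contains("3d")]       # keep 3d-detector rows only
df = df[df["eye_id"] == 0].sort_values("pupil_timestamp")  # one eye at a time

n = df[["circle_3d_normal_x", "circle_3d_normal_y", "circle_3d_normal_z"]].to_numpy()
t = df["pupil_timestamp"].to_numpy()

# Angle between consecutive optical-axis directions, divided by the sample
# interval, approximates the angular velocity.
dots = np.clip(np.sum(n[1:] * n[:-1], axis=1), -1.0, 1.0)
ang_deg = np.degrees(np.arccos(dots))
vel = ang_deg / np.diff(t)
print(f"median angular velocity: {np.median(vel):.1f} deg/s")
```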
Will a computer with this 13-inch model run properly when using Pupil Core? https://www.microsoft.com/ja-jp/surface/devices/surface-pro#tech-specs-uidee13
Hi @user-ee61dc ! You may run into compatibility issues with that ARM processor; you may want to look for an x86 chipset instead.
Thank you so much Miguel! Should microsaccade events be limited to the time periods in fixations.csv, since in theory they occur only during fixations?
Hi @user-ee0a9a , that sounds like a reasonable first approach.
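A small sketch of that filtering step, assuming a standard fixations.csv export (where duration is in milliseconds) and a hypothetical file of microsaccade candidates with a "timestamp" column in Pupil time:

```python
# Sketch: keep only microsaccade candidates whose timestamps fall inside
# fixation windows from fixations.csv.
import pandas as pd

fixations = pd.read_csv("fixations.csv")
candidates = pd.read_csv("microsaccade_candidates.csv")  # hypothetical file

starts = fixations["start_timestamp"]
ends = starts + fixations["duration"] / 1000.0  # ms -> s; check your export

def in_fixation(ts):
    return bool(((starts <= ts) & (ts <= ends)).any())

candidates = candidates[candidates["timestamp"].apply(in_fixation)]
print(f"{len(candidates)} candidates fall within fixation periods")
```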