Hi, I'm trying to get a good calibration, but I can't get the angular accuracy below 3 degrees, and when I test it out it is too inaccurate. I'm currently on default settings and have just been doing the online calibration in Pupil Capture. Could someone with a good calibration technique please help?
Hi @user-357ae8 , if you could share a recording of the calibration process with [email removed] then we can give direct feedback. You can put it on Google Drive, for example.
Hello everyone,
Question Regarding Eye Tracking Core and Surface Plugin
In my experiment, I have defined two surfaces (two screens), each marked with four fully visible QR codes. The screens are positioned side by side.
However, I consistently encounter an issue where the surfaces are not stable: they frequently lose tracking, and only the surface boundaries appear (highlighted in pink). I've tried using the "Frozen Screen" option in the Surface plugin multiple times, but it doesn't seem to help reliably. I'm unsure if this is expected behavior or if there's something wrong with my setup.
Additionally, I have a second question: On one of the screens, is it possible to accurately determine whether a person is looking at a specific widget or region within the screen? If so, what’s the recommended way to achieve that?
Thank you!
Hi @user-ad361c , if you could share a Pupil Capture recording of the Surface Tracking with [email removed] then we can give direct feedback. You can put it on Google Drive, for example.
Hello everyone. I use Pupil Player to export the eye-tracking data. During the process, a message appears saying "failed: not enough markers with the defined origin marker id were collected", so I want to ask what this means. Did I miss a step or do something wrong?
With respect to your question about the recording, are you trying to use the Surface Tracking plugin in Pupil Player? In other words, do you want to know where they look on the tablet screen?
Hi @user-dd61bd , this could indicate that the AprilTags for your Surface were not detected, or perhaps there was some motion blur. If you could share the recording with us at [email removed] then we can take a look and provide more direct feedback. You can put it on Google Drive, for example.
@user-f43a29 Thank you for your responses. I am providing the Google Drive link that contains the original eye recording files, including the exported files. Please check it out. Thanks again for your help.
Hi @user-dd61bd , please note that you have shared this recording publicly. If this recording must stay private, then I'd recommend removing the link and sharing it with us at data@pupil-labs.com
@user-f43a29 Thank you for your reminder. I will send the video link to [email removed] Additionally, the Discord message has been deleted. The Surface Tracking plugin is currently disabled during the export process. I need to know the position of the subject's fixation on the tablet screen. Based on this, should I activate the Surface Tracking plugin? Thank you.
If you want to know where they are looking on the tablet screen, then using the Surface Tracking plugin with AprilTag markers is the standard approach.
@user-f43a29 Thank you for your suggestion. I will give it a try.
I have a Pupil Core device, and I want to know the detailed internal (intrinsic) parameters of the eye cameras. How can I find these parameters? Can you tell me?
Hi @user-bd106e , do you mean the intrinsics of the eye cameras? If so, then as documented here:
When a recording is started in Pupil Capture, the application saves the active camera intrinsics to the world.intrinsics, eye0.intrinsics, and eye1.intrinsics files within the recording.
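If you want to inspect those values yourself, a rough sketch along these lines should work. It assumes the .intrinsics files are msgpack-serialized dictionaries, as Pupil Capture writes them; the exact key names (e.g. camera_matrix, dist_coefs) can vary between versions, so treat this as a starting point rather than a definitive reader:

import msgpack

# Load the intrinsics saved alongside a recording (e.g. eye0.intrinsics).
# The file is assumed to be a single msgpack-encoded dict keyed by resolution.
with open("eye0.intrinsics", "rb") as f:
    intrinsics = msgpack.unpackb(f.read(), raw=False, strict_map_key=False)

for key, value in intrinsics.items():
    print(key, value)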
Hi @user-e8726c ! We have also replied by email. The license of pye3d has been changed to LGPLv3; you can find it in the repo.
Can anyone answer this question? Can pye3d be used for non-profit academic purposes? I can't find any clear license on its GitHub.
More specifically, I wanted to ask:
Can I use pye3d in academic publications?
May I include it in a public GitHub repository?
Oh nice! Convenient, we come from Mainz/Wiesbaden! 😮 Thanks for the heads-up!
Oh, it's 500 bucks?
Yes, depending on when one registers, that is roughly the price of registration. The organizing committee of ECVP is better positioned to provide info about pricing, though.
I asked by email: as a visitor it seems to be free. So we will see each other 🙌
Hm. I'm wondering if I even need to pay anything when I'm not planning to present anything. I will find that out.
NOOOUUU
@user-84387e This ☝️ is also of relevance to you, if I remember correctly (https://discord.com/channels/285728493612957698/446977689690177536/1326152860131524618).
YEEEEAASS
Hello all, I used the equipment, and when I export the CSV, not all the timestamps have the diameter for both eyes; sometimes there is only a value for the left or for the right eye. Any reason for this happening?
Hi @user-5c9c29 , can you share an example recording with [email removed]? Then we can provide more direct feedback. You can share it via Google Drive, for example.
It could be that the pupils were sometimes simply not detected.
So sometimes we have an intercalated detection of the left and right eyes for about 15 timestamps, and then it comes back to normal (normal being one row per timestamp with two diameters, one for the right eye and one for the left eye). I would like to know the possible causes, so I can understand whether I need to discard these parts of the data. Thank you for the answer.
Hi @user-5c9c29 , may I ask for more clarification about "intercalated detection"?
An effective way to determine what was going on is if we can see a screenshot or a video of how Pupil Core was imaging the eyes during those moments in the experiment. If you cannot provide the original recording, then that is okay. You can also send us a screenshot or video via DM.
If that is also not possible, then you may want to check pupil detection confidence and blinks at those timepoints.
Intercalated detection means that for the 1st timestamp I have the diameter for the left eye (and not for the right eye); for the 2nd timestamp I have the diameter for the right eye (and not for the left eye), and so on. This happens sometimes for about 15 timestamps and then comes back to normal.
Are you looking at the diameter_3d column of pupil_positions.csv and only using the "3d c++" method rows?
I'm looking at pupil_positions.csv, specifically at the pupil_timestamp and eye_id columns, and at both methods that I have.
Ok, I see.
As mentioned, if you can share a recording with us or send a photo of what the eye images from Pupil Core looked like during those moments, then we can better assist you.
Otherwise, you will want to check pupil detection confidence and blinks at those timepoints.
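For example, something along these lines could flag low-confidence samples and samples inside blink windows. This is a sketch, not the only way to do it: it assumes the standard Pupil Player export columns, that the Blink Detector plugin was enabled so blinks.csv exists, and uses 0.6 as an example confidence cutoff:

import pandas as pd

pupil = pd.read_csv("pupil_positions.csv")
blinks = pd.read_csv("blinks.csv")

# Flag samples whose pupil detection confidence is low (0.6 is just an example threshold)
pupil["low_confidence"] = pupil["confidence"] < 0.6

# Flag samples that fall inside any detected blink window
pupil["in_blink"] = False
for _, blink in blinks.iterrows():
    mask = pupil["pupil_timestamp"].between(blink["start_timestamp"], blink["end_timestamp"])
    pupil.loc[mask, "in_blink"] = True

print(pupil[["pupil_timestamp", "eye_id", "low_confidence", "in_blink"]].head())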
Pupil Camera Not Detected via cv2.VideoCapture (Index 0–100 All Failed) Hi, everyone, I'm using a single Pupil Labs eye camera connected to my Windows system. The device appears in Device Manager under "libusbK USB Devices", so I believe it's recognized by the system. However, when I try to access the camera using OpenCV's cv2.VideoCapture, none of the indices from 0 to 100 work. I looped through all of them, and none returned a valid frame — or even opened successfully. In Pupil Capture, the camera appears but doesn't stream any video.
Is this because the Pupil camera doesn't expose itself as a standard UVC device?
Do I need a specific SDK or driver to access it outside of Pupil software?
Any advice would be greatly appreciated.
Thanks!
Hi! I'm using the Network API in Python to receive data in real time (I adapted the example code from the documentation). However, I noticed that if I'm not requesting data at all times, the data gets "buffered", and the next time I request data (say, 10 s later), it first sends over the old data from 10 s ago. But I would like to get the latest eye data at the moment I request it. Is it possible to set this up?
Thanks!
Hi, @user-764356 - with the Network API, you need to consume messages as quickly as they are produced or you risk losing data. You may choose to do nothing with messages if you are not ready for some or do not need them, but you do need to pull every message out of the delivery queue
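One common pattern is to drain the subscription queue in a non-blocking loop right before you need a sample and keep only the newest message. A rough sketch, assuming Pupil Remote is on its default port 50020 and you want gaze data:

import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the SUB port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("gaze.")

def latest_gaze():
    """Pull every queued message and return only the newest gaze datum."""
    latest = None
    while True:
        try:
            topic, payload = sub.recv_multipart(flags=zmq.NOBLOCK)
        except zmq.Again:
            break  # queue is drained
        latest = msgpack.unpackb(payload, raw=False)
    return latest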
Gotcha, thanks so much for the info!
Hi, for my DIY Pupil Core the world camera is not recognized by the software. I uninstalled the driver and went through the troubleshooting process, but it still doesn't work.
Hi @user-f3c898 ! Is the camera you are using UVC compliant?
Hi, where can I find the API that lets me directly request the current user's scene view and the position of the gaze point in that image? Thanks
Hi @user-bd5142 👋 ! Just to confirm, are you looking to stream the scene camera and overlay gaze data using Pupil Core?
Pupil Core’s Network API uses ZeroMQ for communication. You can find an example here that shows how to stream the scene camera and visualize it.
To overlay gaze, you’ll also need to subscribe to the gaze stream. Here’s a basic example of how to filter and handle messages.
Once subscribed, you can simply plot the gaze data on top of the video stream.
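Roughly, the receiving side could look like this. This is a sketch rather than the official example: it assumes the Frame Publisher plugin is enabled with JPEG format, that Pupil Remote is on its default port 50020, and that gaze norm_pos uses a bottom-left origin (the 0.6 confidence cutoff is just an example):

import cv2
import numpy as np
import msgpack
import zmq

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("frame.world")  # scene video frames (requires the Frame Publisher plugin)
sub.subscribe("gaze.")        # gaze data

last_gaze = None
while True:
    parts = sub.recv_multipart()
    topic = parts[0].decode()
    if topic.startswith("gaze."):
        last_gaze = msgpack.unpackb(parts[1], raw=False)
    elif topic.startswith("frame.world"):
        # The third message frame carries the JPEG-encoded scene image
        img = cv2.imdecode(np.frombuffer(parts[2], dtype=np.uint8), cv2.IMREAD_COLOR)
        if last_gaze is not None and last_gaze.get("confidence", 0) > 0.6:
            x, y = last_gaze["norm_pos"]
            h, w = img.shape[:2]
            # norm_pos is normalized with the origin at the bottom-left, so flip y
            cv2.circle(img, (int(x * w), int((1 - y) * h)), 20, (0, 0, 255), 3)
        cv2.imshow("world + gaze", img)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break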
Thanks
Thanks! Additional question: how do you determine the FoV angle? When I run an experiment using Pupil Core, how do I calculate the angle? I believe there is no such value in the exported files. Is this calculated based on the real distance to the observed objects?
Hi, @user-45f4b0. I've moved your message to the appropriate channel for Core 🙂. The FoV can change depending on which resolution and lens you use. You can read the specific values in this section of the docs.
Hello, I'm working with a group that is trying to use the Pupil Core headset in an experiment that includes the use of mirrors, and was wondering if there is any information regarding working with mirrors, or if there could be possible discrepancies in the tracking.
Hi @user-c6d54b , could you describe a bit more what you mean by "use of mirrors" or share a photo? I'm not sure I understand the end goal.
we were wondering if the pupil tracker would have any issues reporting eye tracking with a participant looking at a mirror from an angle
Hi @user-c6d54b , sure, you can still use Pupil Core’s Surface Tracker to determine where they look on a mirror. Just put a few AprilTags around the edges.
thank you 🙏
Hi, may I ask where the origin of 'gaze_point_3d_x', 'gaze_point_3d_y', and 'gaze_point_3d_z' in gaze_positions.csv is located? Is the unit in meters?
Thank you so much.
Hello, @nmt I have successfully collected my data from Pupil Core. I have an annotation file that annotates 1 event. It also has the timestamps for those events. I want to find the same time points in my other files, like the fixation and gaze position files. I have a few questions regarding that.
What do the timestamps mean?
If I want to look at fixations 300 ms before and after the annotated event, how do I do that?
^ I'm in the same lab as her and was wondering what the time points represent, since they don't start from zero, and also how to find the sampling rate of the device.
Hi, @user-6c5c32 and @user-e6fb99 - timestamps in Core use what's known as "Pupil Time" - seconds measured on an independent clock with an arbitrary start, guaranteed to be monotonically increasing even if the system clock changes mid-recording.
The sampling rate is actually up to you - before you start a recording you can adjust the refresh rate of the eye cameras up to 200 Hz.
Can we check the sampling frequency post-recording? If yes, what would be the steps?
You can compute it: number_of_samples / (last_timestamp - first_timestamp)
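For example, from the export, a sketch assuming the standard pupil_positions.csv columns; note that you should filter by eye and method first, otherwise the 2d and 3d detector rows double the count:

import pandas as pd

df = pd.read_csv("pupil_positions.csv")

# Keep one eye and one detection method, otherwise samples are counted twice
eye0_3d = df[(df["eye_id"] == 0) & (df["method"].str.contains("3d"))]

duration = eye0_3d["pupil_timestamp"].max() - eye0_3d["pupil_timestamp"].min()
print("eye0 effective sampling rate ≈", len(eye0_3d) / duration, "Hz")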
what is the independent clock
Independent as in "separate from the system clock" - meaning it's not affected by changes to the system clock (like automatic NTP synchronization or manual user adjustments)
and is there any way to make it start at zero, or is that always going to happen?
You could simply subtract the first timestamp from all the others. This computed value will be measured in seconds starting at 0. Alternatively, you could convert to system time.
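For example, a sketch along these lines; the file paths are examples, and the key names are taken from a standard Pupil Player recording's info.player.json and may differ between versions:

import json
import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")

# Start the clock at zero
gaze["time_since_start"] = gaze["gaze_timestamp"] - gaze["gaze_timestamp"].iloc[0]

# Or convert Pupil Time to system (Unix epoch) time using the offset stored with the recording
with open("info.player.json") as f:
    info = json.load(f)
offset = info["start_time_system_s"] - info["start_time_synced_s"]
gaze["system_time"] = gaze["gaze_timestamp"] + offset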
Perfect thank you
Hello, when I look at the different exported files, the number of data points differs across files. gaze_positions.csv has a different number of data points than pupil_positions.csv. Which one of these should I use for calculating the sampling frequency? And why are they different? Thanks
Also, in the gaze_timestamp column there are sometimes identical values for more than one row. What does that mean?
Hi @user-e6fb99 , note that pupil_positions.csv contains rows for each eye, as described here https://discord.com/channels/285728493612957698/285728493612957698/1176712871892226170 , and for different methods (2d and 3d).
Okay. Thank you
Also, in the gaze_timestamp column there are sometimes identical values for more than one row. What does that mean?
Also the link is broken
Sorry! I just realised this was moved. The link above works fine; it explains how gaze datums are formed and timestamp-matched.
Thank you. I have one more question. After processing the surfaces in Pupil Player, it generated a folder called surfaces which has the gaze positions on surfaces 1, 2, and 3. In these files, how should I interpret the world_timestamps? Because in this file the world timestamps sometimes repeat 45 times.
It depends on the sampling rates you've set for the scene and eye cameras. If, for example, the scene camera runs at 30 Hz and the eye cameras at 120 Hz, the eye cameras are running at a higher rate, so you may end up with multiple gaze points for the same scene camera frame.
In the gaze_positions_on_surface_XXX.csv file, you'll note there is also a gaze_timestamp column that reflects this.
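If you need one value per scene frame, you can group by the repeated world timestamp, e.g. a sketch assuming the exported column names (the file name "Surface1" is just an example):

import pandas as pd

surf = pd.read_csv("surfaces/gaze_positions_on_surface_Surface1.csv")  # example file name

# All gaze samples mapped to a given scene frame share the same world_timestamp;
# averaging them gives one gaze position per scene frame.
per_frame = surf.groupby("world_timestamp")[["x_norm", "y_norm"]].mean().reset_index()
print(per_frame.head())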
45 was just an example number, but the timestamps do repeat more than once. How do I then interpret each time point?
Yes, there is a gaze timestamp, which differs from the world timestamp by a few decimals.
Is it best to first convert to system time? I am totally confused as to what each data point means, and it's crucial for me because I want to know when the participant left one surface and looked at the other.
The surf_positions_XXX.csv file indicates when a surface is detected in the scene camera. It provides the world_timestamp and the corresponding frame index where the surface was found.
The gaze_positions_on_surface_XXX.csv file contains gaze points — as reported in gaze_positions.csv — but mapped to the surface coordinate space. These points only appear when the surface was detected in the scene.
To analyze whether a gaze point falls on a surface, you’ll want to:
- Merge each gaze_positions_on_surface_XXX.csv file with gaze_positions.csv using the gaze_timestamp.
- For each file, add a new column named XXX to gaze_positions.csv.
- If gaze_timestamp is present in the surface file and the on_surf column is True, mark that row as True in the new column.
This will give you a per-surface boolean flag indicating whether the gaze point was on that surface at that timestamp in the gaze file.
If using Python, something like this:
import pandas as pd
import glob

# Load main gaze data
gaze_df = pd.read_csv("gaze_positions.csv")

# Iterate through all gaze_positions_on_surface_*.csv files
for surf_file in glob.glob("surfaces/gaze_positions_on_surface_*.csv"):
    surface_name = surf_file.split("gaze_positions_on_surface_")[-1].split(".csv")[0]
    surf_df = pd.read_csv(surf_file)

    # Create a new column with default False
    gaze_df[surface_name] = False

    # Keep only the surface rows where 'on_surf' is True
    valid_gaze = surf_df[surf_df["on_surf"] == True]
    valid_timestamps = set(valid_gaze["gaze_timestamp"])

    # Mark those timestamps as True in the new column
    gaze_df.loc[gaze_df["gaze_timestamp"].isin(valid_timestamps), surface_name] = True

    count_true = gaze_df[surface_name].sum()
    print(f"{surface_name}: {count_true} gaze samples on surface")

# Save updated gaze file
gaze_df.to_csv("gaze_positions_with_surface_flags.csv", index=False)
Wow, that makes sense. Now this gaze_positions.csv would have all the time points, which would correspond to the sampling frequency, and I'd know when each data point was generated, or the time between each data point, which would be 5 ms if the system was sampling at 200 Hz. Is my understanding correct?
That's correct. The sampling rate depends on the one set in the settings as well as the computational capacity of your PC; if it can't keep up, you might see frame drops.
Also note that the timestamp matching on the gaze signal might make it differ from the camera sampling rate, as noted in the link above.
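One way to check for drops post hoc is to look at the gaps between consecutive timestamps, e.g. a sketch where the 1.5× threshold is just an example:

import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")

# Inter-sample intervals in seconds
dt = gaze["gaze_timestamp"].diff().dropna()
expected = dt.median()

# Flag gaps noticeably longer than the typical interval (possible frame drops)
drops = dt[dt > 1.5 * expected]
print(f"median interval: {expected * 1000:.1f} ms, suspected drops: {len(drops)}")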
If so, then that solves the gaze position issue. But how do I do the same for the fixation positions?
Hi folks! I am wondering if it is possible to toggle the gaze_mapper setting under calibration through code? Usually it is done through the GUI, but it would be wonderful if we could automate it. I wonder if it can be done via the IPC backbone?
Hi @user-131620 , this is not implemented in Pupil Core's Network API.
@user-f43a29 Thanks. I assume this control is not exposed and can only be controlled via the GUI?
@user-131620 Correct, but if you absolutely need it, you can modify the source code.
Hello, a quick question. I made a recording using both eye0 and eye1. What I am interested in is eye fixations within AOIs. Post hoc, after I opened the recording, I found out that eye0 was problematic. Is there any way to export the recording using only info from Eye1 and ignore Eye0?
Hi @user-412dbc , sure. First, may I ask in what way is eye0 problematic?
Otherwise, did you record everything, including the calibration choreography?
Hello, there is an issue with the connection cable of our Pupil Core device, and the camera feed is not displaying. Is there anyone who can provide a repair service or assistance?
Hi! We have followed up on the ticket
Hi everyone! I have a question regarding the Pupil Labs eye tracker. I'm currently using it to track, in real time, the words on the screen which the user is looking at in the World view. However, I've noticed that the gaze or heat map often has a lot of jitter/noise — it moves left, right, up, and down rapidly, making it hard to stay focused on a single word for a longer time. Are there recommended settings to reduce this jitter (e.g., smoothing or filtering in the default configuration)? Or is it possible to implement custom code to make the gaze data more stable/accurate? Thanks for any advice!
Hi @user-13e552 , may I ask for more clarification about "making it hard to stay focused on a single word"? Would you be able to provide a screen capture/video of what you mean? You can share it via DM or email [email removed] if you prefer.
Hi! I had a session where the two eye cameras went out of sync while running. Since then there has been a slight delay between the recordings of the two eyes, and the world view always shows two gaze points connected by a red line. Are accurate gaze positions still recoverable? Thanks!
Hi @user-764356 , would you be able to share this recording with us at [email removed] so we can take a closer look?