Hi, I have received a folder from my colleague that contains the files shown in image 01.png. I wish to visualize the gaze on the world.mp4 video. I have checked your documentation, but there the gaze data is available in a .csv file with a world index; in my case that isn't available. Could you guide me on how I can get the gaze into the video?
Hi @user-26cf70 ! That screenshot looks like the raw data. You would want to open it with Pupil Player, where you can already visualize the gaze on top of the video or export the data to convenient CSV files.
hi @user-d407c1 Thanks for your reply. I am working on a project that requires visualizing gaze data over a video for further processing based on specific project requirements. To achieve this, I need the world index to properly align the gaze data with the video frames. However, the data I received is in its current format without a readily available world index. Is there a method or script to convert this data into a CSV file for easier processing?
Hello everyone,
I'm currently working on a project that involves head pose estimation using AprilTags, and I have a question regarding the Euler angle convention used in this context. Could someone please clarify which convention Pupil Labs applies for the yaw, pitch, and roll angles? Any details about the coordinate system or the specific order of rotations would be greatly appreciated.
Thank you in advance for your help!
If you're using rotation_x|y|z, these are actually a Rodrigues rotation vector. If you're using pitch|yaw|roll, then the coordinate system is relative to your origin marker.
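For illustration, here is a minimal sketch (not an official snippet) of converting a rotation_x|y|z Rodrigues vector into a rotation matrix and then into Euler angles using OpenCV and SciPy; the values and the "xyz" Euler order are placeholders, so pick the order your analysis needs.

import cv2
import numpy as np
from scipy.spatial.transform import Rotation

# Placeholder rotation_x, rotation_y, rotation_z values (e.g. one row of the head pose export)
rvec = np.array([0.01, -0.12, 0.03])
rotation_matrix, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
# "xyz" is only an example order; choose the convention you need
euler_deg = Rotation.from_matrix(rotation_matrix).as_euler("xyz", degrees=True)
print(euler_deg)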
Thank you for your answer
One way to achieve this is by loading that folder into Pupil Player and exporting the data. This will generate several CSV files, including gaze_positions.csv in the export folder.
With that, you could follow this tutorial for frame extraction.
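For reference, a minimal sketch of the kind of overlay that tutorial enables, assuming the standard Pupil Player export layout (exports/000/gaze_positions.csv alongside world.mp4); adjust the paths and drawing to your needs.

import cv2
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
cap = cv2.VideoCapture("world.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    for _, row in gaze[gaze["world_index"] == frame_idx].iterrows():
        x = int(row["norm_pos_x"] * w)
        y = int((1 - row["norm_pos_y"]) * h)  # norm_pos origin is bottom-left
        cv2.circle(frame, (x, y), 10, (0, 0, 255), 2)
    # ... display or write out the frame here ...
    frame_idx += 1
cap.release()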
Thank you, will do as you said.
Hello, I am working on a project using Pupil Labs, where we track participants' gaze as they complete a browser-based maze game. I have a question about calibration. Previously I have been incorporating (a) one large calibration before the experiment, consisting of around 30 calibration markers which I display on the screen covering the area of the maze in detail, plus (b) 3-4 markers before each trial to maintain calibration. I have then been using 2D gaze mapping post-recording in Pupil Player.
Due to time limits, we now need to get rid of the calibration markers before each trial. However, I ideally want to keep using 2D gaze mapping, because 3D has too low an accuracy for what we want to do. I am therefore planning to do (a) the same large initial calibration and (b) a slippage test before each trial (comparing live data to a location on the screen). I think this means I have to do the gaze mapping in real time now instead of post-recording, in order to do the slippage test. I have therefore been trying to do a calibration with Pupil Core that involves many markers on the screen (the 5-marker default is not fine-grained enough). Do you have any advice on how to do this?
I have managed to find a way to calibrate to different locations on the screen but not to detect markers. I have attached my code to do this.
Thank you!
Hi @user-94ac96! Thanks for sharing details about your experiment.
A 30-marker calibration sounds quite extensive. It's up to you, of course, but this approach could be time-consuming and challenging for participants, potentially leading to fatigue or loss of attention. A 9-point calibration using, for example, this choreography might offer a good balance of efficiency and accuracy. Feel free to try it out and share your thoughts!
From there, you could perform a validation every N trials (e.g., every 5 trials) to check for slippage and ensure accuracy remains reliable. Some experiments use a 5-point validation, or if you need something quicker, you can present a single-point validation, though that would only verify accuracy at that specific point.
For gaze mapping, yes, I'd definitely recommend using the 2D gaze mapping pipeline for screen-based tasks, as it provides the best accuracy.
To minimize slippage, you might consider using a chinrest if you're not already; it can help stabilize head movements and improve consistency.
For marker detection, you can find the marker detector implementation here:
Circle Detector Module
Let me know if you have any questions, and I hope your participants find the exit!
Hi so in earlier version of neon player, the downloaded CSV files world_index
Hi @user-15edb3 , it seems your message got cut off. Was there more text you wanted to send?
Thank you! This is very helpful! Is there any way to trigger the validation choreography through Pupil Remote (analogous to socket.send_string("C") but for validation rather than calibration)?
Yes, but the subject has to be validation.should_start; see this gist as an example.
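A minimal sketch of what that might look like over Pupil Remote, assuming the default port 50020 and the usual notification format (see the gist and docs for the authoritative version):

import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

def notify(notification):
    # Notifications are sent as "notify.<subject>" followed by a msgpack payload
    topic = "notify." + notification["subject"]
    remote.send_string(topic, flags=zmq.SNDMORE)
    remote.send(msgpack.dumps(notification, use_bin_type=True))
    return remote.recv_string()

notify({"subject": "validation.should_start"})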
Great thank you! Just one more question, how can I return the recorded accuracy from that validation?
I don't have examples at hand, but my colleague @user-f43a29 pointed me to where to look; it should be in the notification validation.data.
Thank you again. I can't seem to find the angular accuracy in validation.data (mine is returning a dict with keys subject, gazer_class_name, gazer_params, pupil_list, ref_list, timestamp, record, topic, and I can't seem to find it in any of these).
You are almost there; have a look at this part of the code where that dict is used to compute it.
Hello, what are the optimum working distances for the ocular camera? I thought I read something in the docs but I can't find it.
Hi @user-c68c98 , the Pupil Core headset typically places the eye cameras about ~30 mm from the pupils (see here: https://discord.com/channels/285728493612957698/285728493612957698/1052156091628265533). May I ask what your use case is? The optimal distance varies a bit from person to person. Generally, as long as the pupils are visible in the eye image and pupil detection confidence is high (ideally >0.9), then you are good to go.
Hi, what is the best way to calculate glance duration from a dataset where I have labeled gaze directions or glance proportions?
Hi @user-0f7e53 , by "glance duration", do you mean "fixation duration"? If so, do you have the timestamps of the gaze data that contributed to the fixations?
Hello! I am currently attempting to use the LSL plugin with Pupil Capture (Pupil Core); however, I am not able to. Specifically, this is the output the command prompt is giving. Thanks
Hi @user-af671d , you want to install version 1.16.2 of pylsl.
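(For example, pip install pylsl==1.16.2 in the Python environment that the plugin uses; which environment that is depends on whether you run Pupil Capture from source or from the bundled application.)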
Hi, we want to have a world camera recording without distortion, so that we can get a more accurate surface gaze point mapping and position estimation in Pupil Player. We are using a Pupil Cam1 ID2 with a resolution of 1280x720. Could someone help?
Hi @user-45f4b0 , the Surface Tracker internally does the mapping with undistorted coordinates.
If you want to represent the positions of the surfaces in undistorted camera space, then you might find this tutorial helpful.
May I ask if you are finding discrepancies in your surface mapped gaze data?
Hi @user-f43a29 , thanks for your information. Because the recorded world camera video is distorted, I don't know whether the position is accurate or not.
We are now trying to use OBS Studio to record the screen, and to use the surface gaze data to draw each gaze point on the OBS recording. But we are worried that the distortion shown in the Pupil Player video will impact the surface gaze estimation, because the QR codes are also distorted and we cannot make a flat surface.
Hi @user-45f4b0 , to clarify, the surface definition is the pink boundary shown in Pupil Capture. It will be taken as flat, and as mentioned, the AprilTag detection and subsequent mapping of gaze to surface are done internally in undistorted coordinates. So, the tags are not distorted when doing the mapping.
One tip is to position the corners of the pink surface on the corners of the AprilTags or the corners of your surface of interest.
If you follow the tutorial that I listed above, then you can overlay the undistorted surfaces on the undistorted video as confirmation of your setup. OBS should not generally be necessary.
You can also follow this tutorial to see how the mapped fixations fall on the undistorted surface. You can also run a validation setup with that, as extra confirmation.
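If it helps, a rough sketch of undistorting the exported world.mp4 with OpenCV; the camera matrix and distortion coefficients below are placeholders, so substitute your scene camera's actual intrinsics (e.g. from the camera intrinsics estimation):

import cv2
import numpy as np

K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])              # placeholder camera matrix
dist = np.array([-0.4, 0.2, 0.0, 0.0, 0.0])  # placeholder distortion coefficients

cap = cv2.VideoCapture("world.mp4")
ok, frame = cap.read()
while ok:
    undistorted = cv2.undistort(frame, K, dist)
    # ... draw the undistorted surface outline and gaze here, then write the frame out ...
    ok, frame = cap.read()
cap.release()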
We tried to use "Camera intrinsics estimation -- show undistorted image" in Pupil Capture, which undistorts the image very well. But the recorded video shown in Pupil Player is still distorted. We also want the exported world camera video to be undistorted, with the surface and gaze mapping, so we can present it. Is this possible? Thanks!
Hi @user-f43a29, as you can see in the figure, the screen cannot be fully captured by the pink boundary due to distortion. Because I need to map gaze to a surface, which is basically the full screen, it would be better to have a more accurate gaze estimation. Is there any solution? Thanks
Hello!
I have recorded a video of left and right eye frames using Neon glasses.
I have a set of frames of size 192 x 384. Then I take only the left eye frames.
So I have frames of shape 192 x 192.
Then I want to detect the pupil in 2D (pixels) and 3D (in the camera reference frame).
So I take it as:
pupil_2D = result_3d["ellipse"]["center"]
pupil_3D = result_3d["circle_3d"]["center"]
where I get result_3d as follows:
grayscale_array = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
result_2d = detector_2d.detect(grayscale_array)
result_2d["timestamp"] = timestamp
result_3d = detector_3d.update_and_detect(result_2d, grayscale_array)
When I draw the first point on the image it fits the pupil, but when I draw the projected pupil_3D (using camera_matrix) the result is not precise or stable.
On the attached image the green point is pupil_2D and the blue point is the projected pupil_3D.
Q1: Do you have a video that shows that more accurate 3D pupil tracking is possible with these detectors?
from pupil_detectors import Detector2D
from pye3d.detector_3d import CameraModel, Detector3D, DetectorMode
Q2: Do you have a minimal piece of code which demonstrates how I can get 3D pupil coordinates, so that I could try it on my data?
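Not an official answer, but a minimal end-to-end sketch of the detector pipeline described above; the focal length, video path, and 200 Hz frame rate are placeholders to replace with your own values. Note that the 3D eye model generally needs a series of frames with some eye movement before circle_3d stabilizes, which may explain the jitter.

import cv2
from pupil_detectors import Detector2D
from pye3d.detector_3d import CameraModel, Detector3D, DetectorMode

# Placeholder intrinsics: use your eye camera's focal length (in pixels)
camera = CameraModel(focal_length=280.0, resolution=(192, 192))
detector_2d = Detector2D()
detector_3d = Detector3D(camera=camera, long_term_mode=DetectorMode.blocking)

cap = cv2.VideoCapture("eye_video.mp4")  # placeholder path
fps = 200.0                              # assumed eye camera frame rate
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    left = frame[:, :192]  # left-eye half of a 192 x 384 frame
    gray = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    result_2d = detector_2d.detect(gray)
    result_2d["timestamp"] = frame_index / fps
    result_3d = detector_3d.update_and_detect(result_2d, gray)
    pupil_center_px = result_3d["ellipse"]["center"]    # 2D, pixels
    pupil_center_3d = result_3d["circle_3d"]["center"]  # 3D, eye camera frame (mm)
    frame_index += 1
cap.release()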
Greetings. I'm assisting a patient with multiple sclerosis complicated by a series of strokes.
I've tried off-the-shelf eye tracking hardware. Unfortunately, monitor-mounted eye trackers are problematic for those with limited mobility and those who are wheelchair-bound. From a positional standpoint there's a lot of variability relative to a monitor.
Muscle fatigue causing posture changes such as head droop or leaning torso
A person's position in the chair such as tilt or recline
Chair alignment to the monitor
Prescription glasses can interfere with tracking
With this constellation of variables, it's hard to provide reliable eye tracking.
Accessibility is so important for those that don't have much. I'm trying to do my part to bring a piece of the world to a few people in my life.
This brings me to Pupil Labs. So far the stack seems the most open compared to other products. I would appreciate insights into eye tracking hardware and software that could be leveraged for computer control.
Hi @user-39a67c! Thank you for your message. I'm sorry to hear you haven't been able to find a workable solution with monitor-mounted eye tracking. Indeed, the problem itself is highly nuanced and incredibly variable, given the wide range of assistive needs and challenges faced.
Wearable eye tracking, which is our area of expertise, can certainly overcome some challenges but also introduces new ones.
May I ask what sort of monitor-mounted eye tracking you have already tried? It's my understanding that assistive eye tracking systems work around some of what you describe through customised positioning of the monitor/eye tracker for the user.
Hi,
I am trying to do custom automated data loading and processing from Pupil Core recordings (gaze.pldata and fixation.pldata). Specifically, I am borrowing the function load_pldata_file() from this module: https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_modules.
I have observed the following two problems:
I have the following questions:
Thank you.
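For reference, a minimal sketch of using that helper, assuming pupil_src/shared_modules from the Pupil repository is on the Python path; load_pldata_file returns a named tuple of (data, timestamps, topics).

import sys
sys.path.append("pupil/pupil_src/shared_modules")  # adjust to your clone
from file_methods import load_pldata_file

gaze = load_pldata_file("/path/to/recording", "gaze")
for datum, ts, topic in zip(gaze.data, gaze.timestamps, gaze.topics):
    print(topic, ts, datum["norm_pos"])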
Hi @user-131620! Thanks for your description - it's helpful.
The first thing to note is that Pupil Core's eye cameras are free-running. So the sequencing of your data, i.e. left/right datums not perfectly alternating, is expected, as is the fact that you can get different numbers of samples overall.
You can read more about how we handle free-running cameras in this section of the docs: https://docs.pupil-labs.com/core/developer/#pupil-data-matching
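For illustration only (this is not the matching algorithm Pupil uses internally), a nearest-timestamp pairing of the two eyes could look like this, assuming ts0 and ts1 are sorted NumPy arrays of timestamps for eye0 and eye1:

import numpy as np

def nearest_indices(ts0, ts1):
    # For each eye0 timestamp, return the index of the closest eye1 timestamp
    idx = np.searchsorted(ts1, ts0)
    idx = np.clip(idx, 1, len(ts1) - 1)
    left, right = ts1[idx - 1], ts1[idx]
    idx -= ts0 - left < right - ts0
    return idx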
The second thing to note is that setting the timestamps to start at 0 could lead to unexpected timestamps. Is there a specific reason why you are trying to do this? It's usually not needed, because at recording start, the current Pupil Time is written into the info.player.json file of the recording under the start_time_synced_s key.
Hi Neil, thanks for the speedy response. You are correct, the actual timestamp is not needed as long as the relative ordering within each eye is preserved.
A potential implementation for my case would be to create separate lists for the left eye and right eye by reading the eye id field to distinguish between the two, and then truncate them to obtain the same number of data points. However, there is an important underlying assumption: that the relative order is correct and preserved from the .pldata files, and that the corresponding rows from the two tables happen simultaneously in time (within an acceptable range).
Do you think this is probable? Thanks
Hello, I would like to know the intrinsic matrix of the eye camera in Pupil Core.
Hi, @user-dbab7a - you can find those in the Pupil Core source code
thank you very much
Hello, I would like to ask why the scene camera (the camera that captures the world) does not perform camera distortion correction? Wouldn't not correcting it make the gaze error larger?
Hi @user-bd106e , may I ask how this was determined? Internally, the calibration & gaze mapping pipelines are working in undistorted coordinates.
Hi, I am running an experiment where I calibrate at the start, and then check accuracy periodically throughout. I am using the 2D gaze mapper. At the moment, it is set so that if accuracy > 2 (or more generally a threshold), calibration is triggered at that point in the experiment. Unfortunately, in piloting this has led to participants needing to calibrate a lot, often back to back. However, if I increase the threshold, I find that accuracy can be poor in the data due to slippage. Is there a way to automatically shift the calibration based on accuracy data throughout, so I can do this post hoc instead of the participant needing to recalibrate throughout? (Would it be accurate to do this -- is the calibration mapping linear?)
Hi @user-94ac96 , could you clarify what you mean by "automatically shifting calibration based on accuracy data"?
If you can share a recording folder with data@pupil-labs.com (e.g., via Google Drive), then we can take a closer look and provide better feedback.
Hi, I am using Pupil Core hardware with the Pupil Labs GitHub source code to run the SDK, and I am trying to communicate through another Python script to get gaze coordinates. Right now the problem I am facing is that the world camera feed is unable to open, as the uvc backend utility is throwing an init error. I posted this issue on GitHub one month back but am not getting any help. Can someone help me out with this problem?
Hi there, I have completed my eye-tracking experiment using the Pupil Core and want to obtain time-to-first-fixation (TTFF) data. What are the possible methods to achieve this? Thank you~
Hi @user-be0bae! I moved your message to the core channel since you're using Pupil Core. Could you maybe share more details on your setup? Have you defined areas of interest, and would you like to get the TTFF data for a specific area of interest?
Hello, I would like to know the rotation and translation matrices between the eye camera and the scene camera of the Pupil Core product. Where can I find these parameters?
Hi @user-dbab7a , may I ask what your ultimate goal is? You would need to modify the Pupil source code to obtain such values.
Hi community, I am using a Logitech C615 webcam as my world feed, but Pupil Capture is throwing an initialisation error in the log. What should I do? Does the Logitech C615 even support UVC? In my Windows default camera app the feed comes through, but not in Pupil Capture. Can anyone help me out?
@user-11d287 , this could be an indication that the camera is not UVC compatible, but you may want to check with Logitech about that.
Can you provide links to any budget webcams that the community has used with Pupil Capture?
I have compiled and found that the Logitech C615 is compatible with the uvc library, yet it cannot receive video frames during capture.
Hi Rob, I mean to measure the accuracy periodically to see how much the calibration has shifted, and then use this to update the calibration mapping by re-shifting the whole mapping through a translation. So: do a full calibration at the start, assume the only source of noise is translational drift, measure accuracy periodically to see how much translational drift there is, then shift the pre-existing calibration to account for this drift. I will try to share a folder with you asap.
Hi @user-94ac96 , regarding "shifting" the calibration by translating it, such an approach doesn't sound feasible. The calibration determines the transformation from pupil positions in the eye camera frames to gaze positions in the world camera frames. As a result, the difference between two calibrations is significantly more than a translation in general.
Once you've shared a recording for review, including the calibration process, then we are in a better position to provide feedback about how to improve calibration accuracy in your scenario.
Also, if I remember correctly, you compute validation accuracy from Network API notifications with custom code?
furthermore, is there any way I can access the homography matrix used for the gaze mapping?
Hi @user-94ac96 , do you mean the matrix that is used by the Surface Tracker Plugin to map gaze to a screen, for example?
Hi, I have received the replacement part for the left eye camera; it is connecting to the computer and data is coming through. I am running into a weird problem: a consistent offset in norm_pos between the left and right pupils.
Hi @user-131620! That is actually expected and normal. Firstly, from your snippet, you are subscribed to pupil (pupil.), not to gaze (gaze.).
Have a look at how the messages are organized here.
Then, you have to consider that one of the eye cameras in Pupil Core is upside down. You do not see it because the eye image is flipped in the software, mainly for aesthetics and ease of setup when positioning the camera, but you can turn this off in the eye camera settings.
However, we do not flip the metrics themselves, as this does not affect gaze calculations or any other final metric and would increase the CPU load without any clear benefit.
In this case, you are plotting the pupil norm position here, namely the position of the pupil in the eye camera image in normalized coordinates. Since one camera is upside down, you can clearly see that one pupil is around 0.8 and the other around 0.2, which is the inverse.
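If you just want to compare the two eyes in the same orientation, a tiny sketch (which eye is physically flipped depends on your headset, so check your own setup):

def unflip_norm_pos(norm_pos, flipped):
    # Mirror normalized coordinates of the physically flipped eye camera
    x, y = norm_pos
    return (1.0 - x, 1.0 - y) if flipped else (x, y)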
In this task, we are measuring the x and y position using the norm_pos attributes on the IPC backbone. The participant is instructed to stare at the middle of the screen, which we can guarantee, but as you can see, there is a consistent offset in the x and y positions, and it differs between the right and left camera. We make sure to perform calibration before each collection. Here is a snippet of the code we use to generate this result:
We tested across different participants and went through calibration; the error persists. Could you help us resolve this issue?
Hello! I was wondering what your intended way is to remove blinks from the gaze data. I figured not every blink timestamp actually has a corresponding timestamp in the gaze data, probably due to the irregular sampling that we observed, which makes it difficult to remove blinks. How would you suggest tackling that problem? Also, we were discussing thresholds for blinks and were wondering if you'd suggest keeping the defaults in the blink detection plugin for a design where we mainly want to observe micro-movements of the eyes and expect no larger eye movements during the experiment.
Hi @user-138bf5! Pupil Core uses a data matching algorithm to handle free-running eye cameras. So it's expected that gaze datum timestamps don't perfectly correspond with other timestamps, e.g. of blinks. Finding the closest match would be a valid approach. You can read more about the data matching algorithm here: https://docs.pupil-labs.com/core/developer/#pupil-data-matching.
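One possible approach (a sketch, assuming the blink intervals carry start_timestamp and end_timestamp as in the blinks.csv export, and gaze_ts is a sorted NumPy array of gaze timestamps):

import numpy as np

def remove_blinks(gaze_ts, gaze_data, blinks):
    # Drop every gaze sample whose timestamp falls inside a blink interval
    keep = np.ones(len(gaze_ts), dtype=bool)
    for b in blinks:
        i0 = np.searchsorted(gaze_ts, b["start_timestamp"], side="left")
        i1 = np.searchsorted(gaze_ts, b["end_timestamp"], side="right")
        keep[i0:i1] = False
    return [d for d, k in zip(gaze_data, keep) if k]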
As for blink detection thresholds, when pupil confidence is good throughout the recording, the default thresholds tend to work well. But you can also tweak them on a per-recording basis if you notice missed classifications. You can quality control the classifications by watching the eye video overlay. Read more about the blink detector here: https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector
Hello. Please what is the minimum XY deviation possible after calibration and validation using the 3D pipeline? The best I got today is 0.092 after calibration and 1.282 after validation
Hello, one of the eye cameras is not showing the eye image. I am getting the error: USB device not recognized.
It is showing a USB malfunction. Only the world window and eye 1 are working.