Hello! I have a question about the norm space in the gaze datum format. I'm not sure which position is normalized. Could you give me a brief description? Thanks!
Hey @user-b02f36. This section of the docs will be useful for you. It has an overview of Core's coordinate systems: https://docs.pupil-labs.com/core/terminology/#coordinate-system
Hello! I am analyzing pupil data from a Pupil Core recording session. I ran a post hoc blink detection analysis that returned a CSV file with the starting and ending timestamps of the blinks. I would now like to extract the frames from eye0.mp4 to observe the blinks. There is an eye0_timestamps.npy file to link with the blink timestamps (I suppose), but I don't know how to link the eye0 timestamps to eye0.mp4, since the camera's sampling rate is much higher. Should I downsample the eye0 video to match the eye0_timestamps sampling rate? Thank you.
Hi. How can I record audio when using Pupil Capture? I found a setting called 'audio mode' and switched it to 'sound only', but in the recording folder I don't see any audio files. Could you please help me?
Hi @user-cbde4a ! Please see this message https://discord.com/channels/285728493612957698/285728493612957698/1219221290972479498 about Pupil Core and audio.
Hello! Is there a way to export gaze_positions.csv and pupil_positions.csv without going through the Pupil Player UI, e.g. via a Python script?
Hi @user-ea64b5! Are you loading eye0_timestamps.npy as shown here?
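For reference, here is a minimal sketch of how the linkage can work, assuming the standard Core recording layout where eye0_timestamps.npy holds one Pupil Time timestamp (in seconds) per frame of eye0.mp4, so no downsampling is needed. The blink CSV name and its "start_timestamp"/"end_timestamp" columns are assumptions based on your description:

```python
import cv2
import numpy as np
import pandas as pd

ts = np.load("eye0_timestamps.npy")       # one timestamp per frame of eye0.mp4
blinks = pd.read_csv("blinks.csv")        # hypothetical blink export

cap = cv2.VideoCapture("eye0.mp4")
for i, blink in blinks.iterrows():
    # indices of the eye frames that fall inside this blink
    start = np.searchsorted(ts, blink["start_timestamp"])
    end = np.searchsorted(ts, blink["end_timestamp"])
    for frame_idx in range(start, end):
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(f"blink{i:03d}_frame{frame_idx}.png", frame)
cap.release()
```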
Will anyone help with a question I have, or do I need to make a ticket for it?
Hi there! You can share your questions here, and if it's something that relates to hardware issues, we might suggest creating a ticket to streamline the communication. What's your question?
Find my reply at https://discord.com/channels/285728493612957698/285728635267186688/1224834965108293853
Hi, when I define the surface size, what is the unit of the width and height? Is it meters, or just a ratio? I don't know if it would affect the coordinate mapping process.
Hi @user-cbde4a! The Surface Tracker docs contain more details. You want to set it to the real-world size of the surface, in meters, centimeters, feet, etc., whichever you prefer. It is not a ratio.
Hello. I'm attempting to use LSL LabRecorder to sync audio input with pupil data. Is there any advice for how to read/interpret the LabRecorder output? Alternatively, does anybody have another solution for syncing pupil data with audio?
XDF Browser provides a simple but nice GUI for inspecting LabRecorder output, and it's a good tool to have on hand, but I don't think it'll be that helpful for audio. I'm not aware of any other existing tools that would be helpful here - you may have to roll your own solution.
Having said that, I'm not an LSL expert. There are certainly LSL users among us who will be more familiar with the tooling than me, so I'll also be watching for replies
This Python script can help extract the audio from a .xdf file. Put it in the same directory as your .xdf file and make sure your recording sample rate is set to 48000 Hz (if not, just change the corresponding value in the script, but I would recommend using 48000 Hz, which is optimal for syncing with your video later).
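For anyone without that script at hand, here is a rough sketch of the same idea using pyxdf and scipy; the stream name "Audio" and the float sample format are assumptions you would adapt to your own setup:

```python
import numpy as np
import pyxdf
from scipy.io import wavfile

SAMPLE_RATE = 48000     # change if your LSL audio stream used a different rate
STREAM_NAME = "Audio"   # hypothetical name of the audio stream in the XDF file

streams, _ = pyxdf.load_xdf("recording.xdf")
audio = next(s for s in streams if s["info"]["name"][0] == STREAM_NAME)

samples = np.asarray(audio["time_series"], dtype=np.float32)
# convert float samples (-1..1) to 16-bit PCM and write a WAV file
wavfile.write("audio.wav", SAMPLE_RATE, (samples * 32767).astype(np.int16))
```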
Hi there! I'm analyzing pupil data from Pupil Core recorded using both the UI and the Python API. My current settings for both eyes are (192, 192) resolution and a 120 Hz frame rate. At that frame rate, I would expect to get a sample every ~8 milliseconds, so when I calculate the difference between timestamps, whether I'm using the API or the files created by Pupil Capture (for instance, 'eye0_timestamps.npy'), the results should be roughly 8 ms consistently.
Overall, the results seem to be aligned with my expectations, but for some reason every 40 samples or so I get a 12ms difference, and then it goes back to 8ms. Does anyone know the reason for this or has experienced this behavior before? See the chart below. Any help would be greatly appreciated!
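For reference, the intervals were computed roughly like this (a minimal sketch, assuming eye0_timestamps.npy holds one Pupil Time timestamp, in seconds, per eye frame):

```python
import numpy as np

ts = np.load("eye0_timestamps.npy")
dt_ms = np.diff(ts) * 1000.0

print(f"median interval: {np.median(dt_ms):.2f} ms")
print(f"intervals above 10 ms: {(dt_ms > 10).sum()} of {dt_ms.size}")
```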
Hi @user-4f63f1 , after consulting with my colleague, @user-cdcab0 , this looks like the influence of each scene camera frame being processed as it comes in. It's stealing a bit of system resources from the eye camera stream in that moment. From a quick test I did, these spikes seem more likely to happen on less powerful systems and they tend to go away on more powerful systems. Can I ask what your system specifications are and what OS you were using? And what is your program doing during the recording?
I'm very curious why this happens in my Pupil Cloud.
Hi @user-359513! I have responded in the neon channel.
Yes, thank you!
Hello, I'm using LSL with the pupil core headset. I have negative pupil diameter (3d) values and I'm not sure why. How can the diameter value be negative? Thanks.
Hi @user-2c86b2 , may I ask if you froze the 3d eye model after it was well fit? Also, are these negative estimates happening when the pupil is occluded, such as during a blink, or when pupil confidence is low? Or does it happen when the eye is turned to an extreme side angle?
Hi all. Is there currently any guidance or tutorial on calculating normalized fixation duration, i.e., fixation duration normalized to the area of the AOI?
Also, in https://docs.pupil-labs.com/core/terminology/, what is meant by 'surface-normalized coordinates'?
The coordinates are normalized based on the width and height of a surface. The bottom-left corner is (0,0) and the top-right one is (1,1). In other words, the surface is treated as a 1 x 1 square and the gaze position is expressed relative to it.
Thanks for the support @user-cbde4a! Always great to see users helping users.
Hi @user-3c6ee7! To provide some additional info/context, you can check out the Surface Tracker plugin documentation. After setting the surface size and exporting the data, the plugin will provide you with surface-normalized coordinates (in the 0-1 range relative to the surface). If you want the coordinates in the original units of the surface (e.g., meters or feet), you can use the "x_scaled" and "y_scaled" columns in the exported "gaze_positions_on_surface_<surface_name>.csv" file, which are the normalized coordinates scaled by the size you provided.
Regarding your analysis question, it sounds like you want the ratio of fixation duration to AOI area, so something in units of seconds/m^2, correct? I recommend checking out the pupil-tutorials repo for examples of how to work with the data. In your case, this tutorial for loading surface data and this other one that shows how to load fixation data should be helpful.
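To make the units concrete, here is a rough sketch (not a definitive recipe) of a seconds-per-m^2 computation. It assumes a surface size entered in meters, that the AOI is the whole surface, and a fixations_on_surface export where "duration" is in milliseconds and each fixation id can appear on several rows; double-check the column names against your own file:

```python
import pandas as pd

SURFACE_W, SURFACE_H = 0.4, 0.3          # surface size in meters, as entered in the plugin
aoi_area_m2 = SURFACE_W * SURFACE_H      # here the AOI is the whole surface

fix = pd.read_csv("fixations_on_surface_Screen.csv")   # hypothetical surface name
fix = fix[fix["on_surf"] == True]

# the duration value is repeated on every row of a fixation, so take it once per id
per_fixation_s = fix.groupby("fixation_id")["duration"].first() / 1000.0
normalized = per_fixation_s.sum() / aoi_area_m2
print(f"normalized fixation duration: {normalized:.2f} s/m^2")
```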
Hi. A conceptual question: In the first 4 rows, the fixation position changes a little bit, but they have the same fixation id (6). When I calculate durations, should I add up all the durations? E.g., the duration of fixation 6 would be 297+297+297+297 ms, about 1.2 seconds.
Hi @user-cbde4a, my colleague @user-d407c1 has provided a great explanation of why this occurs here. I recommend checking it out.
Hi, everyone. I want to know how Pupil Capture calibrates the cameras and what algorithm is used. Where can I find the relevant information?
Hi @user-b02f36! Do you mean how Pupil Core can estimate the intrinsics of the cameras? Have a look at the Camera Intrinsics Estimation plugin and its code.
Hello, I have some questions. First, what are the rotation matrix and translation vector between the scene camera (when it is looking straight ahead) and the eye camera for Pupil Core? Secondly, what are the intrinsic parameters of the eye camera?
Hi @user-873c0a! Could you specify which intrinsic parameters you're interested in? If you're looking for the camera distortion coefficients or the camera matrix, you can find some sensible defaults here.
For translation and rotation: adjusting the eye cameras so the pupil is clearly visible is often required to get quality data with Pupil Core, but this also means that the relationship between the eye cameras and the scene camera can change. When using the 3D pipeline, a "bundle adjustment" is performed to determine the physical relationship between the cameras.
You can find more details in this discussion and see the implementation here
Hi, we're having some persistent issues with the right-hand camera popping off the frame. This has been happening since first use and makes it very hard to adjust the right-hand side of the glasses. Do you have any advice on how to secure the camera to the frame to avoid this?
Hi @user-f76a69! Could you create a ticket in the troubleshooting channel and share the Order ID there?
Additionally, it would be very useful if you could perform these simple tests:
First test: Carefully unplug the eye cameras and swap them. While Pupil Core is connected to the computer, open Pupil Capture and check whether the issue stays with the same camera or with that side of the cable tree.
Second test: Carefully unplug the eye cameras. Check if the metallic pins are all straight. If no bent pin is found, plug the JST connector back in, making sure it's fully connected.
Hi @user-f76a69! When you say pop off, do you mean that the camera physically pops off its mount? Actually, could you share a video of this so that I can better understand and provide some concrete feedback?
Hi! I'm using Pupil Capture to obtain gaze and pupil data from an eye tracking device I built myself with three cameras. I have now estimated the intrinsic parameters, extrinsic parameters, and distortion coefficients of my cameras in MATLAB. I want to know whether these parameters are useful, because I want to make the gaze point data from Pupil Capture more accurate. And when should I use them: after collecting all the data, or during the recording? Here is a picture of my eye tracking device.
Hi @user-b02f36 , always great to see people building on the Pupil Core ecosystem!
Do you mean that you are using these cameras with Pupil Capture and you want to enter the camera parameter values into Pupil Capture to improve the gaze estimation data?
You might find the Camera Intrinsics Estimation section of the Docs useful.
Hi, all. I'm using Pupil Core to record gaze movements during reading on a screen, which is similar to "web browsing" because participants may scroll up and down while reading. I found the article Map Gaze Onto Dynamic Screen Content, but it is for Neon and Pupil Invisible. Is there any way to achieve something similar with Pupil Core? Thanks!
Hey @user-cbde4a. That guide uses a markerless tracking approach that runs in Cloud, so it's not compatible with Core recordings. But you could do something similar using the Surface Tracker plugin in Pupil Capture/Player (more info in the docs). The code would need some adaptation, though.
Hello, I am using the dual-monocular plugin to get the gaze positions of the two eyes separately. We need to do that because we plan to record eye movements of patients with strabismus and amblyopia who may show different oculomotor performance in the two eyes.
If the gaze_point3d_z is computed as the intersection of the gaze positions of the two eyes using the binocular gaze mapper, I don't understand how this variable is computed using the monocular plugin and how we can have different gaze_point3d_z for the two eyes. Can anyone help with this?
Thanks in advance!
Hi @user-956ccc. The short answer is you can't get gaze point 3d from the monocular gazer. That said, depending on what exactly you're trying to achieve, that might not be an issue. What metrics do you want to compute to define oculomotor performance with your patients?
Check out the theta and phi variables.
Thank you. Then, would we lose the ability to "link" the degrees of visual angle (theta and phi) in the pupil_positions file to objects in the scene? We are using apriltags to define surfaces in the scene in case they are useful.
Yes, that's correct. The Surface Tracker will map 2D gaze points from the scene camera to surface coordinates.
Hello, is there any way to map gaze onto a template image manually via point-and-click (e.g., a plugin for Pupil Player, or any other data analysis tools)? Also, would there be any way to draw AOIs manually? Thank you in advance.
Hey @user-97fc54. We don't have a point-and-click manual mapper for Core. However, you can use the annotation plugin in Pupil Player for manual coding. It's fast and lightweight. You can assign hotkeys and skip through frame-by-frame or fixation-by-fixation.
Alternatively, if the Core data were saved directly to an offline computer, would there be any possible way to upload those data to the Pupil Cloud software for analysis?
Pupil Core is not compatible with Cloud, unfortunately.
Alternatively, you can use the Surface Tracker plugin. That enables AOI analysis.
Hi Pupil team, I have a question regarding start_timestamp in the fixation file. I want to calculate the time differences between fixations, and I wonder what unit Pupil Time counts forward in. Can I just subtract the previous fixation's start time from the start time of the next one to directly get the time difference? Thank you!
Yes, you can. Check out the timing documentation: https://docs.pupil-labs.com/core/terminology/#timestamps
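For example, a quick sketch with pandas, assuming the standard fixations.csv export (timestamps are in seconds of Pupil Time):

```python
import pandas as pd

fixations = pd.read_csv("fixations.csv")
# seconds between the start of each fixation and the start of the previous one
inter_fixation_s = fixations["start_timestamp"].diff()
print(inter_fixation_s.describe())
```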
Hello Pupil Team, I've got a recording which I exported from Pupil Cloud to Pupil Player to do some offset correction in a video. That worked fine, but now I'm trying to import the video from Player back into Cloud (to use the enrichment function). Is there a way to do so? If not, is it possible to generate a heatmap in Pupil Player? And I have a second question: is there any way to use only specific sections of a video in a heatmap, or is there a way to cut the video? Looking forward to your answer.
Hi @user-964cca! Are you using Pupil Invisible? If so, let's continue the discussion in the invisible channel.
Hey Pupil Labs Team! How can I export current date/time data from Pupil Core?
Hi @user-870276! Have you seen Convert Pupil Time to System Time? Is that what you are aiming for?
@user-d407c1 Thank you so much!
Hey! Is it possible to wear glasses while using the Core? If it's possible, how much worse would the gaze position accuracy be?
Hi @user-585ca3 - It is sometimes possible to put on the Pupil Core headset first, then glasses on top. The eye cameras need to be adjusted to capture the eye region from below the glasses frames. This is not an ideal condition but might work for some people. Ultimately, it depends on physiology and eyeglasses size/shape.
Hi, I am using the Neon eye tracker. Today, when I finished recording and clicked on the video, I got the message: "no scene video, this recording contains no scene video". What is the solution to this issue, please?
Hi @user-5be4bb - Can you please open a ticket in our troubleshooting channel? Please select neon as the product. Then, we can assist you with debugging steps in a private chat.
Hi, I have a problem getting the heatmap on my reference image. In the sidebar under Tools I can only edit Areas of Interest, but there is no option for the heatmap like the one shown in the example video you provided. Can you help me?
Hi @user-861e5b! It sounds like you are using Pupil Cloud, which is not compatible with Pupil Core. Which eye tracker do you have, Neon or Invisible? Could we move the conversation to the right channel? That said, in the Project view, if you go to Visualisations on the left side and create a new visualisation, you should be able to create a heatmap.
thank you, I moved over to that channel
Hi everyone, I was wondering if anyone has recommendations on how to convert from 'pupil time' to 'system time' at scale? I'm working on a project with loads of data whose timestamps need to be converted. Thank you in advance!
As a first question, have you been able to run this code for one recording: https://docs.pupil-labs.com/core/developer/#convert-pupil-time-to-system-time ?
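If that snippet works for one recording, scaling up is mostly a matter of looping over recording folders and vectorizing with numpy. A rough sketch, assuming each folder contains the standard info.player.json with "start_time_system_s" and "start_time_synced_s", and using gaze_timestamps.npy purely as an example of data to convert:

```python
import json
from pathlib import Path
import numpy as np

def pupil_to_system(recording_dir, pupil_timestamps):
    """Convert an array of Pupil Time stamps (s) to system time (Unix epoch, s)."""
    info = json.loads((Path(recording_dir) / "info.player.json").read_text())
    offset = info["start_time_system_s"] - info["start_time_synced_s"]
    return np.asarray(pupil_timestamps) + offset

# example: convert the gaze timestamps of every recording under a parent folder
for rec in Path("/path/to/recordings").iterdir():
    ts_file = rec / "gaze_timestamps.npy"
    if ts_file.exists():
        system_ts = pupil_to_system(rec, np.load(ts_file))
        np.save(rec / "gaze_timestamps_unix.npy", system_ts)
```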
Hi. I'm using this repo for preprocessing. But here's a bug:
Traceback (most recent call last):
File "/home/willow/Project/202401-low-vision/codes/bdd-driveratt/test.py", line 1, in <module>
from eye_tracking.preprocessing.functions.et_preprocess import preprocess_et
File "/home/willow/Project/202401-low-vision/codes/bdd-driveratt/eye_tracking/preprocessing/functions/et_preprocess.py", line 14, in <module>
from .detect_events import make_blinks, make_saccades, make_fixations
File "/home/willow/Project/202401-low-vision/codes/bdd-driveratt/eye_tracking/preprocessing/functions/detect_events.py", line 6, in <module>
from . import detect_fixations
File "/home/willow/Project/202401-low-vision/codes/bdd-driveratt/eye_tracking/preprocessing/functions/detect_fixations.py", line 4, in <module>
from eye_tracking.lib.pupil.pupil_src.shared_modules import fixation_detector
File "/home/willow/Project/202401-low-vision/codes/bdd-driveratt/eye_tracking/lib/pupil/pupil_src/shared_modules/fixation_detector.py", line 46, in <module>
import background_helper as bh
ModuleNotFoundError: No module named 'background_helper'
Although this bug can be fixed by changing the import to "from . import background_helper", many scripts under the shared_modules folder import each other, so it is laborious to modify every import line in the code. Is there another way to fix this?
Hi @user-cbde4a! Have you tried reaching out to the author of that repository already?
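In the meantime, one possible workaround (a sketch, assuming the repo vendors Pupil's shared_modules folder unmodified) is to put that folder on sys.path before anything imports from it, so absolute imports like "import background_helper" resolve without editing every file:

```python
import sys
from pathlib import Path

# adjust this to wherever the vendored shared_modules folder lives in the repo
SHARED_MODULES = Path(__file__).parent / "eye_tracking/lib/pupil/pupil_src/shared_modules"
sys.path.insert(0, str(SHARED_MODULES))

# now the absolute imports inside shared_modules can be resolved
from eye_tracking.preprocessing.functions.et_preprocess import preprocess_et
```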
Is Core the only open-source product that Pupil Labs has?
Hi @user-585ca3! At Pupil Labs, we advocate for open source. While everything is open source when using Pupil Core, our other products also strive for openness. The neural network remains proprietary, but many other components are open source, including the fixation and blink detectors, the Neon Player software, our real-time API, all code from our Alpha Lab tutorials, the geometries, and much more.
Hello, I am using Pupil Labs (Core) and I am currently trying to set up an experiment where I will present multiple visual stimuli. The whole experiment is written in Presentation by NBS, and I am using the Presentation Python interface to run it in Python so I can incorporate the time synchronization, start recording, and stop recording code. My question is whether there is any other way to send "start recording" and "stop recording" messages (apart from using the Python code provided). Is it possible to send start and stop recording messages to the eye tracker through a QR code or something similar that I can include in the visual stimuli that will be presented?
Thank you in advance.
Hi @user-412dbc! The QR code feature that you propose is not available in Pupil Capture, at least not via the world camera, but you could potentially write a custom plugin that does it. Otherwise, the Network API (i.e., the ZMQ-Python route) is the standard approach. You could also consider our Lab Streaming Layer integration.
May I ask for more clarification about why you are searching for an alternative?
Also, you may want to consider using annotations for marking the start/end of trials, rather than saving a separate recording for each stimulus. See our Best Practices for more info, in particular the "Record Everything" section
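For completeness, here is a minimal sketch of the Network API route for start/stop recording (port 50020 is Pupil Capture's default Pupil Remote port; adjust if you changed it):

```python
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")   # Pupil Remote's default address

# start a recording (optionally with a session name), then stop it later
pupil_remote.send_string("R my_session")
print(pupil_remote.recv_string())

# ... present your stimuli here ...

pupil_remote.send_string("r")
print(pupil_remote.recv_string())
```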
Hello Rob and thanks for your reply.
I will try to be as clear as I can, as I am relatively new to the domain.
The reason I am searching for an alternative is that my whole experiment is written in Presentation by Neurobehavioural Systems and not in Python. However, I needed to use Python to incorporate the start and stop recording parts for the eye tracker.
Therefore, I used the Presentation Python Interface (PresPy), which lets me run my experiment from Python (see below):
import PresPy
pc = PresPy.Presentation_control()
pc.open_experiment(r"C:\Users[...]\Food-Low-High_Mixed (complete)")
scen = pc.run_experiment(0)
del pc
With the code above I could incorporate the eye tracking (including the start/stop recording) in my experiment. However, with PresPy I do not have access to the experiment's trials/stimuli, and thus I cannot send annotations before each trial.
The problems that arise are the following: 1) I will not be able to know the number of each trial displayed (there will be 30 trials in every block), and 2) I will not be able to determine the area of interest (AOI) where the participant's initial fixation occurs for each trial (each trial will feature two ROIs displayed on the screen).
Hopefully I have explained clearly enough why the above problems exist.
Hi, I've read this message about fixations on a surface, but I still have one problem: I want to get a list of fixations with their times and positions (x, y, t_start, t_end). How should I convert the fixations-on-surface data? Currently I treat the rows with the same id as the same fixation and use the average coordinates as the position. Is that a proper way to do it? https://discord.com/channels/285728493612957698/285728493612957698/1186984396805390347
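Here is a rough sketch of that aggregation with pandas. It assumes the standard fixations_on_surface export, where each fixation id can appear on several rows (one per world frame) and "duration" is the total fixation duration in milliseconds; the surface name "Screen" is hypothetical. Averaging the positions across rows is a reasonable choice when they differ slightly between detections:

```python
import pandas as pd

fix = pd.read_csv("fixations_on_surface_Screen.csv")
fix = fix[fix["on_surf"] == True]

per_fix = fix.groupby("fixation_id").agg(
    x=("norm_pos_x", "mean"),
    y=("norm_pos_y", "mean"),
    t_start=("start_timestamp", "first"),
    duration_ms=("duration", "first"),
)
per_fix["t_end"] = per_fix["t_start"] + per_fix["duration_ms"] / 1000.0
print(per_fix[["x", "y", "t_start", "t_end"]])
```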
Hi @user-412dbc , my apologies, I thought you were using a Python interface from within Presentation to send messages to Pupil Capture. I see now.
Does your lab potentially use the Lab Streaming Layer, or would they be open to using it? It is a robust way to synchronize multiple data streams from different systems. I see that Presentation also offers an LSL integration, with events included in the stream. You could run a test with it and our LSL relay to see if everything works, as that would cut out a number of messy details and some workload.
If that is not an option, my colleague, @user-cdcab0 , has also pointed out to me that Presentation offers a Network Interface. You could use this to communicate between Presentation and Python, similar to how you use our Network API to communicate between Python and Pupil Capture. Your Python script would then become something of a central routing point between Presentation and Pupil Capture.
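A very rough sketch of that routing idea; everything on the Presentation side here (port 6000, plain-text "start"/"stop" commands) is an assumption you would replace with whatever your scenario actually sends over the Network Interface:

```python
import socket
import zmq

# connection to Pupil Capture's Pupil Remote (default port)
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# listen for simple text commands coming from Presentation
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 6000))    # hypothetical port configured in Presentation
server.listen(1)
conn, _ = server.accept()

while True:
    msg = conn.recv(1024).decode().strip()
    if not msg:
        break
    if msg == "start":
        pupil_remote.send_string("R")    # start recording in Pupil Capture
        pupil_remote.recv_string()
    elif msg == "stop":
        pupil_remote.send_string("r")    # stop recording
        pupil_remote.recv_string()
        break
```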
Hello, I wanted to know if there is any way to use Pupil Core wirelessly. Do any 3rd-party adapters work?
Hi @user-4ba9c4, the community here has had success with different ways to use Core wirelessly on tablets/PCs in a backpack. Please check out https://discord.com/channels/285728493612957698/285728493612957698/1129316201424748684 and https://discord.com/channels/285728493612957698/285728493612957698/1042712204140630117. Do note that tablets should meet the minimum requirements: https://discord.com/channels/285728493612957698/285728493612957698/968160918770958386
Alternatively, you can also stream data from Core wirelessly: https://discord.com/channels/285728493612957698/285728493612957698/1101035840374841435.
Hello, I am using Core and Pupil Mobile in an experiment. In the previous experiment, the post-hoc calibration worked perfectly, as the gaze landed on the calibration marker. But today something happened and the gaze deviates far from the calibration marker. We tried adjusting the world camera and the eye cameras, but this didn't work. When we do the test in Pupil Capture (PC), it works like a charm. What went wrong in the experiment?
Picture attached. As you can see, both eyes are visible in the eye cameras and the confidence levels are good. Thank you!
Hi @user-75ea17! I just want to point out that Pupil Mobile is deprecated, but could you send a recording via DM or email us at [email removed] with a link?
Also, looking at your image on the left for Pupil Capture, I see that the 3D eye models are not well fit. If you are using the 3D pupil detection pipeline, then make sure that the 3D eye models are well fit first before proceeding to calibration. You can find details on the process here.
Hello everyone. I have started pre-testing my experiment, which uses Pupil Invisible (the protocol is: participants stare at a white wall while recalling personal memories). When looking at the data, even though the circle around the eye is dark blue the majority of the time, suggesting good detection, I get very, very small pupil diameter values (around 1 mm, which seems physiologically impossible). Do you know where the problem could come from, please? I am really puzzled by this. Thanks in advance for your help.
Hi @user-ee70bf, could you share some screenshots of the eye images so that we can evaluate them? Have you also looked at our best practices for measuring pupillometry using Core?
Update: I have taken a careful look at the eye recordings and find that even though the 3D eye models are well fit in the beginning, I very quickly (about 30 s into the protocol) lose the fit and get a light blue circle. You are right, I think it might be linked to losing the goodness of fit of the 3D model. My protocol is long (about an hour) and I run it via PsychoPy, so I don't keep a constant eye on the data quality.