Combine heatmaps
In the course of the chat, the topic of converting timestamps was discussed more than once. Unfortunately, I still can't get any further. I use a Pupil Labs Core, exported the raw data and imported it into RStudio. I now know how the relative and absolute time ranges are related... But I still need the start and end time of the recording. Is there a step-by-step guide for this available somewhere that is not Python-related, or at least easily reproducible in R? Thanks!
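In case it helps anyone who lands here: the recording start time is stored in info.player.json inside the exported recording folder, and exported timestamps can be shifted onto the wall clock with simple arithmetic. Below is a minimal Python sketch (key names follow the Pupil Player recording format; the same two additions are easy to reproduce in R with any JSON reader):

import json
import datetime

# Hypothetical path to an exported Pupil Core recording folder
with open("recording/info.player.json") as f:
    info = json.load(f)

start_system = info["start_time_system_s"]   # recording start as Unix epoch (wall clock)
start_synced = info["start_time_synced_s"]   # the same moment on the "pupil time" clock used by all timestamps
duration = info["duration_s"]

# Wall-clock start and end of the recording (local timezone)
start_dt = datetime.datetime.fromtimestamp(start_system)
end_dt = datetime.datetime.fromtimestamp(start_system + duration)
print("start:", start_dt, "end:", end_dt)

# Convert any exported pupil timestamp (e.g. from gaze_positions.csv) to wall-clock time
def pupil_ts_to_datetime(ts):
    return datetime.datetime.fromtimestamp(start_system + (ts - start_synced))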
Hi Pupil Core community, I am a new Pupil Core user. I am working on an experiment where I need to track a participant's gaze on a computer screen. I understand that I can use the on-screen choreography to calibrate the eye tracker, but what happens during the actual experiment when the markers aren't visible? If the participant turns their head, won't the gaze location be miscalculated?
Hi @user-4ba9c4! You are absolutely correct! To track/re-map your gaze onto a surface like a computer screen, you will need to use the Surface Tracker plugin and AprilTag markers. Check it out here: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
The Surface Tracker plugin in Pupil Core allows users to define planar surfaces in their environment and track areas of interest (AOIs) using Apriltag markers. These markers can be placed on paper, stickers, or displayed on a screen, and multiple markers can be used to define a single surface for greater tracking robustness. Surfaces can be defined in real-time with Pupil Capture or post-hoc with Pupil Player, and the resulting data can be used to generate gaze heatmaps for the defined surfaces.
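For reference, here is a minimal sketch of building such a heatmap offline from the surface gaze export (the gaze_positions_on_surface_<surface_name>.csv file with x_norm/y_norm/on_surf columns mentioned later in this thread; the file name, bin count, and colormap are placeholders):

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Load the surface gaze export produced by Pupil Player (file name is a placeholder)
df = pd.read_csv("gaze_positions_on_surface_<surface_name>.csv")
df = df[df["on_surf"] == True]  # keep only gaze that landed on the surface

# 2D histogram over normalized surface coordinates (0..1 in both axes)
heatmap, _, _ = np.histogram2d(
    df["y_norm"], df["x_norm"], bins=64, range=[[0, 1], [0, 1]]
)

plt.imshow(heatmap, origin="lower", extent=[0, 1, 0, 1], cmap="hot")
plt.xlabel("x_norm")
plt.ylabel("y_norm")
plt.colorbar(label="gaze samples per bin")
plt.show()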
Hello, thanks for the reply. I was wondering if I use the surface tracker plugin to define my computer screen, how will I calibrate the device? Should I use the on-screen choreography or natural markers or should I use the marker provided with the device (concentric circles)? Thanks again.
Hello Pupil community, I would like to know how the position on the surface is calculated. I assume it is calculated from the direction of gaze? Can I have the mathematical formulation? My lab uses a different system to calculate the gaze direction, and we want to use the same method that Pupil Labs uses to calculate the position on the surface. Thanks in advance.
Hi @user-80123a The code for the Surface Tracker plugin is open source; you can find the module here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/
Moreover, if you check here https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface.py#L154 you can see how the mapping of points from image pixel space to normalized surface space is done.
The transformation is mostly done using the function cv2.getPerspectiveTransform in the OpenCV library.
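To illustrate the idea (this is not the plugin code itself, just a hedged example of the same OpenCV homography approach; the corner coordinates below are made up):

import cv2
import numpy as np

# Four surface corners as detected in the scene image (pixel coords, made-up values)
img_corners = np.float32([[220, 410], [980, 395], [1005, 940], [205, 955]])
# The same corners in normalized surface space (0..1), in matching order
surf_corners = np.float32([[0, 1], [1, 1], [1, 0], [0, 0]])

# Homography from image pixel space to normalized surface space
img_to_surf = cv2.getPerspectiveTransform(img_corners, surf_corners)

# Map a gaze point given in image pixels onto the surface
gaze_px = np.float32([[[600, 700]]])           # shape (1, 1, 2) as expected by OpenCV
gaze_on_surf = cv2.perspectiveTransform(gaze_px, img_to_surf)
print(gaze_on_surf)                            # -> normalized (x, y) on the surface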
Hi, Pupil team, I want to do an experiment with the Pupil Core, but I just started using it and there is still a lot I don't understand. I need your help! Specifically, the experiment involves having the tester use a mobile phone and using the Pupil Core to see where the tester's gaze is on the screen and how the eyes move. The phone's screen appears small in the world video; I want to know how I can map gaze onto the user interface with Pupil Core (just like the YouTube video you contributed to).
Hi @user-2ce8eb. Mapping gaze onto a phone screen is possible via the Surface Tracker plugin (see @user-d407c1's recent responses). Note that phone screens tend to cover a very small area of the visual field. So you'll need to obtain a good calibration. I'd recommend using the physical marker and concentrating the calibration on the area of the visual field that the phone covers. Something like this is what's possible in the best-case scenario: https://drive.google.com/file/d/110vBnw8t1fhsUFf0z8N8DZMwlXdUCt6x/view?usp=sharing
Hello! When I try to install pupil-labs-realtime-api using the command "pip install pupil-labs-realtime-api", I always get the error: "ERROR: Could not find a version that satisfies the requirement pupil-labs-realtime-api (from versions: none)" / "ERROR: No matching distribution found for pupil-labs-realtime-api". I am using Python 3.7.9 32-bit. It works properly when I use Python 3.9.12 64-bit, but I use another device that requires a 32-bit version. Can you help with that? Thanks!
Hello, any input on this question? I looked online but couldn't find an answer. Thanks!
Hello, I wanted to access the eye data in real time. I.e., suppose in my experiment I want to know, while the trials are going on, whether the participant is making any saccades or not, or even microsaccades.
I wanted to reduce the foresight error as much as possible. So is there any way to tap into the pupil data at the time of recording itself?
Can you suggest anything, or provide information regarding the data packets and ports?
Hi @user-6e1219 It's possible to access real-time data through the Core Network API. Check out our documentation to learn more: https://docs.pupil-labs.com/developer/core/network-api/#network-api
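For example, a minimal sketch following the pupil-helpers pattern (assumes Pupil Capture is running on the same machine with Pupil Remote on its default port 50020):

import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Capture's Pupil Remote (default port 50020) for the subscription port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to pupil data (use "gaze." instead if you want mapped gaze)
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload)
    # e.g. feed these samples into your own online saccade/microsaccade detection
    print(topic.decode(), datum["timestamp"], datum["confidence"])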
Hi @papr, in my use of the Invisible, it often fails to connect, like this
It usually takes 100 to 200 attempts to connect successfully, and I'm not sure if this is a problem with the invisible or the phone.
Sorry for sending in the wrong channel.
Hey guys, does anyone know how to do this, or is a gaze plot already available in Pupil Labs software? Something similar to this!
Hi, is it possible to get the real-time world camera image and the corresponding gaze value (x_norm & y_norm in the world image) through Python? I need the world image and gaze value to control an object in real time through Python. I'm currently using Pupil Core.
You can subscribe to gaze and scene video frames using the examples in this helper repository: https://github.com/pupil-labs/pupil-helpers/tree/master/python
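For reference, here is a condensed sketch of what those helper scripts do (assumes Capture on the same machine, the default Pupil Remote port, and the Frame Publisher notification format used in recv_world_video_frames.py):

import zmq
import msgpack
import numpy as np

ctx = zmq.Context()

# Request the SUB port from Pupil Remote (default 50020)
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Ask Capture to publish scene frames in BGR (starts the Frame Publisher)
notification = {"subject": "frame_publishing.set_format", "format": "bgr"}
pupil_remote.send_string("notify.frame_publishing.set_format", flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(notification, use_bin_type=True))
pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("gaze.")
sub.subscribe("frame.world")

while True:
    parts = sub.recv_multipart()
    topic = parts[0].decode()
    payload = msgpack.loads(parts[1])
    if topic.startswith("gaze."):
        x_norm, y_norm = payload["norm_pos"]   # normalized scene-image coordinates
    elif topic == "frame.world":
        # The raw image bytes arrive as an extra message part
        img = np.frombuffer(parts[2], dtype=np.uint8).reshape(
            payload["height"], payload["width"], 3
        )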
Fixation path over a surface in Core
Hi! I'm new to using Pupil Core. I was wondering, if the settings in Pupil Capture (like the pupil detector 2D settings or the video source settings) are stored in one of the recording files?
Hi @user-d2515e. These aren't stored in the recording.
Something weird: I cannot modify preferred_names, whether in world.py or in eye.py. For example, when I comment out https://github.com/pupil-labs/pupil/blob/master/pupil_src/launchables/eye.py#L245-L246, preferred_names in the UVC_Source class still contains "HD-6000", even though "HD-6000" only appears once in the whole project folder.
I solved this problem by clicking the "Restart with default settings" button.
Hello everybody! I have a question about the 0.6 degrees of accuracy. Is it the radius or the diameter of the circle around the real gaze point? Thank you for your answer in advance and have a nice day!
It's the average angular distance between fixation locations and the corresponding locations of the fixation targets. Read more about gaze mapping and accuracy here: https://docs.pupil-labs.com/core/software/pupil-capture/#gaze-mapping-and-accuracy
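To illustrate what "angular distance" means here, a small sketch computing the angle between two gaze direction vectors (plain vector math, not Pupil-specific code):

import numpy as np

def angular_distance_deg(v1, v2):
    # Angle between two gaze direction vectors, in degrees
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    return np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))

# Example: two directions roughly 0.6 degrees apart
a = np.array([0.0, 0.0, 1.0])
b = np.array([0.0, np.sin(np.radians(0.6)), np.cos(np.radians(0.6))])
print(angular_distance_deg(a, b))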
Sorry, there is no 32-bit support. Would you mind sharing what info from the real-time API you would like to expose to your 32-bit device? You could potentially cast those values to 32-bit, but that is prone to errors.
Hello, we are having trouble achieving stable confidence levels in recordings of a pupillary light reflex. The 3d target on the pupil keeps jumping away from the pupil. What can we do?
Hi @user-022bcd. Please ensure the cameras are positioned appropriately in the first instance: https://docs.pupil-labs.com/core/#_3-check-pupil-detection Then feel free to share an example recording with data@pupil-labs.com such that we can provide more concrete feedback.
Is there a way to call someone from Pupil Labs? I'm also German-speaking.
@user-022bcd if you haven't already, it's also worth checking out our pupillometry best practices: https://docs.pupil-labs.com/core/best-practices/#pupillometry
Hello Pupil team, I have a quick question about the 3d & 2d pipeline. We are currently using the default 3d detection for calibration and surface datum collection, to generate users' gaze on a surface (computer screen). However, we saw a bias in our surface data, especially for data near the edges of the surface. Our experiment requires participants to put their head on a chin rest to keep their head stationary, so according to the best practices, the 2d pipeline is probably better for us. It's easy to switch from 3d to 2d in the calibration process via the Capture software; however, I'm not sure how to specify 2d detection in surface datum collection, which I believe uses the default gaze.3d data for mapping. Any suggestions would be greatly appreciated!
(update): I realized that switching calibration to 2d will automatically perform 2d detection for any future gaze data collection. Please ignore this question unless my understanding is wrong, thank you!
I have another question about the surface data. We are collecting surface data tracking 5 targets on monitor one by one (center-out). The surface data we collected seemed to have a systematic bias in the y-axis (see attached figure). I tried both 3d and 2d pipelines and that bias did not go away. I wonder if this issue has been seen elsewhere and what are the possible reasons for this issue. Any thoughts?
Pupil detection results are not correct; is this normal?
Their confidence is 0.99.
Hi @user-9f7f1b Would you mind sharing which eye camera you are using, and what settings you have in the 2D detector? I.e. max and min pupil size?
Hi, I use the LPW and PupilNet datasets, and my test code is like your demo: https://github.com/pupil-labs/pupil-detectors#usage
For the LPW dataset, you should be able to tune the parameters to work better on that resolution of video. We'll follow up with some values early next week!
@user-9f7f1b, it's worth trying out the 2d detector parameters that were used back when the LPW dataset was collected (in Pupil Capture v0.8): https://github.com/pupil-labs/pupil/blob/v0.8/pupil_src/capture/pupil_detectors/detector_2d.pyx#L54. These are different to the current default parameters as the eye cameras have changed since the LPW paper was released. Thanks @marc for your input here!
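As a sketch of how such parameters could be applied with the standalone pupil-detectors package (the property keys below are assumptions based on the open-source 2D detector defaults; verify them against detector.get_properties(), and the image path is a placeholder):

import cv2
from pupil_detectors import Detector2D

detector = Detector2D()

# Inspect the current/default parameters, then tweak the ones relevant for your videos.
# (Key names here are assumptions taken from the open-source detector; check get_properties().)
print(detector.get_properties())
detector.update_properties({
    "pupil_size_min": 10,     # assumed key: minimum pupil size in pixels
    "pupil_size_max": 120,    # assumed key: maximum pupil size in pixels
    "intensity_range": 23,    # assumed key: dark-pupil intensity range
})

# Run detection on a single grayscale eye image (file name is a placeholder)
gray = cv2.cvtColor(cv2.imread("eye_frame.png"), cv2.COLOR_BGR2GRAY)
result = detector.detect(gray)
print(result["ellipse"], result["confidence"])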
Hi, I cannot install Pupil Invisible Companion on my Motorola Z2 Play (Android 8.0) from the Play Store.
I have used Pupil Mobile before, but I cannot install it. What should I do?
I get a message "This app isn't compatible with your device anymore. "
How can I use my Pupil Core tool in a mobile setup?
Best Regards
Hi @user-77b334
- Pupil Invisible Companion only works with Pupil Invisible and on supported Companion Devices (OnePlus 6/8/8T).
- Pupil Mobile app is no longer maintained and is not available on the Google Play Store.
You can try a small form-factor tablet-style PC to make Pupil Core more portable. If the specifications of such a device are low-end, you can record the experiment with the real-time pupil detection disabled to help ensure a high-sampling rate. Pupil detection and calibration can then be performed in a post-hoc context
Thanks for the clear explanation.
Hi, we're using the Pupil Core eye tracker and we need to set up a calibration that also takes depth into account, i.e. looking through a transparent screen: we need the coordinates on the 2D screen as well as when participants look at each other through the screen. Has this already been documented somewhere? Thanks!
Hi @user-969d9d. I'm not sure that I fully grasp what you're trying to achieve. Would you be able to elaborate a bit? Describing your research question might help.
Hello, when using pupil core with goggles, pupils cannot be detected stably and continuously. Could you tell me the most appropriate distance and angle to detect the pupils?
@user-313077, what goggles are you using? Really the goal would be to get an unobstructed view of the pupils. Occluding lenses or frames will likely hinder pupil detection. For reference, the default Core headset typically positions the cameras around 30 mm from the pupils.
Hello. I am currently measuring eye gaze during daily activities at home and am facing some problems. I would like to have 3-dimensional information about gaze (gaze point 3D or the 3-dimensional direction of gaze). First of all, I have found that my collaborators turn their pupils downward more than I had expected, so their gaze may be lower than the scene camera. In such a case, can we still record the information needed to calculate gaze point 3D or the 3D direction of gaze? Secondly, if the pupils are turned down considerably to begin with, as shown in the attached figure, the detection accuracy of the pupils drops considerably, and the gaze information cannot be captured well.
Sorry, I forgot to attach a picture. I hope you can answer my question above.
Hi @user-746d07 It looks like the eye cameras are not properly set up. To avoid the pupil being occluded by the eyelid when looking down, you need to adjust the camera positioning to get a clear image of both eyes - with the pupil visible also when looking at extreme angles. Please take a look at our guide for some video examples of eye camera setup: https://docs.pupil-labs.com/core/#_3-check-pupil-detection
Just to add to @user-c2d375's response, it is possible to record gaze outside of the scene camera's field of view. Note, however, that this typically corresponds to gaze angles with low confidence, since the pupils are often covered by the eyelids (if looking down). You can also adjust the scene camera by rotating it up and down if you want to focus your data collection on a certain region of the visual field!
So how do I open recordings with pupil capture? I am looking to update them. I get a calibration not found error. Thanks!
@user-908b50, let's continue this in the existing email thread ๐
Hi, I have a question about the 2d detector. As we all know, the pupil is very dark in a grayscale image and can easily be distinguished from other regions, so why don't you use the cv2.threshold API to convert the eye image to a binary image, and then use Canny and other algorithms to fit the pupil ellipse?
Hi @user-9f7f1b! In simple scenarios something like that would work as a baseline approach. However many things one encounters in practice would break such an algorithm. E.g. reflections on the cornea on top of the dark pupil are white and break the thresholding. For more details, I suggest looking into the papers of e.g. the "Labelled pupils in the wild" dataset or the "PuRe" pupil detector, which describe all the various challenges.
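Just to make the "baseline approach" concrete, here is a hedged sketch of naive thresholding plus ellipse fitting; the threshold value and file name are placeholders and, as described above, corneal reflections, eyelashes, glasses, etc. will break it in practice:

import cv2

gray = cv2.cvtColor(cv2.imread("eye_frame.png"), cv2.COLOR_BGR2GRAY)

# Naive approach: the pupil is dark, so threshold and keep the dark blob
_, binary = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY_INV)  # 50 is arbitrary
binary = cv2.medianBlur(binary, 5)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    if len(largest) >= 5:                      # fitEllipse needs at least 5 points
        ellipse = cv2.fitEllipse(largest)      # ((cx, cy), (major, minor), angle)
        print(ellipse)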
Hello, I'm currently receiving real-time gaze information using the filter_messages.py code. In the code, what does 'gaze_normals_3d' (a key of the 'msg' dictionary) mean? I want to get the gaze vector information.
Check out this page for an overview and description of the data made available: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv
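As a small illustration of the relevant keys (names follow the docs page above; this assumes a binocular gaze.3d datum that has already been msgpack-decoded, as in filter_messages.py):

import numpy as np

def extract_gaze_vectors(gaze_datum):
    # Direction of gaze per eye, as unit vectors in scene (world) camera coordinates,
    # keyed by eye id ("0" / "1"); monocular datums use the singular key "gaze_normal_3d"
    normals = gaze_datum.get("gaze_normals_3d", {})
    # 3d point the two gaze lines (nearly) intersect at, in scene camera coordinates (mm)
    point_3d = np.array(gaze_datum["gaze_point_3d"])
    return normals, point_3d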
Hello, I was wondering if you used an artificial eye during the development of the Pupil Core to assess its performance. I am working on a comparison between different kinds of eye trackers, including Pupil Core, on a test bench, and I am trying to compare the precision of each device. Thanks in advance!
Hello, I am still wondering which kind of artificial eye you might have used to assess the precision of the Pupil Core on the bench. I am looking for a model eye compatible with your gaze tracking process. Do you have any advice? Thanks in advance!
Hey everyone, I'm new to eye tracking and looking into the various options. Do any of you know of mac-based analysis software for the pupil core?
You can download the software directly: https://github.com/pupil-labs/pupil/releases
To be clear, we need a world camera for the gaze data, right? Is there any way we can gather the gaze data only with Binocular Add-on?
Maybe you can use pye3d: https://github.com/pupil-labs/pye3d-detector. It will return the gaze vector and other information.
Whether or not you need a scene camera depends on what you are trying to do with the VR add-on. If you want to track the user's eye movements in a VR environment, you will need to cast the virtual scene camera from Unity to Pupil Capture. However, if you only want to use the Binocular add-on to track the user's eye movements without a VR environment or without any relationship to the world, you may not need a scene camera.
Note that you will need scene information to calibrate and to correspond gaze vectors to coordinates.
@user-d407c1 Hello, our current work needs to cite URLs and papers. The paper has been cited (Pupil: An open source platform for pervasive eye tracking and mobile gaze-based interaction), but we also want to cite the URL of the Pupil Core docs to provide an introduction. I would like to know if there is any information about URL citations (https://docs.pupil-labs.com/core/).
We do not have any specifics on URL citations; you can cite it like any other website:
"Core - Getting Started." Pupil Labs, docs.pupil-labs.com/core. Accessed on XX/XX/XXXX
Since we also have the documentation on GitHub, you can include the latest commit (6f529ffcb9737cfacef9ffe8a8a4ade43e2c042c) to make your citation more stable over time.
Hello, I am writing to ask about the gaze sampling rate. When I subscribe to the gaze topic, I received data closer to 250 Hz instead of the 200 Hz update rate of the eye cameras. Is it normal or is there a transmission issue happening?
Hi @user-a3e405. That's a result of the pupil data matching algorithm, whereby single eye images can be used more than once in gaze data generation. Check out this page for a full description: https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching
Hello, is it possible to compute a dispersion coefficient for a heatmap based on the x/y gaze coordinates information of a surface? If so, what would you recommend to achieve such computation? Thank you!
Do you mean the dispersion coefficient of fixations on surfaces? A dispersion matrix/covariance matrix?
Yes, a dispersion coefficient of fixations on a surface. Is it possible to obtain?
On exporting, you can find the dispersion (as a column) of surface-detected fixations in fixations_on_surface_<Surface Name>.csv
Oh, I'm sorry. I believe this column relates to each individual fixation. What I meant was a general dispersion coefficient for the surface; in this sense, maybe a single dispersion value that represents the heat map.
Hi @user-f3048f In statistics, dispersion is a class of metrics, and there is no single definition. Below, I provide you with a snippet which could fit some definitions of gaze dispersion across the surface, but ultimately it depends on what you aim to achieve.
import pandas as pd
import numpy as np

def get_surf_norm_spatial_dispersion_coeff(csv_path="gaze_positions_on_surface_<surface_name>.csv"):
    # Read csv file into a pandas DataFrame
    df = pd.read_csv(csv_path)
    # Remove gaze not detected on the surface
    df = df[df["on_surf"] == True]
    # Grab gaze points as np array of shape (n, 2)
    gaze_points = df[["x_norm", "y_norm"]].to_numpy()
    # Calculate mean position
    mean_pos = np.mean(gaze_points, axis=0)
    # Calculate spatial dispersion as the mean Euclidean distance between each gaze point and the mean position
    dispersion = np.sum(np.linalg.norm(gaze_points - mean_pos, axis=1)) / len(gaze_points)
    print(dispersion)
    return dispersion
For context, fixation detectors are usually either velocity-based or dispersion-based, i.e. they either check whether the velocity of the eye is low enough for enough time, or whether the dispersion of gaze samples is small enough for enough time. Check out this paper for more information: https://link.springer.com/article/10.3758/s13428-021-01762-8 In the fixation detector of Core, dispersion is defined as the maximum pairwise distance of all points. You can find how the dispersion of fixations is computed here: https://github.com/pupil-labs/pupil/blob/318e1439b9fdde96bb92a57e02ef719e89f4b7b0/pupil_src/shared_modules/fixation_detector.py#L132 Thanks @marc for the input.
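As an illustration of the "maximum pairwise distance" notion applied to normalized surface coordinates (note that Core's detector actually operates on 3d viewing directions and reports dispersion as an angle, so this is only a sketch):

import numpy as np
from scipy.spatial.distance import pdist

def max_pairwise_dispersion(points):
    # points: (n, 2) array of x_norm / y_norm gaze samples on a surface
    if len(points) < 2:
        return 0.0
    return pdist(points).max()   # largest Euclidean distance between any two samples

# Example with the same columns as the earlier snippet:
# gaze_points = df[["x_norm", "y_norm"]].to_numpy()
# print(max_pairwise_dispersion(gaze_points))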
Hello, I am getting an error whenever I try recording with Lab Streaming Layer (C# or even LabRecorder): "WORLD: Error Extracting Gaze Sample:0". The error goes away when I stop the LSL recording.
Hi! Have you ever tried to measure interpupillary distance? For example, calibrate the eye cameras' intrinsics and extrinsics, then use Detector3D to get the two eye centers, and finally obtain the interpupillary distance from the relation between the two eye cameras. Does this work?
Hi, I have tried this method, but the result is wrong; could you please give me some advice? Specifically, I recorded a video with a binocular camera setup, one camera per eye. Then the EllSeg algorithm was used to obtain the pupil ellipse and pye3d was used to obtain the eye center. After that, I used the extrinsics to convert both eye centers to the same coordinate system. Finally, I used the Euclidean distance between the two centers as the interpupillary distance. The IPD range is 44-50 mm. Obviously, the result is wrong.
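In case it helps to narrow things down, here is a hedged sketch of the coordinate-transform step; all values and the direction of the extrinsics are placeholders, since a too-small IPD often comes from applying the stereo extrinsics in the wrong direction or from a poorly fit 3d eye model:

import numpy as np

# Eye centers from pye3d, each in its own eye camera's coordinate system (mm); hypothetical values
center_cam0 = np.array([2.1, -1.3, 48.0])
center_cam1 = np.array([-1.8, -0.9, 47.0])

# Assumed convention: extrinsics of camera 1 relative to camera 0, i.e. x_cam0 = R @ x_cam1 + t
R = np.eye(3)                                  # placeholder rotation from your stereo calibration
t = np.array([-65.0, 0.0, 0.0])                # placeholder translation in mm

center_cam1_in_cam0 = R @ center_cam1 + t
ipd = np.linalg.norm(center_cam0 - center_cam1_in_cam0)
print(f"IPD: {ipd:.1f} mm")

# Common pitfalls: using the transform in the opposite direction without inverting it,
# mixing units (pye3d reports the eye sphere center in mm), or eye centers that are unreliable
# because the 3d model was fit without sampling enough different gaze directions.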
Hi, I want to display gaze coordinates as an overlay in Pupil Capture. I have looked at the example plugin frame_index.py (https://gist.github.com/papr/c123d1ef1009126248713f302cd9fac3) to write a plugin myself. The plugin is displayed in Pupil Capture and I can also see output from the logger, but I don't see the frame index in the video. How can I display text in the video?
Hi @user-6d7fe6 frame_index.py is a Pupil Player plugin, so you need to add the script to the Player plugin folder and then enable the plugin to visualize the frame index over the world video.
I know PupilMobile has been deprecated, but I was told it should still work. We originally had issues with our University wireless network blocking the signal via firewall, however, we found another wireless network that should be firewall-free. I've tried connecting the android mobile device running PupilMobile and the computer running PupilCapture to this network, and the mobile device is still not registering in PupilCapture. Can anyone provide any guidance? Or alternative options to remain mobile during capture?
I read through posts mentioning PupilMobile on this channel, and the only solution I could find was using an iPad with PupilCapture to capture the data and stream wirelessly to the main computer using PupilCapture instead of an android mobile device with PupilMobile, however this will not work for our lab. Many of our research participants are small children, and an iPad is too big for them to wear during movement tasks.
Hi @user-2e5a7e! We have answered your email.
Hi, what values are usually set for the parameters in this? Thank you
Check out the docs for an explanation of how the blink detector works, and for tips on setting appropriate values: https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector
Hi, I want to understand the detailed method for obtaining the 3d gaze point from the pupil positions of both eyes. Where can I find this information?
Hi @user-348329 ๐. See the message for reference: https://discord.com/channels/285728493612957698/285728493612957698/1013835049717731368