core


user-f3048f 01 December, 2022, 01:32:06

Combine heatmaps

user-736bf7 01 December, 2022, 15:38:07

In the course of this chat, the topic of converting timestamps has been discussed more than once. Unfortunately, I still can't get any further. I use a Pupil Labs Core, exported the raw data, and imported it into RStudio. I now know how the relative time range and absolute time range are related... But now I need the start and end times of the recording. Is there a step-by-step guide for this available somewhere that is not Python-related, or at least easily reproducible in R? Thanks!

user-4ba9c4 05 December, 2022, 02:29:06

Hi Pupil Core community, I am a new Pupil Core user. I am working on an experiment where I need to track a participant's gaze on a computer screen. I understand that I can use the on-screen choreography to calibrate the eye tracker, but what happens during the actual experiment when the markers won't be visible? If the participant turns their head, won't it miscalculate the gaze location?

user-d407c1 05 December, 2022, 08:04:45

Hi @user-4ba9c4 ! You are absolutely correct! To track/re-map your gaze onto a surface like a computer screen, you will need to use the Surface Tracker plugin and AprilTag markers. Check it out here: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

The Surface Tracker plugin in Pupil Core allows users to define planar surfaces in their environment and track areas of interest (AOIs) using Apriltag markers. These markers can be placed on paper, stickers, or displayed on a screen, and multiple markers can be used to define a single surface for greater tracking robustness. Surfaces can be defined in real-time with Pupil Capture or post-hoc with Pupil Player, and the resulting data can be used to generate gaze heatmaps for the defined surfaces.

user-4ba9c4 05 December, 2022, 10:07:38

Hello, thanks for the reply. I was wondering: if I use the Surface Tracker plugin to define my computer screen, how will I calibrate the device? Should I use the on-screen choreography, natural markers, or the marker provided with the device (concentric circles)? Thanks again.

user-80123a 05 December, 2022, 08:13:46

Hello Pupil community, I would like to know how the position on the surface is calculated. I assume it is calculated from the gaze direction? Could I have the mathematical formulation? My lab uses a different system to calculate the gaze direction, and we want to use the same method Pupil Labs uses to calculate the position on the surface. Thanks in advance.

user-d407c1 05 December, 2022, 08:28:24

Hi @user-80123a The code for the Surface Tracker plugin is open source; you can find the module here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/

Moreover, if you check here https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface.py#L154 you can see how the mapping of points from image pixel space to normalized surface space is done.

The transformation is mostly done using the function cv2.getPerspectiveTransform in the OpenCV library.
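
For illustration, here is a minimal sketch of how a homography obtained with cv2.getPerspectiveTransform can map a gaze point from scene-image pixels to normalized surface coordinates. The corner positions and the gaze point below are hypothetical; in Pupil Capture they come from the detected AprilTag markers and the gaze datum.

import cv2
import numpy as np

# Pixel coordinates of the surface's four corners in the scene image
# (hypothetical values; in practice they are derived from the detected markers)
corners_img = np.float32([[120, 80], [980, 95], [960, 630], [140, 610]])

# The same corners in normalized surface space (0..1)
corners_surf = np.float32([[0, 1], [1, 1], [1, 0], [0, 0]])

# Homography mapping image pixels -> normalized surface coordinates
img_to_surf = cv2.getPerspectiveTransform(corners_img, corners_surf)

# Map a hypothetical gaze point given in image pixels onto the surface
gaze_px = np.float32([[[550, 340]]])  # shape (1, 1, 2) as required by cv2.perspectiveTransform
gaze_on_surf = cv2.perspectiveTransform(gaze_px, img_to_surf)
print(gaze_on_surf)  # approx. x_norm, y_norm of the gaze point on the surface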

user-2ce8eb 05 December, 2022, 09:31:19

Hi Pupil team, I want to run an experiment with the Pupil Core, but I have just started using it and there is still a lot I don't understand. I need your help! Specifically, the experiment involves having the tester use a mobile phone while the Pupil Core records where the tester's gaze lands on the screen and how the eyes move. The phone's screen appears small in the world video. How can I map gaze onto the user interface with Pupil Core? (just like the YouTube video you contributed to)

nmt 05 December, 2022, 10:18:18

Hi @user-2ce8eb 👋. Mapping gaze onto a phone screen is possible via the Surface Tracker Plugin (see @user-d407c1's recent responses). Note that phone screens tend to cover a very small area of the visual field. So you'll need to obtain a good calibration. I'd recommend using the physical marker and concentrating the calibration on the area of the visual field that the phone covers. Something like this is what's possible in the best-case scenario: https://drive.google.com/file/d/110vBnw8t1fhsUFf0z8N8DZMwlXdUCt6x/view?usp=sharing

user-c8ad1f 05 December, 2022, 13:56:09

Hello! When I try to install pupil-labs-realtime-api using the command pip install pupil-labs-realtime-api, I always get the error:
ERROR: Could not find a version that satisfies the requirement pupil-labs-realtime-api (from versions: none)
ERROR: No matching distribution found for pupil-labs-realtime-api
I am using Python 3.7.9 32-bit. It works properly when I use Python 3.9.12 64-bit, but I use another device that requires a 32-bit version. Can you help with that? Thanks!

user-c8ad1f 08 December, 2022, 12:36:02

Hello, any input on this question? I looked online but couldn't find an answer. Thanks!

user-6e1219 06 December, 2022, 12:10:06

Hello, I want to access the eye data in real time. I.e., suppose in my experiment I want to know, while the trials are going on, whether the participant is making any saccades or not, or even microsaccades.

I want to reduce the foresight error as much as possible. So is there any way to tap into the pupil data at the time of recording itself?

Can you suggest anything, or share information regarding the data packets and ports?

user-c2d375 06 December, 2022, 14:21:08

Hi @user-6e1219 👋 It's possible to access real-time data through the Core Network API. Check out our documentation to learn more: https://docs.pupil-labs.com/developer/core/network-api/#network-api
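
As a minimal sketch of what subscribing to pupil data over the Network API can look like (assuming Pupil Capture runs locally with Pupil Remote on its default port 50020; host and port are otherwise assumptions):

import zmq
import msgpack

ctx = zmq.Context()

# Connect to Pupil Remote (default port 50020 on the machine running Pupil Capture)
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask Pupil Remote for the subscription port
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to pupil data (use "gaze." for calibrated gaze data instead)
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    # Each datum carries e.g. a timestamp, confidence and pupil position
    print(topic.decode(), datum["timestamp"], datum["confidence"])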

user-a98526 06 December, 2022, 13:54:26

Hi @papr, when I use the Invisible, it often fails to connect, like this

user-a98526 06 December, 2022, 13:55:29

Chat image

user-a98526 06 December, 2022, 13:56:37

It usually takes 100 to 200 attempts to connect successfully, and I'm not sure if this is a problem with the invisible or the phone.

user-a98526 06 December, 2022, 13:59:47

Sorry for sending in the wrong channel.

user-d381c3 06 December, 2022, 23:22:04

Hey guys, does anyone know how to do this, or whether a gaze plot already exists in Pupil Labs? Something similar to this!

Chat image

user-348329 07 December, 2022, 08:00:51

Hi, is it possible to get the real-time world camera image and the corresponding gaze value (x_norm & y_norm in the world image) through Python? I need the world image and gaze value to control an object in real time through Python. I'm currently using Pupil Core.

nmt 07 December, 2022, 10:15:37

You can subscribe to gaze and scene video frames using examples in this helper repository: https://github.com/pupil-labs/pupil-helpers/tree/master/python 🙂
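
As a rough sketch of what that can look like (assuming Pupil Capture runs locally on the default Pupil Remote port 50020; the plugin name and payload fields follow the pupil-helpers examples, so double-check them there):

import zmq
import msgpack
import numpy as np

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask Capture to start streaming scene frames in BGR via the Frame Publisher plugin
notification = {"subject": "start_plugin", "name": "Frame_Publisher", "args": {"format": "bgr"}}
pupil_remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(notification, use_bin_type=True))
pupil_remote.recv_string()

# Get the SUB port and subscribe to gaze and world frames
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("gaze.")
sub.subscribe("frame.world")

while True:
    parts = sub.recv_multipart()
    topic = parts[0].decode()
    payload = msgpack.loads(parts[1], raw=False)
    if topic.startswith("gaze."):
        x_norm, y_norm = payload["norm_pos"]  # gaze in normalized world-image coordinates
    elif topic.startswith("frame.world"):
        img = np.frombuffer(parts[2], dtype=np.uint8)
        img = img.reshape(payload["height"], payload["width"], 3)  # BGR scene image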

user-d407c1 07 December, 2022, 11:57:53

Fixation path over a surface in Core

user-d2515e 07 December, 2022, 12:29:20

Hi! I'm new to using Pupil Core. I was wondering if the settings in Pupil Capture (like the 2D pupil detector settings or the video source settings) are stored in one of the recording files?

nmt 08 December, 2022, 09:43:47

Hi @user-d2515e 👋. These aren't stored in the recording.

user-9f7f1b 08 December, 2022, 08:54:24

Something weird: I cannot modify preferred_names, whether in world.py or in eye.py. For example, when I comment out https://github.com/pupil-labs/pupil/blob/master/pupil_src/launchables/eye.py#L245-L246, preferred_names in the UVC_Source class still contains "HD-6000", even though "HD-6000" only appears once in the whole project folder.

user-9f7f1b 08 December, 2022, 09:16:43

I solved this problem by clicking the "Restart with default settings" button.

user-6586ca 08 December, 2022, 12:40:56

Hello everybody! I have a question about the 0.6 degrees of accuracy. Is it the radius or the diameter of the circle around the real gaze point? Thank you for your answer in advance and have a nice day!

nmt 08 December, 2022, 14:52:29

It's the average angular distance between fixation locations and the corresponding locations of the fixation targets. Read more about gaze mapping and accuracy here: https://docs.pupil-labs.com/core/software/pupil-capture/#gaze-mapping-and-accuracy
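
As a small illustration of that definition (with purely hypothetical numbers; the vectors would normally come from validation data), the accuracy is the mean angular distance between gaze directions and the corresponding target directions:

import numpy as np

# Hypothetical unit gaze vectors and corresponding fixation-target vectors (scene camera coordinates)
gaze = np.array([[0.02, 0.01, 0.999], [0.10, 0.00, 0.995]])
target = np.array([[0.00, 0.00, 1.000], [0.09, 0.01, 0.996]])

gaze = gaze / np.linalg.norm(gaze, axis=1, keepdims=True)
target = target / np.linalg.norm(target, axis=1, keepdims=True)

# Angular distance per sample; the average is the reported accuracy in degrees
angles = np.degrees(np.arccos(np.clip(np.sum(gaze * target, axis=1), -1.0, 1.0)))
print(angles.mean())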

user-d407c1 08 December, 2022, 13:38:18

Sorry, there is no 32-bit support. Would you mind sharing what info from the real-time API you would like to expose to your 32-bit device? You could potentially cast those values to 32 bits, but that is prone to errors.
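
One possible workaround, sketched under assumptions rather than as an official recommendation: run the real-time API in a 64-bit Python process and relay only the values you need to the 32-bit device over a plain socket. The function and attribute names below follow the pupil-labs-realtime-api simple API; the port and JSON message format are arbitrary choices for this example.

# 64-bit bridge process (sketch)
import json
import socket
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()          # find the Companion device on the local network
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    gaze = device.receive_gaze_datum()  # blocking call in the simple API
    msg = {"x": gaze.x, "y": gaze.y, "ts": gaze.timestamp_unix_seconds}
    # Forward to a hypothetical 32-bit consumer listening on UDP port 5005
    sock.sendto(json.dumps(msg).encode(), ("127.0.0.1", 5005))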

user-022bcd 08 December, 2022, 13:53:07

Hello, we are having trouble achieving stable confidence levels when recording a pupillary light reflex. The 3D target on the pupil is always jumping away from the pupil. What can we do?

nmt 08 December, 2022, 14:55:43

Hi @user-022bcd 👋. Please ensure the cameras are positioned appropriately in the first instance: https://docs.pupil-labs.com/core/#_3-check-pupil-detection Then feel free to share an example recording with data@pupil-labs.com such that we can provide more concrete feedback 🙂

user-022bcd 08 December, 2022, 14:47:15

Is there a way to call someone from Pupil Labs? I'm also German-speaking.

nmt 08 December, 2022, 15:01:12

@user-022bcd if you haven't already, it's also worth checking out our pupillometry best practices: https://docs.pupil-labs.com/core/best-practices/#pupillometry

user-a3e405 08 December, 2022, 18:14:07

Hello Pupil team, I have a quick question about the 3D & 2D pipelines. We are currently using the default 3D detection for calibration and surface datum collection to generate users' gaze on a surface (computer screen). However, we see a bias in our surface data, especially for data near the edges of the surface. Our experiment requires participants to rest their head on a chin rest to keep it stationary, so according to the best practices, the 2D pipeline is probably better for us. It's easy to switch from 3D to 2D for calibration via the Capture software; however, I'm not sure how to specify 2D detection for surface datum collection, which I believe uses the default gaze.3d data for mapping. Any suggestions would be greatly appreciated!

(update): I realized that switching calibration to 2d will automatically perform 2d detection for any future gaze data collection. Please ignore this question unless my understanding is wrong, thank you!

user-a3e405 09 December, 2022, 01:43:15

I have another question about the surface data. We are collecting surface data while tracking 5 targets on the monitor one by one (center-out). The surface data we collected seem to have a systematic bias in the y-axis (see attached figure). I tried both the 3D and 2D pipelines and the bias did not go away. I wonder if this issue has been seen elsewhere and what the possible reasons for it might be. Any thoughts?

Chat image

user-9f7f1b 09 December, 2022, 06:14:31

Pupil detection results are not correct; is this normal?

Chat image Chat image

user-9f7f1b 09 December, 2022, 06:17:09

Their confidence is 0.99.

user-d407c1 09 December, 2022, 08:01:54

Hi @user-9f7f1b Would you mind sharing what eye camera you are using, and what settings you have in the 2D detector, i.e., max and min pupil size?

user-9f7f1b 09 December, 2022, 09:54:10

Hi, I use the LPW and PupilNet datasets, and my test code is like your demo: https://github.com/pupil-labs/pupil-detectors#usage

nmt 09 December, 2022, 16:24:36

For the LPW dataset, you should be able to tune the parameters to work better on that resolution of video. We'll follow up with some values early next week!

nmt 12 December, 2022, 12:12:51

@user-9f7f1b, it's worth trying out the 2d detector parameters that were used back when the LPW dataset was collected (in Pupil Capture v0.8): https://github.com/pupil-labs/pupil/blob/v0.8/pupil_src/capture/pupil_detectors/detector_2d.pyx#L54. These are different from the current default parameters, as the eye cameras have changed since the LPW paper was released. Thanks @marc for your input here!
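
For reference, a minimal sketch of how custom 2D detector properties can be passed via the pupil-detectors package. The property names follow that package, but the example values below are placeholders and should be replaced with the v0.8 values from the linked source:

import cv2
from pupil_detectors import Detector2D

# Placeholder property values for illustration only - take the real ones from
# detector_2d.pyx in the linked v0.8 tag and tune them to the LPW resolution.
detector = Detector2D(properties={
    "pupil_size_min": 40,
    "pupil_size_max": 150,
    "intensity_range": 23,
})

# Hypothetical LPW eye frame loaded as a grayscale image
eye_frame = cv2.imread("lpw_frame.png", cv2.IMREAD_GRAYSCALE)
result = detector.detect(eye_frame)
print(result["ellipse"], result["confidence"])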

user-77b334 10 December, 2022, 15:30:36

Hi, I cannot install Pupil Invisible Companion on my Motorola Z2 Play (Android 8.0) from the Play Store.

user-77b334 10 December, 2022, 15:34:50

I have used Pupil Mobile before, but I cannot install it. What should I do?

user-77b334 10 December, 2022, 15:39:46

I get a message "This app isn't compatible with your device anymore. "

user-77b334 10 December, 2022, 15:40:32

How can I use my Pupil Core as a mobile device?

user-77b334 10 December, 2022, 15:52:38

Best Regards

user-d407c1 12 December, 2022, 07:50:32

Hi @user-77b334
- Pupil Invisible Companion only works with Pupil Invisible and on supported Companion devices (OnePlus 6/8/8T).
- The Pupil Mobile app is no longer maintained and is not available on the Google Play Store. You can try a small form-factor tablet-style PC to make Pupil Core more portable. If the specifications of such a device are low-end, you can record the experiment with real-time pupil detection disabled to help ensure a high sampling rate. Pupil detection and calibration can then be performed post hoc.

user-77b334 15 December, 2022, 17:52:04

Thanks for the clear explanation.

user-969d9d 12 December, 2022, 13:40:30

Hi, we're using the Pupil Core eye tracker and we need to set up a calibration that also takes depth into account, i.e., looking through a transparent screen: we need the coordinates on the 2D screen as well as when participants look at each other through the screen. Has this already been documented somewhere? Thanks!

nmt 13 December, 2022, 09:30:35

Hi @user-969d9d 👋. I'm not sure that I fully grasp what you're trying to achieve. Would you be able to elaborate a bit? Describing your research question might help.

user-313077 13 December, 2022, 03:34:15

Hello, when using Pupil Core with goggles, the pupils cannot be detected stably and continuously. Could you tell me the most appropriate distance and angle for detecting the pupils?

nmt 13 December, 2022, 09:32:40

@user-313077, what goggles are you using? Really the goal would be to get an unobstructed view of the pupils. Occluding lenses or frames will likely hinder pupil detection. For reference, the default Core headset typically positions the cameras around 30 mm from the pupils.

user-746d07 13 December, 2022, 13:31:37

Hello. I am currently measuring eye gaze during daily activities at home and am facing some problems. I would like to have 3-dimensional information about gaze (gaze point 3d or the 3-dimensional direction of gaze). First of all, I have found that my collaborators turn their pupils downward more than I had expected, so their gaze may be lower than the scene camera. In such a case, can we still record the information needed to calculate gaze point 3d or the 3D direction of gaze? Secondly, if the pupils are turned down considerably to begin with, as shown in the attached figure, the detection accuracy of the pupils drops considerably and the gaze information cannot be captured well.

user-746d07 14 December, 2022, 05:34:18

Sorry, I forgot to attach a picture. I hope you can answer my question above.

Chat image

user-c2d375 14 December, 2022, 10:30:25

Hi @user-746d07 👋 It looks like the eye cameras are not properly set up. To avoid the pupil being occluded by the eyelid when looking down, you need to adjust the camera positioning to get a clear image of both eyes, with the pupil visible even at extreme angles. Please take a look at our guide for some video examples of eye camera setup: https://docs.pupil-labs.com/core/#_3-check-pupil-detection

nmt 14 December, 2022, 10:45:20

Just to add to @user-c2d375's response, it is possible to record gaze outside of the scene camera's field of view. Note, however, that this typically corresponds to gaze angles with low confidence, since the pupils are often covered by the eyelids (if looking down). You can also adjust the scene camera by rotating it up and down if you want to focus your data collection on a certain region of the visual field!

user-908b50 13 December, 2022, 23:03:58

So how do I open recordings with Pupil Capture? I am looking to update them. I get a "calibration not found" error. Thanks!

nmt 14 December, 2022, 08:52:46

@user-908b50, let's continue this in the existing email thread 🙂

user-9f7f1b 14 December, 2022, 03:47:21

Hi, I have a question about the 2D detector. As we all know, the pupil is very dark in the gray image and can easily be distinguished from other regions, so why don't you use the cv2.threshold API to convert the eye image to a binary image, and then use Canny and other algorithms to fit the pupil ellipse?

marc 14 December, 2022, 11:29:31

Hi @user-9f7f1b! In simple scenarios something like that would work as a baseline approach. However, many things one encounters in practice would break such an algorithm, e.g. reflections on the cornea on top of the dark pupil are white and break the thresholding. For more details, I suggest looking into the papers on e.g. the "Labelled Pupils in the Wild" dataset or the "PuRe" pupil detector, which describe the various challenges.
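
For context, a rough sketch of the naive baseline being discussed (fixed threshold plus ellipse fit). The threshold value and file name are arbitrary, and, as noted above, corneal reflections, eyelashes and shadows routinely break this in practice:

import cv2

# Hypothetical grayscale eye image
gray = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)

# Fixed threshold: keep only the darkest pixels (assumed to be the pupil region)
_, binary = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)

# Fit an ellipse to the largest dark blob
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    if len(largest) >= 5:                  # cv2.fitEllipse needs at least 5 points
        ellipse = cv2.fitEllipse(largest)  # ((cx, cy), (major_axis, minor_axis), angle)
        print(ellipse)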

user-348329 14 December, 2022, 08:36:29

Hello, I'm currently receiving real-time gaze information using the filter_message.py code. In the code, what does 'gaze_normals_3d' (a key of the 'msg' dictionary) mean? I want to get the gaze vector information.

nmt 14 December, 2022, 10:03:24

Check out this page for an overview and description of the data made available: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv

user-e15aa5 14 December, 2022, 10:23:28

Hello, I was wondering if you used an artificial eye during the development of the Pupil Core to assess its performance. I am working on a comparison between different kinds of eye trackers, including Pupil Core, on a test bench, and I am trying to compare the precision of each device. Thanks in advance!

user-e15aa5 13 February, 2023, 09:00:19

Hello, I am still wondering which kind of artificial eye you might have used to assess the precision of the Pupil Core on a bench. I am looking for a model of eye compatible with your gaze tracking process. Do you have any advice? Thanks in advance!

user-6e186e 14 December, 2022, 18:55:14

Hey everyone, I'm new to eye tracking and looking into the various options. Do any of you know of mac-based analysis software for the pupil core?

user-9f7f1b 15 December, 2022, 02:14:46

You can download the software directly: https://github.com/pupil-labs/pupil/releases

user-4dc2c6 15 December, 2022, 07:25:22

To be clear, we need a world camera for the gaze data, right? Is there any way we can gather gaze data with only the Binocular Add-on?

user-9f7f1b 15 December, 2022, 08:23:30

Maybe you can use pye3d: https://github.com/pupil-labs/pye3d-detector. It will return the gaze vector and other information.

Chat image
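
For reference, a minimal sketch of the pye3d pipeline (running the 2D detector on each eye frame and feeding the result into pye3d). The focal length, resolution and file name are placeholders; the call pattern follows the pye3d/pupil-detectors documentation, so double-check it there:

import cv2
from pupil_detectors import Detector2D
from pye3d.detector_3d import CameraModel, Detector3D, DetectorMode

# Placeholder intrinsics for a 192x192 eye camera
camera = CameraModel(focal_length=283.0, resolution=(192, 192))
detector_3d = Detector3D(camera=camera, long_term_mode=DetectorMode.blocking)
detector_2d = Detector2D()

# Hypothetical single eye frame; in practice, loop over video frames with real timestamps
eye_frame = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)
result_2d = detector_2d.detect(eye_frame)
result_2d["timestamp"] = 0.0                 # pye3d expects a timestamp on the 2D datum

result_3d = detector_3d.update_and_detect(result_2d, eye_frame)
print(result_3d["circle_3d"]["normal"])      # gaze direction in eye-camera coordinates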

user-d407c1 15 December, 2022, 08:29:23

Whether or not you need a scene camera depends on what you are trying to do with the VR add-on. If you want to track the user's eye movements in a VR environment, you will need to cast the virtual scene camera from Unity to Pupil Capture. However, if you only want to use the Binocular add-on to track the user's eye movements without a VR environment or without any relationship to the world, you may not need a scene camera.

Note that you will need scene information to calibrate and to relate gaze vectors to scene coordinates.

user-8a20ba 15 December, 2022, 13:02:11

@user-d407c1 Hello, in our current work we want to cite URLs as well as papers. We have already cited the paper (Pupil: An open source platform for pervasive eye tracking and mobile gaze-based interaction), but we would also like to cite the URL of the Pupil Core documentation to provide an introduction. I would like to know if there is any guidance on URL citations (https://docs.pupil-labs.com/core/).

user-d407c1 15 December, 2022, 13:29:06

We do not have any specific guidance on URL citations; you can cite it as any other website:

“Core - Getting Started.” Pupil Labs, docs.pupil-labs.com/core. Accessed on XX/XX/XXXX

Given that we also have the documentation on GitHub, you can include the latest commit (6f529ffcb9737cfacef9ffe8a8a4ade43e2c042c) to make your citation more stable over time.

user-a3e405 15 December, 2022, 23:34:39

Hello, I am writing to ask about the gaze sampling rate. When I subscribe to the gaze topic, I receive data at closer to 250 Hz instead of the 200 Hz update rate of the eye cameras. Is this normal, or is there a transmission issue?

nmt 16 December, 2022, 14:52:17

Hi @user-a3e405. That's a result of the pupil data matching algorithm, whereby single eye images can be used more than once in gaze data generation. Check out this page for a full description: https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching

user-f3048f 17 December, 2022, 03:31:09

Hello, is it possible to compute a dispersion coefficient for a heatmap based on the x/y gaze coordinates information of a surface? If so, what would you recommend to achieve such computation? Thank you!

user-d407c1 19 December, 2022, 13:08:36

Do you mean the dispersion coefficient of fixations on surfaces? A dispersion matrix/covariance matrix?

user-f3048f 19 December, 2022, 13:29:47

Yes, a dispersion coefficient of fixations on a surface. Is it possible to obtain?

user-d407c1 19 December, 2022, 13:55:45

On export, you can find the dispersion (as a column) of surface-detected fixations in fixations_on_surface_<Surface Name>.csv

user-f3048f 19 December, 2022, 14:02:19

Oh, I'm sorry. I believe this column relates to each individual fixation. What I meant was a general dispersion coefficient for the surface, in the sense of a single dispersion value that represents the heatmap.

user-d407c1 20 December, 2022, 11:47:45

Hi @user-f3048f In statistics, dispersion is a class of metrics, and there is no single definition. Below, I provide you with a snippet that could fit some definitions of gaze dispersion across the surface, but ultimately it depends on what you aim to achieve.

import pandas as pd
import numpy as np
def get_surf_norm_spatial_dispersion_coeff(csv_path="gaze_positions_on_surface_<surface_name>.csv"):
    # Read csv file into a pandas DataFrame
    df = pd.read_csv(csv_path)

    # Remove gaze not detected on surface
    df = df[df["on_surf"] == True]
    # Grab gaze points as np array of shape (n, 2)
    gaze_points = df[["x_norm", "y_norm"]].to_numpy()

    # Calculate mean position
    mean_pos = np.mean(gaze_points, axis=0)

    # Calculate spatial dispersion as Euclidean distance between the mean position and each gaze data point
    dispersion = np.sum(np.linalg.norm(gaze_points - mean_pos, axis=1)) / len(
        gaze_points
    )
    print(dispersion)
    return dispersion

For context, fixation detectors are usually either velocity-based or dispersion-based, i.e. they either check whether the velocity of the eye is low enough for long enough, or whether the dispersion of gaze samples is small enough for long enough. Check out this paper for more information: https://link.springer.com/article/10.3758/s13428-021-01762-8

In the fixation detector of Core, dispersion is defined as the maximum pairwise distance of all points. You can find how the dispersion of fixations is computed here: https://github.com/pupil-labs/pupil/blob/318e1439b9fdde96bb92a57e02ef719e89f4b7b0/pupil_src/shared_modules/fixation_detector.py#L132 Thanks to @marc for the input.
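
To illustrate that definition (with made-up sample points; Core applies it to gaze directions rather than raw surface coordinates), the maximum pairwise distance can be computed like this:

import numpy as np
from scipy.spatial.distance import pdist

# Hypothetical gaze samples in normalized surface coordinates
points = np.array([[0.48, 0.51], [0.50, 0.49], [0.52, 0.50]])

# Dispersion in the sense used by Core's fixation detector:
# the maximum pairwise distance between all samples
dispersion = pdist(points).max()
print(dispersion)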

user-4ba9c4 20 December, 2022, 21:31:06

Hello, I am getting an error whenever I try recording with Lab Streaming Layer (C# or even LabRecorder): WORLD: Error Extracting Gaze Sample:0. The error goes away when I stop the LSL recording.

user-9f7f1b 21 December, 2022, 02:59:06

Hi! Have you ever tried to measure interpupillary distance? For example: calibrate the eye cameras' intrinsics and extrinsics, then use Detector3D to get the two eye centers, and finally obtain the interpupillary distance via the relation between the two eye cameras. Does this work?

user-9f7f1b 03 January, 2023, 12:04:42

Hi, I have tried this method, but the result is wrong; could you please give me some advice? Specifically, I recorded a video with a binocular camera, one camera per eye. Then the EllSeg algorithm was used to obtain the pupil ellipse, and pye3d was used to obtain the eye center. After that, I used the extrinsics to convert both eye centers into the same coordinate system. Finally, I used the Euclidean distance between the two centers as the interpupillary distance. The resulting IPD range is 44-50 mm, which is obviously wrong.

user-6d7fe6 21 December, 2022, 12:58:16

Hi, I want to display gaze coordinates as an overlay in Pupil Capture. I have looked at the example plugin frame_index.py (https://gist.github.com/papr/c123d1ef1009126248713f302cd9fac3) as a template for writing a plugin myself. The plugin is displayed in Pupil Capture and I can also see output from the logger, but I don't see the frame index in the video. How can I display text in the video?

user-c2d375 21 December, 2022, 15:13:32

Hi @user-6d7fe6 👋 frame_index.py is a Pupil Player plugin, so you need to add the script to the Player plugin folder and then enable the plugin to visualize the frame index over the world video.

user-2e5a7e 21 December, 2022, 18:38:01

I know Pupil Mobile has been deprecated, but I was told it should still work. We originally had issues with our university wireless network blocking the signal via a firewall; however, we found another wireless network that should be firewall-free. I've tried connecting the Android mobile device running Pupil Mobile and the computer running Pupil Capture to this network, but the mobile device still isn't registering in Pupil Capture. Can anyone provide any guidance? Or alternative options to remain mobile during capture?

I read through posts mentioning Pupil Mobile on this channel, and the only solution I could find was using an iPad running Pupil Capture to capture the data and stream wirelessly to the main computer, instead of an Android mobile device with Pupil Mobile; however, this will not work for our lab. Many of our research participants are small children, and an iPad is too big for them to wear during movement tasks.

user-d407c1 22 December, 2022, 08:55:44

Hi @user-2e5a7e! We have replied to your email.

user-4bc389 22 December, 2022, 06:38:53

Hi, what values are usually set for the parameters in this? Thank you

Chat image

nmt 27 December, 2022, 07:41:01

Check out the docs for an explanation of how the blink detector works, and for tips on setting appropriate values: https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector

user-348329 27 December, 2022, 06:26:11

Hi, I want to understand the detailed method for getting the 3D gaze point from the pupil positions of both eyes. Where can I find this information?

nmt 27 December, 2022, 07:30:38

Hi @user-348329 👋. See this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/1013835049717731368

End of December archive