core


user-908b50 01 August, 2021, 00:20:19

I am having trouble wrapping my head around Pupil Remote. Can anyone kindly talk me through it? Also, do remote annotations work for offline data processing? I get the sense that they work during remote data collection, but not post hoc. Just following up on my earlier comment (from June) about annotating data automatically. We are annotating for missing or problematic gaze estimates post hoc, manually. Sometimes we notice the marker without the red gaze marker during our data check. Is that problematic?

user-908b50 01 August, 2021, 00:22:57

Also, I am still wondering about a fixation ID on a surface that has multiple x and y values. The more I look at my data, the more I think they just describe dispersion around the space for that specific fixation ID on the surface. The primary fixations file averages the x and y values over a few frames. That's correct, right? So when I am averaging it out for an across-participant analysis, would it make sense to average the averages? Are these averages (per fixation ID) what should be graphed when visually inspecting the data for participant engagement?

user-7d2ccc 01 August, 2021, 22:55:00

How can I import/export calibration?

nmt 02 August, 2021, 10:30:47

Please see this video for reference: https://youtu.be/eEl3sswsTms

user-7daa32 02 August, 2021, 00:55:58

Hello everyone

Please, why is the keyboard not compatible with PowerPoint? If I cascade my screen with the PowerPoint presenter view and Capture open on the laptop, and the PowerPoint slide show on the desktop, annotations won't be sent. Annotations don't work unless Capture was the last window clicked. But if I click the PowerPoint presenter view to control the slides, annotations stop working until I click Capture again. How can I keep both the presenter view and Capture active on the laptop while the slide show runs on the desktop?

user-7daa32 02 August, 2021, 02:00:49

Capture doesn't work when PowerPoint is active: it doesn't send annotations. And when Capture is active, animations in PowerPoint won't work.

nmt 02 August, 2021, 09:08:21

Hi @user-a0389d. During the time between fixations, other ocular events, e.g. saccades and blinks, occur. It would perhaps be helpful to think more about what is happening in your data before applying a blanket term such as fixation-changing time.

user-a0389d 02 August, 2021, 13:43:35

Thank you!

user-35fbd7 02 August, 2021, 09:20:42

core invisible Hi guys! Today during recordings we ran into a problem: the recording suddenly stops and the data is missing. We got this report:

Chat image

user-35fbd7 02 August, 2021, 09:21:16

invisible What would you advise for fixing this error?

nmt 02 August, 2021, 09:54:19

Hi @user-7b683e. Some notes on calculating eye and head rotation speed: 1) Eye rotation: There are several ways to calculate eye rotation in degrees using the data made available by Core. You can work with circle_3d->normal, as suggested by @papr. Or you could use the gaze_point_3d. The point is you will have to do some processing. Have a look at this tutorial which describes how to calculate gaze velocity post-hoc: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb for reference. 2) Head rotation speed: You can convert the rotation output of the real-time headpose tracker to angles using this code: https://discord.com/channels/285728493612957698/633564003846717444/856517135580528700
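
For illustration, here is a rough sketch (not the exact tutorial code) of estimating angular gaze velocity from the gaze_point_3d columns of an exported gaze_positions.csv; double-check the column names against your own export:

```python
import numpy as np
import pandas as pd

# Pupil Player raw data export (path is just an example)
gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze = gaze[gaze["confidence"] > 0.6]  # optionally drop low-confidence samples

# gaze_point_3d is given in scene camera coordinates
vectors = gaze[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()
timestamps = gaze["gaze_timestamp"].to_numpy()

# angle between successive (normalised) gaze vectors, in degrees
unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
cos_angle = np.clip(np.sum(unit[1:] * unit[:-1], axis=1), -1.0, 1.0)
angles_deg = np.degrees(np.arccos(cos_angle))

# divide by the inter-sample interval to get deg/s
velocity = angles_deg / np.diff(timestamps)
print(velocity[:10])
```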

user-7b683e 03 August, 2021, 21:34:00

There is one last thing we want to ask. As you have said, we will use the gaze velocity script to calculate angular eye velocity. However, we want to obtain velocities along just the x-axis. In that case, should we take only the psi (i.e. roll) values?

nmt 02 August, 2021, 10:26:49

Hi @user-908b50. • Pupil remote works in real-time, e.g. to stop/start recordings, or to send remote annotations to Pupil Capture over the network. Have a look at our Pupil Helpers repository for examples: https://github.com/pupil-labs/pupil-helpers/tree/master/python • Please confirm what you mean by red marker – are you referring to the red circles that overlay the Pupils? • The fixation on surface is showing the dispersed gaze locations grouped by fixation, and yes, the fixation detector (results in primary fixation file) provides fixation locations as the centroid of these. Conceptually, if you wanted to visualise data from multiple participants, I think it would make sense to produce an aggregated heatmap
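
For example, a minimal Pupil Remote sketch along the lines of those helper scripts (adjust the IP/port to your setup):

```python
import zmq

ctx = zmq.Context()
# Pupil Remote listens on a REQ/REP socket, port 50020 by default
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

remote.send_string("R")      # start a recording
print(remote.recv_string())  # Capture replies with a confirmation

# ... run your task ...

remote.send_string("r")      # stop the recording
print(remote.recv_string())
```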

user-908b50 03 August, 2021, 18:44:51

Okay, got it! No, I mean the red marker that detects gaze on screen. Sometimes that is missing but the heatmap and the marker id still shows up when tracking gaze. What does that mean?

nmt 02 August, 2021, 10:36:22

Hi @user-7daa32, this is standard behaviour for how an OS directs keyboard input to a given program instance – it must be in focus/clicked on. You might have to send annotations to Pupil Capture via our network API for your desired functionality
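
As a rough sketch of that route, based on the remote annotations example in Pupil Helpers (treat the details as illustrative and check that script for the authoritative version), the machine controlling PowerPoint would publish annotations to Capture like this:

```python
import time
import msgpack
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://192.168.1.10:50020")  # example IP of the machine running Capture

# ask Pupil Remote for the PUB port and the current Pupil time
remote.send_string("PUB_PORT")
pub_port = remote.recv_string()
remote.send_string("t")
pupil_time = float(remote.recv_string())

pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://192.168.1.10:{pub_port}")
time.sleep(1.0)  # give the connection a moment before publishing

# requires the Annotation plugin to be enabled in Capture
label = "slide_change"  # example label
annotation = {
    "topic": f"annotation.{label}",
    "label": label,
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.packb(annotation, use_bin_type=True))
```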

user-7daa32 02 August, 2021, 11:37:27

Do you think using a third computer might work ?

user-f5b6c9 02 August, 2021, 11:10:54

Hi! I am wondering if the 3.4 release causes spontaneous CPU overload, since the sampling frequency sometimes seems to pause:

Chat image

user-f5b6c9 02 August, 2021, 11:12:50

This didn't happen in 3.3. I'd like to use 3.4, but these little breaks may be problematic during important events etc. Any idea or help? 🙂 Thanks in advance!

nmt 02 August, 2021, 17:54:41

Hi @user-f5b6c9, I'm not aware of this. Would you be able to share example recordings of normal & missing events with [email removed]

user-98789c 02 August, 2021, 15:33:54

After recording with Pupil Core, we came to realise that even if we pick exactly defined windows (8 s) of raw pupil diameter data, there is still not the same number of samples in these windows. Can this be because at some instances the device does not record?

nmt 03 August, 2021, 09:13:35

Hi @user-98789c. The eye cameras have a variable sampling rate, so it is unlikely you can extract epochs corresponding exactly to 8.0 s, and approximating will give you slightly different numbers of frames in each bin. As @papr suggested, interpolation is a useful method if you want to compare specific epochs with evenly spaced samples: https://discord.com/channels/285728493612957698/285728493612957698/869558681363177472
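
A minimal sketch of that interpolation idea, assuming an exported pupil_positions.csv (column names as in the raw data export):

```python
import numpy as np
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")
# keep one eye and only the pye3d data
pupil = pupil[(pupil["eye_id"] == 0) & pupil["method"].str.contains("3d")]
pupil = pupil.sort_values("pupil_timestamp")

t = pupil["pupil_timestamp"].to_numpy()
d = pupil["diameter_3d"].to_numpy()

# resample onto an evenly spaced time base, e.g. 120 Hz
t_even = np.arange(t[0], t[-1], 1 / 120)
d_even = np.interp(t_even, t, d)
```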

user-98789c 03 August, 2021, 06:53:55

I'd appreciate your thoughts on this!

user-ee72ae 02 August, 2021, 17:37:52

Hello, I didn't know about this. Is there any app that will replace it? I'm using a Samsung J6 phone.

nmt 02 August, 2021, 17:57:54

There will be no replacement app. We made the decision to phase out development for Pupil Mobile after the launch of Pupil Invisible as Pupil Invisible provides a more robust and user-friendly solution for eye tracking in mobile + dynamic environments

nmt 02 August, 2021, 17:56:13

You can certainly send remote annotations from a third computer over the network, so long as it is connected to the same network as the computer running Pupil Capture

user-7daa32 04 August, 2021, 20:57:03

We resolved this by using a second laptop. Thanks

user-ee72ae 02 August, 2021, 18:51:11

@nmt Thanks for the answer. I have the Pupil Core eye tracker – will I have to change my eye tracker or buy a new one? Because today I only have this equipment.

nmt 03 August, 2021, 09:14:59

Yes, Pupil Core was only designed to work with the now deprecated Pupil Mobile application. It should still work, but it is not supported.

user-ee72ae 02 August, 2021, 18:56:37

@nmt For Pupil Core, is Pupil Mobile the only app that works correctly?

user-7b683e 03 August, 2021, 09:55:33

Which field should I use for head rotation? I didn't see it in the post you linked.

nmt 03 August, 2021, 15:46:05

Subscribe to head_pose->camera_poses. The format is: [rotation_x, rotation_y, rotation_z, translation_x, translation_y, translation_z]. You will need the first three components to convert to degrees.
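
A rough subscription sketch, assuming the default Pupil Remote port and that the datum carries a camera_poses field as described above:

```python
import msgpack
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("head_pose")

while True:
    topic, payload = sub.recv_multipart()  # topic frame + msgpack payload
    datum = msgpack.unpackb(payload, raw=False)
    # camera_poses: [rotation_x, rotation_y, rotation_z, translation_x, translation_y, translation_z]
    rotation = datum["camera_poses"][:3]
    print(topic.decode(), rotation)
```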

user-4bad8e 03 August, 2021, 11:21:26

Thank you for the answer. I agree with that possibility, but we also find a large variance when confidence is high. Is there any other possible cause of this large variance? (Could it relate to an error in building the 3D eye model? Some subjects tend to show this large variance frequently.)

Chat image

nmt 03 August, 2021, 15:48:11

This could also be caused by a poorly fitted 3d model. If you wish, share an example recording with data@pupil-labs.com and we will provide feedback

user-ee72ae 03 August, 2021, 12:45:52

@nmt tks

user-562b5c 03 August, 2021, 14:00:23

Hi @papr

I would like your help with the blinks.csv file.

1) Is there any documentation explaining the csv file columns? I have gone through the blink detector's github page (https://bit.ly/3jh7zRt) and https://docs.pupil-labs.com/core/software/pupil-capture/#blink-detection

2) I would like to get the blink rate (blinks/minute) and I guess I can calculate it based on the start and end timestamps in the csv; do I need to consider something else for calculating blink rate?

3) What is the confidence value in the csv telling us? Is it something similar to the confidence value in pupil_positions.csv, such that I can use it to filter out erroneous blinks the way I remove low-quality pupil data below 0.6 confidence? Should I remove data below a certain blink confidence value? I have seen your previous comment – "blink confidence is the mean of the absolute filter response during the blink event, clamped at 1." I am just trying to understand the practical use of it.

nmt 03 August, 2021, 16:05:14

Hi @user-562b5c 👋. Have a look at this message for a reference of how the blink detector works: https://discord.com/channels/285728493612957698/285728493612957698/842046323430916166. After reading that, examine your recordings and set the onset/offset thresholds such that they accurately classify blinks. Use the Eye Overlay plugin in Pupil Player to view the eye videos and confirm that the thresholds are correct and blinks are correctly classified. If you want to calculate blink rate, you can indeed use the number of blinks in a given period of your recording, assuming that the blinks are accurately classified.
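
For the blink-rate part, a minimal sketch over an exported blinks.csv might look like this (verify the column names against your own export):

```python
import pandas as pd

blinks = pd.read_csv("exports/000/blinks.csv")

# optionally drop implausible events after visually confirming a sensible
# threshold for your recording (0.5 here is an example, not a recommendation)
blinks = blinks[blinks["confidence"] >= 0.5]

# better: use the recording's own start/end times if you have them
duration_s = blinks["end_timestamp"].max() - blinks["start_timestamp"].min()
blink_rate = len(blinks) / (duration_s / 60.0)
print(f"{blink_rate:.1f} blinks/minute")
```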

user-562b5c 03 August, 2021, 21:49:09

Thanks @nmt, I now have an understanding of the confidence values and thresholds. But as with pupil detection, where it is recommended to drop values below 0.6 confidence, is there some recommended confidence value for blinks, with respect to the data in the blinks.csv file?

user-908b50 03 August, 2021, 18:46:18

I have another question: how do we quantify the amount of data missing for a single participant? We want to exclude participants with more than x% missing, but the number of fixations detected for each person obviously differs. Do you have any suggestions?

nmt 04 August, 2021, 08:55:28

• The Surface Tracker renders markers green when they are detected. It is possible that the Head Pose Tracker is running simultaneously, which renders markers red when a 3d model has been built. Sometimes they remain green with a poor model. If both plugins were running, you could end up with red/green markers while the heatmap and marker IDs were still shown. Double-check which plugins are loaded in the Plugin Manager. • We run the 2D detector on every frame of eye data and assign a confidence value. Gaze estimates with confidence <0.6 can be inaccurate. You could use this to filter low-quality data and calculate a percentage of low-quality or missing data. Be aware that blinks also correspond to drops in confidence.
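
A short sketch of that calculation from an exported pupil_positions.csv:

```python
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")  # example path
low = (pupil["confidence"] < 0.6).mean() * 100
print(f"{low:.1f}% of pupil samples are below 0.6 confidence")
```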

user-a0389d 03 August, 2021, 20:13:17

Hello everyone! I am writing to ask if anyone happens to have the methodology for measuring baseline pupil diameter? I do not want to just create my own way and I believe it should be more reasonable if I can cite some references related to this topic. Thank you so much!

user-f5b6c9 04 August, 2021, 08:05:51

Hi @nmt . Sure, this wouldn't be a problem, but I already noticed that these gaps occur at the beginning and ending of calibrations. Is this expected?

nmt 04 August, 2021, 10:12:19

What exactly is your 'signal' data?

nmt 04 August, 2021, 09:14:20

The head pose tracker results are given relative to the coordinate system of the origin marker. If you position the marker vertically onto a computer screen, such that it is perpendicular to the Core scene camera viewing direction, and ensure that the marker is pointing vertically upwards (see pic), the output of the rod_to_euler function should correspond to [ "pitch", "yaw", "roll"], with roll representing left/right head tilts

Chat image
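
For reference, a sketch of what such a rod_to_euler conversion could look like with OpenCV; the exact axis order and signs depend on how the origin marker is placed, so treat this as illustrative:

```python
import cv2
import numpy as np

def rod_to_euler(rotation_vector):
    """Convert a Rodrigues rotation vector (rotation_x, y, z) to Euler
    angles in degrees. Axis conventions depend on marker orientation."""
    rot_mat, _ = cv2.Rodrigues(np.asarray(rotation_vector, dtype=float))
    # RQDecomp3x3 returns the three Euler angles (in degrees) as its first value
    angles, *_ = cv2.RQDecomp3x3(rot_mat)
    pitch, yaw, roll = angles
    return pitch, yaw, roll

print(rod_to_euler([0.1, 0.0, 0.0]))
```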

user-7b683e 04 August, 2021, 12:46:49

Thanks a lot, @nmt. We are no longer worried about the head pose topic and its usage.

nmt 04 August, 2021, 09:27:41

See this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/704257189904384051 While the blink confidence is certainly useful, I definitely recommend visually inspecting at least some of the blink results to ensure that blinks are correctly classified

user-562b5c 04 August, 2021, 10:33:10

Thanks!

user-7b683e 04 August, 2021, 12:50:11

Well, is the same approach suitable for gaze velocity? Should we take only the roll values? And when we use them, should we apply an ad-hoc procedure?

nmt 04 August, 2021, 15:20:49

That function is only valid for the headpose tracker. You will have to calculate eye rotation in degrees using a different method, e.g. like in the tutorial I linked to. Whether you do that in real-time or post-hoc depends on your experiment

user-7b683e 04 August, 2021, 15:29:30

I have already looked into the method you linked.

user-7b683e 04 August, 2021, 15:38:03

I asked this question about calculating eye rotation via gaze points in 3d.

nmt 05 August, 2021, 07:40:21

See this tutorial for reference: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb

user-90c44a 04 August, 2021, 18:58:36

Hi. I am trying to use the Pupil Core in my research and need a mobile way to gather data. I read through a few discussions about using the Pupil Mobile App. It seems like it's no longer supported, but should still work.

When I try running the app on my Google Pixel 4a, it asks me to grant permissions for the cameras, freezes and then restarts my phone. Is there any documentation on what the best type of phone to use is? Or maybe system requirements for that phone?

nmt 05 August, 2021, 07:48:40

Hi @user-90c44a 👋 . Pupil Mobile was designed with Android 8 in mind, and was compatible with OnePlus phones. Your mileage may vary with other Android versions/phones.

user-7daa32 04 August, 2021, 20:51:35

Please, I would like to know what is responsible for this blank. Thanks

Chat image

user-7daa32 04 August, 2021, 20:56:26

Another question: how can I create a really good heat map? The only idea I have is to screenshot what looks like the heat map when surfaces are overlaid on the AOI in Player.

user-7daa32 05 August, 2021, 11:58:12

Please answer this. Thanks

nmt 05 August, 2021, 13:12:33

Can you provide some further information, e.g. have you tried disconnecting and reconnecting the headset/restarting Pupil Capture?

user-7daa32 05 August, 2021, 13:14:12

What I simply asked is: what might have led to that? Thanks

nmt 05 August, 2021, 13:45:53

It's difficult to say (e.g. disconnect/drivers). If it happens in future, feel free to share the Capture .log file

user-d1ed67 05 August, 2021, 18:43:47

Hi. I have some questions about how to understand & utilise the exported data from Pupil Player.

1) Is ALL the data in pupil_positions.csv defined within the eye camera space (either 2d or 3d space)? That is, even if there is no world camera, a full pupil_positions.csv file can still be generated solely based on eye camera video.

2) How are the 3d positions of the eyeballs in the 3d world camera space calculated? Is this calculation dependent on any empirical data (like the average distance between the two eyeballs in humans)?

3) What's the relationship between gaze_point_3d, eye_center_3d, and gaze_normal? That is, is gaze_point_3d calculated based on eye_center_3d and gaze_normal? Or gaze_normal is calculated based on gaze_point_3d and eye_center_3d?

nmt 09 August, 2021, 10:33:12

Hi @user-d1ed67 👋. 1) Yes, all data is relative to the eye camera coordinate system. Even with no world camera, the data in pupil_positions.csv are generated 2) The 3d calibration pipeline uses bundle adjustment to find the physical relation of the cameras to each other and uses this relationship to linearly map pupil vectors that are yielded by the 3D eye model. Calibration thus allows us to estimate the centre of the 3D eye in world camera coordinates 3) The gaze normals describe the visual axes of the left and right eyes (i.e. the line between the eyeball centre and the object that's looked at). We find the intersection of the gaze normal vectors to yield gaze_point_3d. In some cases, convergence of the eyes is difficult to detect, e.g. when the gaze normals are almost parallel. When this happens, we take the average point between the two gaze normals
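
To make point 3 concrete, here is a small sketch of the underlying geometry (finding the point mid-way between two, possibly skew, gaze normal lines); it illustrates the idea rather than reproducing the exact implementation:

```python
import numpy as np

def nearest_intersection(c0, n0, c1, n1):
    """Midpoint of the shortest segment between the lines c0 + t*n0 and
    c1 + s*n1 (eye centres + gaze normals). Assumes the lines are not parallel."""
    n0 = n0 / np.linalg.norm(n0)
    n1 = n1 / np.linalg.norm(n1)
    # solve for t, s that minimise |(c0 + t*n0) - (c1 + s*n1)|
    A = np.array([[n0 @ n0, -n0 @ n1],
                  [n0 @ n1, -n1 @ n1]])
    b = np.array([(c1 - c0) @ n0, (c1 - c0) @ n1])
    t, s = np.linalg.solve(A, b)
    return (c0 + t * n0 + c1 + s * n1) / 2  # average of the two closest points

# toy values in scene camera coordinates (mm)
eye0, normal0 = np.array([30.0, 0.0, 0.0]), np.array([-0.05, 0.0, 1.0])
eye1, normal1 = np.array([-30.0, 0.0, 0.0]), np.array([0.05, 0.0, 1.0])
print(nearest_intersection(eye0, normal0, eye1, normal1))
```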

user-a0389d 05 August, 2021, 20:40:26

Hello! I would like to ask a few follow-up questions regarding the exported data from Pupil Player, following on from @user-d1ed67's: 1. I noticed that there are “norm_pos_x” and “norm_pos_y” columns in the “gaze_positions”, “pupil_positions”, and “fixations” datasets. I took a quick look at the x and y numbers in these three datasets, and they are not the same at a given timestamp. Therefore, my question is: which XY coordinates are recommended for use? 2. I would like to measure the distance between a fixation and the center of the screen. For the fixation, it is quite easy to get the normalized x and y from the exported “fixations” dataset. Are there any solutions for identifying the x and y coordinates of the center point of the screen? I remember that each subject went through the calibration process before the experiment started, in which the subject stares at the center of the screen as one calibration point. Would it be possible to identify the coordinates of the center point based on the calibration? Any answers will be much appreciated! Thank you!

user-02b39e 06 August, 2021, 13:40:57

Hi all! I have a small question, or rather a remark, regarding the Capture software. I am running Pupil from source on an Ubuntu 20.04 machine and followed the installation instructions from the github page: I installed all the dependencies that are recommended for Ubuntu 18 and up. I got an error for the scikit-learn package (==0.24.2 is not compatible with pye3d==0.1.1), but after downgrading, everything worked fine. I also had problems with the cv2 bindings, but the stackoverflow post you mention for troubleshooting cv2 issues worked as well. However, Pupil Capture's eye windows crashed at every startup, with a seemingly very similar problem to this issue: https://github.com/pupil-labs/pupil/issues/2109. I managed to work around the problem the same way as in this post, by using pyzmq==20.0.0 instead of the newest version. Right now Pupil Capture works adequately, although it starts up quite slowly – my machine even offers to close the application – but after a few moments it starts to load the cameras and works fine. This persists after disconnecting the hardware and launching again. I know that Ubuntu 18 is the officially supported version, but I did not expect to run into problems with 20. Not really a question, more an observation; any reply will be appreciated though! Thank you!

user-312660 06 August, 2021, 15:50:26

Hi! What are the units for the diameter in the csv file?

nmt 09 August, 2021, 11:47:03

diameter is given in pixels as observed in the eye videos and diameter_3d in millimetres, as provided by the 3d eye model

user-7b683e 06 August, 2021, 20:23:31

Hello @nmt ,

I have performed the method you linked for head rotation and placed the markers as you described.

I plotted the data returned by the method as x (blue), y (green) and z (red). You can see this below.

At this point I have an issue with the continuity of the data. The relevant data has a discontinuous form.

I think some markers weren't detected correctly when the test subject rotated the headset.

Well, is this normal? Do you get the same result in your experiments? Or could the source of this issue be the performance of my test machine?

Chat image

nmt 09 August, 2021, 11:50:27

If the markers are visible to the scene camera and well-detected, you should absolutely get a continuous signal. The intermittent nature of your data suggests either one of those has gone wrong somewhere

user-7b683e 06 August, 2021, 20:30:28

If this behaviour is normal, is there any other method for getting gyro-like data for the headset, other than attaching a separate gyro sensor to the current headset?

user-98789c 09 August, 2021, 08:34:07

Another question about this: do we have any information about how the sampling rate varies? Does it always change in a common pattern? Does the change come from some distribution?

user-98789c 09 August, 2021, 10:32:22

Also, from what I hear about other eye trackers, they seem to have this interpolation as an internal pre-processing step. Isn't that the case with Pupil Core?

nmt 09 August, 2021, 10:56:41

1) pupil_positions are relative to eye camera space (this type of data does not require calibration); gaze_positions/fixations are relative to world camera space (this type of data requires calibration), and contains information about where the subject was looking within the world camera’s field of view 2) To determine the relationship between the headset wearer's gaze and a computer screen, add/present fiducial markers on the screen and use the Surface Tracker plugin. You can then map gaze to the screen and know the distance of gaze points relative to the screen’s centre: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker (Note that the calibration procedure you mentioned is a separate process.)

user-a0389d 11 August, 2021, 13:52:51

Hello! I have a follow-up question about the first point. Since the fixations are relative to the world camera space, should I just directly assume the center of the screen to be (0.5, 0.5) in normalized coordinates? Also, I took a look at my collected video to validate this assumption. I attached a screenshot which I believe is very close to the center of my screen (fixation ID = 120). However, I noticed that this fixation (ID = 120) has a coordinate of (0.497, 0.254). My question is, which coordinate ((0.497, 0.254) vs. (0.5, 0.5)) best describes the screen center? Any suggestions will be much appreciated!

Chat image

nmt 09 August, 2021, 11:44:37

Hi @user-02b39e 👋. Was there a specific reason that you are running from source? Nearly everything can be done using the plugin or network api of the bundled applications

user-02b39e 10 August, 2021, 09:08:45

Hi @nmt! There isn't, actually! 😄 Thank you for pointing it out; using the bundled version is much more stable. (What probably happened is that I just scrolled through the github page and only checked the developer's section – silly mistake.)

nmt 09 August, 2021, 12:16:36

I’ve collated some of @papr ‘s previous responses to a similar question which should be useful for others (https://discord.com/channels/285728493612957698/285728493612957698/700281606778257428): On Windows 10, the eye video timestamps are calculated based on the time they are received by Pupil Capture, where an approximated fixed offset is applied to compensate for the transmission delay from camera to Pupil Capture. However, the transmission delay isn’t really fixed, and the variance you see results from a variable transmission delay. On operating systems where we can measure the time exposure, we see a much smaller variance in the timestamp differences. Thus, it should be possible to assume a fixed inter-sample duration, e.g. 1/average sampling rate, but rely on pupil timestamps to detect dropped frames, i.e. if the time difference between pupil timestamps > 2*1/average sampling rate. We don’t do any interpolation as an internal pre-processor.
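
A short sketch of that dropped-frame check on exported pupil timestamps:

```python
import numpy as np
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")
# inspect one eye at a time; eye0 and eye1 have independent timestamps
t = np.sort(pupil.loc[pupil["eye_id"] == 0, "pupil_timestamp"].unique())

diffs = np.diff(t)
mean_interval = diffs.mean()          # ~ 1 / average sampling rate
dropped = diffs > 2 * mean_interval   # gaps larger than two sample periods
print(f"{dropped.sum()} suspected dropped frames out of {len(diffs)} intervals")
```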

user-98789c 10 August, 2021, 06:21:14

Thanks a lot @nmt, I'm starting to understand the whole issue. With the help of Pablo, I had calculated this average transmission delay throughout my whole experiment (from Matlab to Capture) by averaging the offsets from different instances during the experiment. Now, to do the interpolation correctly, I need to know where I had missing samples, and this should be possible using your explanation: I calculate the inter-sample duration, and if the time difference between pupil timestamps is more than twice the inter-sample duration, I interpolate, correct? And the interpolation could be a simple backward-fill interpolation, correct?

user-e84f8e 09 August, 2021, 18:53:10

Can I please ask how I can analyze multiple video recordings at one time (an aggregated view)?

nmt 10 August, 2021, 13:20:32

Hi @user-e84f8e 👋 . Pupil Core software does not have built-in support for multi-participant analysis. What is it exactly you wanted to do with multiple recordings?

user-1672dc 10 August, 2021, 08:58:44

Hello everyone, when annotating fixations, Pupil Player creates a file called "annotation_player.pldata". Does someone know what happens to that file when you change the fixation filter in Pupil Player and go back to annotating in the same video? Does it change every fixation entry in the annotation file, or only add the new ones? Concretely, what would happen to a fixation (to be annotated) of 250 ms total duration in this scenario: if the maximum duration was set to 200 ms at first, then 50 ms would not get detected as part of the fixation (minimum duration of 80 ms). Now, what if you increase the maximum duration setting in Pupil Player to over 250 ms? Would it create a new fixation beside the old one, overwrite the old fixation entirely, or not change anything at all in the annotation file? Thanks in advance!

nmt 10 August, 2021, 13:44:48

Hi @user-1672dc. Annotations are completely independent of the fixation detector – re-running the fixation detector with different thresholds won't change anything in the annotation file. It is possible to export the fixation results, which are saved to fixations.csv. This file contains each fixation and its associated timestamps (https://docs.pupil-labs.com/core/software/pupil-player/#fixation-export). There usually isn't a need to manually annotate fixations; rather, you could annotate important events in your recording and correlate the fixations with them using the timestamps.

user-a0389d 10 August, 2021, 13:26:27

Thank you so much, @nmt! I believe my next question is regarding the calibration of gaze_positions/fixations' relation to world camera space. Is there a specific section from the documents that introduce this calibration part? Much appreciated!

nmt 10 August, 2021, 13:51:11

See these docs for reference: https://docs.pupil-labs.com/core/software/pupil-capture/#calibration

nmt 10 August, 2021, 13:32:54

Correct, a sample duration of twice what is expected would indicate a dropped frame. You could indeed resample your timestamps (assuming a fixed sampling frequency) and interpolate your missing pupil data. The optimal interpolation method is somewhat of an open question from what I know.

user-98789c 10 August, 2021, 13:35:17

thanks @nmt, would you say interpolating for all the samples with duration of twice what is expected, would cover for all the missing samples?

nmt 10 August, 2021, 13:57:59

That should account for most dropped frames

user-98789c 10 August, 2021, 14:01:09

thanks a lot

user-7daa32 10 August, 2021, 15:34:14

Hello guys

Please do you have idea how I can have a scanpath data from Pupil Lab?

nmt 10 August, 2021, 15:45:23

You can use the Vis Polyline Plugin: https://docs.pupil-labs.com/core/software/pupil-player/#vis-polyline

user-7daa32 10 August, 2021, 15:46:54

Thanks... Can I visualize the whole path and screenshot or export it as a high-resolution image? We need it as part of what we want to publish.

user-7daa32 10 August, 2021, 15:55:42

Something like this

Chat image

nmt 10 August, 2021, 16:04:22

See this tutorial for reference: https://github.com/pupil-labs/pupil-tutorials/blob/master/03_visualize_scan_path_on_surface.ipynb

user-562b5c 10 August, 2021, 22:33:55

Hi @papr,I am trying to use the 'unified formula for light-adapted pupil size' (https://jov.arvojournals.org/article.aspx?articleid=2279420) to factor in the change in pupil size due to ambient light. One of the parameters that the formula requires is the field diameter in degrees. There are some issues with which I need your assistance. -

1) Should I assume the field diameter is the same as the FOV? Or is the field diameter the diameter of the largest circle that can fit in the FOV? The FOV of a camera is specified with three parameters (horizontal, vertical and diagonal), but the formula requires a single value for field diameter. How do I get a single FOV value from these three values?

2)I plan to use the FOV of the world camera. On this page https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation, I am not sure which one is the world camera here. Thanks.

user-562b5c 12 August, 2021, 21:11:08

Hi @nmt , could you please help with this

user-98789c 11 August, 2021, 08:57:24

To calculate the peak pupil diameter in trials of a recording, aside from the routine pre-processing, is any extra processing necessary, or can it be a simple calculation of the maxima?

nmt 11 August, 2021, 11:25:33

If you have done adequate pre-processing, such as checking for trend-line deviation outliers as described in https://doi.org/10.3758/s13428-018-1075-y, the maximum pupil size estimates should be accurate. Note, when recording pupil diameter using Core, try to ensure that both eye cameras are of equal distance to the pupils (if using data from both eyes) and that the 3d model for each eye is well-fit (blue circle matches the eyeball and is stable). Once the model fits well, you can also freeze the model (you can do this in each eye window) for more stable pupillometry. This procedure is not robust to headset slippage though. Therefore, only use this option if you are in a controlled environment without a lot of head movement.

user-98789c 11 August, 2021, 12:29:29

Thanks a lot @nmt, this was so helpful! I guess freezing the model is not explained on your best practices page, right? It would be good if it were added.

nmt 11 August, 2021, 12:34:29

You are correct that it isn't explained in our best practices, and I agree that it would be a useful addition. Thanks for the feedback!

nmt 11 August, 2021, 16:05:11

Just to confirm, are you using the Surface Tracker plugin as per my previous message?

user-a0389d 11 August, 2021, 16:09:53

Hi! I am not using it because I tried to add surface but the pupil player says "Cannot add a new surface: No markers found in the image".

nmt 11 August, 2021, 16:11:06

@user-a0389d If you want gaze coordinates relative to your screen, you need to use the Surface Tracker Plugin. This will map gaze from world camera coordinates to screen-based coordinates via Fiducial markers. Did you add Fiducial markers to your monitor? Read about the surface tracker here: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker ; and the tracking markers here: https://docs.pupil-labs.com/core/software/pupil-capture/#markers. The method you have described is problematic because any relative movement between the headset and the screen will invalidate your assumption. Moreover, the world camera lens distortion is not accounted for.

nmt 11 August, 2021, 16:16:58

You can print the markers and affix them to the monitor, or you can present them digitally on screen

user-e84f8e 11 August, 2021, 17:21:29

I'm hoping we can select a page in a magazine (say, page 22) and then have 50 people read page 22 (using our 1 Pupil Lab Core and therefore sequentially, one by one). Then, I'm hoping we can merge the videos into a single video showing the reading behavior of all 50 people. Is this possible?

nmt 12 August, 2021, 08:46:46

@user-7b683e is correct, there is no way to do this in Pupil Player. To show the reading behaviour of 50 participants, I think it would make sense to consider visualization of aggregate scanning strategies (e.g. via scanpath clustering and aggregation) or to generate an aggregated heatmap. The first step would be to add fiducial makers to the magazine page and use the Surface Tracker Plugin: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker. That would give you gaze points relative to the magazine page. You would need to then do post-processing on the 50 participants’ data for aggregated analyses. For reference, the Surface Tracker Plugin can generate a heatmap for single participant, and this tutorial shows how to visualize a scanpath for a single participant: https://github.com/pupil-labs/pupil-tutorials/blob/master/03_visualize_scan_path_on_surface.ipynb.

user-7b683e 11 August, 2021, 23:21:07

I think this would require a dedicated program. Maybe you can start by generating a window that includes page 22. Then you can use the offline data in the Pupil Recordings folder. But I guess you would also need to add a timeline to this program.

However, that suggestion is only for visualising the data. Alternatively, you could analyse the test subjects' behaviour directly from the relevant recording files without a visualization step.

user-a0389d 11 August, 2021, 17:58:15

Thank you so much! I understand what you are suggesting here. My only concern is that the eye-tracking data collection was completed 2 years ago. I cannot invite every human subject back to conduct another session using Fiducial markers. I can certainly do it by myself in the lab – would the screen-based coordinates I get be valid for everyone who participated in the study?

nmt 12 August, 2021, 08:50:04

Ah, in that case there is no way to accurately relate gaze with the computer screen using our software. The Fiducial markers must be present in every recording.

user-d1ed67 12 August, 2021, 03:08:17

Hi. I have a few questions about the pupil_position.csv exported from the Pupil Player.

1) How is ellipse_angle defined? That is, what is the 0-degree position of the pupil? Is the angle here indicating counter-clockwise rotation?

2) How is circle_3d_normal calculated? Is it calculated based on sphere_3d and circle_3d_center? My understanding is that circle_3d_normal = the vector from sphere_3d to circle_3d_center, normalised to unit length.

papr 19 August, 2021, 09:19:16

Hi 1) we use the opencv ellipse definition. https://docs.opencv.org/4.5.2/d6/d6e/group__imgproc__draw.html#ga28b2267d35786f5f890ca167236cbc69 2) Yes, if only one model would be used. By default, pye3d uses three models (https://docs.pupil-labs.com/developer/core/pye3d/) though of which it uses the short-term model to calculate the normal. The sphere and circle centers are inferred from the long-term model. If the long-term model is frozen, it will be used to calculate the normal (instead of using the short-term model). https://github.com/pupil-labs/pye3d-detector/blob/e7f470ade72704da154a29fecb540625c645d84f/pye3d/detector_3d.py#L372-L381
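
For the single-model case in 2), a tiny illustrative sketch of that relationship (pye3d's short-/long-term models complicate this in practice, as noted above):

```python
import numpy as np

# example values in eye camera coordinates (mm)
sphere_center = np.array([0.2, 1.1, 45.0])   # sphere_center_x/y/z
circle_center = np.array([1.4, -0.3, 35.5])  # circle_3d_center_x/y/z

normal = circle_center - sphere_center
normal /= np.linalg.norm(normal)             # circle_3d_normal_x/y/z (unit vector)
print(normal)
```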

user-d1ed67 12 August, 2021, 07:13:05

Hi I have a question about diameter_3d and circle_3d_radius.

According to the explanation at https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter, diameter_3d and circle_3d_radius are the same. Just one is the diameter and the other is the radius. However, in the exported file, diameter_3d != 2*circle_3d_radius. Could you please explain a little bit why this happened?

papr 19 August, 2021, 09:21:38

In pye3d versions prior to 0.0.7, the refraction correction was not correctly applied to the diameter_3d field, causing this inconsistency. That has been fixed in 0.0.7 and deployed in Pupil Core v3.3. This is the PR with the changes https://github.com/pupil-labs/pye3d-detector/pull/28

user-b14f98 12 August, 2021, 16:34:56

Hey folks, after calibration I imagine that the eye camera positions are found within a parent coordinate system (e.g. the scene camera coordinate system). If so, where are the transformations stored in offline data?

user-ac21fb 12 August, 2021, 21:35:18

How would I know what the best calibration settings are? I often get jumpy data and am trying to find settings that avoid this. For reference, I am trying to record myself reading a magazine. Appreciate any help!

nmt 16 August, 2021, 15:21:14

Hi @user-ac21fb. A common reason for the behaviour you describe is poor pupil detection, e.g. when looking downward, the pupils can be obscured by the eyelids/lashes. Have you tried positioning the eye cameras such that the pupils are fully visible at all angles of eye rotation? See these instructions for reference: https://docs.pupil-labs.com/core/#_3-check-pupil-detection

user-8f38ce 12 August, 2021, 21:51:59

Hey. Could someone explain to me in plain language how eye tracking works? Is there a camera or sensor locked onto your pupils? And how fast is that camera? Will it react as fast as the movement of the eye?

user-ac21fb 12 August, 2021, 21:58:02

On the glasses I have. There are two cameras facing the eyes that track the pupils. There is also one camera facing outward to record "what you see". The technology then tracks how your pupils are moving to identify where they are looking in relation to that 3rd camera.

user-8f38ce 12 August, 2021, 22:00:48

The specs say camera latency is 8.5 ms. Wouldn't it be better to use a TLens from PoLight with their polymer lens? No electrical interference, less power consumption and 0.5 ms latency.

user-8f38ce 12 August, 2021, 22:02:21

@dstrauss Thanks for the answer 😊👍

user-8f38ce 12 August, 2021, 22:04:40

I read rumours that Varjo is working on new ar/vr glasses using TLens

user-8f38ce 12 August, 2021, 22:05:33

Here is a patent link I found: https://worldwide.espacenet.com/patent/search/family/077063040/publication/US2021243384A1?q=Polymer-lens

nmt 16 August, 2021, 15:31:32

Pupil Core’s eye cameras can sample at up to 200 Hz, which captures most eye movements of interest, e.g. fixations, saccades and even microsaccades in some cases

user-4bc389 13 August, 2021, 05:07:40

Hello, in the exported pupil position data, why does the eye ID column contain both 0s and 1s, and why are there two diameters at the same time? Is there any difference? How do I choose which to use?

Chat image

nmt 16 August, 2021, 15:36:45

@user-d1ed67 is correct – look at the method column to distinguish between diameter in pixels and mm. Diameter in mm is provided by the pye3d model.

user-d1ed67 13 August, 2021, 05:41:05

eye_id 0 = right eye, eye_id 1 = left eye

user-d1ed67 13 August, 2021, 05:41:38

the first diameter is the length of the longer axis of the pupil ellipse in the 2d eye frame

user-d1ed67 13 August, 2021, 05:41:45

unit is number of pixels

user-d1ed67 13 August, 2021, 05:42:31

the second, diameter_3d, is the estimated diameter of the pupil; the unit is millimetres

user-7daa32 13 August, 2021, 13:12:17

I did the scan path... Why is the current fixation, as shown in Player, not on the exact spot? For example, fixation 63 will be at spot B in the scanpath but 32 in the video for the same spot.

nmt 16 August, 2021, 15:48:53

Don’t forget that the tutorial is plotting gaze points that have been mapped to the surface via the surface tracker plugin (not all fixations throughout the recording will have been located on the surface), whereas Pupil Player will show all fixations detected in the recording. So the mismatch is expected.

user-d44f4f 16 August, 2021, 11:50:13

Hi @papr, @nmt. Apart from the above question, we think eye0_hardcoded_translation is fixed, which also causes gaze estimation errors due to device slippage or different subject parameters: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py#L18-L19.

nmt 16 August, 2021, 14:43:37

The most recent real-time calibration is stored in the prerecorded_calibration_result msgpack file in the Capture settings folder. The file contains the eye camera to world camera rotation and translation matrix : left/right_model->eye_camera_to_world_matrix. Post-hoc/recording specific calibrations are stored as msgpack plcal files in the recording's calibration subfolder.
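
As a rough sketch of inspecting such a file (the path is an example; depending on the version, keys may be stored as bytes, hence the decoding options):

```python
import msgpack

path = "recording/calibrations/example.plcal"  # example path

with open(path, "rb") as f:
    calibration = msgpack.unpack(f, raw=False, strict_map_key=False)

data = calibration["data"]
calib_params = data.get("calib_params")
if calib_params:
    left = calib_params["left_model"]["eye_camera_to_world_matrix"]
    print(left)
else:
    print("No fitted parameters stored:", data.get("status"))
```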

user-b14f98 16 August, 2021, 14:51:27

What is the difference between a post-hoc and a recorded calibration?

user-b14f98 16 August, 2021, 14:50:42

Thank you, nmt!

nmt 16 August, 2021, 14:52:41

1) I am unable to offer much insight into this I’m afraid. Perhaps @papr can when he returns from vacation in the coming days. 2) Pupil Cam1 ID2 is the world camera.

user-562b5c 16 August, 2021, 22:45:35

Thanks, will wait for @papr 's response

user-b14f98 16 August, 2021, 14:56:17

@nmt Ok, I just checked my code and I did unpack one of those .plcal files, but did not see the matrices I'm looking for. Here is an example of what I'm getting out of the plcal file: {b'version': 2, b'data': {b'version': 2, b'unique_id': b'f2fe00dc-747a-4747-8a16-92e1432beeb1', b'name': b'Default Calibration', b'recording_uuid': b'cb268c01-93d7-4e8d-a6f1-dcc6f0fb6ced', b'gazer_class_name': b'Gazer3D', b'frame_index_range': [0, 10975], b'minimum_confidence': 0.8, b'status': b'Not calculated yet', b'is_offline_calibration': True, b'calib_params': None}}

user-b14f98 16 August, 2021, 14:58:54

I wonder if the 'status': 'Not calculated yet' is telling.

nmt 16 August, 2021, 15:07:40

I updated my response to better explain: AFAIK, when a real-time calibration is performed, it is stored in the Capture settings and automatically used in future recordings. If you choose to record a calibration choreography, you can do post-hoc calibration(s), which can subsequently be found in the recording's calibration subfolder. If the calibration was performed, the matrix will be present:

{'calib_params': {'left_model': {'eye_camera_to_world_matrix': [[-0.7589563344200847, 0.1102681457711145, 0.6417368763534076, 56.5318012629597], [-0.25616539557010026, 0.8555211684927436, -0.4499586874072163, -30.367339335581956], [-0.5986355924567487, -0.5058897768180448, -0.6210563268778012, 8.977667419764934], [0.0, 0.0, 0.0, 1.0]], 'gaze_distance': 500}...

user-b14f98 16 August, 2021, 15:08:53

@nmt Ok. I'll have a closer look at the file in Pupil Player. It looks like this old recording did not store that information, even though it has an exported data folder. This could be because Pupil Capture does not create that file, but Pupil Player does when post-processing. My worry is that I cannot get this information from old data without reprocessing and potentially changing the gaze estimates.

user-b16374 16 August, 2021, 15:11:26

@user-90c44a - were you ever able to get the Core with Android working via the Pupil Mobile app? We also would need a mobile solution to record data and wanted to check in on the feasibility before looking into Pupil Core further (as the Invisible might be out of our price range since we'd like to order several devices). Thanks!

user-b14f98 16 August, 2021, 15:11:38

@nmt This ability to preserve the gaze estimates is pretty important. In this case, I'm trying to get some informatoni about the published Gaze-in-Wild dataset (which we produced here at RIT, in my lab). My hope is that I can get this information without changing the dataset.

user-b14f98 16 August, 2021, 15:11:42

http://www.cis.rit.edu/~rsk3900/gaze-in-wild/

user-b14f98 16 August, 2021, 15:11:48

https://www.nature.com/articles/s41598-020-59251-5

user-b14f98 16 August, 2021, 15:16:20

@nmt Actually, I'm not sure how critical it is for our current aims to get the info without changing the dataset, because I believe that we may want to reprocess this data *anyhow*. Still, this seems like a shortcoming that the Pupil Labs group may want to address.

user-b14f98 16 August, 2021, 15:35:19

@nmt Woah. That's an optimistic claim 🙂 I'm all about Pupil Labs, but I personally wouldn't be comfortable claiming that the tracker's precision is reliably good enough to discriminate microsaccades from sensing noise.

nmt 16 August, 2021, 15:55:07

Microsaccades are detectable according to some of our Pupil Core users – 200 Hz is just enough sampling frequency in some cases

user-b14f98 16 August, 2021, 15:42:31

@nmt Indeed, I just processed the same file and used a new offline calibration. The information is now in the .plcal. So, this means that either 1) it is a difference of versions (the video was originally processed using a very old version of Pupil Capture/Player), or 2) it is a difference between using the default calibration stored upon the initial recording with Pupil Capture and using a post-hoc calibration. Possibly both. If it is the second reason, then I would like to submit a feature request that the default calibration with Pupil Capture store the same information as Pupil Player.

user-b14f98 16 August, 2021, 15:44:31

It just seems like the right way to do it. Clearly the PL team decided that information was important enough to store after calibration in Player; I'm not sure why the same wouldn't apply to Capture. Here is one situation where, because it is not, we will have to reprocess all of our old data files. In other words, those matrices are a critical but missing component of keeping a record of previous calibration data and parameters.

user-b14f98 16 August, 2021, 15:45:01

@nmt Thank you again for helping me track down those matrices.

user-b14f98 16 August, 2021, 15:46:46

@nmt Is there documentation, or any insight you can offer, that describes the "world" reference frame? Is this frame defined by the position and orientation of the scene camera? ...and, perhaps, descriptions of the differences between the many similarly labelled matrices in this export?

user-b14f98 16 August, 2021, 16:10:57

I worry that they may be misinterpreting sensor noise - this is something I have found new users often do before realizing that the gaze representation is an estimate.

user-b14f98 16 August, 2021, 16:14:24

anyhow, I'm getting off track. Thank you again for the pointers. Please let me know if you have any documentation or can offer any insight on the "world" reference frame, and the difference between the calib_params left_model/right_model/binocular_model.

nmt 16 August, 2021, 16:49:12

Note that the gaze data is always saved with recording – you don't need to worry about reprocessing and potentially changing the gaze estimates. The real-time calibration should have been stored in the recording's notify.pldata file. Player loads this when loading the gazer plugin (I believe).

user-b14f98 16 August, 2021, 16:54:48

Oh great! I’ll have a look when I get home. I’m out of the house now.

nmt 16 August, 2021, 17:01:34

Details of the world reference frame here: https://docs.pupil-labs.com/core/terminology/#coordinate-system @papr will be able to offer more informed insight into the models when he returns from vacation in the coming days

user-b14f98 16 August, 2021, 17:43:37

Ok, thanks. It looks like the Pupil Labs team uses the term world camera, instead of the more common scene camera.

user-312660 16 August, 2021, 17:23:00

hi! for some reason, my pupil diameter is significantly different for each eye (one eye is around 1.5 mm and the other is around 5). has this happened to anyone else?

nmt 18 August, 2021, 09:15:41

Hi @user-312660 👋. Did you use diameter or diameter_3d? Note that the former is given in pixels as observed in the eye video and is not corrected for perspective/is dependent on eye-to-camera distance. diameter_3d is given in mm as provided by the 3d eye model and better characterises real pupil size. You will need to sample enough gaze angles to build a well-fitting model for each eye; also see this message for further reference: https://discord.com/channels/285728493612957698/285728493612957698/874976831932072016

user-d1ed67 16 August, 2021, 23:56:06

Hi. I want to use hmd streaming to provide videos to Pupil Capture. I still use the Pupil Labs cameras to capture videos. However, the camera intrinsics of the world camera are not correctly set. This makes the 3d gaze mapping very inaccurate. Could you please provide some hints about how to write a plugin to set the camera intrinsics correctly?

user-d1ed67 16 August, 2021, 23:56:46

@nmt

user-d1ed67 16 August, 2021, 23:57:29

@papr

user-d1ed67 17 August, 2021, 01:23:07

BTW, I already specified the projection_matrix of each frame sent to HMD_Streaming_Source.

user-d1ed67 17 August, 2021, 07:43:15

I thought about another solution: I can undistort the image manually in my frame-receiving plugin. For that, I need to use the distortion coefficients in https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L26-L152. Could you please explain a little bit about how to use those coefficients?

user-73b616 17 August, 2021, 10:15:26

Those look like standard OpenCV distortion coefficients and a camera matrix.
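
Building on that, a minimal OpenCV sketch of undistorting a frame with a camera matrix K and distortion coefficients D (placeholder values – substitute the entries for your camera/resolution from camera_models.py; note that some of the high-resolution world camera presets there use a fisheye model, which would need cv2.fisheye.undistortImage instead):

```python
import cv2
import numpy as np

# placeholder intrinsics for the pinhole/radial-tangential case
K = np.array([[794.3, 0.0, 633.1],
              [0.0, 793.9, 397.4],
              [0.0, 0.0, 1.0]])
D = np.array([-0.43, 0.19, 0.0, 0.0, -0.04])  # k1, k2, p1, p2, k3

frame = cv2.imread("world_frame.png")  # example input frame
undistorted = cv2.undistort(frame, K, D)
cv2.imwrite("world_frame_undistorted.png", undistorted)
```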

user-98789c 17 August, 2021, 12:28:02

A question about pre-processing of pupil size data: We remove samples that have a dilation speed higher than our desired threshold. I'm curious to know if, for calculating this threshold, we should consider our experiment design or not. For example, in an experiment where we are looking into the effects of stroking on the pupil size, can we still use a threshold calculated based on our recorded data, or other considerations need to be taken into account?

user-4bc389 18 August, 2021, 02:40:38

Hi, does the viewing distance affect the measured precision?

user-7daa32 18 August, 2021, 12:32:46

Hello everyone

In a cross-case analysis, do you think having different fixation algorithm settings for different individuals is best practice?

nmt 19 August, 2021, 10:22:27

It's a difficult question, as it would be nice to standardise for both cases, and indeed many researchers apply the same threshold to all participants – whether rightly or wrongly I think is somewhat of an open question. It is perhaps more important to determine appropriate thresholds for the experimental task: https://docs.pupil-labs.com/core/best-practices/#fixation-filter-thresholds

user-562b5c 19 August, 2021, 01:29:58

Hi,

I have a quick question regarding the extracted 3d pupil diameter. A quick search tells me it should be between 2-4 mm or 4-8 mm depending on light. However, I have values of less than 2 mm, like 1.4 mm. These readings have a confidence value above 0.6, so I haven't removed them, and I can see in the eye video that the pupil is being detected properly. So should I still remove values less than 2 mm? Thanks.

user-0d4ab0 19 August, 2021, 02:42:18

Hi everyone, I’m looking into eye tracking for a research study and was wondering if the DIY model is sufficient generally speaking. Particularly I’m interested in exploring how participants use different handheld models. I’m new to this method so wanted to get some thoughts on this. Thanks!

Also, in terms of tech specs, what are the differences between the DIY headset and Core? I couldn't seem to find this online.

papr 19 August, 2021, 09:06:49

Hi. 1) I have not seen the term field diameter before. Should this refer to FOV, it might be assuming a circular FOV. Our cameras do not have a circular FOV. It probably depends on the formulas which FOV value is the most appropriate one. Unfortunately, I cannot tell you which one is the right one. 2) World camera refers to the scene camera of your headset, the one pointing forward / in gaze direction. Its name is probably Pupil Cam1 ID2. Does this answer your question?

user-562b5c 21 August, 2021, 12:02:43

Thanks, yes, no. 2 does. Regarding the FOV, there was a suggestion in the research publications channel by bok bok which I will try. The unified formula was used by another Pupil Core user in their project; here is the code: https://github.com/pignoniG/cognitive_analysis_tool . In the code I can see that they used a value for the FOV, but I am not sure how they chose it. I have asked the author and am awaiting their response.

papr 19 August, 2021, 09:41:43

A "gazer" basically has two components: an (1) outer and a (2) inner layer. The inner layer contains the actual mathematical fitting/prediction logic of the mapping functions. See the Model class https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_base.py#L59

The outer layer (the "gazer") is the glue between the inner layer and the Core application. It is the actual plugin.

Due to how the matchmaking works in Core, there can be monocular and binocular pupil matches that need mapping. Therefore, gazers usually implement 3 models: binocular, left monocular, and right monocular. (The dual monocular gazer does not implement the binocular model for example.) Each of these models is an instance of the abstract Model base class linked above. It follows scipy's API conventions for ML models.

The outer layer's job is also to transform the Core pupil datum dictionary into a feature matrix that can be passed to the underlying/inner models, and to transform the result back into a Core gaze datum dictionary.

The calib_params contain the fitted parameters for the inner models.
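
Purely as an illustration of what those scikit-learn-style conventions mean (this is a toy stand-in, not the actual Model class – see gazer_base.py for that):

```python
import numpy as np

class ToyGazeModel:
    """Toy fit/predict example: X is a feature matrix built from pupil data,
    Y the corresponding reference/gaze targets."""

    def __init__(self):
        self.coefficients = None

    def fit(self, X, Y):
        # plain least squares as a placeholder for the real optimisation
        X = np.c_[X, np.ones(len(X))]  # add a bias column
        self.coefficients, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return self

    def predict(self, X):
        X = np.c_[X, np.ones(len(X))]
        return X @ self.coefficients
```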

user-b14f98 19 August, 2021, 13:14:30

Welcome back! I hope your vacation was refreshing. Thanks for the information on the gazer class. We're looking at the matrices now and see that the translation components are quite different when comparing the same eye's matrix in the dual monocular and binocular class. I would have expected them to be similar. Can you offer any insight there? Is there something obvious that I'm missing?

papr 19 August, 2021, 09:47:39

This is an indication that the 3d eye model is not fit well enough. If possible I would revisit the 3d eye model fitting. Otherwise, yes, I suggest discarding data points that are outside of the 2-8mm range.

user-562b5c 21 August, 2021, 12:05:48

Regarding this, isn't the confidence value an indicator of model fit? Meaning below 0.6 it is not fitting well, and the closer it is to 1, the better the fit. Since I already have the recording, can I do the 3d eye model fitting post hoc from the recording? Is there a document or link showing how to do this? Thanks again.

nmt 19 August, 2021, 10:13:04

Based on physiological data, precision shouldn’t be influenced by viewing distance within the range of a typical Core calibration (e.g. when using a physical marker at 1–2.5m) in typical subjects: https://iovs.arvojournals.org/article.aspx?articleid=2776161

user-4bc389 25 August, 2021, 03:06:54

For example, after calibration, the movement of the viewer back and forth will affect the accuracy of the data, right?

user-b14f98 19 August, 2021, 13:15:53

Here they are represented in Blender. Note that there is a typo for one of the rotation vectors, so just ignore them for now.

Chat image

papr 19 August, 2021, 13:50:04

Please, also let us know which Core version you used (3.4, earlier, or custom branch I shared before my vacation https://github.com/pupil-labs/pupil/tree/post-hoc-vr-gazer)

user-b14f98 19 August, 2021, 13:38:48

@papr ...also, I assume that the gaze exports in gaze_positions.csv reflect output from both the binocular and monocular models. Is that true, and if so, is there any way to know which model was used for the pupil-to-gaze mapping of a given gaze estimate?

papr 19 August, 2021, 13:48:40

That information is available (1) in the topic and the (2) base_data field. The topic either ends in 0 (right monocular), 1 (left monocular), or 2 (binocular). The base_data field contains one or two values of <eyeid>-<pupil timestamp> in order to identify the pupil data used for generating the gaze datum
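
For the exported data, a tiny sketch of that check based on the base_data field (the example value below is made up; the '<eyeid>-<pupil timestamp>' format is as described above):

```python
def classify_gaze(base_data):
    """Classify an exported gaze row as binocular or monocular by the number
    of '<eyeid>-<pupil timestamp>' entries in its base_data field."""
    entries = str(base_data).split()
    if len(entries) == 2:
        return "binocular"
    eye_id = entries[0].split("-")[0]
    return f"monocular (eye {eye_id})"

print(classify_gaze("0-330.22 1-330.23"))  # hypothetical value -> 'binocular'
```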

papr 19 August, 2021, 13:46:28

each of the models is fit independently on a subset of the data collected during the calibration choreography. https://github.com/pupil-labs/pupil/blob/906c4fbdebcab1dfb91378d7123a209f87c18e2b/pupil_src/shared_modules/gaze_mapping/gazer_base.py#L267-L290 Therefore, there is no constraint that the optimizations come up with the same locations.

Could you please be specific about which gazers you used?

Player v3.4 post-hoc calib uses https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L327

Capture v3.3 hmd-calib uses https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_hmd.py#L115

Dual monocular uses a custom gazer https://gist.github.com/papr/5e1f0fc9ef464691588b3f3e0e95f350#file-gazer_dual_monocular-py-L45

user-b14f98 19 August, 2021, 13:59:46

Therefore, there is no constraint that the optimizations come up with the same locations.

...but I would expect similar locations, wouldn't you? My students used Pupil Player v3.4.0 to reprocess the data using offline pupil detection and offline calibration/gaze mapping with natural features.

user-0c8e98 19 August, 2021, 14:02:41

Hi, I have a question, if you could help me. I created an interface in Visual Studio and I want to control the mouse for this interface with Pupil Labs. I don't know if this is possible or how to do it; I don't know how to use Pupil Labs.

user-b14f98 19 August, 2021, 14:04:01

@papr Also for your sake (because things might become confusing), I should clarify that we have two simultaneous projects going on here at RIT. Kevin is on one, and these more recent questions about the eye-to-scene camera matrices are another. We are effectively having two separate conversations at the same time.

papr 19 August, 2021, 14:04:38

ok, thanks for the clarification.

papr 19 August, 2021, 14:04:02

Agreed. I will talk to my colleague about it.

user-b14f98 19 August, 2021, 14:04:39

Thanks!

user-b14f98 19 August, 2021, 14:05:07

It's especially confusing since both conversations are now focused on the gazer class. Very helpful stuff.

papr 19 August, 2021, 14:08:16

To be honest, they are very much related. 🙂 It is both about figuring out why gaze estimation is much worse for hmd recordings than for Core headset recordings. The (initially) assumed eye location has an effect on the result.

user-b14f98 19 August, 2021, 14:13:28

Ok. Well, we now have a talented graduate student and two very talented undergraduates working on this, and invested in the result. So, please let us know how we can help. Kevin had to take a week off unexpectedly while you were on vacation. He is just now regaining his momentum.

papr 19 August, 2021, 14:25:26

What is the goal of this new project?

user-b14f98 19 August, 2021, 14:25:41

I'll DM.

user-430fc1 19 August, 2021, 14:36:09

I just started seeing this error in capture. Any ideas how to fix it?

Chat image

papr 19 August, 2021, 14:36:45

you seem to have selected a resolution without pre-recorded camera intrinsics.

user-430fc1 19 August, 2021, 14:40:06

Ah, yes, but usually it still works. This time the error keeps repeating and the camera is not working properly.

papr 19 August, 2021, 14:42:15

In this case, it repeats because the scene camera reconnected. Have you checked the cable connection? Please note, that gaze estimation will be inaccurate using the fallback dummy intrinsics. (Not sure if this was relevant for you)

user-7bd058 19 August, 2021, 15:11:15

What are tokens in Pupil Labs? I've found them in my offline data.

user-d44f4f 20 August, 2021, 07:38:09

Hi, @papr How are you? Several days ago, I discussed the binocular 3D gaze calibration process with you. Recently, I studied the 3D gaze calibration and have some questions. 1) We think the eye_camera_to_world_matrix0 and eye_camera_to_world_matrix1 implicitly include both the kappa angle and the relation between the eye camera and the scene camera https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L226-L227. However, we found that the sphere centers are transformed from eye camera coords to world coords using both eye_camera_to_world_matrix0 and eye_camera_to_world_matrix1 https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L257-L258. We think this is unreasonable, because the transformation of the sphere center does not need the kappa angle information. 2) We think the eye_hardcoded_translation is fixed, which also causes 3D gaze estimation error due to device slippage or different subject parameters https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py#L18-L19.
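
As an aside for readers following the matrix discussion: applying eye_camera_to_world_matrix to a point is a plain affine transform. Below is a minimal numpy sketch with placeholder values, assuming the matrix is a 4x4 homogeneous transform:

```python
import numpy as np

# Placeholder 4x4 eye-camera-to-world matrix: rotation block R and translation t
eye_camera_to_world = np.eye(4)
eye_camera_to_world[:3, :3] = np.eye(3)            # R (placeholder: identity rotation)
eye_camera_to_world[:3, 3] = [20.0, 15.0, -10.0]   # t (placeholder translation, mm)

# Placeholder sphere center in eye camera coordinates (mm)
sphere_center_eye = np.array([1.2, -0.5, 35.0])

# Append a homogeneous 1, multiply, and drop it again to get world coordinates
sphere_center_world = (eye_camera_to_world @ np.append(sphere_center_eye, 1.0))[:3]
print(sphere_center_world)  # -> [21.2, 14.5, 25.0] with these placeholder values
```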

user-ae4005 22 August, 2021, 11:23:05

Hi there, I'm trying to play the world video individually on a media player but none of the players I used seem to recognize the format of it (I did download the codec package for Windows 10 so I don't think this is the problem). Can anyone tell me why I can't play it? Or what I need to do in order to get it to run?

papr 22 August, 2021, 18:51:03

So there is confidence (indicating pupil detection quality) and model_confidence. To which one were you referring? Yes, you can process the eye videos post-hoc. See https://www.youtube.com/watch?v=_Jnxi1OMMTc&t=2s

user-562b5c 23 August, 2021, 13:19:47

I had meant pupil detection confidence, but now I get the model_confidence also, thanks for the post hoc link

papr 22 August, 2021, 18:54:50

The intermediate scene video recording has a format that a few players cannot play. I recommend either using VLC for playback, exporting the video using Pupil Player, or transcoding it with ffmpeg.
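
For the transcoding route, a minimal sketch (assuming ffmpeg is on the PATH and that the intermediate scene video is the world.mp4 file inside the recording folder):

```python
import subprocess
from pathlib import Path

recording = Path("path/to/recording")    # adjust to your recording folder
src = recording / "world.mp4"            # intermediate scene video (assumed name)
dst = recording / "world_h264.mp4"       # transcoded copy that most players handle

# Re-encode the video stream to H.264 with ffmpeg
subprocess.run(["ffmpeg", "-i", str(src), "-c:v", "libx264", str(dst)], check=True)
```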

user-ae4005 23 August, 2021, 08:57:04

Ahh okay, that's very helpful. Thank you!

user-12d548 23 August, 2021, 02:54:16

Hello, @papr, when I try to run Pupil Capture on an Ubuntu PC, the camera cannot be opened. I used USB 3.0 and also checked the USB connection, which is fine. The strange thing is that I can open the camera through my ThinkPad laptop. Can you or anyone tell me what the problem is?

Chat image

papr 23 August, 2021, 08:41:49

Hi, yes, the initially calibrated translation and rotation are optimized such that they correct kappa implicitly. Unfortunately, we do not have a way to model kappa explicitly right now. Therefore, it is not possible for us to correct for kappa within the rotation only, instead of in both rotation and translation. And yes, with increasing slippage, this causes an error which we cannot correct for.

papr 23 August, 2021, 08:28:12

Hi, please see https://github.com/pupil-labs/pupil/issues/2156#issuecomment-870380316

user-12d548 23 August, 2021, 08:47:12

Thanks! I solved the problem.

user-d44f4f 23 August, 2021, 09:35:21

Hi, @papr. I think the reason why you cannot model kappa explicitly is that, in the Pupil Core hardware, the position of the eye camera is not fixed relative to the scene camera, and thus you cannot obtain the transformation between the eye camera and the scene camera in advance. For Pupil Invisible or other fixed devices, it would be possible to model kappa explicitly. Is that right?

Chat image

papr 23 August, 2021, 09:37:42

What is the source for the image?

papr 23 August, 2021, 09:37:07

I would have to think about it.

user-d44f4f 23 August, 2021, 09:39:28

Haha, I drew it by myself after learning the 3D gaze calibration in Pupil.

papr 23 August, 2021, 09:39:58

ah, ok. Could you elaborate on the meaning of A?

user-d44f4f 23 August, 2021, 09:46:10

The A represents the assumed eyeball coordinate system, which is parallel to the B in the Pupil code.

user-abb8ae 24 August, 2021, 03:51:13

Hi! I managed to assemble the Pupil Core DIY. I wanted to get a better field of vision, so I purchased the fish-eye lens (DSL235D-650-F3.2) as stated in the bill of materials (https://docs.google.com/spreadsheets/d/1NRv2WixyXNINiq1WQQVs5upn20jakKyEl1R8NObrTgU/pub?single=true&gid=0&output=html). However, there seems to be a focus issue. After replacing the lens, the camera is completely out of focus. I bought 2 lenses and tried both, and they don't seem to be working as intended. The camera (Logitech C525) is working fine, as the original lens still keeps the camera in focus. Can I check if anyone else has tried to use the fish-eye lens mentioned? Thanks!

user-a79827 24 August, 2021, 12:12:12

Hi, I want to generate lots of heatmaps from different measurements. For that reason I want to automate the process in Python. Is there any way to do this using some predefined classes/functions, etc.?

user-7b683e 24 August, 2021, 13:22:02

Yes, you have two different ways to do this. First, you can collect gaze point data via the Network API. Alternatively, you can use the recording files by opening them with pandas in Python.

Building a specific lab application like the study you want to do requires specific operations on the data. For this reason, you should define your own classes and functions.

However, looking at pupil-helpers on GitHub may help with some parts, such as subscribing to a topic via the Network API or reading the recording files.

papr 24 August, 2021, 13:23:12

In addition, here is a list of community projects of which some try to tackle the issue as well https://github.com/pupil-labs/pupil-community
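
As a concrete starting point for the post-hoc route, here is a minimal sketch. It assumes you have run the surface tracker export in Pupil Player and that the export contains a surfaces/gaze_positions_on_surface_<name>.csv file with x_norm, y_norm, and on_surf columns; adjust paths and column names to your own export:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Path to one exported surface gaze file (adjust to your export)
csv_path = "exports/000/surfaces/gaze_positions_on_surface_Screen.csv"

gaze = pd.read_csv(csv_path)
gaze = gaze[gaze["on_surf"].isin([True, "True"])]  # keep only gaze on the surface

# 2D histogram over normalized surface coordinates (0-1 on both axes)
heatmap, _, _ = np.histogram2d(
    gaze["y_norm"], gaze["x_norm"], bins=(36, 64), range=[[0, 1], [0, 1]]
)

plt.imshow(heatmap, origin="lower", cmap="inferno", extent=[0, 1, 0, 1], aspect="auto")
plt.xlabel("x_norm")
plt.ylabel("y_norm")
plt.title("Gaze heatmap on surface")
plt.savefig("heatmap.png", dpi=150)
```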

nmt 25 August, 2021, 10:44:47

The angular accuracy of the calibration will not change (unless the headset slips). But the area encompassed by the angular accuracy is dependent on the viewing distance, yes. E.g. the further you move away from a computer screen/stimulus, the bigger the area that could be encompassed. So, it helps to consider stimulus size, viewing distance and calibration accuracy in experiment design https://discord.com/channels/285728493612957698/285728493612957698/836509594780041246

user-7daa32 26 August, 2021, 13:16:43

So we just need to know the distance from the screen and an estimated angular accuracy (which we can get after validation) to estimate the required size of the stimulus, right?
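
As a worked example of that calculation (with hypothetical numbers): the linear uncertainty on the screen is roughly viewing distance × tan(angular accuracy).

```python
import math

viewing_distance_m = 0.6      # hypothetical: participant sits 60 cm from the screen
angular_accuracy_deg = 1.5    # hypothetical: accuracy reported after validation

# Radius of the gaze uncertainty region in the screen plane
uncertainty_m = viewing_distance_m * math.tan(math.radians(angular_accuracy_deg))
print(f"~{uncertainty_m * 100:.1f} cm around the true gaze point")
# -> ~1.6 cm; stimuli (and the gaps between them) should be comfortably larger than this
```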

user-05d4db 26 August, 2021, 05:16:21

Hi! I hope this is not too silly of a question; I am just starting out. I have data from the EyeLink 1000 Plus and I want to use the Pupil Labs software to do further analysis. Is that possible? Also, I really need some guidance on how to download it. Any help would be greatly appreciated!

user-05d4db 26 August, 2021, 05:17:05

I have the github page open, but no idea on how to proceed

user-05d4db 26 August, 2021, 05:17:15

Hope you can help!

user-7daa32 26 August, 2021, 13:11:37

Good morning

Please, does anyone have an idea of how to measure the actual distance of a scan path using Pupil Labs?

user-da621d 27 August, 2021, 16:11:25

@papr What is the accuracy of this kind of eye-tracker?

papr 30 August, 2021, 07:39:39

You can find this and other information here https://pupil-labs.com/products/core/tech-specs/

user-6dc12a 27 August, 2021, 18:36:17

Hello - I'm a researcher in affective computing (essentially conducting studies that attempt to understand how humans switch between different emotional states) and I'm quite interested in using eye tracking as a tool to conduct ecologically sound research on daily emotion perception (self/others). I am wondering if the newest eye tracker has a version that doesn't include the environment camera, or whether one can selectively shut it down so that the eye tracker can last for a longer period of time? I'm also wondering if any folks here have ever analysed the eye tracking data in real time and used the analysis results to trigger data collection from a different modality. I appreciate any comments/guidance to a specific protocol/published paper/manuscript, etc.

user-7b683e 28 August, 2021, 04:59:44

Hello Hanxiao,

If you choose to turn off the environment (i.e. world) camera, you will no longer be using the eye tracking system in its wearable, mobile form, because the streams from the eye and world cameras are processed together so that gaze points can be calculated.

Secondly, you have two alternatives for collecting data: offline and real-time. Depending on the requirements of your study, you can choose either one. At that point, you should also select a proper analysis method; for example, a Kalman filter could be used for real-time processing.

user-7b683e 28 August, 2021, 04:42:36

The data structures of some Pupil Capture versions differ from each other. We currently use version 3.4.0 and a JSON extraction procedure to reach some keys, such as gaze_point, for this version. Which version range can be used with our extraction method? For instance, the range between version 2.0 and the current LTS (3.4)?

papr 30 August, 2021, 07:42:45

Hi, can you clarify if you are referring to the realtime Network API data structures or those recorded to disk by Pupil Capture? Could you please let us know a version for which your scripts are working? If you want to stay up-to-date or lookup changes between versions, please check out our release notes (specifically the developer notes at the bottom might be of interest for you) https://github.com/pupil-labs/pupil/releases

papr 30 August, 2021, 07:38:57

https://github.com/pupil-labs/pupil/releases/latest#user-content-downloads At the bottom of the page, you can find download links for every supported platform. 🙂

Please be aware that there is no automatic importer for the Eyelink 1000 plus format. You will have to convert it to the target format [1] yourself.

The bigger issue is that the EyeLink 1000 Plus is a remote eye tracker. The Pupil Core software assumes recordings from a head-mounted eye tracker. There are a series of plugins that will likely not work as intended due to this assumption being broken for your recordings.

[1] https://docs.pupil-labs.com/developer/core/recording-format/

user-05d4db 01 September, 2021, 09:45:24

Thank you so much!

user-ae4005 30 August, 2021, 10:30:36

Hi there, I have some recordings using surface markers. Pupil Player recognizes the markers and the surface, but when I try to export the data, Pupil Player crashes every time. I get the following error in the command window (I'm doing everything in the Player app). Any idea what's happening and what I can do to export my data?

Chat image

papr 30 August, 2021, 10:32:43

Hi 👋 Which version of Player are you using? 3.3. or 3.4? Also, is it possible that the marker detection has not finished yet? Please have a look at the timelines at the bottom of the Player window.

user-ae4005 30 August, 2021, 10:34:26

Hi 🙂 I'm using 3.3. The detection did finish for the entire recording, I checked.

papr 30 August, 2021, 10:35:29

Ok, please update to 3.4. It is likely that you encountered this issue https://github.com/pupil-labs/pupil/pull/2157 which has been fixed in 3.4

user-ae4005 30 August, 2021, 10:36:21

Okay, thank you! I was just about to look if there are new versions when you answered 🙂

user-ae4005 30 August, 2021, 11:01:43

So Player doesn't crash now but the exported data doesn't include the "surface" subfolder although the markers and surface are being recognized. What am I missing?

Chat image

papr 30 August, 2021, 11:03:54

could you please extend the timelines menu and share another screenshot of it?

user-ae4005 30 August, 2021, 11:09:41

What do you mean with "extend the timelines menu"? Do you want to see a screenshot of a later timepoint or is there some other way to show the timeline?

papr 30 August, 2021, 11:10:47

immediately above the timeline there is a handle which you can drag upwards in order to increase the timeline display area. I would like to see the surface-tracker related timelines.

user-ae4005 30 August, 2021, 11:12:47

Ahhh got it! Thanks for the explanation. Here it is (I only care about the surface called "screen").

Chat image

papr 30 August, 2021, 11:14:24

Those gaps in the timeline, what do you see when you seek there? A grey background? From this screenshot alone, it does not look like the marker detection was completed.

user-ae4005 30 August, 2021, 11:18:51

Yes, in the gaps I just see a grey background.. So I need to wait until all the gaps disappear to be able to export the surfaces? It's been more than 10min now (the recording is ~35min), does that make sense?

papr 30 August, 2021, 11:20:56

Are the gaps staying the same or are they closing slowly? In case of the former, please share the recording with data@pupil-labs.com such that we can reproduce the issue and work on a fix. In case of the latter, please wait until the gaps have been filled and try exporting again.

user-ae4005 30 August, 2021, 11:24:13

One of the gaps became smaller since I sent you the last screen shot. So I'll wait a bit longer and see if they close all the way at some point. Will update you here!

papr 30 August, 2021, 11:32:08

on that note, could you please share the recording with data@pupil-labs.com nonetheless? The detection during these gaps should not be that slow. Would be helpful to have access to this particular recording to check what is slowing down the process.

papr 30 August, 2021, 11:26:36

Once it is closed, the next one should start closing, too 🙂

user-ae4005 30 August, 2021, 12:37:36

I tried another recording and it went somewhat faster but there are two (very small) gaps remaining and they don't seem to be closing. This means the surfaces won't export, right?

user-ae4005 30 August, 2021, 11:56:46

The gaps are still there and they don't seem to be changing... I'm uploading the recording now and will send it once it's on the drive.

user-ae4005 30 August, 2021, 11:34:10

Sure, no problem. The gaps are still there by the way... I'll send the recording via a google drive link, will that be okay?

papr 30 August, 2021, 12:13:01

that would be great

user-ae4005 30 August, 2021, 12:16:24

Sent 🙂

papr 30 August, 2021, 15:10:34

When I opened the recording, the marker cache was complete. No gaps visible. I am reprocessing the markers now in order to check if I can reproduce. If this works as expected, too, the issue might be setup dependent. In this case, it would be helpful to know the operating system and hardware details (CPU + RAM) of your computer.

Edit/Follow up: I was not able to reproduce the issue.

papr 30 August, 2021, 12:48:03

correct. I will let you know if I find a work around

user-ae4005 30 August, 2021, 12:48:20

Great, thank you!

user-5ef6c0 30 August, 2021, 15:06:19

Hello, quick question about Pupil Player: is it possible to add "bookmarks" at specific times so that one can quickly jump from one time mark to the next?

user-ae4005 30 August, 2021, 16:26:11

Thanks for the update! It is weird though, the same thing happened with two more recordings... Do you think the computer could be a problem? I'd need to check the hardware details but I'll only be able to do that tomorrow..

papr 30 August, 2021, 16:27:17

It might also be operating-system related, but I cannot tell for sure yet.

user-ae4005 30 August, 2021, 16:28:40

I'm using Windows 10... Could that be a problem?

papr 30 August, 2021, 16:34:38

Thanks for letting me know. @nmt could you please try to reproduce this issue tomorrow? I labeled the shared recording for you in the inbox.

user-4f103b 30 August, 2021, 18:02:43

Hi, I am new to using the Pupil Core eye tracker and software (Capture and Player). I am wondering if I can detect that my gaze positions are on a specific object on the screen, e.g. I want to display 4 objects on the screen and determine whether the person's gaze is on any of those 4 objects or not. And I want to consider only the fixations that occur whenever gaze shifts to any of those 4 objects.

user-7b683e 30 August, 2021, 19:10:54

In addition, if you define multiple surfaces, you only need to check one key in each event: on_surf. If its value is true, the user is looking at the object that belongs to that surface.

papr 30 August, 2021, 19:06:18

Please be aware that you need Apriltag markers for the surface tracker. But you can define multiple surfaces based on the same markers

user-7b683e 30 August, 2021, 18:59:25

Hi, yes, you can detect the object at which a user has stared. To do this, you should define a surface, then collect data. By converting these data to the pixel coordinate system, you can see at which point the user looked and, of course, what object is at that point.

user-4f103b 30 August, 2021, 19:20:24

Thank you for giving me a direction. Do you know of any resource where I can see how to convert the data to pixel coordinates? The screen size will vary for this experiment.

papr 30 August, 2021, 20:20:20

Alternatively, you can do this kind of analysis post-hoc, too. It sounds like realtime processing is not needed here. 🙂

papr 30 August, 2021, 20:14:32

I suggest playing around with the feature. Some of your questions might resolve by themselves. If any remain, feel free to post them here 🙂 https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

user-7b683e 30 August, 2021, 20:19:02

You can refer to this script published by papr. The code was designed to control the mouse, but if you delete the lines where the cursor is controlled, you can add your own code using the same conversion-to-pixel-coordinates approach: https://github.com/pupil-labs/pupil-helpers/blob/master/python/mouse_control.py

papr 30 August, 2021, 20:20:46

But yes, the example holds in any case 👍
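
For reference, here is a trimmed-down sketch of that real-time approach, loosely based on the pupil-helpers examples; the surface name, screen resolution, confidence threshold, and field names are assumptions to adapt to your own setup:

```python
import msgpack
import zmq

SURFACE_NAME = "screen"            # must match the surface name defined in Capture
SCREEN_W, SCREEN_H = 1920, 1080    # assumed screen resolution in pixels

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

# Ask Pupil Remote for the SUB port, then subscribe to this surface's events
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe(f"surfaces.{SURFACE_NAME}")

while True:
    topic, payload = sub.recv_multipart()
    surface_event = msgpack.unpackb(payload)
    for gaze in surface_event["gaze_on_surfaces"]:
        if gaze["on_surf"] and gaze["confidence"] > 0.8:
            x_norm, y_norm = gaze["norm_pos"]
            # Surface coordinates are normalized with origin bottom-left; flip y for pixels
            x_px = int(x_norm * SCREEN_W)
            y_px = int((1.0 - y_norm) * SCREEN_H)
            print(f"gaze on surface at pixel ({x_px}, {y_px})")
```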

user-908b50 30 August, 2021, 23:34:56

So we are visualizing our fixation data post-hoc. The data was collected using the world coordinate system and we are graphing it as is (using a modified version of one of the Pupil tutorials) instead of using pixel coordinates. And some of our data looks weird when graphed on our surface. Logically, the x, y coordinates look transformed (shifted differentially for quite a few of the recordings). Is that something I could get some guidance on if I were to email some of those graphs with a detailed description?

papr 31 August, 2021, 06:03:21

Yes, definitely. :) info@pupil-labs.com

user-f5b6c9 31 August, 2021, 09:48:16

Hi! I switched my Pupil Capture version from 1.18 to 3.4 and can't calibrate anymore. I used a different computer screen to show the calibration markers, but somehow the new Capture (3.4.5), running on the same Dell computer as before, doesn't produce sounds when the marker is displayed in the different positions. Any ideas? Did the calibration methods change? I tried Single Marker and Natural Marker, but these are actually screen markers, shown on a different computer screen.

papr 31 August, 2021, 09:50:07

We removed the manual marker calibration in our 2.0 release. Please see our release notes for reference https://github.com/pupil-labs/pupil/releases/v2.0

user-f5b6c9 31 August, 2021, 09:49:55

The other computer screen is an extended screen, mirrored from a MacBook.

user-f5b6c9 31 August, 2021, 09:50:23

Is there any way to re-implement this?

user-f5b6c9 31 August, 2021, 09:51:01

I need these manual marker calibrations, but would also like to profit from the new 3d algorithm and all the benefits the new releases offer

papr 31 August, 2021, 09:51:38

What part of the manual marker calibration are you relying on? Having a physical external marker instead of displaying it on a screen?

user-f5b6c9 31 August, 2021, 09:52:23

It's both - I am displaying external markers on a screen.

papr 31 August, 2021, 09:52:49

So you have a dedicated software for displaying the markers?

user-f5b6c9 31 August, 2021, 09:53:14

especially that Pupil Capture detects these markers and gives me a signal so i can continue with the next calibration position.

user-f5b6c9 31 August, 2021, 09:53:32

Yes I use the PsychToolbox to continue to the next point

papr 31 August, 2021, 09:53:58

So Capture and the psychtoolbox run on separate computers?

user-f5b6c9 31 August, 2021, 09:54:04

Yes.

papr 31 August, 2021, 10:01:23

So the recommendation would be to use the single marker calibration in "physical" mode. Then you can simplify your psychtoolbox code to only show a single marker in the center. This would require a change in the calibration choreography for the subject: Instead of having the head fixed and fixating different targets, your subject would fixate the single target while rotating their head (VOR movement)

papr 31 August, 2021, 10:01:58

Alternatively, you can build a custom calibration choreography plugin that replicates the manual marker calibration from pre v2.0

user-f5b6c9 31 August, 2021, 10:02:29

I already used the Single Marker calibration in physical marker mode, but I don't get any acoustic signals for the markers being detected, and >90% of pupil data are getting dismissed. Does this mean the markers aren't detected, or that the eye tracking quality is bad?

papr 31 August, 2021, 10:02:54

that sounds like low quality pupil data / difficult pupil detection

user-f5b6c9 31 August, 2021, 10:06:11

How can I build such a calibration plugin?

user-f5b6c9 31 August, 2021, 10:06:27

Do I just copy a specific file, or do I need to edit it?

papr 31 August, 2021, 10:13:55

You will probably do some coding based on some other code example/implementation

user-7b683e 31 August, 2021, 10:19:04

The source of this issue could be the processing power of your computer. I'm not sure; however, you can try to decrease the calibration duration (or the number of sample points) from the relevant calibration menu in the world camera window.

user-f5b6c9 31 August, 2021, 10:31:42

I can only change the calibration duration in the screen marker calibration method, which, as described above, I am currently not using.

user-7b683e 31 August, 2021, 10:22:32

If it works, you should make sure the angular accuracy is sufficient for your purpose - generally I get between about 0.8 and 1.5 degrees.

papr 31 August, 2021, 10:58:55

Hi, we were not able to reproduce the issue on Windows 10 either. Our current guess is that your computer does not have sufficient RAM to run the full detection. You can verify this guess by opening the System Monitor on Windows and looking at the RAM usage of Pupil Player during the detection.

user-ae4005 31 August, 2021, 11:05:04

Okay, thank you for looking into it! I'll have a look at the RAM usage and will also try to run it on a different computer.

user-9eeaf8 31 August, 2021, 15:46:26

Hello, I have an issue with participants who wear mascara. The pupil detection works ok, in my opinion (pupil marker is pretty stable on the pupil). But the single marker calibration (with the spiral head movement) does not seem to work. The calibrated area is always very small and even this does not seem to work correctly. Again, in my opinion the pupil in the eye video is recognized in a pretty stable way, only the calibration does not work. Can someone help me with this? (I found a note in an update that there is a mascara-off option, but I couldn't find it.) Thank you!

user-7b683e 31 August, 2021, 21:33:55

Hello, is there any source about the circuit design of the headset?

user-7b683e 01 September, 2021, 17:17:21

Up-to-date

user-7b683e 31 August, 2021, 21:35:20

On that point, can I learn about the different components other than the cameras, and the relations between the components in the headset?

user-4f103b 31 August, 2021, 23:15:17

Hi, can we export the video with the gaze circle (point of gaze) overlaid?

papr 01 September, 2021, 07:32:07

Yes, you can do that with Pupil Player. https://docs.pupil-labs.com/core/software/pupil-player/

End of August archive