👁 core


user-95718f 01 June, 2021, 11:53:34

Hey there! I'm in a desperate situation -> a wrong USB adapter fried a component on the PCB... PLEASE: Can anyone tell me what the component is?

Chat image

papr 01 June, 2021, 11:58:40

Please contact info@pupil-labs.com in this case

user-95718f 01 June, 2021, 11:59:19

Thx!

user-95718f 01 June, 2021, 12:13:19

Hey again! Can anyone please tell me the labeling/text on the COMPONENT IN THE IMAGE of the previous post? (Mine is fried and I need to replace the component.) Thanks!

user-7daa32 01 June, 2021, 14:31:47

I'm worried about whether to create surfaces before recording, or post hoc in Player after recording.

user-a8f462 01 June, 2021, 19:48:49

Hello y'all, I was just wondering if anyone has been able to get Pupil Mobile to work with an older phone than the Nexus 5x. In my research lab we're trying to get an adapter and a Galaxy Note 4 to work, but it doesn't seem like it will. From past messages I read, it seems you need a direct connection. I'm pretty new to a lot of this and was just wondering why an adapter won't work?

user-ef3ca7 01 June, 2021, 20:18:05

Hi, I installed v3.3-0 on my Mac (see picture below), but I'm not able to open the Pupil Player software. Any suggestions?

Chat image

papr 01 June, 2021, 20:18:39

What happens if you double-click it?

user-ef3ca7 01 June, 2021, 20:20:30

nothing

papr 01 June, 2021, 20:21:11

Could you please copy the application into the Applications folder and run /Applications/Pupil\ Player.app/Contents/MacOS/pupil_player from terminal?

user-ef3ca7 01 June, 2021, 20:22:36

let me check!

user-908b50 01 June, 2021, 20:33:21

How do people define/recognize a drift and correct it? I have also noticed that in some cases it takes a while for the fixation detection algorithm to work after a blink or a saccade (if the eyelid is droopy). How are others accounting for this, or are you?

user-4bc389 02 June, 2021, 06:31:39

Hi. Is there any standard for the value in the area pointed to by the red arrow, that is, which value is usually more appropriate? thank you

Chat image

user-ae4005 02 June, 2021, 12:42:27

Hi there, so I ran an experiment with the Pupil Labs eye tracker and unfortunately didn't define a surface for my computer screen. So now I need to match my eye data with the data from PsychoPy post hoc (I'm running a reaching task). I did record multiple calibrations that I coded into the experiment. Is there an easy way to access these calibrations? Can I export them somehow? (I found them when I go to "Posthoc gaze calibrations" in the Player, but I don't know how to export them...)

papr 03 June, 2021, 09:25:11

The calibration data does not allow you to define surfaces post-hoc. You can set up surfaces post-hoc, but for that you would have had to set up surface markers around the screen edges.

user-ae4005 03 June, 2021, 09:23:42

Can anyone help?

user-7daa32 02 June, 2021, 12:43:08

Hi

Please, how can we deal with the issue of low precision when doing AOI analysis of targets that are of equal sizes? Can the surfaces created for the targets be of different sizes?

papr 03 June, 2021, 09:26:03

Yes, the surfaces can have different sizes

user-f90f05 02 June, 2021, 17:23:21

How can I access the GazeData script?

papr 03 June, 2021, 09:25:44

I am assuming that you are referring to the hmd-eyes project. Please have a look at the included gaze demos.

user-ae4005 03 June, 2021, 09:30:04

Thank you for your reply! I didn't use the markers unfortunately... And I already collected a big amount of data. Is there any other way to match the eye data with my other data (in scale)? I'm really stuck with this and I understand that it's been a big mistake not using the markers.

papr 03 June, 2021, 12:24:05

Assuming that there is no head movement, you could try defining an area of the scene camera as the monitor. But you would have to do that in your post-processing of the exported data.
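
For reference, a minimal sketch of that kind of post-processing in Python, assuming the standard gaze_positions.csv raw-data export; the monitor bounding box and the 0.6 confidence cut-off are illustrative assumptions, not values from this thread:

import pandas as pd

# Raw gaze export from Pupil Player (raw data exporter).
gaze = pd.read_csv("exports/000/gaze_positions.csv")

# Hypothetical monitor area in normalized scene-camera coordinates,
# determined by inspecting where the screen appears in the scene video.
X_MIN, X_MAX = 0.25, 0.75
Y_MIN, Y_MAX = 0.30, 0.80

on_monitor = gaze[
    (gaze["confidence"] >= 0.6)
    & gaze["norm_pos_x"].between(X_MIN, X_MAX)
    & gaze["norm_pos_y"].between(Y_MIN, Y_MAX)
]
print(f"{len(on_monitor)} of {len(gaze)} gaze samples fall inside the assumed monitor area")

As noted above, this only works while the head stays still relative to the screen.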

user-4bad8e 03 June, 2021, 11:40:44

Hi, does anyone know how Pupil Core acquires 3d gaze data? (Is there any publication?) I want to know how the 3d gaze positions shown in "fixations.csv" are calculated.

user-f90f05 03 June, 2021, 12:09:43

are you using python or unity?

user-f90f05 03 June, 2021, 12:16:05

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using PupilLabs;

public class Data : MonoBehaviour
{
    public GazeData gazeData;
    private Vector3 direct;
    private float distance;
    private Vector2 norm;
    // Start is called before the first frame update
    void Start()
    {
        direct = new Vector3(gazeData.GazeDirection.x, gazeData.GazeDirection.y, gazeData.GazeDirection.z);
        distance = gazeData.GazeDistance;
        norm = new Vector2(gazeData.NormPos.x, gazeData.NormPos.y);
    }

    // Update is called once per frame
    void Update()
    {
        Debug.Log(direct);
        Debug.Log(distance);
        Debug.Log(norm);

    }

}

is this the correct way to access Data?

user-f90f05 03 June, 2021, 12:19:58

When I run the script in Unity, this is what it says: "NullReferenceException: Object reference not set to an instance of an object"

user-f90f05 03 June, 2021, 12:20:12

Does anyone know how to fix it? Or is this normal?

user-4bad8e 04 June, 2021, 12:55:16

Thank you for the information! I use Python.

papr 03 June, 2021, 12:24:59

Pupil Core uses https://docs.pupil-labs.com/developer/core/pye3d/ to generate 3d pupil data which is then mapped into a common coordinate system (scene camera)

user-4bad8e 08 June, 2021, 04:45:52

Is it possible to calculate the distance of saccades from the consecutive values of "gaze_point_3d_x,y,z" in "fixations.csv"? I think these values are dimensionless, so we can only calculate relative values for the saccade distances. Could you tell me whether this idea is correct, or whether there are any limitations?

user-4bad8e 04 June, 2021, 12:54:17

Thank you very much!

papr 03 June, 2021, 12:27:24

Looks like this topic would fit the 💻 software-dev or 🥽 core-xr channel better. Unfortunately, I am not a Unity expert. If I had to guess, gazeData is not set in your example.

user-f90f05 03 June, 2021, 12:49:23

Oh I wasn't aware sorry and thanks!

user-908b50 04 June, 2021, 07:41:18

Hi, just following up, how do I correct for drifts in fixations or gaze data? This came up during calibration too and the RAs made a note of it during data collection. Also, on the export info csv file, what is the difference between absolute and relative time range? I'm interested in calculating total task time (which is how the data was clipped during the export in player). These are the numbers from one of my exports: Relative Time Range 00:02.479 - 20:14.249

Absolute Time Range 1523.005157 - 2734.774731. Thanks!

papr 04 June, 2021, 08:40:02

Given that drift can be highly non-linear, it can be very difficult to correct, even in case-by-case scenarios.

papr 04 June, 2021, 08:21:02

If you want to calculate time differences, it might be easier to use the absolute time (in seconds) range, as it is easier to parse

user-908b50 04 June, 2021, 07:49:12

Also, the relative time range is in the following format, right: minutes:seconds.milliseconds?

papr 04 June, 2021, 08:18:03

yes, the time format is correct. Relative time refers to time since the first scene video frame

user-908b50 04 June, 2021, 08:31:53

Thanks! But that gives me a duration value slightly higher than what the exported mp4 video shows. It's a small 0.9 difference when the values are rounded.

papr 04 June, 2021, 08:33:57

"exported mp4 video duration" - How are you getting this value, or which software displays this value?

To be sure, you can also look at the exported world_timestamps.csv. It includes absolute timestamps for each exported frame.
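
For reference, a minimal sketch of reading that file to check the exported duration; the exact column header can vary between Player versions, so the first column is simply taken as the per-frame absolute timestamp in seconds:

import pandas as pd

# One row per exported scene video frame; first column = absolute timestamp (s).
ts = pd.read_csv("exports/000/world_timestamps.csv").iloc[:, 0]

duration_s = ts.iloc[-1] - ts.iloc[0]
print(f"{len(ts)} exported frames spanning {duration_s:.3f} s ({duration_s / 60:.2f} min)")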

user-908b50 04 June, 2021, 08:35:44

It's Windows Media Player. Got it! Yes, those values line up with the export info.

papr 04 June, 2021, 08:38:23

Yeah, media players are not the most accurate way to display durations. They read this information from the video container's meta information, which is not required to be perfectly in line with the actual stream data. For example, you could have a 10 minute audio and a 5 minute video stream in the same container file, and most players would display a 10 minute duration.

user-908b50 04 June, 2021, 08:43:27

Is there a way to quantify the amount of drift occurring and then decide if it's worth correcting for? Would you suggest discarding those data? For most (not all) recordings, the RAs noticed drift at the bottom of the screen where the tabs were black and white (perhaps because the colour scheme was similar to a calibration marker is my best guess).

papr 04 June, 2021, 08:46:05

You can only quantify it if you have reference data to compare the gaze estimation to. Concretely, you could use the post-hoc calibration to add manual reference locations and run a validation on that particular time section.

user-908b50 04 June, 2021, 08:49:34

Okay, so ideally, if there is good data without drift, I can use its gaze estimation data and compare it with the data with drift. That's problematic because at a particular time, two people can be fixating on different spaces on the screen, so the calibration may fail.

papr 04 June, 2021, 08:55:22

I feel like we have different concepts of drift. Gaze estimation errors are usually dependent on the location of the actual gaze (scene camera center is usually more accurate than scene camera borders). If the accuracy for a specific location decreases over time, e.g. due to headset slippage, then I would speak of drift.

So, technically, during a calibration, there can be no drift (as in the definition above).

"two people can be fixating on different spaces on the screen so the calibration may fail" - I do not understand this statement. Is this something specific to your experiment setup?

user-908b50 04 June, 2021, 09:06:36

If we are validating a particular section based on reference data, do gaze data have to match exactly based on time? Participants basically scanned a screen while playing a slot machine game.

user-908b50 04 June, 2021, 09:05:04

Okay, yes I see. I was talking about a shift in gaze. So, the participants were looking somewhere else and the scene camera shows a gaze location that is a few centimetres off from their actual, reported gaze. This came up during calibration and we just carried on collecting data regardless. What to do in this case? As for headset slippage, I am not sure if that actually occurred or to what extent. Some people definitely moved their heads a lot. Visually during exports, I have been noticing a lack of fixation markers at the bottom of the screen for a few participants (I have yet to visualize the problematic data, only going through clean data for now).

papr 04 June, 2021, 09:09:35

Usually, you calibrate because you do not have a valid gaze estimation yet. Therefore, it is kind of difficult to talk about gaze estimation errors during calibration. Usually, it is recommended to perform a validation directly after (same procedure as calibration but with different marker locations) to evaluate the calibration quality.

When people look down, their pupil is often occluded by their eye lids. This might cause a lack of high-confidence gaze or fixations at the bottom of your screen.

user-908b50 04 June, 2021, 09:14:36

Okay, so basically there is no way to correct for the slight shift in gaze points that we observed at the bottom of the screen after an otherwise successful calibration? We did perform a validation. What happened was that after multiple validations with the same issue, data was just collected anyway because no solution worked.

papr 04 June, 2021, 09:19:48

"What happened was after multiple validations with the same issue, data was just collected anyway because no solution worked." - ok, understood.

"slight shift in gaze points that we observed at the bottom of the screen" - Yeah, I would recommend enabling the eye video overlay and checking out the pupil detections during these bottom-screen periods. You will probably see that the pupil detections do not match the pupil perfectly.

user-22c43d 04 June, 2021, 09:34:27

Hi everyone, I am new to Pupil Player and honestly, as an MD, not very IT-talented. We are using Pupil Player to count fixations and measure dwell times on a specific project. I defined my surface, but when I press 'e' the tables remain empty. Everything works well on my colleague's computer. I realised that the 'fixation detector' plugin is missing and nowhere to be found. A quick internet search didn't provide an answer. Uninstalling and reinstalling didn't solve the problem. Thank you in advance!

Chat image

papr 04 June, 2021, 09:35:31

Hi, is it possible that you are using Pupil Invisible instead of Pupil Core?

user-22c43d 04 June, 2021, 09:40:55

thank you for your quick answer! where can I find this information? In the about section I only see that it is Pupil Player V3.3.0.

Chat image

wrp 04 June, 2021, 09:43:39

@user-22c43d starting in version 3.2 we disabled plugins that were not relevant/robust for Pupil Invisible recordings. (I am assuming that you are opening Pupil Invisible recordings). Therefore you will not see the fixation detector plugin in Pupil Player v3.2 and above with Pupil Invisible recordings. Please see release notes: https://github.com/pupil-labs/pupil/releases/tag/v3.2

wrp 04 June, 2021, 09:45:08

That being said. We are working on a new fixation detection algorithm that will be available (hopefully soon) for Pupil Invisible recordings. This will be available in Cloud.

user-22c43d 04 June, 2021, 09:46:42

Ok I got it, thank you! And yes, in the general settings section I just saw that these are Pupil Invisible recordings. That's why it is working on my colleague's computer; he has an older version. Is there a possibility to download the older versions in the meantime?

user-22c43d 04 June, 2021, 09:47:49

Found the version 3.2 in the links you sent. Thanks!

user-22c43d 04 June, 2021, 09:58:48

I downloaded V3.2 now but there is still no fixation detector..🤷‍♀️

wrp 04 June, 2021, 09:59:30

Please allow me to clarify: versions < 3.2 should have the fixation detector for Pupil Invisible recordings.

wrp 04 June, 2021, 10:00:46

Please note that it is not recommended to use Pupil Core's fixation detector for Pupil Invisible recordings. ⚠️ The fixation detector in Pupil Core works well for Pupil Core recordings, but was not designed for Pupil Invisible and therefore may not be accurate in classification.

user-22c43d 04 June, 2021, 10:02:17

Ok. Is there an alternative way to count fixations/dwell times until you release the new algorithm?

wrp 04 June, 2021, 10:05:02

@user-22c43d there is not currently a recommended method for fixation classification with Pupil Invisible, but we will have a solution soon. Dwell time is a separate topic as it generally concerns duration of gaze on stimulus or area of interest - which would require a surface/reference image as AOI and then gaze data could be used to calculate dwell time per surface/aoi.

user-22c43d 04 June, 2021, 10:07:24

ok thanks, will discuss this in our group. Thank you for your help!

wrp 04 June, 2021, 10:07:57

@user-22c43d has your group considered using Pupil Cloud BTW?

user-22c43d 04 June, 2021, 10:09:16

not yet, as far as I know.

user-7daa32 04 June, 2021, 12:09:36

Here is a timestamp showing here

Chat image

user-7daa32 10 June, 2021, 03:06:26

Between the fixation timestamps shown in the video here and the start timestamps in the fixation file, which is more accurate?

user-7daa32 04 June, 2021, 12:13:28

Here is another.... I know we can look at the export files, but we are interested in something we thought we could get by manually taking data from the video. Now, what's the difference between these times (durations) in the two pictures?

Chat image

papr 04 June, 2021, 12:14:54

The first is a single point in time (specifically, relative time since the first scene video frame). The second is a time period. These are not comparable.

user-7daa32 04 June, 2021, 12:16:45

Thanks... We are interested in the first one. I think it is not in the exported file, right?

papr 04 June, 2021, 12:18:26

The exported file only includes absolute timestamps. The displayed relative time is just an intuitive reference for the playback UI.

user-7daa32 04 June, 2021, 12:21:48

We have used the differences between displayed timestamps and the differences between fixation durations, and both look similar. I'm sorry, what is an "absolute timestamp"?

user-7daa32 04 June, 2021, 12:24:25

I mean we computed the differences between consecutive timestamps and then

The differences in consecutive fixation duration.

Both seem the same

papr 04 June, 2021, 12:28:05

One thing to note is that Player only displays scene video frame timestamps, i.e. which have a limited frequency (30 Hz by default). While fixations are calculated based on gaze (usually 120 Hz or higher). If you use the displayed timestamps in Player, you will have less time precision compared to using the timestamps in the fixations.csv export

papr 04 June, 2021, 12:26:01

Absolute time refers to "pupil time", the timestamps that were recorded during the recording

user-7daa32 04 June, 2021, 12:31:03

Thanks

What about the annotation timestamps? Do they align with the fixation data?

papr 04 June, 2021, 12:31:48

Timestamps in annotations.csv and fixations.csv come from the same clock and are therefore comparable
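
For reference, a minimal sketch of comparing the two files; the column names (timestamp, label, id, start_timestamp, duration) follow the standard Player exports, but check your own files if they differ. Fixation duration is exported in milliseconds.

import pandas as pd

fixations = pd.read_csv("exports/000/fixations.csv")
annotations = pd.read_csv("exports/000/annotations.csv")

# Both files use Pupil time in seconds, so timestamps are directly comparable.
fixations["end_timestamp"] = fixations["start_timestamp"] + fixations["duration"] / 1000.0

for _, note in annotations.iterrows():
    hits = fixations[
        (fixations["start_timestamp"] <= note["timestamp"])
        & (note["timestamp"] <= fixations["end_timestamp"])
    ]
    if hits.empty:
        print(f"{note['label']}: no fixation active at {note['timestamp']:.3f}")
    else:
        print(f"{note['label']}: falls within fixation id {hits.iloc[0]['id']}")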

user-7daa32 04 June, 2021, 12:32:01

Thanks

user-7daa32 04 June, 2021, 14:54:52

Please, how can we get audio in sync with the Pupil Labs system? Or is there a plugin for audio?

We planned to use an external audio recorder, but I am not sure how that can be kept in sync with the recording.

nmt 07 June, 2021, 08:48:07

Hi @user-7daa32. For external audio devices, I would recommend using the Lab Streaming Layer (LSL) framework. This would provide accurate synchronisation between audio and gaze, but takes a bit of work to get set up: 1) Use the AudioCaptureWin App to record audio and publish it via LSL (https://github.com/labstreaminglayer/App-AudioCapture#overview) 2) Publish gaze data during a Pupil Capture recording with our LSL Plugin (https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/README.md) 3) Record the LSL stream with the Lab Recorder App (https://github.com/labstreaminglayer/App-LabRecorder#overview) 4) Extract timestamps and audio from the .xdf and convert to a listenable format -- do whatever post-processing you need. A sketch of step 4 follows below.
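
For step 4, a minimal sketch using pyxdf and scipy; the stream name "AudioCaptureWin" and the float sample format are assumptions that depend on how the AudioCapture app was configured:

import numpy as np
import pyxdf                      # pip install pyxdf
from scipy.io import wavfile

# Load all streams recorded by the Lab Recorder.
streams, header = pyxdf.load_xdf("recording.xdf")

# Pick the audio stream by name (adjust to your AudioCapture configuration).
audio = next(s for s in streams if s["info"]["name"][0] == "AudioCaptureWin")

rate = int(float(audio["info"]["nominal_srate"][0]))
samples = np.asarray(audio["time_series"], dtype=np.float32)
timestamps = audio["time_stamps"]  # LSL timestamps, one per sample

# Write a listenable WAV file and keep the timestamps for syncing with gaze.
wavfile.write("audio.wav", rate, samples)
print(f"audio spans {timestamps[0]:.3f} - {timestamps[-1]:.3f} on the LSL clock")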

wrp 07 June, 2021, 06:57:19

@papr or @nmt any thoughts/suggestions here?

user-6ef9ea 04 June, 2021, 15:40:57

Hi everyone, we are considering getting the Pupil Labs add-on for HTC Vive, and were wondering if it will be compatible with the new Vive Pro 2 headset?

user-699471 05 June, 2021, 16:17:02

Hi there. I have been trying to run pupil_capture on Pop!_OS 20.10 (also on Arch) for the past 4 days and just realized that my user was not in the plugdev group...

wrp 07 June, 2021, 06:55:26

Hi 👋 Based on images of the Vive Pro 2, it looks like our existing Vive add-on will not be compatible with the Vive Pro 2. A dedicated Vive Pro 2 add-on is not on our roadmap. If you are purchasing the Pro 2, we can offer a de-cased version of our add-on with cameras and cable tree for custom prototyping. If interested, send an email to [email removed]

user-b14f98 07 June, 2021, 15:09:01

@wrp Jumping in - thanks for letting us know. Is there anything else that you can share about your roadmap for VR?

wrp 07 June, 2021, 06:56:25

Hi 👋 Sorry you spent so much time on this. For future readers of this thread please check out documentation here: https://docs.pupil-labs.com/core/software/pupil-capture/#linux

user-6ef9ea 07 June, 2021, 07:29:08

thanks for the heads-up, we haven't ordered yet, so good to know, we'll reconsider our options

user-6ef9ea 07 June, 2021, 07:30:36

We are also very much interested in this. I guess more generally, a solution for precisely syncing an external TTL/DIO with the system would be incredibly useful, not just for audio (but for displays, button presses, events, etc.).

user-597293 07 June, 2021, 16:21:04

I am currently running an eye-tracking experiment involving screen stimulus, key pressing and audio capture using Psychopy (https://psychopy.org/). Psychopy is built on Python and can run any Python code during the experiment. This means that I can send Pupil annotations over the network API from PsychoPy on triggers/events etc, as well as syncing Pupil with the experiment clock. The audio recording is also done automatically using sounddevice library and scipy for WAV export. Psychopy's included microphone module caused a lot of bugs unfortunately.

My set up probably isn't as "in sync" as LSL, but for my experiment the temporal resolution is good enough and in my opinion makes the data a bit more manageable. Reach out on DM if I can help 👍

nmt 07 June, 2021, 08:53:34

@user-7daa32 @user-6ef9ea Our Network API also offers flexible approaches to control experiments remotely, send/receive trigger events, and synchronise Pupil Core with other devices (https://docs.pupil-labs.com/developer/core/network-api/#network-api). For example, you can use our annotation plugin to make annotations with a timestamp in the recording on desired events, such as trigger events. Annotations can be sent via the keyboard or programmatically (https://docs.pupil-labs.com/core/software/pupil-capture/#annotations). We also maintain a Plugin for LSL that publishes gaze data in real-time using the LSL framework. This enables unified/time-synchronised collection of measurements with other LSL supported devices (https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/README.md)
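
As a concrete illustration of sending an annotation programmatically, here is a minimal sketch modelled on the pupil-helpers remote_annotations.py example linked above; it assumes Pupil Capture is running locally with Pupil Remote on its default port (50020) and the Annotation plugin enabled, and "trial_start" is just an example label:

import time
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the PUB port and the current Pupil time.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

# Publish the annotation on the IPC backbone.
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the proxy a moment to register the new publisher

annotation = {
    "topic": "annotation.trial_start",  # example label, adjust to your events
    "label": "trial_start",
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub_socket.send_multipart(
    [annotation["topic"].encode(), msgpack.packb(annotation, use_bin_type=True)]
)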

user-28fc08 07 June, 2021, 08:24:17

Hello! I am trying to install a plugin for Pupil Labs but I have some issues. When I try to install all Python dependencies, I use the requirements.txt file from the root of the pupil repository. I use this command:

python -m pip install -r requirements.txt and here is the error:

ERROR: Could not find a version that satisfies the requirement pyre (from pupil-invisible-lsl-relay) (from versions: none)
ERROR: No matching distribution found for pyre

So I tried to install pyre-check with this command: pip install pyre-check

And here is the error: error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/

But I have already installed Microsoft C++ Build Tools following several tutorials, as you can see in the following screenshot. Do you know how to solve my problem? Or is there another way to install a plugin?

user-28fc08 07 June, 2021, 08:25:32

Chat image

papr 07 June, 2021, 09:01:36

Apologies, the pyre repository is going through a name change right now, which is conflicting with the requirements.txt file. Please delete the ndsi @ ... lines at the bottom of the requirements.txt file and install ndsi via one of these wheels: https://github.com/pupil-labs/pyndsi/suites/2900326485/artifacts/64999752

user-28fc08 07 June, 2021, 09:16:13

Thank you very much !

user-597293 07 June, 2021, 16:02:54

Hi, I have some quite long recordings (anywhere from 10-40 minutes) where I want to export the raw data, surface data and annotations. When using Pupil Player I have run into some unexpected crashes. Here is (part of) the log from when the export began until it shut down:

Offline Surface Tracker Exporter - [INFO] surface_tracker.background_tasks: exporting metrics to my_path\Pupil data\participant_502\000\exports\000\surfaces
(........)
Offline Surface Tracker Exporter - [INFO] surface_tracker.background_tasks: Saved surface gaze and fixation data for 'Monitor'
Offline Surface Tracker Exporter - [INFO] background_helper: Traceback (most recent call last):
  File "background_helper.py", line 73, in _wrapper
  File "surface_tracker\background_tasks.py", line 318, in save_surface_statisics_to_file
  File "surface_tracker\background_tasks.py", line 385, in _export_marker_detections
TypeError: 'NoneType' object is not iterable

player - [INFO] raw_data_exporter: Created 'gaze_positions.csv' file.
player - [INFO] annotations: Created 'annotations.csv' file.
player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "launchables\player.py", line 720, in player
  File "surface_tracker\surface_tracker_offline.py", line 287, in recent_events
  File "surface_tracker\surface_tracker_offline.py", line 308, in _fetch_data_from_bg_fillers
  File "background_helper.py", line 121, in fetch
TypeError: 'NoneType' object is not iterable

player - [INFO] launchables.player: Process shutting down. 

Do I have to wait for something to load in Player before exporting? Neither blink detection nor fixations were turned on here.

papr 16 June, 2021, 07:50:44

Hi, we tried reproducing the issue but were not successful in doing so. We think it could be related to your specific recording. Would you be able to share it with [email removed]

papr 07 June, 2021, 16:22:58

Which Pupil Player version are you using? @user-764f72 please have a look at this and check if it is already a known issue.

user-597293 07 June, 2021, 17:10:22

Did a quick experiment and have a theory on why it's crashing, if it is of any help: I think it's due to exporting before the marker cache has loaded completely? It did not crash when I waited for the cache to "fill", and crashed when I exported before the cache was filled, right after importing the data. See pictures.

Chat image

user-597293 07 June, 2021, 16:23:51

v.3.3.0

user-597293 07 June, 2021, 17:10:26

Chat image

user-4bc389 08 June, 2021, 03:22:38

Hi I have some questions about the picture below. What are the special cases? Thank you

Chat image

user-28fc08 08 June, 2021, 07:30:12

Hello! When I deleted the ndsi @ lines of the requirements.txt I had an error like this: ERROR: uvc-0.14-cp36-cp36m-win_amd64.whl is not a supported wheel on this platform. I tried to install pyuvc 0.14 from other links but it still gives the same error... Do you know what my problem is? I am on Windows 10.

papr 08 June, 2021, 07:32:09

Please be aware that it usually is not necessary to run the software from source. Nearly everything can be done using the plugin or network API of the bundled applications.

papr 08 June, 2021, 07:31:21

Most of our wheels are currently only available for python 3.6 on windows. You are likely running a newer version of python.

user-28fc08 08 June, 2021, 08:09:10

Ahhh yes thank you !

nmt 08 June, 2021, 08:40:05

Hi @user-4bc389. The natural features calibration is mainly for situations where you have no access to a calibration marker: https://docs.pupil-labs.com/core/software/pupil-capture/#natural-features-calibration-choreography In general, this method is less reliable as it depends on communication between the wearer and operator

user-4bc389 08 June, 2021, 08:58:53

thanks👍

nmt 08 June, 2021, 09:21:29

Hi @user-4bad8e. You can calculate visual angle changes between consecutive gaze_point_3d values relatively easily: https://discord.com/channels/285728493612957698/285728635267186688/692710274066677820. The trickier part is using this to classify saccades. Just looking at the data within fixations.csv will likely not be sufficient as the dispersion-based fixation filter is not optimised for saccades – there is no guarantee that the changes between fixations accurately reflect saccadic dispersion.
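
For reference, a minimal sketch of the angular-change calculation between consecutive fixations, assuming the gaze_point_3d_* columns of the standard fixations.csv export (and, per the caveat above, treating the result as angular change only, not as a validated saccade amplitude):

import numpy as np
import pandas as pd

fixations = pd.read_csv("exports/000/fixations.csv")
points = fixations[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()

def visual_angle_deg(a, b):
    # Angle (degrees) between two 3d gaze points as seen from the scene camera origin.
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

angles = [visual_angle_deg(points[i], points[i + 1]) for i in range(len(points) - 1)]
print(f"mean angular change between consecutive fixations: {np.mean(angles):.2f} deg")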

user-4bad8e 09 June, 2021, 06:26:44

Sorry, I have an additional question. What is the unit of gaze_point_3d_x,y,z shown in fixations.csv? Pixels?

nmt 08 June, 2021, 09:25:53

@user-4bad8e There are community projects that have implemented saccadic analysis, e.g. by @user-2be752: https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing

user-4bad8e 08 June, 2021, 13:10:37

Hi @nmt. Thank you for your polite responses. That helps me a lot. I understood the limitation of the data for evaluating saccades. I want to analyze the scattering of places that occupants have fixated in a room using the calculated visual angles. Thank you.

user-7daa32 08 June, 2021, 14:53:50

I am still trying to get your last responses to me. Thanks....they are just esoteric for now

user-d80cf5 08 June, 2021, 15:46:11

Hi all, I get a very similar error to the one in issue https://github.com/pupil-labs/pupil/issues/1764 with version 3.3.0 when I open Pupil Capture. I have tried the solutions suggested in that issue, but unfortunately they don't work for me. Does anyone have some clues on how to fix this? Thanks!

world - [WARNING] video_capture.uvc_backend: Updating drivers, please wait...
powershell.exe -version 5 -Command "Start-Process 'D:\pupil\Pupil Capture v3.3.0\PupilDrvInst.exe' -Wait -Verb runas -WorkingDirectory 'C:\Users\lanru\AppData\Local\Temp\tmpgt4z7836' -ArgumentList '--vid 1443 --pid 37424 --desc \"Pupil Cam1 ID0\" --vendor \"Pupil Labs\" --inst'"
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "launchables\world.py", line 669, in world
  File "plugin.py", line 398, in __init__
  File "plugin.py", line 421, in add
  File "video_capture\uvc_backend.py", line 78, in __init__
  File "video_capture\uvc_backend.py", line 248, in verify_drivers
  File "subprocess.py", line 729, in __init__
  File "subprocess.py", line 1017, in _execute_child
FileNotFoundError: [WinError 2] The system cannot find the file specified

world - [INFO] launchables.world: Process shutting down.

user-4bc389 09 June, 2021, 06:33:46

Hi, what could be the cause of the problem in the picture? Thanks.

Chat image

papr 09 June, 2021, 07:18:04

You need gaze data for the fixation detector to work on. It looks like you have none. This can happen if you selected Post-hoc calibration in the gaze data menu but did not finish setting it up.

papr 09 June, 2021, 07:16:27

It's millimeters within the scene camera coordinate system.

user-4bad8e 09 June, 2021, 07:41:33

Thank you very much!

user-908b50 09 June, 2021, 08:55:49

Hi, so I am interested in counting fixations and dwell times per AOI. For fixations (defined as the number of times spent fixating in an AOI), I am thinking of counting each fixation id (9, 3, 6, etc.) as one and then summing the number of times they occur over my pre-defined area. We have defined dwell time as the percentage of task time spent fixating in an AOI. So, my goal here is to sum the durations of all fixations within an AOI and then calculate what percentage of the task time it actually was. Does this make sense? I just want to run the logic by someone to make sure I am thinking about/using the exported values correctly. We are not using gaze data to get dwell time.

papr 09 June, 2021, 08:58:34

Summing the durations of each unique AOI fixation would have been my approach, too.
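
For reference, a minimal sketch of this counting in Python; the file and column names assume a surface named "Monitor" and the standard surface-tracker export (fixation_id, duration in ms, on_surf), and the task duration is just a placeholder taken from the export info:

import pandas as pd

fix = pd.read_csv("exports/000/surfaces/fixations_on_surface_Monitor.csv")

# The export repeats a fixation for every scene frame it spans, so count
# each fixation id only once and only when it actually lies on the surface.
on_aoi = fix[fix["on_surf"].astype(str) == "True"].drop_duplicates(subset="fixation_id")

fixation_count = len(on_aoi)
dwell_s = on_aoi["duration"].sum() / 1000.0  # duration is exported in ms

task_duration_s = 1211.77  # placeholder: absolute time range from export_info
print(f"{fixation_count} fixations, dwell time {dwell_s:.1f} s "
      f"({100 * dwell_s / task_duration_s:.1f} % of task time)")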

user-908b50 10 June, 2021, 21:13:47

As a follow-up, I have been using the fixation scan path code and someone else's code to map fixations. If each fixation can occur over a range of x, y coordinates (which I am guessing is what contributes to dispersion), then what is the merit of plotting all fixations in the latter case? To get a sense of the number of fixations across AOIs visually, would it be better to filter by id and then use either the average value or the one reported in the fixations.csv file (matching by the relevant ids, of course, during data parsing)?

nmt 09 June, 2021, 09:23:02

@user-908b50, yes this is a common approach in the literature

user-908b50 09 June, 2021, 09:06:47

Yes, that is what I've been doing. I have been noticing this in addition to the fixation marker not showing up on screen. Some are essentially gaps where the person is looking somewhere, but we don't know where exactly. From an analysis point of view, what do you recommend? Should I get rid of this data? I am wondering if there is some way to salvage it.

papr 09 June, 2021, 09:14:35

You could always reduce the minimum-data confidence. That will include lesser-confidence data in the fixation detection. But it is likely that this will result in false-negative detections (low confidence tends to jump a lot). You could automatically annotate periods of low confidence and exclude these periods from the dwell-time and task-duration calculations. @nmt What do you think?
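
A minimal sketch of what that automatic annotation could look like offline, using the confidence column of gaze_positions.csv and the same 0.6 threshold mentioned here; this is an assumption about one possible workflow, not a built-in feature:

import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
low = gaze["confidence"] < 0.6

# Group consecutive samples that share the same low/high state into spans.
group_id = (low != low.shift()).cumsum()
spans = (
    gaze.assign(low=low, group=group_id)
    .groupby("group")
    .agg(start=("gaze_timestamp", "first"),
         end=("gaze_timestamp", "last"),
         is_low=("low", "first"))
)
low_spans = spans[spans["is_low"]]
excluded_s = (low_spans["end"] - low_spans["start"]).sum()
print(f"{len(low_spans)} low-confidence periods, {excluded_s:.1f} s to exclude")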

user-4bc389 09 June, 2021, 09:08:49

Hi. What does the place pointed to by the red arrow in the picture mean? Thanks.

Chat image

papr 09 June, 2021, 09:10:01

If a surface is visible/detected within a scene video frame, the software tries to map the gaze for this specific frame onto the surface. This column indicates whether the gaze is on the surface/AOI or not.

user-4bc389 09 June, 2021, 09:15:42

@papr In other words, false means that the data does not fall into the AOI, and true means that it falls into the AOI, right?

papr 09 June, 2021, 09:15:52

correct

user-4bc389 09 June, 2021, 09:17:18

@papr I see thank you

nmt 09 June, 2021, 09:35:14

@user-908b50 This is where things can get quite subjective. Sometimes it's clear that the wearer was looking within an AOI even during periods of lower confidence. And in these cases, reducing the minimum data confidence can work. But if pupil detection quality is too low, the fixation result might be invalid. Deciding what is and isn’t valid can become difficult. I would try reducing the minimum data confidence to see what happens. It might be obvious that the results can be trusted, or vice versa.

user-908b50 10 June, 2021, 21:09:58

great! thanks @papr & @nmt for the suggestions! I will try it out and see where that takes me.

user-0c8e98 09 June, 2021, 10:45:42

Hello, it's the first time I'm using pupil core and it's showing me this error. I don't know what to do, if there's anyone who can help me

Chat image

papr 09 June, 2021, 10:47:07

Hi 👋 It looks like the driver installation was not successful. Did you get the request for administrative privileges shortly after starting the application?

user-0c8e98 09 June, 2021, 10:49:21

No, I did not receive the request for administrative privileges shortly after starting the application

papr 09 June, 2021, 10:49:52

Did you see any other terminal-like windows pop up?

user-0c8e98 09 June, 2021, 10:51:39

no i don't see any other terminal

papr 09 June, 2021, 10:52:57

Have you connected the headset to the computer? If this is the case, please check the Cameras and Imaging Devices categories in the system's Device Manager application. They should list 3 Pupil Cam cameras.

user-0c8e98 09 June, 2021, 10:52:21

Chat image

user-0c8e98 09 June, 2021, 10:54:58

Chat image

user-0c8e98 09 June, 2021, 10:55:34

you mean this ???

papr 09 June, 2021, 10:56:25

Ah, you are using an older model. In this case, you need to install the drivers by hand. Please follow steps 1-7 from these instructions: https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md

user-0c8e98 09 June, 2021, 11:19:51

Thank you, I will install the drivers and try again.

user-7daa32 09 June, 2021, 13:29:50

Hello

Is the frame time in Player in milliseconds? It looks like it starts with milliseconds, then seconds, and then minutes.

user-7daa32 09 June, 2021, 13:31:35

Chat image

papr 09 June, 2021, 13:33:53

It's minutes:seconds - frame index

user-7daa32 09 June, 2021, 16:24:21

Thank you

user-0c8e98 09 June, 2021, 13:43:36

One camera works but the other one doesn't; I can't make it a composite parent.

Chat image

papr 09 June, 2021, 13:50:22

You have a monocular headset. It only has a scene camera and one eye camera. Our current headsets are binocular, i.e. they have two eye cameras. This is why the software shows two eye windows, but only one connects to a camera. You can simply close the eye1 window.

user-0c8e98 09 June, 2021, 13:45:55

Chat image

user-d80cf5 09 June, 2021, 14:45:27

Hello, I am still stuck on this error (see the following picture). Does anyone have a clue how to fix it? Thanks!

Chat image

user-d80cf5 09 June, 2021, 14:47:44

I think it is probably due to the driver; my pupil cameras under libusbK look weird.

Chat image

papr 09 June, 2021, 15:32:17

Yeah, there is definitely something going wrong during driver installation. Could you uninstall these again and then try steps 1-7 from these instructions to manually install the drivers? https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md

user-d80cf5 09 June, 2021, 15:45:36

Thanks so much! it works well now👍

user-7daa32 09 June, 2021, 16:23:58

I know I have asked this question before. I'm sorry.

Problem: words displayed on a screen are blurry to view in Player. What do you think might have happened, or what do you think could remedy this?

Does the computer screen resolution or the resolution set in Capture matter?

Or must both resolutions align?

user-3cff0d 09 June, 2021, 17:15:36

Hi, @papr - Do you know if there is a flowchart/something similar floating around, or one that I might be able to get access to, showing the process of how confidence is calculated for the 2d pupil detector?

papr 10 June, 2021, 07:10:03

The most accurate documentation of the 2d detection can be found in our paper from 2014 https://arxiv.org/pdf/1405.0006.pdf

It also includes information about confidence.

user-3cff0d 09 June, 2021, 17:16:30

Also, the same question, but for the process of pupil fitting

user-d80cf5 09 June, 2021, 18:41:11

Hi, sorry to bother you again. After I moved the Pupil Cam1 ID0 (composite parent) to libusbK, the program starts and the two eye windows work normally. But I just found that the calibration function doesn't work and shows the message "Calibration requires world capture video input.", like in the following picture. Since I am using the HTC Vive add-on, should I also move the HTC Vive (composite parent) to libusbK? Or do you have some clues on the possible reasons for this problem? Thanks!

Chat image

papr 10 June, 2021, 06:59:13

You should check out our hmd-eyes project. It is able to stream the Unity main camera to Capture as a scene camera replacement.

user-7daa32 09 June, 2021, 18:43:08

Hello

What should the gaze points before a fixation begins be called?

If there is a gaze at frame 7 followed by a fixation over the frame range 8-12, what's the name for the gaze points at frame 7?

papr 10 June, 2021, 07:33:22

There are various types of eye movements. If a gaze point does not belong to a fixation, it is most likely a saccade, but it might be part of another eye movement type, too.

user-7daa32 10 June, 2021, 11:49:23

Thanks... That's my explanation too. Thanks for clarification

papr 10 June, 2021, 07:34:54

They are the same timestamps but with different clock starting points. The UI only displays the timestamps up to a 3-digit precision. That should be sufficient for your use case.

user-7daa32 10 June, 2021, 11:49:34

Thanks

nmt 10 June, 2021, 07:47:12

You probably need to adjust the focus of the world camera lens to suit the camera-to-screen distance in your experiment

user-7daa32 10 June, 2021, 11:49:57

Got this answer here before.... Thanks for your patience

user-44bf4f 10 June, 2021, 09:03:02

Hi, I am currently trying to run Pupil Capture from source on Linux. As far as I'm aware, I have installed all required dependencies; however, when trying to run main.py, I get an error that the OpenGL module wasn't found. I already tried just installing the module with pip, however that fails as pip claims to be unable to find the module - any idea what the issue may be / how to fix it?

papr 10 June, 2021, 09:33:01

What linux and Python version are you using?

papr 10 June, 2021, 09:04:42

The module to be installed is pip install PyOpenGL.

user-44bf4f 10 June, 2021, 09:29:43

Okay, thanks - I think I found the issue now; apparently the requirements.txt script did not run through properly and I didn't realise that. The script apparently errors out when trying to install pupil-apriltags==1.0.4 and pye3d==0.0.7, both times because the module six is not found...

papr 10 June, 2021, 09:32:40

That is weird. Six should be available via pypi https://pypi.org/project/six/#description

user-44bf4f 10 June, 2021, 09:40:11

hm, six is installed, OS is Ubuntu 20.04 based, python version is 3.8.5

papr 10 June, 2021, 09:40:31

Could you provide the full error message then?

user-44bf4f 10 June, 2021, 09:40:42

sure

user-44bf4f 10 June, 2021, 09:41:38

message.txt

papr 10 June, 2021, 09:46:08

Please 1. git clone --recursive https://github.com/pupil-labs/apriltags.git 2. Add "six" to the requires list in pyproject.toml 3. run pip install <path to repo>

user-44bf4f 10 June, 2021, 09:42:05

That is the part of the script output where it tries to install apriltags

user-44bf4f 10 June, 2021, 09:42:31

the error message for pye3d is the same one basically as far as i can tell

user-44bf4f 10 June, 2021, 09:50:21

That did it 👍
Should I do the same for pye3d?

papr 10 June, 2021, 09:50:46

yes, please

user-44bf4f 10 June, 2021, 10:00:05

That worked, thank you very much! 😄

user-ae4005 10 June, 2021, 11:27:04

Hi there, I ran into a problem with the pupil detection today (this hasn't happened before and I have collected about 20 participants already). The eye tracker was not able to detect the pupils at all and I'm not sure what the reason is for this. This participant had green eyes, could this be a problem? Is there a way to adjust the settings to still detect the pupils in a situation like this?

user-d80cf5 10 June, 2021, 12:22:44

Thanks! I will check it. But just in case, I would like to make sure my drivers are now installed correctly, since the name "pupil Cam1 ID0 --vendor\Pupil" looks weird and there are only two pupil cams under libusbK (plus hidden ones). 🤔 Could you please kindly check it out? Here you can see the picture:

Chat image

papr 10 June, 2021, 12:24:50

The add-on only has two cameras. Even though the naming looks weird, if you can see the video stream in Capture, everything is working fine.

user-7daa32 10 June, 2021, 14:10:50

Why are the width and height of the surfaces created not really adjustable?

papr 10 June, 2021, 14:11:18

Please elaborate on what you are referring to.

user-7daa32 10 June, 2021, 14:15:38

Chat image

papr 10 June, 2021, 14:18:45

And the issue is that you cannot click into the text field? In your previous screenshot, the name field was not filled. I think the software requires you to set a name. Therefore, switching to the width and height field is not possible.

papr 10 June, 2021, 14:19:24

Please describe the behaviour that you are encountering and how it differs from the expected behavior.

user-7daa32 10 June, 2021, 14:41:41

There is a name. When I changed the 1.0 in both boxes to 2.0, for instance, I saw no change. Meaning putting in values there changes nothing.

papr 10 June, 2021, 14:42:59

The size is used to define the internal AOI coordinate system. It makes a difference if you look at the surface-mapped gaze data and the heatmap.

user-7daa32 10 June, 2021, 14:46:55

I will check what you mean by internal AOI

user-7daa32 10 June, 2021, 14:46:18

I have never seen a clear heatmap except on the surface created. Heatmap outputs are blurry

papr 10 June, 2021, 14:44:14

To adjust the surface definition, click the edit surface button (top right of the surface) and move the corners of your surface into the desired position

user-7daa32 10 June, 2021, 14:45:14

I can adjust it that way. I just want all the surfaces' sizes to be precisely the same.

papr 10 June, 2021, 14:47:48

The "blurryness" is a function of the width/height and the heatmap smoothness. Larger width/height -> more heatmap resolution.

user-7daa32 10 June, 2021, 14:50:21

Oooh, so those are a function of the size of the AOI. Thanks. I have to edit and do that manually, as I have been doing.

user-908b50 10 June, 2021, 22:12:49

Also, how do you automatically annotate the data? I have been doing it manually.

user-597293 12 June, 2021, 13:42:48

Not sure what you mean by automatic, but this can be programmatically done with the network API. Example code here: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py

user-e22b51 11 June, 2021, 00:08:10

Hi, I get this error when I am trying to do the camera intrinsics estimation:

Traceback (most recent call last):
  File "launchables\world.py", line 751, in world
  File "camera_intrinsics_estimation.py", line 323, in recent_events
cv2.error: OpenCV(3.4.13) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-cg3xbgmk\opencv\modules\core\src\matmul.dispatch.cpp:439: error: (-215:Assertion failed) scn == m.cols || scn + 1 == m.cols in function 'cv::transform'

user-e22b51 11 June, 2021, 00:09:02

The world and eye processes shut down and Capture is not responsive.

user-4bad8e 11 June, 2021, 03:00:42

Hello. Is it possible to estimate the depth of the user's viewing point in real space from the world camera using gaze_point_3d_z in fixations.csv?

user-7890df 11 June, 2021, 11:03:16

Hi @papr, since upgrading to version 3.x, gaze point positions seem to be MUCH noisier than in v2.6; has anyone else noticed this? I confirmed this by recording with v2.6 Capture and creating gaze mappings with both 2.6 and 3.1 (each with its own post-hoc calibration and new gaze mapping). You can clearly see during visualization that the gaze position Player 3.1 created from the same recording is jumping around all over the place, while 2.6 is quite stable. This will become a problem for anyone who records with 3.1 and can't go back to use the 2.6 algorithm (theoretically this should be possible with the video alone, is that correct?)

papr 11 June, 2021, 11:08:09

Mmh, sounds like your 3d model is less stable than before. Could you share the 3.x recording with [email removed]? I could give more specific feedback that way.

user-7890df 11 June, 2021, 11:15:16

Sure, thank you. To be specific, the recording was done in 2.6 (since 3.1 recordings aren't backwards compatible), and it was the 3.1 remapping that was the problem. Recording in 3.1 and mapping still has the problem. I will share both the original 2.6 mapping and 3.1 mapping folders. THANKS!

papr 11 June, 2021, 11:29:02

Thank you for sharing the recording. I will come back to you via email early next week.

user-7daa32 11 June, 2021, 18:43:17

Hi

Is it possible to visualize the overall scanpath of the task? A static image of the order of fixations?

nmt 14 June, 2021, 07:08:04

Hi @user-7daa32. You can add a 'Vis Polyline' in Pupil Player that visualises the scan path of gaze points for the last 3.0 secs. Alternatively, if you used surface tracking in your experiment, you can visualise the entire scan path on surfaces with some coding. Check out this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/03_visualize_scan_path_on_surface.ipynb
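
For reference, a minimal static scan-path sketch along the lines of that tutorial, assuming the surface-tracker fixation export for a surface named "Monitor" (adjust the file name and columns to your own export):

import pandas as pd
import matplotlib.pyplot as plt

fix = pd.read_csv("exports/000/surfaces/fixations_on_surface_Monitor.csv")
fix = (
    fix[fix["on_surf"].astype(str) == "True"]
    .drop_duplicates(subset="fixation_id")
    .sort_values("start_timestamp")
)

# Plot fixations in temporal order: connected by a line, sized by duration.
plt.plot(fix["norm_pos_x"], fix["norm_pos_y"], "-", color="gray", linewidth=0.8)
plt.scatter(fix["norm_pos_x"], fix["norm_pos_y"], s=fix["duration"], alpha=0.6)
for order, (_, row) in enumerate(fix.iterrows(), start=1):
    plt.annotate(str(order), (row["norm_pos_x"], row["norm_pos_y"]))
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.title("Fixation scan path on surface")
plt.savefig("scan_path.png", dpi=150)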

user-b772cc 12 June, 2021, 07:33:48

@papr, is Pupil Core monocular 200Hz? Thanks.

nmt 14 June, 2021, 07:10:21

Hi @user-b772cc. Yes, Pupil Core is able to sample up to 200 Hz (monocular).

user-2996ad 12 June, 2021, 10:14:29

Hi all, sorry for asking again about a topic which has been discussed already, but I am trying to use the latest Pupil Capture (3.3.0), on Windows 10, with my Pupil Core device (high-speed 120 Hz world camera + binocular 200 Hz eye cameras). It seems there is no way to have the device recognized by the PC. I have also activated the Administrator account, but the device is not listed under "libUSBK Usb Devices": it is rather listed under "USB (Universal Serial Bus) controllers", but with the following errors (I tried to translate them from Italian): - unknown USB device (failed port setting) - unknown USB device (failed request device descriptor)

When I tried in the lab, at some point - after installing the software as administrator - it worked for a few seconds (and the device was listed under libUSBK Usb Devices), but then it stopped. I read previous posts, and one of the suggestions (May 15th 2021) was to install an old version of Pupil Capture: is this the only possible solution? In that case, which version is suggested?

Thank you very much, best regards nicola

nmt 14 June, 2021, 07:22:02

Hi @user-2996ad. In the first instance, please follow the instructions in this message to debug your driver installation: https://discord.com/channels/285728493612957698/285728493612957698/842376938981949440

user-6e6da9 12 June, 2021, 15:15:39

Probably a simple question, but I just can't find a solution: I'm using the network API (and already had to use the suicidal snail method and restart the subscription, since the data comes in faster than I can work through it).

How do I map the gaze to the world camera? I have normalized coordinates, but those are for the world window, and I can't find a way to get the world window as a topic, so my Python script only has the world_frame, and the normalized position does not correspond to its coordinates.

user-6e6da9 12 June, 2021, 15:47:46

Maybe in other words: is there any way to get the world window coordinates for the world frame from the network API over ZMQ?

Big thank you if somebody has an idea.

nmt 14 June, 2021, 07:34:48

Hi @user-6e6da9. If you subscribe to the gaze datum, norm_pos is gaze in the world camera image frame in normalised coordinates. If you want the 3d point that the wearer looks at in the world camera coordinate system, look at gaze_point_3d (https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format). If you haven’t already, check out the Pupil Helpers repository for examples of receiving data in real-time: https://github.com/pupil-labs/pupil-helpers/tree/master/python

papr 14 June, 2021, 07:58:16

@user-6e6da9 to extend the response above: Should you be interested in the 2d pixel location within the scene frame, you can use this equation to convert from normalized to pixel locations:

x_pixel = x_norm * frame.width
y_pixel = (1.0 - y_norm) * frame.height

user-6e6da9 14 June, 2021, 09:00:55

It would be correct if I could calibrate Pupil Capture so that the calibrated world window is the same size as the world_frame. As it stands, it's always a rectangle (or shape) within the world_frame, and the norm_position refers to that small part of the world_frame. That's what I tried. The problem I have: it corresponds to the calibrated window in the world frame, but I don't know where that window is. The norm_pos corresponds to the position in this window for me (using the norm_pos of base_data). If I look at the corner of the calibrated window, I already get (1,0), but that is not the corner of the whole world_frame.

If I look at the 3D data, it is even more off. gaze_normal_3d is almost at the edge of the screen, and the norm_pos of gaze 3D has a negative normalized position. None of those would correspond to the correct pixel in the world_frame.

Gaze: {'eye_center_3d': [-417.07332897871754, 75.0570080555142, -15.841330035112268], ' gaze_normal_3d': [0.9893230944326201, 0.050810551746338056, 0.1365946655382179], 'gaze_point_3d': [77.58821823759251, 100.46228392868323, 52.456002733996684], 'norm_pos': [0.7362048359425144, -0.08089865886879655], 'topic': 'gaze.3d.0.', ... 'base_data': [{... 'norm_pos': [0.7015782696975317, 0.034688731852176025]...]}

Chat image

user-6e6da9 14 June, 2021, 09:02:03

That's what I have done, but since the norm_pos corresponds to the calibrated frame in the picture, and not the whole world frame, it yields a wrong position.

papr 14 June, 2021, 09:07:45

"since the norm_pos corresponds to the calibrated frame in the picture" - That is not correct. Gaze is mapped within the whole scene camera field of view. See https://docs.pupil-labs.com/core/terminology/#coordinate-system for details. The green bounding box does represent the "calibration area", true, but it refers to the area where the gaze is most accurate. It does not define a new coordinate system.

From the link above, you will see that the 3d coordinate system has its origin in the center of the camera, not in the bottom left.

"using the norm_pos of base_data" - This sounds like you are using the pupil norm_pos and not the gaze norm_pos. That would be incorrect, since pupil data is relative to the eye cameras and not to the scene camera. https://docs.pupil-labs.com/core/terminology/#pupil-positions

user-6e3d0f 14 June, 2021, 09:03:47

Is there a way to only choose one eye for the results in pupil player? So only results from the left eye should be exported?

papr 14 June, 2021, 09:09:32

If you are just talking about the export, I suggest filtering the data post-hoc. If you want to recalculate everything, including gaze, I suggest renaming the right eye video file and running post-hoc pupil detection and calibration. This will generate monocular pupil and gaze data from the non-modified video file.

user-6e6da9 14 June, 2021, 09:13:19

Hm, ok. I tried it with the norm_pos from gaze3d and got wrong positions, but that seems to have been because of the calibration or a delay in receiving the data... The norm_pos of the base data seemed to be more accurate.

Is there a reason why a normalized position between 0 and 1 can deliver negative numbers? I know floating point can be tricky with fine precision, but this should not really happen if it detected a gaze... Is the correct way in this case to just take the absolute value, or to map if (x < 0) x = 0?

Thanks a lot for the answer 🙂

papr 14 June, 2021, 09:17:54

"norm_pos from gaze3d" - Could you share a short code example in 💻 software-dev showing how you access and visualize the data exactly? Maybe I can spot the issue.

"Is there a reason why a normalized position between 0 and 1 can deliver negative numbers?" - Yes, gaze can lie outside of the scene camera's field of view. In these cases, gaze coordinates can be negative or exceed 1. Most likely, these come from inaccurate pupil detections, though. I suggest filtering by confidence and discarding gaze points that are outside of the [0, 1] range.
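
A minimal sketch of that filtering for gaze datums received over the network API (the dictionary keys follow the gaze datum format linked above; the 0.6 threshold is just an example):

def usable_gaze(gaze_datum, min_confidence=0.6):
    # Keep only confident gaze that falls inside the scene camera image.
    x, y = gaze_datum["norm_pos"]
    return (
        gaze_datum["confidence"] >= min_confidence
        and 0.0 <= x <= 1.0
        and 0.0 <= y <= 1.0
    )

def to_pixels(gaze_datum, frame_width, frame_height):
    # Convert a normalized gaze position to scene-image pixel coordinates.
    x, y = gaze_datum["norm_pos"]
    return int(x * frame_width), int((1.0 - y) * frame_height)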

user-6e3d0f 14 June, 2021, 09:13:32

Does the filtering of the data work in the fixations_on_surface csv? Or what do you mean exactly by filtering post-hoc?

papr 14 June, 2021, 09:19:25

If you only used the pupil_positions.csv, you could simply delete all entries for eye0. But in all other cases, you need to run the post-hoc processes again after renaming the eye video file. (Please also rename the according timestamps file)

user-6e3d0f 14 June, 2021, 09:29:54

So I'll just rename all eye0 files (the video, the timestamps and the lookup) and run post-hoc. Thanks @papr

user-6e6da9 14 June, 2021, 09:30:28

Maybe in a few days. Essentially, I'm using the method described in recv_world_video_frames_with_visualization.py, but the opening of the frames with cv2.imshow takes too long, and I already had to implement the suicidal snail pattern to close and reopen the connection.

Regarding the negative coordinates... that was a misunderstanding I had; I assumed that if a gaze is outside of the world_frame, then it does not get recognized. Knowing that positions <0 and >1 are possible explains why I was so confused, since the documentation says it returns a normalized position between [0,1]. With that knowledge it makes more sense; when you add a suboptimal calibration on top, values such as 1.4 and -0.3 are rather confusing without it...

papr 14 June, 2021, 09:37:22

The recv_world_video_frames_with_visualization.py should not need the "suicidal snail pattern". It is built such that it processes the received messages first, before drawing the latest images. You should follow this pattern with the gaze data as well.

user-6e6da9 14 June, 2021, 09:48:21

I'm receiving it first and only drawing frames when receiving gaze data, but the drawing/opening still takes long enough (Windows PyCharm or Ubuntu PyCharm). For now I'm more than happy with the help you gave me, and I will report back if I find what's causing the delay, in case somebody has the same problem 👍🏻

user-7daa32 14 June, 2021, 10:28:48

Thanks. After validation, the calibration area marked with a green line goes away and another, smaller one is formed. Does it mean that only gazes within the validation box are accurate?

Exported data such as fixations are calculated as an average of both eyes, right?

papr 14 June, 2021, 10:31:34

Does it mean that only gazes within the validation box are accurate? No. It just means your validation area was smaller than your calibration area. Validation does not change the accuracy of your calibration.

Fixations are calculated based on gaze. Gaze can be both monocular and binocular. The system tries to maximize the number of binocular mappings given these conditions:
- both pupil datums from the left and right eye have high confidence
- both pupil datums are close in time
In all other cases, gaze is mapped monocularly.
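
If you need to tell these apart in an export, here is a hedged sketch (assuming the base_data column in gaze_positions.csv lists one space-separated entry per contributing pupil datum; the export path is hypothetical):

import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # hypothetical export path

# base_data lists the pupil datums used for each gaze point, e.g. "ts-0 ts-1"
n_pupil_datums = gaze["base_data"].str.split().apply(len)
binocular = gaze[n_pupil_datums == 2]
monocular = gaze[n_pupil_datums == 1]
print(len(binocular), "binocular vs", len(monocular), "monocular gaze points")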

user-2996ad 14 June, 2021, 14:32:28

lik

user-344718 14 June, 2021, 20:17:26

Hello, I have a pair of Pupil Labs Core glasses for sale; the product is new.

user-344718 14 June, 2021, 20:17:37

Chat image

user-344718 14 June, 2021, 20:20:53

if you are interested please contact me

user-344718 14 June, 2021, 20:24:45

[email removed]

user-597293 15 June, 2021, 08:03:17

Hi! 👋 I’ve had two participants with intense blue eyes, where the 3D model flickers constantly (between 0.8-1.0). Any idea what causes this? The pupil seems to be well lit and is centered within the camera's FOV, similar to all other participants I’ve had earlier.

papr 15 June, 2021, 08:06:48

As long as 2d pupil detection works well, and there are sufficiently different gaze directions, the eye model should be stable. The iris color should not play a role regarding model fitness. If you have trouble with model stability you can always freeze the 3d eye model to enforce a single model.

Could you provide an example recording with data@pupil-labs.com ? This would allow us to give more specific feedback.

user-597293 15 June, 2021, 08:57:54

Sent you an email now.

user-42a6f7 15 June, 2021, 13:00:45

Hey, I have a question regarding the Core eye tracker. I ran an experiment and recorded for about 3 minutes. When I open the sheet that records the 3d diameter in mm during that recording, I get values ranging from 0.01 mm to 3.5 mm. The problem is that I know some of these numbers are nonsensical (0.01, for example), so I don't know which numbers I should include as real measurements. Also keep in mind that during the experiment the pupil wasn't always exactly within the green circle of the eye tracker, so it wasn't always taking accurate measurements. So how can I know which numbers inside the pupil_positions.csv file are real?

papr 15 June, 2021, 13:01:43

Hi, please see https://docs.pupil-labs.com/core/#_3-check-pupil-detection

user-42a6f7 15 June, 2021, 13:02:39

I have read it before

user-42a6f7 15 June, 2021, 13:02:49

But it does not answer my question

papr 15 June, 2021, 13:03:13

Slowly move your eyes around until the eye model (green circle) adjusts to fit your eyeball. If everything is set up correctly, the green circle should no longer move around.

user-42a6f7 15 June, 2021, 13:04:59

I know that. What I am saying is that I have already taken the recordings, and I don't know at which points in time the green circle was positioned correctly (because the eye was moving).

papr 15 June, 2021, 13:05:47

Once you have a stable model, you will also see more realistic diameter values in the range of 2-9 mm. Of course, it is not always possible to estimate the diameter reliably, e.g. during blinks. It is recommended to discard entries with low confidence, e.g. lower than 0.6.

user-42a6f7 15 June, 2021, 13:06:42

So I should discard the values less than 0.6 mm

user-42a6f7 15 June, 2021, 13:06:45

Right?

papr 15 June, 2021, 13:08:25

The export has a confidence column. Entries with a confidence lower than 0.6 should be discarded. If you are interested in further processing options, I recommend having a look at: Kret, M. E., & Sjak-Shie, E. E. (2019). Preprocessing pupil size data: Guidelines and code. Behavior Research Methods, 51(3), 1336–1342. https://doi.org/10.3758/s13428-018-1075-y
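
As a minimal sketch of that first filtering step on an exported pupil_positions.csv (the path and the 3d-method check are assumptions; column names follow the raw data exporter):

import pandas as pd

pp = pd.read_csv("exports/000/pupil_positions.csv")  # hypothetical export path

# keep the 3d detector output and drop low-confidence samples
pp = pp[pp["method"].str.contains("3d")]
pp = pp[pp["confidence"] >= 0.6]

diameter_mm = pp["diameter_3d"]  # pupil diameter estimate in millimetres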

user-7daa32 16 June, 2021, 16:10:30

If one is interested in saccade timing, do we need to worry about low confidence values if the confidence during focused attention (fixations) is higher?

papr 15 June, 2021, 13:10:45

Also, it is good practice to freeze your model once it is fit well. If you have already finished all your recordings, you can still try running the post-hoc pupil detection in Player. Start the process until the model fits the eye, pause the processing, freeze the models and hit restart. It will apply the frozen model to the whole recording.

papr 15 June, 2021, 14:35:14

Not necessarily, just the current pupil detection. As I mentioned here, you can run post-hoc pupil detection to improve the detection. If you want, you can share an example recording with data@pupil-labs.com and I can let you know via email what can be improved specifically based on the recorded eye videos.

user-42a6f7 15 June, 2021, 13:13:10

Awesome man, i will try that

user-42a6f7 15 June, 2021, 13:13:27

Thanks @papr , you are a very helpful guy

user-42a6f7 15 June, 2021, 14:27:24

Are the 3d pupil diameter range values in Pupil Player accurate?

papr 15 June, 2021, 14:28:05

I am not sure what you are referring to. Are you referring to the values displayed in the timeline?

user-42a6f7 15 June, 2021, 14:27:33

Can i take it onto account?

user-42a6f7 15 June, 2021, 14:29:11

Chat image

papr 15 June, 2021, 14:32:42

That range represents the recorded/detected data with some simple outlier removal. It does not do any sanity check regarding the validity. The resulting diameter values can only be accurate if the 3d model is accurate. Given that the displayed range in your photo is way too small ("normal" pupil physiology has a range of 2-9 mm), it is likely that there is no valid model and diameter data in this recording (given the current detection)

user-42a6f7 15 June, 2021, 14:29:12

I am uploading a photo of it

user-42a6f7 15 June, 2021, 14:33:34

So is this recording invalid?

user-42a6f7 15 June, 2021, 14:33:52

Is there a way to get accurate estimations from this recording?

user-42a6f7 15 June, 2021, 14:36:43

So what should I do after I run the post-hoc pupil detection?

papr 15 June, 2021, 14:38:22

The improvements need to be done during the post-hoc detection. This includes adjusting 2d pupil detection parameters if 2d detection is bad, etc. It is difficult to say what needs to be done specifically without having access to the eye videos.

user-42a6f7 15 June, 2021, 14:36:59

I clicked on "redirect"

user-42a6f7 15 June, 2021, 14:39:20

Okay

user-42a6f7 15 June, 2021, 14:39:35

So what should I send exactly to that email?

papr 15 June, 2021, 14:41:07

Ideally, you would share a raw Pupil Capture recording (i.e. the folder that you drop on top of Player to open a recording).

user-42a6f7 15 June, 2021, 14:41:36

Great

user-42a6f7 15 June, 2021, 14:41:40

I will send it now

user-42a6f7 15 June, 2021, 14:43:52

What should i write in the email subject?

papr 15 June, 2021, 14:44:30

No need for a specific subject. I will recognize your email 🙂

user-42a6f7 15 June, 2021, 14:44:51

Great man😃

user-42a6f7 15 June, 2021, 14:45:30

I sent you the email

user-42a6f7 15 June, 2021, 14:45:48

It is from [email removed]

user-42a6f7 15 June, 2021, 14:50:32

Did I send you a wrong file?

papr 15 June, 2021, 14:51:58

You only shared the export folder. I would need the folder that includes the info.player.json file

user-42a6f7 15 June, 2021, 14:52:41

Should I send the whole folder?

papr 15 June, 2021, 14:52:56

Yes, the whole folder please

user-42a6f7 15 June, 2021, 14:52:47

Or just that file?

user-42a6f7 15 June, 2021, 14:53:00

Okay

user-42a6f7 15 June, 2021, 14:53:59

But the file is 600 MB

user-42a6f7 15 June, 2021, 14:54:12

It is so big, is that okay with you?

papr 15 June, 2021, 14:55:46

Yes, that size is to be expected and is no problem.

user-42a6f7 15 June, 2021, 14:55:57

Okay

user-42a6f7 15 June, 2021, 14:56:43

I will send it, but I will need to do the same thing with about 10 different recordings.

user-42a6f7 15 June, 2021, 14:57:14

So it would be better if you could tell me what to do.

papr 15 June, 2021, 14:58:02

Yes, of course. If all recordings have the same issue, it should be possible to find a solution from a single recording.

user-42a6f7 15 June, 2021, 14:57:41

Because adjusting those 10 recordings would take your whole day.

user-42a6f7 15 June, 2021, 14:58:31

And then duplicate it?

papr 15 June, 2021, 15:00:45

The "solution" will probably be a series of instructions that you can apply in Player to all of your recordings. But I won't be able to tell you until I have seen an example recording. As an example, it could also be that pupil detection is impossible due to bad lighting of the video.

user-42a6f7 15 June, 2021, 15:01:09

Okay

user-42a6f7 15 June, 2021, 15:01:31

I am uploading the file

user-42a6f7 15 June, 2021, 16:18:09

Hey @papr

user-42a6f7 15 June, 2021, 16:18:25

The file just finished uploading and the message was sent

papr 15 June, 2021, 16:30:33

If you turn on the eye video overlay, you will see the issue. The subject is wearing glasses, causing strong reflections in the eye camera. The pupil is barely visible at times. I doubt that it will be possible to get good pupil data out of this. 😕

Also, usually if you are interested in measuring the psychosensory pupil response, you need to control the lighting of the scenery very carefully. This is not the case in your recording. So even if the pupil detection worked, it would be difficult to interpret the data in any mental effort context. 😕

user-42a6f7 15 June, 2021, 16:32:40

Okaaaay

user-42a6f7 15 June, 2021, 16:33:05

I have another video for someone without glasses

user-42a6f7 15 June, 2021, 16:33:14

Can I send it to you?

papr 15 June, 2021, 16:33:23

Sure. 🙂

user-42a6f7 15 June, 2021, 16:33:29

Awesome

user-42a6f7 15 June, 2021, 16:35:20

So is it hopeless for that video?

papr 15 June, 2021, 16:36:03

I will check how much I can get out of it tomorrow.

user-42a6f7 15 June, 2021, 18:39:06

Great , man

user-42a6f7 15 June, 2021, 18:39:23

I sent you the other file

user-42a6f7 15 June, 2021, 18:39:56

And i reallllly appreciate your help @papr 😃

user-7c46e8 16 June, 2021, 09:11:50

Hi, I have a few questions regarding pupil size measurements:
1) How much of a difference in pupil size (in mm) between eyes can be considered normal?
2) I noticed quite large differences in pupil size measurements between offline & online analysis. Which of the two methods tends to be more reliable, and is there a way to check which method performs more accurately?
3) I have some recordings with pupil size measurements frequently below 2 mm (which I thought was the natural lower bound of pupil size), even when detection confidence is high. Are these measurements valid?

user-53874e 16 June, 2021, 10:04:55

Hi, is there anywhere I can download the Pupil Mobile app (APK)? I cannot install it through Google Play.

papr 16 June, 2021, 10:25:03

I do not think so. 😕

user-597293 16 June, 2021, 10:06:19

Mail sent now ✉️

papr 16 June, 2021, 10:06:30

Thanks!

user-53874e 16 June, 2021, 10:20:38

How can I subscribe to the remote host from Pupil Capture?

Chat image

papr 16 June, 2021, 10:23:22

In newer versions of Capture, the Pupil Mobile device is listed in the Video Source menu -> Activate Device selector. It is only listed if a device is correctly recognized in the network.

user-53874e 16 June, 2021, 10:21:20

@papr I don't find any "Capture Selection" option in the Pupil Capture interface.

user-53874e 16 June, 2021, 10:24:43

@papr Thanks!

user-53874e 16 June, 2021, 10:29:01

There's something wrong with my macOS version of Pupil Capture. The headset is connected via USB-C; it says a device was found, but I get no video signal from either the world camera or the eye cameras.

Chat image

papr 16 June, 2021, 10:30:34

mmh. Could you try selecting the device explicitly from the Active Device selector? And afterward, share the ~/pupil_capture_settings/capture.log file?

user-53874e 16 June, 2021, 10:31:38

If I select "Local USB" option, I got the following error message

Chat image

user-53874e 16 June, 2021, 10:31:50

Chat image

papr 16 June, 2021, 10:32:32

Is it possible that you started Capture multiple times by accident? Or have a different program accessing the cameras? If not, please try restarting and trying again.

user-53874e 16 June, 2021, 10:35:35

@papr I've restarted it and nothing changed. Here's the log file

capture.log

user-53874e 16 June, 2021, 11:27:36

2021-06-16 18:33:45,207 - world - [WARNING] launchables.world: Process started.
2021-06-16 18:33:45,208 - world - [DEBUG] ndsi.network: Dropping <pyre.pyre_event ENTER from bbff2452a3cc44629a11d4c351cc7baa>
2021-06-16 18:33:45,208 - world - [DEBUG] ndsi.network: Dropping <pyre.pyre_event ENTER from 24cbe3897bcf47b380d2e55f47face19>
2021-06-16 18:33:45,387 - world - [INFO] video_capture.uvc_backend: Found device. Pupil Cam1 ID2.
2021-06-16 18:33:45,394 - world - [DEBUG] uvc: Found device that mached uid:'20:4'
2021-06-16 18:33:51,113 - world - [INFO] video_capture.uvc_backend: Found device. Pupil Cam1 ID2.
2021-06-16 18:33:51,121 - world - [DEBUG] uvc: Found device that mached uid:'20:4'
2021-06-16 18:33:56,588 - world - [DEBUG] uvc: Found device that mached uid:'20:3'
2021-06-16 18:33:56,589 - world - [ERROR] video_capture.uvc_backend: The selected camera is already in use or blocked.

papr 16 June, 2021, 11:32:05

It is possible that there is an issue with the usbc-to-usbc cable. There are different types of usb-c cables and some of them are limited regarding what data they transfer. 😕 Do you have a different cable that you can try?

user-53874e 16 June, 2021, 13:45:09

I've tried every cable I can find and still cannot get a video signal. But now I've installed the Mobile app, and somehow Pupil Capture can read video from the remote host. 🙂

user-53874e 16 June, 2021, 11:54:09

OK, I'll try other cables. many thanks 🙂

user-fcbfcb 16 June, 2021, 20:39:56

Is there any plugin available for the Pupil Core eye tracker? As far as I know, Tobii has developed an SDK for Unity and Unreal Engine. If I want to use the Pupil Core eye tracker in Unreal Engine, how can I do so?

wrp 17 June, 2021, 04:57:36

We do not officially have an Unreal Engine plugin - we target Unity 3d and have reference implementations in Python for calibration and interface at: https://github.com/pupil-labs/hmd-eyes/. Pupil Labs VR add-ons are also supported by WorldViz.

There were some community efforts a few years back (e.g. https://github.com/SysOverdrive/UPupilLabsVR ) but these are likely deprecated based on changes we have made to Pupil Core - they may still serve as a useful starting point, though. There is also this student thesis: https://www.cg.tuwien.ac.at/research/publications/2019/huerbe-2019-BA/

user-a98526 17 June, 2021, 03:51:11

Hi @papr, I am trying to get images from Pupil Capture and process them, but the processing speed of my code is less than 30 Hz. The sample code has the following description: *# The subscription socket receives data in the background and queues it for # processing. Once the queue is full, it will stop receiving data until the # queue is being processed. In other words, the code for processing the queue # needs to be faster than the incoming data.* I would like to ask if there is a way to get only the latest image frame.
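
One possible pattern (a sketch only, assuming the usual ZMQ SUB setup from the pupil-helpers examples; the port shown is hypothetical and should be queried from Pupil Remote): drain the subscription socket with non-blocking receives and keep only the newest message, so slow processing always works on the latest frame.

import zmq
import msgpack

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:50021")  # hypothetical SUB port; ask Pupil Remote ("SUB_PORT") for the real one
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

def latest_frame():
    """Drain everything queued on the socket and return only the newest frame message."""
    newest = None
    while True:
        try:
            topic, payload, *image_buffers = sub.recv_multipart(flags=zmq.NOBLOCK)
        except zmq.Again:
            break  # queue is empty
        newest = (msgpack.unpackb(payload), image_buffers)
    return newest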

user-ad7e10 17 June, 2021, 11:28:56

I think my Pupil Core ML-016 has a small problem: the main body and the eye camera are loosely connected, which sometimes causes an unstable video stream on my computer. After using it for a week, I can no longer receive any data from it, so I opened the connection between the eye camera and the main body and found two wires that seem to be disconnected; the white connector is loose and can easily be pulled out. I wonder what I can do to fix it, or could you please guide me on how to get such a part myself (I searched shopping websites but couldn't find ML-016)?

Chat image

papr 17 June, 2021, 11:29:41

Please contact info@pupil-labs.com in this regard

user-a1611a 17 June, 2021, 17:40:19

Hi, I am new here and a rookie at that. I just got a Pupil Core from a senior colleague, and I have been able to do the setup. However, I am confused about how to approach my specific problem, which is to use Pupil Core to collect data on object locations. Say I am holding an object: I want to be able to locate the object I am looking at, hence generating a series of images + labels. So essentially, I want to use Pupil Core to generate data for a model I am about to train. I want to be able to use this data for multimodal fusion. Any pointers would really be appreciated. Thank you.

papr 17 June, 2021, 18:04:06

Hi, welcome! Check out this community fork https://github.com/jesseweisberg/pupil I think it does exactly what you are trying to do

user-a1611a 17 June, 2021, 18:07:59

Thank you I will check it out now

user-6af96b 18 June, 2021, 09:12:56

Hello folks, I have a question and I would be very thankful if you could answer it for me: when recording with the glasses, it's nice to use the "fisheye" lens because you have more information in the picture. I need to undistort the image for object detection. But in theory, if you undistort the image, the gaze data is not correct anymore, right? It seems like a very common problem, so my question is whether there is a built-in tool to adapt the gaze data to the new undistorted image.

Chat image

papr 18 June, 2021, 09:14:15

You can use the iMotions exporter to get undistorted video and gaze 🙂

user-6af96b 18 June, 2021, 09:14:54

oh this sounds great thank you very much, I will check this out ! 🙂 🙂

user-7c46e8 18 June, 2021, 10:30:38

Hi all I have another question, does gaze calibration affect the accuracy of the pupil size measurements?

papr 18 June, 2021, 10:31:18

The 3d model stability does, though

papr 18 June, 2021, 10:31:01

No, it does not

user-7c46e8 18 June, 2021, 10:33:44

Thanks! The 3d model stability can be optimized by adjusting the 2D detector settings and giving the model time to stabilize while the participant moves their eyes around, right?

papr 18 June, 2021, 11:16:18

correct

user-597293 18 June, 2021, 12:02:07

Is it possible to recalibrate to monocular calibration post-hoc if one eye generally yields lower confidence?

papr 18 June, 2021, 12:07:42

yes, by renaming the video and timestamp file of that eye, running posthoc detection and calibration afterward

user-597293 18 June, 2021, 12:12:37

Okay, like renaming eye0 files to something else so that it is not “picked up” by pupil player?

papr 18 June, 2021, 12:13:32

Correct
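
A small sketch of that renaming step (the recording path and the exact file names/extensions are assumptions; check which eye0 files your recording actually contains):

from pathlib import Path

rec = Path("path/to/recording")  # hypothetical recording folder

# prefix the eye0 files so Player ignores them during post-hoc detection
for name in ("eye0.mp4", "eye0.mjpeg", "eye0_timestamps.npy", "eye0_lookup.npy"):
    src = rec / name
    if src.exists():
        src.rename(rec / f"disabled_{name}")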

user-07abaf 18 June, 2021, 12:30:25

Hi, in my recent recordings the blue pupil estimation (?) heavily diverges from the actual measured pupil (red). I wonder what factors may be influencing that?

Setting info:
- auto exposure mode (since the illumination changes constantly today)
- in algorithm mode: pupil intensity range: 3, pupil min: 5, pupil max: 50
- frozen 3D model (after doing a rolling-eye test, so that the green eye model adjusts correctly)
- model sensitivity: 0.997
- resolution: 192x192, frame rate: 120

papr 18 June, 2021, 13:48:48

blue and red circles not matching consistently is an indicator that the 3d model was not frozen in the correct place. Feel free to share an example recording with data@pupil-labs.com such that we can give more specific feedback

user-07abaf 18 June, 2021, 12:31:41

oh and using the orange arm extenders

user-07abaf 18 June, 2021, 13:53:28

will do!

user-a1611a 18 June, 2021, 13:55:12

Hi @papr, thanks for your response yesterday. I just found out my Pupil Core has no world (scene) camera. Is that something I can purchase separately? I also didn't see any slot to plug it into, assuming I can buy one separately. Please see the attached picture.

user-a1611a 18 June, 2021, 13:55:21

Chat image

papr 18 June, 2021, 13:57:43

Pupil Core comes with a scene camera by default. It can be configured otherwise before purchasing, see here https://pupil-labs.com/products/core/configure/ I am not sure if the scene camera can be installed post-hoc. Please contact info@pupil-labs.com in this regard.

user-a1611a 18 June, 2021, 13:58:23

Ok thank you

user-aaa87b 19 June, 2021, 10:39:15

Hi there! I’ve installed Pupil Core on Win10 for the first time (I normally use Ubuntu). Everything went quite fine; however, when I start the intrinsics estimation, Capture freezes and I get this message. Any help? Thank you all.

Chat image

papr 19 June, 2021, 16:22:31

Is C:\Users\appveyor your local user directory? Probably not, correct?

user-aaa87b 21 June, 2021, 08:29:17

No, indeed it is not.

user-4bc389 21 June, 2021, 06:23:51

Hi, I did not find these two calibration configuration methods in Capture. Have they been removed?

Chat image

papr 21 June, 2021, 08:06:12

Binocular is used by default. If only one eye camera is available, or the confidence is bad for one eye, Capture falls back to monocular.

papr 21 June, 2021, 08:29:51

Which camera model and resolution did you choose?

user-aaa87b 21 June, 2021, 08:38:56

Eye: Pupil Cam2 ID0 @ Local USB, Framerate 200, Res. 192, 192; World: Pupil Cam1 ID2 @ Local USB, Framerate 60, Res. 1280, 720.

papr 21 June, 2021, 08:40:27

Apologies, I meant distortion model, not camera model. It is a setting in the plugin's menu.

user-aaa87b 21 June, 2021, 11:23:48

Fisheye

user-d6701f 21 June, 2021, 10:01:36

Hey there, I have to work on recordings that were done with Pupil Capture version 0.7.6-2, and I would like to work on them with the new Pupil Player version, if possible. When I tried to import them, Player said "this recording needs a data format update, open once in pupil player v1.1.7". Now my question is: can I download Pupil Player version v1.1.7 somewhere, or is there another way of updating the data format?

papr 21 June, 2021, 11:55:01

Please use the radial model for the 720p resolution

user-aaa87b 22 June, 2021, 08:53:11

Ok, thank you !

user-7daa32 22 June, 2021, 18:45:20

Hello

I know there are manufacturer's values for accuracy and recommendations. Are there best-practice values when using the 2D or 3D pipelines? Or can we just take the average visual-angle accuracy we got from our pilot studies and use it to calculate the stimulus size?

papr 22 June, 2021, 18:47:09

You should use the accuracies reached/measured in your pilot studies. Every experiment is different and it is important to make sure it works as expected.

user-7daa32 22 June, 2021, 18:49:12

Thanks

user-60f500 23 June, 2021, 06:03:37

Hi guys, I am wondering if it is possible to calculate saccades when the recording is done using LSL. In the output .xdf file I have several variables; in fact, I have 22 variables like gaze_point_3d_y, confidence, etc. I suppose these variables are the same as the ones you obtain when you record using Pupil Capture (but I'm not sure about that). Is there a simple way to visualize saccades in the data? Maybe by computing the difference between the center of the screen (or the center of the visual field) and the position of the eye (the gaze position, I suppose)? Thanks for the help.

papr 23 June, 2021, 07:34:24

And I suppose these variables are the same than the one you obtain when you record using pupilCapture That is correct

In general, saccades can happen everywhere in the field of view. Do you have an experiment setup that expects saccades from the center of the screen? I, personally, would start by visualizing the gaze data itself before looking at higher level eye-movement classifications.
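For that first visualization step, here is a rough sketch with pyxdf; the file name, the stream-name match and the channel labels are assumptions about how the Pupil LSL relay was configured:

import pyxdf
import matplotlib.pyplot as plt

streams, _ = pyxdf.load_xdf("recording.xdf")  # hypothetical file name

# find the gaze stream published by the Pupil LSL relay (adapt the name check to your setup)
gaze = next(s for s in streams if "pupil" in s["info"]["name"][0].lower())
channels = gaze["info"]["desc"][0]["channels"][0]["channel"]
labels = [ch["label"][0] for ch in channels]

t = gaze["time_stamps"]
x = gaze["time_series"][:, labels.index("norm_pos_x")]
y = gaze["time_series"][:, labels.index("norm_pos_y")]

plt.plot(t, x, label="norm_pos_x")
plt.plot(t, y, label="norm_pos_y")
plt.xlabel("LSL time [s]")
plt.legend()
plt.show()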

user-d4df96 23 June, 2021, 07:44:20

Hello and help! I was hoping to run an eye tracking study tomorrow with the VR eye tracker (silly, I know). At the moment I have about half an hour; otherwise it's a 2-hour round trip to pick up an HTC Vive Pro. Question: I just ran the demo and installed the recording software, and the demo raycasts are all over the place and not where I am looking. I can't see any errors, but pupil confidence is < 0.8. I am wearing contacts, does that affect it? Any quick tips?

papr 23 June, 2021, 08:01:53

One day of preparation is indeed ambitious. 😅 What type of data are you looking for?

papr 23 June, 2021, 08:01:05

Pupil detection is crucial for a good gaze prediction. I suggest making sure that there are no occlusions or similar issues that interfere with detection. Also, once 2d detection works well for you (blue circle fits the pupil well), look around / roll your eyes to fit the 3d model (green outline; it should fit the eyeball, and the red and blue circles should overlap).

VR setups can be tricky to validate if you are on your own. I suggest making a recording in Capture and playing it back using Pupil Player with the eye video overlay plugin enabled. If you want you can share such a recording with data@pupil-labs.com for more specific feedback.

user-d4df96 23 June, 2021, 07:46:19

I did some OBS screen capture and the pupil seems well detected to me.

user-d6701f 23 June, 2021, 08:04:06

Hi again 🙂 Is there an easy way to update the meta file format of already existing recordings?

papr 23 June, 2021, 08:04:37

Hi! Which type of data are you planning to update?

user-d6701f 23 June, 2021, 08:06:21

What I'd like to do is to analyse blinks with the new Pupil Player using the plugin, but unfortunately I have recordings from Pupil Capture version 0.7.6-2.

papr 23 June, 2021, 08:07:56

oh, that is pretty old 😅 I think you need to open the recording with v1.16 (?) once, before you can open it in the most recent version of Player

user-d6701f 23 June, 2021, 08:07:40

when trying to open the recordings, pupil player said I need to update the data format of the recording

papr 23 June, 2021, 08:08:34

Yeah, read more about it here https://github.com/pupil-labs/pupil/releases/v1.16

user-d6701f 23 June, 2021, 08:08:48

yeah, true 😄 okay, but how do I do that? Can I get this version somewhere?

papr 23 June, 2021, 08:09:07

You can download that version from the link above.

user-d6701f 23 June, 2021, 08:09:20

perfect!!

papr 23 June, 2021, 08:10:21

The upgrade happens automatically when opening the recording in that version. Make sure to make a backup of the recording if you still want to open the recording in v0.7. The upgrade changes the format such that v0.7 can no longer open the recording.

user-d6701f 23 June, 2021, 08:09:46

when downloading it, will it overwrite the version I already have?

papr 23 June, 2021, 08:10:28

The upgrade happens inplace.

user-d6701f 23 June, 2021, 08:11:19

Okay, thanks a lot!!

user-d4df96 23 June, 2021, 08:14:16

@papr thanks! It's raycast data for a brain injury population, looking at cognitive function. I already have things set up with the Pro Eye, but given equipment shortages I thought I'd try to get the Pupil Labs setup to happen. Turns out it will probably take me more than a couple of hours, so I am going to sit down properly and read the instructions. 🙂 For now, I think I'll make do with the round trip.

user-4efbd8 23 June, 2021, 09:55:19

Hello everyone. I'm still working on installing/implementing pupil-labs on my Win10 machine. I'm trying to follow the documentation step by step, and after the updated requirements.txt I got a bit further, but now I'm stuck at this:

Chat image

papr 23 June, 2021, 10:03:20

Looks like the libuvc or turbojpeg dll cannot be found. You can spend time debugging this. But did you know that you can run the pre-compiled version of Capture and do nearly any custom development work using user-plugins?

user-4efbd8 23 June, 2021, 09:55:47

Do you know what I might be missing?

user-4efbd8 23 June, 2021, 10:05:44

I didn't... where can I find it?

user-ef3ca7 23 June, 2021, 15:02:51

Hello, we want to cite the PyE3D application in our paper. Is Swirski and Dodgson, 2013 appropriate to cite?

papr 23 June, 2021, 15:04:41

please cite https://dl.acm.org/doi/10.1145/3314111.3319819

user-ef3ca7 23 June, 2021, 15:05:15

Thank you

user-aadbbd 24 June, 2021, 08:39:16

Hello! I am having some trouble with the Network API. It does not seem to do anything when I run the pupil_remote_control.py script from my command prompt, nor does it seem to do anything when I run the same code from a Jupyter Notebook. However, it shows that the code is running. Is there something additional that I must do to actually connect to my Pupil Core device?

papr 24 June, 2021, 08:40:48

Please make sure that you are running Pupil Capture and its network api is available at the port configured in script (by default 50020)

user-aadbbd 24 June, 2021, 08:41:31

How do I check what port I am using?

papr 24 June, 2021, 08:42:09

It is displayed in the Network API menu in Capture

user-aadbbd 24 June, 2021, 08:42:00

Sorry that I am very new to networking

user-aadbbd 24 June, 2021, 08:42:31

Ah okay, thank you so much!

user-597293 24 June, 2021, 09:32:33

Hi, I am trying to do offline blink detection in Pupil Player, but it will not detect any blinks. In the GUI it does not show any activity either, just the horizontal threshold lines. The following was written to log:

player - [WARNING] blink_detection: No blinks were detected in this recording. Nothing to export.
player - [DEBUG] blink_detection: Recalculating took
    2.7310sec for 670779 pp
    245614.05567290605 pp/sec
    size: 0

Thanks in advance 😄

Edit: And just to be clear, the recordings are long (~45 min) and include a lot of blinks and confidence drops. When I import shorter recordings, blinks are detected.

papr 25 June, 2021, 13:16:02

Let me correct. It looks like something is off with the timestamps. Basically the filter width is zero, which is why there is no filter response. Filter width is calculated based on:

filter_size = 2 * round(len(all_pp) * self.history_length / total_time / 2.0)

all_pp are the pupil data (670779 entries in your case)
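
To illustrate with rough numbers (the 0.2 s history/filter length is assumed here as the plugin default; the total times are hypothetical):

# 670779 pupil datums; blink-detector filter length of 0.2 s assumed
all_pp_len = 670779
history_length = 0.2

def filter_size(total_time_s):
    return 2 * round(all_pp_len * history_length / total_time_s / 2.0)

print(filter_size(2700))        # ~45 min of sane timestamps -> 50, a usable filter
print(filter_size(5_000_000))   # one stray pre-reset timestamp inflates total time -> 0, no filter response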

user-7daa32 24 June, 2021, 18:50:07

Can anyone here help with how to build an audio capture plugin for the Capture software?

It would be nice if we had such a plugin.

user-7daa32 25 June, 2021, 14:47:00

I read a paper that used Pupil Labs Core. They made use of audio capture in their Pupil software.

user-7daa32 24 June, 2021, 19:27:45

If, after calculating the target size, we got say 2.8 deg x 2.8 deg, can we still account for offset in Player by adding margins when creating surfaces?

user-d6701f 25 June, 2021, 12:42:19

Hi, I tried to open the old recordings in Player version 1.16 like you said; unfortunately it tells me it can't open the file and that the directory is missing. Any ideas?

papr 25 June, 2021, 12:43:26

You need to drop the folder containing the info.csv file, not a file

user-d6701f 25 June, 2021, 12:44:17

Yes, I dropped the whole folder (e.g. '000')

papr 25 June, 2021, 12:44:46

ok. Could you please let us know what the exact wording of the error message is?

user-d6701f 25 June, 2021, 12:44:30

it contains the info file

user-d6701f 25 June, 2021, 12:50:56

player - [ERROR] launchables.player: InvalidRecordingException: There is no info file in the target directory.

papr 25 June, 2021, 12:59:13

mmh. Not sure what is going wrong that it thinks that the info file is missing. Please share the recording with data@pupil-labs.com if you want us to investigate the issue.

user-d6701f 25 June, 2021, 13:00:38

okay, I'll send you one of the folders thanks!

papr 25 June, 2021, 13:36:25

It looks like you did not give data@pupil-labs.com access permissions for the shared link

user-597293 25 June, 2021, 13:09:31

This is what I mean by no activity 🤔

Chat image

papr 25 June, 2021, 13:10:12

Looks like you do not have any pupil data. What type of recording do you have? You might want to run the post-hoc pupil detection.

user-597293 25 June, 2021, 13:10:59

It's a normal recording directly from Pupil Capture. Gaze and fixations are working.

papr 25 June, 2021, 13:12:21

Could you post a screenshot of the fps and confidence timelines?

papr 25 June, 2021, 13:17:01

total time is calculated based on the first and last timestamp. They seem to be very far apart in your case, such that the result is zero.

papr 25 June, 2021, 13:17:44

Please export the pupil_positions.csv and share it with data@pupil-labs.com if you want us to investigate.

user-597293 25 June, 2021, 13:21:57

Feel free to try it on the data (participant_503) that I sent you for the surface cache bug. I will send you an exported pupil_positions.csv for another recording as well. All my recordings are indeed very long, around 2700 s, so that might be the problem.

papr 25 June, 2021, 13:25:30

That should be no problem if you have sufficient amount of pupil positions. I will have a look

user-597293 25 June, 2021, 13:29:33

Mail sent now 📬 Thank you.

papr 25 June, 2021, 13:34:16

participant_503 works for me

papr 25 June, 2021, 13:45:41

I am redownloading the recording now. Might take a while 😅

papr 25 June, 2021, 13:42:27

But participant_503/000 is fairly short for me.

papr 25 June, 2021, 13:42:05

@user-597293 Could you follow this step please? It might give us an indication of what is wrong.

user-597293 25 June, 2021, 13:58:54

Here's a screenshot (with a clear blink present). Pupil confidence and diameter do not show up as continuous lines, but as faint "scatter plots". Not sure why eye FPS is so fuzzy; I do think it remains fairly stable around 120 FPS throughout the recording.

Chat image

papr 25 June, 2021, 13:50:18

The pupil_positions.csv looks good. I will have to investigate this in more detail next week

user-597293 25 June, 2021, 13:51:02

participant_503 should be just short of 45 min 😅 It also has an exported pupil_positions ready in export folder 001 (only 500 MB, hehe).

papr 25 June, 2021, 14:59:29

Found the issue. You are using the R pupil-remote command in your experiment, correct?

The issue in participant_503 is that it contains pupil data from before the time reset. Player sorts the data by time and places these at the end because they have a large timestamp. These large timestamps are then used to calculate the filter size and timeline subsampling.

You can use this notebook and adjust it to remove these samples https://gist.github.com/papr/96f35bebcfea4e9b9552590868ca7f54

- filter_pldata(info_json.parent, "pupil", lambda d_ts_topic: d_ts_topic[1] >= start)
+ duration = info_data["duration_s"]
+ duration += 60.0  # add a generous buffer
+ filter_pldata(info_json.parent, "pupil", lambda d_ts_topic: d_ts_topic[1] <= duration)
papr 25 June, 2021, 15:02:15

For future experiments, I highly recommend using local clock sync as it is demonstrated here https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py
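
A minimal sketch of that local clock-sync idea (50020 is Capture's default Pupil Remote port; the "t" request asks Capture for its current clock, and the resulting offset is applied to locally timestamped events instead of resetting Capture's clock):

import time
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote default address

def pupil_time():
    remote.send_string("t")  # ask Capture for its current clock value
    return float(remote.recv_string())

# estimate the offset between the local clock and Pupil time
before = time.perf_counter()
capture_now = pupil_time()
after = time.perf_counter()
offset = capture_now - (before + after) / 2.0

# convert a locally timestamped event into Pupil time, e.g. for annotations
event_in_pupil_time = time.perf_counter() + offset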

user-597293 25 June, 2021, 15:19:09

Ah, I see! Looking through my code, it also seems like I forgot to add a sleep call between setting the time and starting the recording. Thanks a lot 😄

user-7e9436 25 June, 2021, 22:33:02

@papr @wrp Hey again, I got the IR LEDs soldered on and am now at the stage of getting the cams set up in the Pupil software. However, when I open Pupil Capture (as Administrator), the camera image does not show up. I've tried uninstalling the devices in Device Manager as per the guide, but still no image. It says "Camera not found" in red when opening the software, and when looking in the camera menu it says the camera is blocked. Both laptop and desktop are doing the same thing. What should I do?

user-fcbfcb 28 June, 2021, 12:01:44

Hi! I have two questions: 1. How can I use the Pupil Labs eye tracker in Unreal Engine? 2. According to the VRPN wiki (https://github.com/vrpn/vrpn/wiki/Available-hardware-devices), Pupil Labs' eye trackers are unsupported by VRPN. Is there anyone, or any resource, that could help add a Pupil Labs device driver?

user-597293 29 June, 2021, 10:24:29

Hi, quick question on fixations.csv vs. fixations_on_surface_*.csv. The latter contains several samples per fixation_id, with differences between the screen coordinates of each sample. Any reason not to use the average of all samples with the same fixation_id? Fixations are exported with a large dispersion value, as I am only interested in "the bigger picture" in terms of gaze position, so minor differences in position won't matter anyway. Thanks in advance 👍

papr 29 June, 2021, 10:27:50

This happens if the surface moves relative to the scene camera during the detected fixation period
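
If that per-frame surface motion does not matter for your analysis, averaging per fixation_id is reasonable. A sketch (the surface export file name is a placeholder; column names follow the surface tracker export):

import pandas as pd

fos = pd.read_csv("exports/000/surfaces/fixations_on_surface_Screen.csv")  # hypothetical surface name

# collapse the per-frame rows into one row per fixation by averaging surface coordinates
per_fixation = (
    fos.groupby("fixation_id")[["norm_pos_x", "norm_pos_y"]]
    .mean()
    .reset_index()
)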

user-3cff0d 29 June, 2021, 19:46:59

Hello, I have a question: When calibration happens in the Pupil Labs Core software, is that done individually for each eye or is the "cyclopean eye" the one that's calibrated?

papr 29 June, 2021, 19:48:31

The software attempts to do both. It tries to fit two monocular and one binocular model.

user-3cff0d 29 June, 2021, 19:50:02

Okay, I see. In terms of which of those three models is used to put the gaze location dot on the screen, I'm guessing that depends on the confidence values of each eye in a given frame of input?

papr 29 June, 2021, 19:51:00

If the matcher was able to create a binocular pupil pair, it will use the binocular model. And yes, that mainly depends on the pupil confidence.

user-3cff0d 29 June, 2021, 19:50:31

Meaning, when one eye's confidence drops, it will switch to the monocular gaze location of the other eye?

user-3cff0d 29 June, 2021, 19:51:31

Okay, I think I understand. Thank you for the quick answer!

user-82b4f1 30 June, 2021, 08:47:35

Hello, I had to suspend my project and could not follow along for a while, so please forgive me if I ask questions that have already been answered. I will gladly follow your links into the chat history in that case.

user-82b4f1 30 June, 2021, 09:27:15

@papr @user-597293 I am sorry if this message looks long, but I believe I have to provide some context to help whoever tries to assist.

I am working with recorded videos from a camera that captures both eyes frontally. My aim is to study the dynamics of eye movements while they follow a target. My currently intended workflow is as follows [with issues/questions in square brackets]:

1. I get a .h264 video file (Raspberry Pi + HQ camera + raspivid, setting only resolution=1920x500 and fps=60). [The resulting file, checked with ffplay, is 25 fps, not 60. Any clue? What video recording format/codec/settings are better suited so that Pupil can then read the video? For precision/synchronisation purposes I'd like to get the actual timestamps into my video; any advice on how to make raspivid save them in the video file?]
2. I crop the video to obtain two videos, one for each eye. [I am using ffmpeg from the command line, but those complex filtergraphs are quite cumbersome to use. Any advice on a programmatic approach (Python/Jupyter)?]

papr 30 June, 2021, 14:07:47

Hi.

  1. Generally, I cannot say anything about raspivid and its output. It is possible that it actually does not run the cam at 60 but at 25 hz. I suggest looking at the pts (presentation timestamps) in the .h264 file for more exact time measurements. You can use https://github.com/PyAV-Org/PyAV to read this information. These timestamps are relative to the video file start and not synchronized between videos. In order to get synchronized ts, you will need to somehow measure the reception time of the frame data. Not sure if raspivid is able to do that.

  2. I would also use pyav to read the frame data, cropping the images with the python code of your choice (e.g. https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.crop), and reencoding the resulting frames using pyav.

  3. I would not recommend this approach. If you are able to generate _timestamps.npy files with synchronized timestamps, you only need a dummy info.player.json file for Player to recognize the recording. Then you can use the full Pupil Player post-hoc detection capabilities.
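
Following up on point 2 above (reading, cropping and re-encoding with pyav), a rough sketch; file names, crop box, codec and frame rate are assumptions to adapt to your footage, and frame.to_image() requires Pillow:

import av

src = av.open("both_eyes.h264")                 # hypothetical input file
dst = av.open("eye_left.mp4", mode="w")
out_stream = dst.add_stream("mpeg4", rate=60)   # assumed frame rate
out_stream.width, out_stream.height = 960, 500  # left half of a 1920x500 frame

for frame in src.decode(video=0):
    left = frame.to_image().crop((0, 0, 960, 500))  # PIL crop of the left eye region
    for packet in out_stream.encode(av.VideoFrame.from_image(left)):
        dst.mux(packet)

for packet in out_stream.encode():  # flush the encoder
    dst.mux(packet)
dst.close()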

user-82b4f1 30 June, 2021, 09:47:07

3. I submit those videos to pupil_capture or pupil_service and get pupil x, y, accuracy, eye blinks (then fixations, saccades, etc.) by querying the Pupil Remote service via the Network API. [Which of the three programs (pupil_capture / pupil_service / pupil_player) is better suited for the modelling/processing? Is there an even better option (e.g. headless)? Until my last attempts there was no way to synchronise the timing of the two videos; has anything changed in this respect? If I get actual timestamps into the video file, will each eye's output data be referenced to those, or should I rather provide a _timestamps.npy file explicitly?] Please feel free to direct me to other links, or to tell me that my workflow has to be changed from scratch. I'd like to stay on a well-trodden path and close to what others are doing. Thank you very much.

user-82b4f1 30 June, 2021, 09:49:02

4. (Last but not least) Are there any experiences running Pupil on a Raspberry Pi 4?

user-765368 30 June, 2021, 11:24:15

@papr Hello, I'm sorry if these questions were already asked, but I couldn't find answers to them in the history, so I'm going to ask:
1. Based on what I understood from the documentation, the coordinates of the pupil positions are relative to the camera. Is there a way to calculate the positions (horizontal and vertical) of the eye itself?
2. Is it possible to calculate gaze under different "conditions"? (I mean, based on the same eye videos and calibration, can I calculate both the monocular gaze for each eye and the binocular gaze?)
3. Is there a way to take the current pupil positions that I have (whose horizontal and vertical components depend on their angle to the cameras) and calculate the eye's horizontal and vertical components from them (with or without gaze detection)?

Thanks a lot!

nmt 30 June, 2021, 12:39:54

Hi @user-765368.
1) Yes, in the case of pupil positions, data are relative to the eye camera coordinate system. sphere_center_* is the position of the eyeball sphere (eye camera 3d space units are scaled to mm).
2) gaze_normal*_x,y,z describes the visual axis for each eye (i.e. the line that goes through the centre of the eyeball and the object that is looked at). Conceptually, this describes the gaze direction for each eye individually (it can be obtained from a binocular calibration). Important to note: this is relative to the world camera coordinate system, which is different from the pupil positions.
3) I'm not sure that I understand this question. Do you want the eyeball viewing direction with respect to the eye camera? If so, look at circle_3d_normal_*, which indicates the direction that the pupil points at in 3d space (relative to the eye camera coordinate system). Note that pupil data from each eye are in separate coordinate systems, i.e. relative to each eye camera.
Coordinate systems: https://docs.pupil-labs.com/core/terminology/#coordinate-system
Data made available: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter

user-4bad8e 30 June, 2021, 13:27:39

Hello. Thank you for your kind support. I am trying to use the offline head pose tracker to calculate subjects' head movements during experiments. Could you tell me under what conditions the world camera's pose is recorded in head_pose_tracker_poses.csv?

user-765368 30 June, 2021, 13:38:23

Thanks for the quick reply. I need to measure and compare the vertical and horizontal movements of the eyes (their movement and the difference in their positions). Is there a way to calculate the movements and their ratios between the eyes? I was thinking of using the gaze tracking and the pupil data (probably phi and theta) of the eyes to calculate the ratios; is that possible? (Regarding my third question, I basically need to be able to calculate the eye movements such that they are in the same coordinate system.)

papr 30 June, 2021, 14:09:09

if you want both eye models to be in the same coordinate system, have a look at the gaze data. It includes the eye model data (center + gaze direction) in scene camera coordinates.

nmt 30 June, 2021, 13:58:50

Hi @user-4bad8e. Can you elaborate a little on what steps you have taken for head pose tracking, e.g. have you used tracking markers, were they detected okay in the head pose tracker plugin? https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking

user-4bad8e 30 June, 2021, 14:10:14

Hi, @nmt. Thank you for your response. I have used tracking markers, and I think these markers were detected in the head pose tracker plugin. However, it seems that only one marker (the red one) was shown in the head pose tracker visualizer.

papr 30 June, 2021, 14:10:15

If you are only interested in relative movements, you can use pupil data. Relative changes in theta/phi as well as circle_3d_normals are comparable between both eye cameras.
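
A hedged sketch of comparing relative movement per eye from an exported pupil_positions.csv (the export path is hypothetical; column names follow the raw data exporter):

import numpy as np
import pandas as pd

pp = pd.read_csv("exports/000/pupil_positions.csv")  # hypothetical export path
pp = pp[pp["confidence"] >= 0.6]  # discard low-confidence samples

def angular_change(eye_df):
    """Angle in degrees between consecutive circle_3d_normal vectors of one eye."""
    normals = eye_df[["circle_3d_normal_x", "circle_3d_normal_y", "circle_3d_normal_z"]].to_numpy()
    dots = np.einsum("ij,ij->i", normals[:-1], normals[1:])
    return np.degrees(np.arccos(np.clip(dots, -1.0, 1.0)))

eye0 = pp[pp["eye_id"] == 0].sort_values("pupil_timestamp")
eye1 = pp[pp["eye_id"] == 1].sort_values("pupil_timestamp")
change0 = angular_change(eye0)  # per-sample angular change for eye0
change1 = angular_change(eye1)  # per-sample angular change for eye1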

user-765368 01 July, 2021, 14:43:52

So if I understand you correctly, the size of a movement that results in an increment of 0.1 in the phi value of one eye is equivalent to the size of a movement that causes an increment of the same size (0.1) in the phi value of the other eye? (I'm saying "size" on purpose, as the increment/decrement is opposite for the two eyes.) Thanks!

user-4bad8e 30 June, 2021, 14:11:53

This is a screenshot.

Chat image

user-4bad8e 30 June, 2021, 14:12:44

Screenshot for the head tracker visualizer.

Chat image

papr 30 June, 2021, 14:13:04

You do not seem to have sufficient scene camera movement in order to triangulate the other markers in reference to the origin marker.

user-4bad8e 30 June, 2021, 14:15:16

Thank you @papr ! I will try it again.

papr 30 June, 2021, 14:13:36

The good news is that you can reuse the 3d model from other recordings or even Pupil Capture.

papr 30 June, 2021, 14:17:49

You can use Pupil Capture to build a model, too! This can be helpful to get feedback in realtime

user-4bad8e 01 July, 2021, 02:57:46

I could make a 3d model, thank you very much. I have a question about the logged camera pose data in head_pose_tracker_poses.csv. From what I saw of the timestamp data in the CSV, the data is not recorded at regular intervals. Is there a threshold of camera movement that must be exceeded to record pose data, or some other algorithm?

user-9b5cc8 30 June, 2021, 20:16:43

Got around to assembling my Pupil Labs 3D printed eye tracking headset today. Not impressed with the design. Found it brittle/fragile. The cable guides broke off the eye camera mount and, more importantly, the world camera mount snapped when trying to attach it to the headband rod. Does anyone know of an alternative headband design?

End of June archive