πŸ‘ core


user-cdb45b 01 February, 2023, 19:34:22

Hi all just thought I should ask this

user-cdb45b 01 February, 2023, 19:43:04

Hi everyone! I am working on setting up Pupil Core eyetrackers to receive annotations from MATLAB Psychtoolbox stimuli, demarcating trial onset + changes with timestamps, and am running into this error when I run the example "pupil_remote_control.m" script.

I have downloaded matlab-zmq. When I run "addpath('~/MATLAB/matlab-zmq')", I get this error: "Unable to resolve the name 'zmq.core.ctx_new'. Error in pupil_remote_control (line 23) ctx = zmq.core.ctx_new();"

When I run "addpath(genpath('/Users/knix/Documents/MATLAB/matlab-zmq'))", I get this error: "Error using zmq.core.ctx_new Execution of script ctx_new as a function is not supported: /Users/knix/Documents/MATLAB/matlab-zmq/lib/+zmq/+core/ctx_new.m Error in pupil_remote_control (line 23) ctx = zmq.core.ctx_new();"

Any tips for how to get MATLAB to recognize zmq?

Chat image

user-cdb45b 02 February, 2023, 19:35:46

For more reference, the script has been added to my MATLAB path, but when I run the line "ctx = zmq.core.ctx_new;" the error message still says: "Execution of script ctx_new as a function is not supported".

Running the line "zmq.core.ctx_new" or "zmq.core.ctx_new()" alone does not return any errors -- it just runs that script. But storing its output in the variable "ctx", as the pupil_helpers MATLAB script suggests, returns this error.

Chat image

user-79be36 01 February, 2023, 20:36:59

I am trying to load a recording folder into pupil player and am getting the error: "ValueError: Cannot load file containing pickled data when allow_pickle=False". I am also unable to load and view any of the camera videos using VLC media player (all of my other recordings load fine, and my videos play on this system). Is this recording corrupted in some way?

user-79be36 01 February, 2023, 21:04:46

Pickled Loading Problem

user-2798d6 02 February, 2023, 01:32:51

No video on Mac - I've tried the fix in the terminal that was reported on Github, but I'm still not getting video from my Pupil Core glasses. I'm on Ventura 13.2

user-2798d6 02 February, 2023, 01:50:08

I appreciate any suggestions! The fix worked when I was on Monterey, but isn't working now.

user-09fbf1 02 February, 2023, 09:17:18

Hi! I'm new to Pupil and I would like to know if there is a way to get the calibration data taken in Pupil Capture, i.e. the precision error as it is shown there. Is it possible? Thanks!!

papr 02 February, 2023, 09:19:35

Hey, the calibration data is stored as part of the recording (for post-hoc analysis) and is announced via the Network API (for realtime analysis). Which of the two are you interested in?

user-09fbf1 02 February, 2023, 09:33:44

Thank you for the quick answer. I am most interested in angular accuracy. How could I obtain this value from previously recorded videos?

papr 02 February, 2023, 09:34:28

You can recalculate the accuracy+precision for a set of Pupil Core recordings using this tool https://github.com/papr/pupil-core-pipeline

user-09fbf1 02 February, 2023, 09:44:41

I will explore it! Thank you!

user-2798d6 02 February, 2023, 13:36:11

No video on Mac - I've tried the fix in the terminal that was reported on Github, but I'm still not getting video from my Pupil Core glasses. I'm on Ventura 13.2. I appreciate any suggestions! The fix worked when I was on Monterey, but isn't working now.

papr 02 February, 2023, 13:39:14

Hi, can you share the exact terminal command that you executed and its output?

user-2798d6 02 February, 2023, 13:40:29

Sure, it will take me a minute to set things up to get the output, but the command I used was "sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture"

user-2798d6 02 February, 2023, 13:42:37

It's working now...I replicated exactly what I did last night, but it's working now. Problem solved I guess! 🙂

nmt 03 February, 2023, 08:31:26

@user-89d824, it would also be worth gently cleaning the camera pinhole lenses with a microfibre cloth to ensure there's nothing causing a blur

user-b0cf4e 03 February, 2023, 09:01:40

Hi everyone, I have a problem with my Pupil Labs Core: the wires on the camera connector broke. Does someone know the part reference of the connector? I am not equipped for such small soldering...

Chat image

nmt 03 February, 2023, 09:43:11

Please contact info@pupil-labs.com in this regard

user-b0cf4e 03 February, 2023, 12:45:07

Hi Neil, thanks!

user-eeecc7 04 February, 2023, 07:36:27

Hey @papr how do eye0 and eye1 timestamps map to the pupil_timestamp in the exported pupil_positions.csv?

nmt 06 February, 2023, 09:44:20

Hey @user-eeecc7. Those timestamps are equivalent to Pupil Time. When an eye video frame is received by Pupil Capture it's assigned a Pupil timestamp.

user-89d824 06 February, 2023, 13:49:35

Another question, but this one has got more to do with using Unity with Pupil Core.

Basically, I need the participant to start recording by clicking on a button in Unity, which will start the recording in Pupil Capture. The way I've done this is: In the RecordingController script, I've changed the startRecording and stopRecording booleans to public bools.

And then I created two functions in PressRScript.cs (see attached) which map to button presses in Unity.

In Unity, I have two scenes. The first scene is where the Recording Controller script is attached to a gameObject. I used the DontDestroyOnLoad() function to keep this gameObject from being destroyed when we transition to the second scene, during which the first recording will end, and a recording will start (and end) an additional two times. In total, that should give me 3 separate recordings.

I would say this method works about 70% of the time, but sometimes it doesn't work after we transition to the second scene and throws the attached exception. I then have to press the R key on the keyboard manually to end/start the recording.

Any idea what's going on, please, or is there a better way to do what I need to do, i.e., have the participant start/end Pupil Capture recordings across different scene transitions by clicking on a button in Unity?

ExecptionError.txt PressRScript.cs

nmt 06 February, 2023, 15:59:43

We can't offer much advice in the way of makeup I'm afraid

user-89d824 09 February, 2023, 10:24:15

Hi @user-d407c1, could you help me with this problem please? 🙂

user-3aea81 07 February, 2023, 04:28:08

Hello, I have a question about Pupil Core and Invisible.

user-3aea81 07 February, 2023, 04:28:59

Can I get papers that use Core and Invisible?

user-3aea81 07 February, 2023, 04:30:35

Also, can I get some videos about Core and Invisible?

user-3aea81 07 February, 2023, 04:31:37

Like ones that introduce the products simply.

Chat image

user-277de7 07 February, 2023, 04:42:04

@nmt @user-d407c1 @papr Hi, sorry about yesterday. I'm trying to use Python with a framework to customise and visualize the streamed data from the Core glasses, which are connected to my PC by USB. Can you suggest some libraries and example code to do that? Or any solution that would help me collect real-time data from the glasses and visualize it with Python on my PC?

user-d407c1 07 February, 2023, 08:29:39

Hi @user-277de7 Did you have a look at the Network API? https://docs.pupil-labs.com/developer/core/network-api/, also have a look at https://github.com/Lifestohack/pupil-video-backend
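For reference, a minimal sketch of receiving real-time gaze data from Capture's Network API in Python (assuming Pupil Capture runs locally with the default Pupil Remote port 50020; adapted from the general pattern in the pupil-helpers examples, not an official snippet):

```python
import msgpack
import zmq

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the subscription port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to gaze data
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.unpackb(payload, raw=False)
    print(topic.decode(), gaze["norm_pos"], gaze["confidence"])
```

The same pattern works for other topics (e.g. "surfaces.") by changing the subscription string.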

user-9f3df4 07 February, 2023, 09:32:32

Hello there. I just upgraded to a more recent Pupil Capture (3.5) version, and now the LSL stream is empty. The stream is found, and all channels are there, but the sampling rate is also specified as 0 Hz. Is this a known issue? I used the search function and didn't find any comparable cases. Do you have an idea how to solve this potentially? Best regards -Kilian

nmt 07 February, 2023, 15:33:22

Hey @user-9f3df4. Can you confirm that the Core system was being worn and a calibration had occurred?

nmt 07 February, 2023, 15:11:15

Hi @user-3aea81 👋. We don't have a video introducing Core and Invisible, but you can check out the website for lots of information: https://pupil-labs.com/products/. We also have a publication list on the website: https://pupil-labs.com/publications/. If you have any specific questions, feel free to ask here!

user-3aea81 08 February, 2023, 01:18:34

thanks for the reply ^^ I got the point

user-9f3df4 07 February, 2023, 15:45:43

Yes, it was the Core system. We are only using pupillometry, so there was no calibration beyond the head movement to fit the eyeball model. But the interesting parameters (3D pupil diameter) are fine in the debug mode for the eyes.

nmt 07 February, 2023, 16:00:30

Can you also confirm that you're using the latest LSL relay version: https://github.com/labstreaminglayer/App-PupilLabs/releases/tag/v2.1 ?

user-9f3df4 07 February, 2023, 16:02:08

I will check this and come back to you tomorrow.

user-a86593 07 February, 2023, 17:29:32

Hi Pupil Labs team and community, are there any tips or guidelines which lighting conditions (i.e., how bright the room should be) are best to get meaningful and reliable pupil size measurements? Many thanks in advance! Johannes

user-3aea81 08 February, 2023, 01:35:03

I have one more question: can I check how consistently I look at an object, e.g. how long my eyes stay on the object?

user-d407c1 08 February, 2023, 08:14:14

We provide gaze points and fixations in scene camera coordinates out of the box, if you know where an object is you can use those coordinates to understand whether they looked at an object or not. To track the object on the scene camera, you can use any video segmentation tool of your liking, but it will require some processing from your side. There are software providers like iMotions that offer turnkey solutions for tracking AOIs and provide you with gaze statistics both for Core and Invisible.

Using Invisible/Neon you can also use our reference image mapper (https://docs.pupil-labs.com/invisible/enrichments/reference-image-mapper/) or, if you would like to track an object on a screen, you can use the marker mapper (https://docs.pupil-labs.com/invisible/enrichments/marker-mapper/) and define areas of interest (AOIs) in a 2D image to track how long you looked at an object (check out this link: https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/). Additionally, more AOI tracking features are also coming in the future.

user-0e5193 08 February, 2023, 06:55:32

Hello, can I ask a question? In the fixation export, what do norm_pos_x and y mean? I thought it is the fixation point (0 to 1), but my exported data have values ranging from about -2 to 3.

user-d407c1 08 February, 2023, 08:02:56

Those are the normalised coordinates of the fixations in scene camera coordinates. As you can see here (https://docs.pupil-labs.com/core/terminology/#coordinate-system), the origin is at the bottom left and the range goes from 0 to 1. Some values can fall outside of 0-1, as the eye cameras may cover a bit more range than the scene camera, but values should nonetheless stay close to the 0-1 range. Was there a successful calibration on the recording? Was the calibration performed covering an ample section of the scene camera, or was it a single-point calibration (with no movement)?

user-0e5193 08 February, 2023, 07:35:45

Oh, maybe I found the answer in the documentation. Values outside of 0-1 are outside the calibrated area, right?

user-0e5193 08 February, 2023, 08:12:30

Every calibration was successful. Then, are norm x, y of fixation.csv and norm x, y of gaze_positions.csv the same?

user-d407c1 08 February, 2023, 08:16:34

They are the same but one provides the information for gaze points while the other one provides it only for detected fixations. If you would like us to have a look at the recording please share it with data@pupil-labs.com and refer to this conversation.

user-d407c1 08 February, 2023, 10:14:19

Matlab ZMQ MacOS

nmt 08 February, 2023, 11:41:06

LSL

user-908b50 08 February, 2023, 20:21:52

Hi, I have some basic questions on how the preprocessing was set up in the pre-built version. Are the exported files filtered (i.e. bad samples, low frequency, etc.)? Is there some documentation to support these choices? Also, I want to implement saccade detection code, but I am not sure how to do that in the already pre-built version. Do I need to build from scratch?

user-3aea81 09 February, 2023, 00:29:24

thanks for the reply. this picture is my point

Chat image

nmt 09 February, 2023, 11:30:47

The example will only work with Invisible recordings + downloaded RIM enrichment results. If you want something similar for Core, you'd have to use the Surface Tracker analysis tool and do some modifications to the code.

user-3aea81 09 February, 2023, 00:30:35

It's only for Invisible? If not, can I make it from the open source code?

user-3aea81 09 February, 2023, 00:44:29

And can I make it with Pupil Player or Pupil Capture?

user-d407c1 09 February, 2023, 10:28:01

I have replied here https://discord.com/channels/285728493612957698/285728635267186688/1072423129214881873

user-89d824 09 February, 2023, 10:30:15

My apologies, I don't know why I missed it!

nmt 09 February, 2023, 11:24:55

Hi!

The exported files are not filtered, but we often recommend discarding samples with a confidence of 0.6 or lower before preprocessing the exported data any further. You can read about confidence here: https://docs.pupil-labs.com/core/terminology/#confidence. This value is measured from the number of pixels on the circumference of the fitted pupil ellipse.

Re. a saccade filter, working with data post-hoc would be easiest. It mightn't be necessary to run from source in this case. For a real-time implementation, developing a Plugin and running from source would be recommended. Check out the Plugin API docs: https://docs.pupil-labs.com/developer/core/plugin-api/#api-reference

Note that a while back there was a community-contributed saccade detector: https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing. Not sure if it will still work, but worth a try.
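As an illustration of that confidence threshold, a minimal post-hoc filtering sketch (assuming a standard Pupil Player export and pandas; the file path is an example):

```python
import pandas as pd

# Load exported pupil data and discard low-confidence samples (0.6 threshold as above)
df = pd.read_csv("exports/000/pupil_positions.csv")
df_clean = df[df["confidence"] > 0.6]

print(f"kept {len(df_clean)} of {len(df)} samples")
```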

user-908b50 10 February, 2023, 20:19:12

Yes, that's what I did. I only exported data at a confidence of 0.80, so the exported surface fixation values (what I am interested in) won't have anything below that. Okay, thanks for sharing the links. I will implement the saccade detector code post-hoc, so I will take a look and see where to insert this code so I get saccades during export.

user-3aea81 09 February, 2023, 12:27:09

Ah... I see ^^ thank you so much

nmt 09 February, 2023, 12:53:09

Note that with the Surface Tracker Plugin, you can drag and drop AOIs within Pupil Player. Read more about it here: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker So actually all you'd need to do after running the plugin and exporting the results is compute the metrics (e.g. dwell time) and generate the plots.

user-a86593 09 February, 2023, 14:30:00

Hi Pupil Labs team and community, are there any tips or guidelines which lighting conditions (i.e., how bright the room should be) are best to get meaningful and reliable pupil size measurements? Many thanks in advance! Johannes

user-d407c1 09 February, 2023, 14:57:22

Hi Johannes! It's best to use a dimly lit environment for pupil size measurements. If your room is too bright, you may not see much difference, as light also makes the pupils smaller. Please check our best practices for pupillometry https://docs.pupil-labs.com/core/best-practices/#pupillometry and I recommend that you also read https://link.springer.com/article/10.3758/s13428-021-01762-8 Furthermore, you can adjust the eye camera exposure settings in each eye window to achieve better contrast between the pupil and the surrounding regions of the eye (namely the iris).

user-a86593 09 February, 2023, 19:24:20

Oh great, thank you so much - very helpful indeed!

user-c76d64 09 February, 2023, 15:53:30

Hello! Is there any way to send a hardware trigger, for example a TTL pulse, to the Pupil Core glasses to synchronise the gaze data with other sensors (EEG, ECG, ...)?

user-d407c1 09 February, 2023, 16:26:35

Hi @user-c76d64! There is no out-of-the-box TTL signalling. However, you can use our Network API (https://docs.pupil-labs.com/developer/core/network-api/) to control/send events to Pupil Capture. It might be possible to combine this with something like a USB to TTL sensor (https://pcbs.readthedocs.io/en/latest/triggers.html#dlp-io8-g), or even an Arduino, to send triggers. This library could be helpful for you https://pyserial.readthedocs.io/en/latest/pyserial.html
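A rough sketch of what such a combination could look like (assuming Pupil Capture runs locally with the default Pupil Remote port 50020, the Annotation plugin is enabled in Capture, and a hypothetical USB-to-TTL adapter is available on /dev/ttyUSB0; the annotation payload follows the pattern used in the pupil-helpers examples):

```python
import time
import msgpack
import serial  # pyserial
import zmq

ctx = zmq.Context()

# Pupil Remote (REQ) to query ports and Pupil time
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()

# PUB socket for sending annotations (give it a moment to connect)
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)

# Hypothetical USB-to-TTL adapter
ttl = serial.Serial("/dev/ttyUSB0", baudrate=115200)

def pupil_time():
    pupil_remote.send_string("t")
    return float(pupil_remote.recv_string())

def send_trigger(label):
    # Fire the TTL pulse and the Pupil annotation back-to-back
    ttl.write(b"\x01")
    annotation = {
        "topic": "annotation",
        "label": label,
        "timestamp": pupil_time(),
        "duration": 0.0,
    }
    pub_socket.send_string(annotation["topic"], flags=zmq.SNDMORE)
    pub_socket.send(msgpack.dumps(annotation, use_bin_type=True))

send_trigger("stimulus_onset")
```

Note that the two triggers are still sent sequentially here, so for strict EEG-grade synchrony you would need to measure or bound the offset between them.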

user-c76d64 09 February, 2023, 16:45:56

Thanks for your answer. Yes, it could be a solution, but as we are working with EEG sensors we need really high timing precision when sending these triggers. We want both triggers (the one sent to Pupil Core and the one sent to the EEG sensors) to be received at the exact same time. The idea is to avoid any time desynchronisation between the triggers of the 2 devices, in order to then synchronise the gaze and the EEG data based on the trigger events.

user-d407c1 10 February, 2023, 08:34:11

For this type of data synchronisation, Lab Streaming Layer (LSL) could help you; have a look at https://labstreaminglayer.readthedocs.io/info/intro.html. Most EEG systems on the market are supported, and we maintain a relay for Capture: https://github.com/labstreaminglayer/App-PupilLabs

user-2e5a7e 09 February, 2023, 17:35:29

Are PupilCore glasses (running Pupil Mobile and streaming to Pupil Capture) compatible with any National Instruments DAQ devices? We are trying to sync our wireless Pupil setup with an existing Qualisys MoCap setup that is streaming data to MATLAB in real-time

user-d407c1 10 February, 2023, 11:04:02

Hi @user-2e5a7e There is no official support to connect with DAQ devices, but you can try https://github.com/maltesen/liblsl-LabVIEW.

user-def465 10 February, 2023, 08:31:12

Hello! I had some issues with one recording: for some reason, I could not end the recording properly from inside Pupil Capture and I had to terminate the process dirtily (I think my laptop froze). It turns out that data from this recording is in a strange state, with two video files (eye0 and world) being in a "writing" state (the files have .mp4.writing extensions). I cannot import it in Pupil Player, but I am not sure if it is because I am missing some files, because of encoding issues, etc. Here is the list of the files that I have:

blinks.pldata
eye1.mp4
fixations.pldata
notify.pldata
surfaces.pldata
eye0.mp4.writing
eye1_timestamps.npy
gaze.pldata
pupil.pldata
world.mp4.writing

By any chance, is there a way to salvage this recording?

user-d407c1 10 February, 2023, 09:49:08

Hi @user-def465 ! It seems like the data was corrupted due to a force close of Pupil Capture. There are no guarantees that you can recover the data, but there are several things that you can try:

Recovering the mp4 files: You can use a tool like https://github.com/anthwlock/untrunc to try to restore them. Please refer to their documentation.

Restoring pldata files (https://docs.pupil-labs.com/developer/core/recording-format/#pldata-files): You can try reading them as described here https://discord.com/channels/285728493612957698/285728493612957698/649340561773297664 to restore the data.

To be readable by Pupil Player you would still need an info.player.json file, which you can generate with https://gist.github.com/papr/bae0910a162edfd99d8ababaf09c643a#file-generate_pupil_player_recording-py-L72-L90 (note that you will need to adjust the start_time_synced).

Alternatively, you can read the pldata files directly with https://gist.github.com/papr/81163ada21e29469133bd5202de6893e
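For reference, a rough sketch of reading a .pldata file directly with msgpack (based on the pldata format described in the recording-format docs linked above; the file path is an example):

```python
import msgpack

def load_pldata(path):
    # Each record is a (topic, payload) pair; the payload is itself a
    # msgpack-encoded dictionary containing one datum.
    data = []
    with open(path, "rb") as fh:
        for topic, payload in msgpack.Unpacker(fh, raw=False, use_list=False):
            datum = msgpack.unpackb(payload, raw=False)
            data.append((topic, datum))
    return data

gaze = load_pldata("recording/gaze.pldata")
print(len(gaze), "gaze data loaded")
```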

user-def465 10 February, 2023, 10:01:45

Thank you ! I will try these methods.

user-c8e5c4 10 February, 2023, 10:59:58

hey team, this might be a stupid question, but i cannot manage to add a vis circle plugin? I redownloaded capture, but vis circle is just not there? could you help me out?

Chat image

user-d407c1 10 February, 2023, 11:37:39

Hi @user-c8e5c4 ! vis_circle is part of Pupil Player; to see gaze in Capture, you only need to perform a calibration

user-c8e5c4 10 February, 2023, 13:16:10

aah, I see, thank you!

user-6e0242 10 February, 2023, 17:21:41

Hi team. Marco from Italy here. One question: after some time, I've started making some tests with Invisible again. Usually, after uploading to Cloud, I download the raw data to adjust and see something more in Player. But nothing shows on the video. Has something changed since my last usage? Or is it some issue on my side? Thanks

user-c2d375 10 February, 2023, 18:01:31

Ciao @user-6e0242 👋 First of all, please ensure that you are downloading the raw data in the Pupil Player format by right-clicking on your Invisible recording and then selecting the proper format. Second, make sure you have the latest version of Pupil Player on your computer (https://pupil-labs.com/products/core/). Let me know if it does the trick!

user-6e0242 11 February, 2023, 09:56:01

Thanks. My error was to download from project instead of from the single video so it wasn’t in the right format. Thanks 😊

user-908b50 10 February, 2023, 20:27:48

From the analysis perspective, does it matter if some of the recordings were preprocessed using player 2.5.0 and others (that couldn't with 2.5.0) were processed using 3.5.0. I am just trying to anticipate from a methods perspective if the reviewer might find it problematic.

nmt 13 February, 2023, 09:49:43

@user-908b50, It depends what data they contain. More specifically, what version of Pupil Capture were your recordings made with? A new 3d geometrical eye model (pye3d) was released in Core software v3.0: https://github.com/pupil-labs/pupil/releases/tag/v3.0

user-5ba46b 10 February, 2023, 21:51:13

Hi everyone! Excited to be part of the community. I am currently trying to understand how the pupil core eye tracker can be setup together with a portable eeg. Can someone give me any tips, links to documentation or even video? This is my first time with such project

user-d407c1 13 February, 2023, 07:52:09

Hi @user-5ba46b ! Combining the Pupil Core eye tracker with a portable EEG is a great idea, as it can provide a more comprehensive view of the user's cognitive state. You can check whether your EEG is compatible with Lab Streaming Layer (https://labstreaminglayer.readthedocs.io/info/supported_devices.html) and use the LSL plugin: https://github.com/labstreaminglayer/App-PupilLabs

Otherwise, you can also use the Network API (https://docs.pupil-labs.com/developer/core/network-api/) to send annotations to synchronise both systems.

user-ed360e 10 February, 2023, 22:00:29

Hi all, is there any information regarding the common pupil size range captured by PupilCore?

I'm running some tests and in my experiment I'm finding pupil responses ranging between 1.5-2.5 mm (even in a dark room), which seems below what is reported in the literature, e.g. "Methods in cognitive pupillometry: Design, preprocessing, and statistical analysis":

"Pupil size varies between roughly 2 and 8 mm in diameter (Mathôt, 2018; Pan et al., 2022), depending mainly on the amount of light that enters the eye."

I'm wondering if you could give me some advice. Thanks in advance!

user-ed360e 11 February, 2023, 23:14:46

Hi all, Can someone explain to me why in the data export file, all cells from the 'diameter' column are populated, while the cells from 'diameter_3d' are populated intermittently, appearing in one cell and then not in the next?

Thanks a lot!

user-ae54c3 12 February, 2023, 06:24:26

Which version of Python do I need to use Pupil?

user-d407c1 13 February, 2023, 08:30:06

Hi @user-ae54c3! To run pupil from source you need Python 3.7 or newer

user-2ce8eb 12 February, 2023, 12:26:44

Hi, Pupil team, I want to experiment with the Pupil Core, but I have just started using it and there is still a lot I don't understand. I need your help! At present, I have encountered three problems. The first problem is calibration: although I have calibrated, there is still some deviation. What should I do? The second is that, during use, my fixation point is very unstable, or shaking, which is not consistent with the fact that I am looking at a fixed place. Thirdly, the images recorded during use are not clear. Although I have improved them a lot by manually adjusting the focus of the camera, I still want to ask if there are any other ways to improve them.

nmt 13 February, 2023, 09:56:27

Hi @user-2ce8eb 👋. The first thing to ensure is good pupil detection. This is the foundation of everything. Be sure to check out this section of the getting started guide: https://docs.pupil-labs.com/core/#_3-check-pupil-detection

user-277de7 13 February, 2023, 03:59:22

Hi Pupil team @user-d407c1 @papr @user-c2d375, I want to visualize real-time gaze and surface data from the Pupil Core with cv2. Which messages should I use, and do you have any examples?

nmt 13 February, 2023, 13:04:53

Hey @user-277de7 👋. We'd be super grateful if in future you could avoid tagging specific/multiple members of our team in your questions, unless you're replying to a message that is! This will also encourage other members of the community to get involved 🙂

user-d407c1 13 February, 2023, 08:55:04

Hi @user-277de7 check out https://docs.pupil-labs.com/developer/core/overview/#surface-datum-format for the topic and field names

user-1fa7e1 13 February, 2023, 09:14:17

Hello @user-d407c1, we are currently working on a school project with the Pupil Labs eye tracking system but we are facing a problem. Could you please explain to us how to retrieve the gaze coordinates? Thank you for your answer.

user-d407c1 13 February, 2023, 09:44:22

Hi @user-1fa7e1 It's hard to provide feedback without further context. What is the error that you faced? Did the error occur in Pupil Player or in Pupil Capture? Could you share some logs? If they may contain sensitive data, you can contact us through info@pupil-labs.com instead.

user-1fa7e1 13 February, 2023, 09:46:34

No, we are trying to retrieve the gaze coordinates through Python and not pupil player (@user-d407c1)

nmt 13 February, 2023, 10:05:39

Check out Core's realtime api: https://github.com/pupil-labs/pupil-helpers/tree/master/python

nmt 13 February, 2023, 09:52:41

Pupil size estimates are provided in mm by pye3d, which is a 3d geometrical eye model. The output can depend on a lot of factors, such as pupil detection, model fitting and headset slippage. Intermittent population suggests a noisy model. I'd recommend reading our pupillometry best practices: https://docs.pupil-labs.com/core/best-practices/#pupillometry

user-ed360e 13 February, 2023, 13:38:07

Hey, thanks a lot for your response!

I'm attaching a screenshot of the pupil_positions.csv. There are two rows for each timestamp, one for the '2d c++' and one for the 'pye3d 0.3.0 real-time' method. The 'diameter_3d' column is only filled for the 'pye3d 0.3.0 real-time' method, as expected; however, the 'diameter' column is filled in both rows, resulting in different diameters for the same timestamp, which I find weird. Is this correct?

Edit: it ends up leading to two different frame samples

Chat image

nmt 13 February, 2023, 10:03:54

We have not used an artificial mechanical eye, but rather evaluated gaze directions with respect to synthetic ground-truth eye images: https://docs.pupil-labs.com/developer/core/pye3d/#academic-references

user-e15aa5 13 February, 2023, 10:35:19

OK, thank you for the references!

user-9f7f1b 29 March, 2023, 03:57:34

Hi, Neil! Has your team released the synthetic ground-truth eye images or the synthetic model mentioned in "A fast approach to refraction-aware eye-model fitting and gaze prediction"? If not, do you have plans to release them? This is important for us to learn your papers and methods in depth.

user-94f03a 13 February, 2023, 10:04:26

Hello! Not sure if I am posting on the correct channel. I have a question about the coordinates of the surface tracker. As you can see in the screenshot, we have a project with 6 tags (4 in the corners and 2 on the long sides). I used Pupil Player (workflow: Pupil Invisible -> Pupil Cloud -> download raw) to calculate the surface tracker. I am now working with the data from the 'surfaces' subfolder. I have two questions I would like to ask:

  1. To obtain the gaze coordinates on the surface, I use the file "gaze_positions_on_surface_Projection.csv". What is the difference between x_norm and x_scaled? Where exactly is the (0,0) of the rescale? We want to reverse engineer what pixel people are looking at, so we need to be precise about whether the surface includes the whole tags, their midpoints, etc.

  2. The file fixations_on_surface_Projection.csv is empty. Is this expected behaviour? I thought we can now get fixations for Pupil Invisible too.

Thanks!

Chat image

user-c2d375 13 February, 2023, 10:58:41

Hi @user-94f03a 👋 x/y_norm represent the x/y coordinates of the gaze point on the surface, while x/y_scaled are the same coordinates but scaled according to the surface dimensions manually defined in Pupil Player (in your case, the width and height parameters within the "Projection" surface section). (0,0) is the bottom left corner of the surface and (1,1) is the top right corner. The file 'fixations_on_surface_Projection.csv' is empty because the fixation detector plugin is not available for Pupil Invisible recordings.

I would suggest using the Marker Mapper enrichment in Pupil Cloud to obtain fixation data within the surface (https://docs.pupil-labs.com/invisible/enrichments/marker-mapper/)
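To reverse engineer the pixel being looked at, a minimal conversion sketch (assuming the surface was defined to cover the full reference image, an example image size of 1920x1080 px, and the column names of a typical surface export; note the y-flip, since (0,0) is the bottom-left corner of the surface while image pixel origins are usually top-left):

```python
import pandas as pd

IMG_W, IMG_H = 1920, 1080  # example size of the reference image covered by the surface

df = pd.read_csv("surfaces/gaze_positions_on_surface_Projection.csv")

# Normalized surface coordinates: (0,0) bottom-left, (1,1) top-right
df["px_x"] = df["x_norm"] * IMG_W
df["px_y"] = (1.0 - df["y_norm"]) * IMG_H  # flip y for a top-left pixel origin

# Keep only samples that actually fall on the surface
on_surface = df[df["on_surf"].astype(bool)]
print(on_surface[["gaze_timestamp", "px_x", "px_y"]].head())
```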

user-98789c 13 February, 2023, 12:22:46

Hi pupil people, I have been asked if the smoothing process that I use for my pupil plots is causal or not. I smooth my data at two stages: first, using a low-pass filter while preprocessing, like this:

    import scipy.signal

    def _smooth(x, data_freq):
        LpFilt_cutoffFreq = 4
        LpFilt_order = 4
        lowpass_filter = scipy.signal.butter(
            LpFilt_order,
            LpFilt_cutoffFreq,
            fs=data_freq,
            output="sos",
        )
        result = scipy.signal.sosfiltfilt(lowpass_filter, x)
        return result

and in the end, I also smooth my plot using convolution, like this:

    import numpy as np

    smoothbin = 2
    signal_avg_conv = np.convolve(signal_avg, np.ones(smoothbin) / smoothbin, 'same')

An example of my plots is attached. Do you have any idea how to do "causal smoothing"? I assume it means that for each data point, I don't use future data points for smoothing.

Chat image

user-98789c 13 February, 2023, 14:40:02

also, about this plot: does anyone have experience with double peaks after viewing some stimuli?

user-480f4c 13 February, 2023, 14:47:00

Hi @user-98789c 👋 Maybe one possibility would be to use the Savitzky-Golay filter (a type of moving average filter that uses a least-squares polynomial fit to smooth the signal). It can be applied using the savgol_filter function (from the scipy.signal library). The mode argument in this function specifies how the signal is padded with additional values at the edges. When mode is set to nearest, the padding values are simply copies of the closest available data points. This effectively means that the smoothed signal is not calculated based on future values, which I understand might be what you are looking for. But note that there might be other techniques more appropriate for causal smoothing that I am not aware of 🙂
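A minimal sketch of that suggestion (window length and polynomial order are arbitrary example values):

```python
import numpy as np
from scipy.signal import savgol_filter

signal = np.random.randn(1000).cumsum()  # stand-in for a pupil diameter trace

# Savitzky-Golay smoothing; mode="nearest" pads the edges with the closest samples
smoothed = savgol_filter(signal, window_length=51, polyorder=3, mode="nearest")
```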

user-98789c 13 February, 2023, 15:19:57

just for your information, @user-480f4c and anyone interested, I applied the scipy.signal.savgol_filter and the result is not different from applying the scipy.signal.butter filter that I was using..

user-98789c 13 February, 2023, 14:56:59

thanks a lot @user-480f4c I'm looking into it 🙂

user-b9005d 13 February, 2023, 18:51:12

Our lab has been using the Pupil Core headset on young infants, who have a smaller eye radius than a typical adult. We would like to compensate for that difference. Can you tell us how the radius is used in the 3d eye model so that we can scale our final eye movement data correctly to the infant eye size?

nmt 15 February, 2023, 07:34:57

Hi @user-b9005d. We have discussed this case internally and have a few notes:

Eyeball size is an important parameter for determining eyeball position (our ETRA 2019 paper provides details about how it enters the calculations). It is thus conceivable that adjusting it in pye3d (our 3d eye model) will result in more accurate estimates for children. However, we have never tested/verified this.

Conceptually, it should be possible to change the parameter and re-run pupil detection + the 3d model in a post-hoc context using the raw recordings containing eye videos etc. We don't think such changes could be made to the exported csv data, however.

Refraction correction has a fixed diameter built in. So, while you could change the appropriate parameter in pye3d in order to accommodate children, you could not use the refraction correction we provide.

Hope this helps!

user-89d824 14 February, 2023, 12:02:25

Hi Neil,

I sent the email last Monday. The title is, "Pupil detection problem and Pupil Capture closing itself mid-recording"

May I know if you've received it please? Thank you.

nmt 14 February, 2023, 14:52:00

Hey! Just followed up to your email 🙂

user-84cfa7 14 February, 2023, 14:59:28

Hi all Pupil Labs folks, I want to make my own eye-tracking headset from scratch, and I hope you can give me some advice and help. Can you tell me which Python libraries you use, how to lock onto the iris, how to use two cameras at the same time, and how to use the eye tracking results (in the form of red dots) to mark the object shown by the other camera? Thank you.

nmt 14 February, 2023, 18:46:43

Hey @user-84cfa7 👋. Recommend checking out Pupil DIY in the first instance: https://docs.pupil-labs.com/core/diy/

user-02de1f 14 February, 2023, 19:39:47

Hello PupilLabs I would like to pre-order the "Is this thing on?" Is there a way to do this? When I add to cart I only get the "just act natural".

user-02de1f 14 February, 2023, 22:06:22

Is there anybody out there?

nmt 15 February, 2023, 07:22:15

Hi @user-02de1f 👋. Note that most of the Pupil Labs team here on Discord are based in Europe, so timezones don't always overlap 🙂 Re. 'Is this thing on?' - we aim to start shipping that near the end of Q2 this year, but we don't have a concrete date.

user-f51d8a 15 February, 2023, 03:11:14

Hello, I wanted to ask about the 30-minute video call onboarding workshop that is advertised on your website. May I know how we can arrange this workshop?

nmt 15 February, 2023, 07:23:18

Please reach out to [email removed] in this regard

nmt 15 February, 2023, 07:22:48

I noted you've also emailed us. A member of the sales team will respond there as well!

user-9b3299 15 February, 2023, 19:02:40

Is there a way to batch export a folder of recordings using a shell script that interacts with Pupil Player?

nmt 16 February, 2023, 16:12:06

Hey! See this message for reference: https://discord.com/channels/285728493612957698/446977689690177536/839399327989891093

user-908b50 15 February, 2023, 19:48:00

Hi, I have been looking into some of the tutorials and it seems the one where the surface fixation values are plotted onto the surface image does it differently. There, you transform those values into pixel space. I did not; I just flipped the y axis and plotted the values as-is onto my reference image. Then, I counted the number of fixations in each area of interest on my image. These values are then averaged across all participants for each AOI and used in models.

user-908b50 15 February, 2023, 19:48:33

I am wondering what the purpose of the transformation is. Will I get incorrect surface mapping otherwise?

user-908b50 15 February, 2023, 19:50:36

So this is what I am getting (sans transformation).

Chat image

user-908b50 15 February, 2023, 19:54:24

Code

message.txt

user-cdb45b 15 February, 2023, 20:27:16

Hello @user-d407c1 ! Thanks again for your previous suggestions.

I am now working on calling Pupil Core commands in MATLAB via matlab-zmq, following the documentation at https://github.com/fagg/matlab-zmq, using MATLAB R2022b and MEX configured to use 'Microsoft Visual C++ 2019 (C)'. I tried other compiler options (MinGW64 Compiler (C)) and it didn't work either.

I believe there is a problem with my MEX configuration, but I haven't found documentation online for this specific error, which appears when I run the 'make' script. The config.m options that I tried are in the attached screenshots, along with my ZeroMQ 4.0.4/lib directory for reference. Any tips/work-arounds for resolving the MEX issue? Thank you!

Chat image Chat image Chat image Chat image

nmt 16 February, 2023, 16:13:59

Hi @user-cdb45b. We're unable to provide support with MATLAB-specific issues, I'm afraid. It would be worth reaching out to the authors of matlab-zmq

user-908b50 15 February, 2023, 22:35:47

How do I find the sampling rate of my eye tracker?

nmt 16 February, 2023, 16:15:58

You can look at the total number of samples compared to the duration of your recording in the exported files
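As a rough sketch of that calculation (assuming a standard Pupil Player export and pandas; the file path is an example):

```python
import pandas as pd

df = pd.read_csv("exports/000/gaze_positions.csv")

# Effective sampling rate = number of samples / recording duration (timestamps are in seconds)
duration = df["gaze_timestamp"].iloc[-1] - df["gaze_timestamp"].iloc[0]
rate = len(df) / duration
print(f"~{rate:.1f} Hz on average over {duration:.1f} s")
```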

user-908b50 15 February, 2023, 23:31:28

I'd like to add that I am getting different fixation counts in each AOI in my image when exporting data using the 2.5.0 Player versus the newer 3.5.1. So, I would need to export all of the data with the same Player version.

nmt 16 February, 2023, 16:16:58

It's worth double-checking that the fixation detector thresholds are set the same in both versions

user-908b50 16 February, 2023, 20:48:57

It's the same

nmt 18 February, 2023, 12:41:18

Fixations

user-a11557 18 February, 2023, 22:43:05

Hi everyone. I want to know how I can see the data that Pupil Labs recordings give me regarding fixations and other measures, since I do not have a program that allows me to view files in ".npy" or ".pldata" format. Thank you

nmt 20 February, 2023, 08:54:42

Hi @user-a11557. You can load the recordings in Pupil Player: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-player

user-0b9182 19 February, 2023, 04:00:40

Hi. For gaze_positions.csv, which is stored through Pupil Capture, I want to know at how many Hz the data is stored.

For example, how many samples are stored per second.

I wonder if there is a way to reduce the sampling rate according to the user environment.

nmt 20 February, 2023, 09:55:02

Please see this message for reference. Note that if you want to have a specific amount of data for a given environment, I'd recommend downsampling after the recordings have been made.
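A minimal post-hoc downsampling sketch (assuming an exported gaze_positions.csv and pandas; the 50 Hz target, i.e. 20 ms bins, is an arbitrary example):

```python
import pandas as pd

df = pd.read_csv("exports/000/gaze_positions.csv")

# Use the Pupil timestamps (seconds) as a datetime index, then keep one sample per 20 ms bin
df.index = pd.to_datetime(df["gaze_timestamp"], unit="s")
downsampled = df.resample("20ms").first().dropna(how="all")
print(len(df), "->", len(downsampled), "samples")
```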

user-348329 20 February, 2023, 07:03:01

Hi, I'm using a Pupil Core. There was no problem when using it indoors without turning off the lights, but when using it outside, the gaze is not detected properly. This seems to be because the eye camera is an IR camera, so it cannot properly detect the pupil due to sunlight. I wonder if the reason I mentioned is correct. Also, I wonder if there is any other way to use the Pupil Core outside (with good or better gaze detection performance) than using it after sunset.

nmt 20 February, 2023, 09:57:21

The eye images can be washed out by sunlight. It would be worth reducing the eye camera image exposure time for bright environments

user-51691f 20 February, 2023, 13:31:20

Hi! I've recorded data from two gaze experiments in one sitting via PupilCore glasses. My plan was to split the gaze recordings into two separate files via the PupilPlayer, then apply some optimized post-processing per experiment (fixation detection etc.). However, I cannot load the exported (i.e.split) files back into PupilPlayer, which seems to imply that I first have to apply all possible post-processing before exporting/splitting files. Do you have any suggestions for a workaround? Thanks in advance!

nmt 20 February, 2023, 14:25:08

You can choose to export specific sections of your analyses for post processing. Just drag the ends of the timeline in Player (see screenshot) before export. Splitting the recordings like you suggest is not a recommended workflow.

Chat image

user-660f48 21 February, 2023, 08:47:35

Hi Pupil Labs, I am trying to understand the coordinate system used for the exported 2d data. I have two basic questions: 1. Is the coordinate system calculated from the calibrated area? 2. What is the coordinate system for the exported 2d data? I read it is (0,0) in the bottom left corner and (1,1) in the top right corner. I am confused about this because the world camera is 1280x720 pixels (so not a 1:1 ratio). Can you please help with these basic questions? Thanks in advance

nmt 21 February, 2023, 10:07:01

Hey @user-660f48 👋. 2d gaze data are relative to the scene camera image, not the calibrated area. You can read more about that here: https://docs.pupil-labs.com/core/terminology/#coordinate-system. Note that a square aspect ratio is not a prerequisite for normalized coordinates.

user-2ce8eb 21 February, 2023, 14:46:58

Hi, can we use the Pupil Core to achieve the function of the picture we show?

Chat image

user-d407c1 21 February, 2023, 14:50:50

You could potentially do something similar with Core, but you will need to use AprilTags and the Surface Tracker. https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker

The reference image mapper is only available for Invisible.

user-3578ef 21 February, 2023, 23:49:09

Does anybody have a solution for using PySerial in a plugin with the 3.5.1 binary release under Windows 10?

nmt 22 February, 2023, 08:47:46

Hi @user-3578ef. Welcome to our community! Would you be able to share more details about what exactly you're trying to achieve with PySerial + Pupil Capture?

user-1a6a43 22 February, 2023, 08:33:55

Are there any Docker image files that work for running an image in Docker on an M1 Mac? I've been trying to make one, but it fails due to issues like installing cysignals.

user-ffe6c5 22 February, 2023, 08:51:37

Hi, I was wondering how the confidence in the gaze_positions.csv file is calculated / what it stands for. I have tried to compare it with the pupil_positions.csv file but can't really see the connection. Is it the mean of the left and right eye confidence? I know the meaning of the confidence of one eye and how it's calculated, but I'm not sure concerning the gaze data.

user-c2d375 22 February, 2023, 10:59:46

Hi @user-ffe6c5 👋 Gaze confidence is derived from pupil confidence. This is achieved by calculating the mean of the confidence values of the corresponding pupil data that is used to compute the gaze datum.
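A trivial illustration of that relationship (the values are hypothetical):

```python
# Confidence of the two matched pupil data (eye0, eye1) used for one gaze datum
pupil_confidences = [0.91, 0.78]
gaze_confidence = sum(pupil_confidences) / len(pupil_confidences)  # 0.845
```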

user-ffe6c5 22 February, 2023, 13:47:23

thank you @user-c2d375 ! Can you explain briefly, how the corresponding pupil data for a gaze datum are chosen? I noticed that most pupil_timestamps can be found as gaze_timestamps but some are missing in the gaze_positions.csv file and some gaze_timestamps are new (compared to the pupil_positions.csv file)

user-c2d375 22 February, 2023, 14:25:30

Please take a look at our documentation for an overview about pupil data matching https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching

user-ffe6c5 22 February, 2023, 14:29:40

@user-c2d375 right, i totally forgot about that. Thank you very much!

user-5ba46b 22 February, 2023, 17:56:59

Hi there! I have one of these homemade eye trackers in my lab (https://hackaday.com/2013/02/12/build-an-eye-tracking-headset-for-90/) and I am trying to figure out if it still works. I plugged it into the computer and tried to run the Pupil Core software on it, but it did not work. My question is whether there is other software to run this and, if so, where I can find it?

nmt 23 February, 2023, 14:55:22

Hi @user-5ba46b 👋. How long has that been lying around 😆? Always fun to see old projects. First thing to check is whether the cameras are detected by your computer's OS

user-07a971 22 February, 2023, 18:07:30

I have a Pupil Core headset and the world camera was taken apart, and now we can't figure out how to put it back together. Is there any documentation that you can provide to help put this camera back together?

nmt 23 February, 2023, 13:07:27

We don't have documentation in this regard as it is not designed to be dismantled 😕.

user-caef50 22 February, 2023, 21:47:12

Hello. I have three Pupil Core eye trackers. One of them works well. One of them displays the world camera view through Pupil Cam 1 ID2, but the eye cameras can only display Pupil Cam 1 ID2 (not Pupil Cam 2 ID0). I tried uninstalling all the devices, restarting the computer, and restoring default settings in Pupil Capture, and it still does not show Pupil Cam 2 ID0 (but it does show up as a hidden driver). Is there anything I can do about this? The third eye tracker throws a "USB Device not recognized" error every time I plug it in (even though I use the same cables as for the other two eye trackers). Are there any steps I can take to remedy at least one of the other eye trackers quickly?

user-78b213 24 February, 2023, 20:46:07

Hi @nmt , I'm currently working on a project using the pupil core eye tracker and just wanted to ask you about the sampling rate. I noticed a few people have asked about this before, but I am seeing some discrepancies in the sampling frequency for the data I am collecting. Are these discrepancies due to the pupil player processing or pupil capture settings I have?

nmt 25 February, 2023, 09:01:10

Note that Core's sampling rate can be variable depending on things like cpu load and user-set camera resolution. If you can elaborate a bit on what you mean by discrepancies I'll be able to offer something more concrete

user-0b9182 27 February, 2023, 09:49:30

The events["surfaces] code, which used to work well, has been used in ROS. Is there an internal problem?

this code here:

surface_name="" surfaces=events["surfaces"] print(surfaces) for surface in surfaces: for gaze in surface['gaze_on_surfaces']: gaze_on_surf=gaze['on_surf'] gaze_time_and_surface_name=gaze['timestamp'],surface['name'] gaze_time_and_surface_name=list(gaze_time_and_surface_name) c.append(gaze_time_and_surface_name)

nmt 27 February, 2023, 12:43:09

Hi @user-0b9182! I'm not sure I fully understand your question here. Could you elaborate a bit more?

user-9f7f1b 27 February, 2023, 15:35:40

Hi, team! I have two questions about calibration. 1. In the calibration process, you only filter out pupils with low confidence, but some noise with high confidence remains. 2. When I did a 2d 5-point calibration (no world camera, only a monitor), I could of course get the correct predicted position for the original 5 points in the verification phase, but the 4 points used only for verification did not perform well; 9 points gives a better result.

nmt 27 February, 2023, 16:41:21

Hi @user-9f7f1b 👋. Would you be able to explain your setup in more detail, i.e. how did you calibrate without a scene camera?

user-eeecc7 27 February, 2023, 20:08:10

Hi @nmt , I have a question about the pupil.pldata file. If I generate one myself using a different pupil detection method, do I need to flip the location of the right pupil to compensate for the right video being flipped? If not, is this flip applied when I export the data from the player?

user-d407c1 28 February, 2023, 08:11:17

Hi @user-0b9182 what version of Pupil Capture were you using? I mention this because at some point Capture was updated to use msgpack 1.0, which introduced some breaking changes for old plugins. Let me follow up with the release notes and the relevant changes.

In Capture 3.0 (https://github.com/pupil-labs/pupil/releases/tag/v3.0), the network API changed to be compatible with the msgpack-python 1.0 standards and their "strict_map_key" policy. Every release after that uses strings as dictionary keys on all msgpack-encoded data published via the Network API. This change mainly affects binocular 3D data, which was previously referred to by using integers.
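A hedged illustration of how that key-type change affects plugin code (the "eye_centers_3d" field is used as an example from the binocular 3D gaze datum; adjust to whichever fields your code accesses):

```python
# Before Capture 3.0 (msgpack < 1.0): integer eye-id keys
# right_eye_center = gaze_datum["eye_centers_3d"][0]

# After Capture 3.0: dictionary keys are strings
right_eye_center = gaze_datum["eye_centers_3d"]["0"]
```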

user-d407c1 28 February, 2023, 09:53:59

Hi @user-2ce8eb! Please avoid hijacking other people's answers. We always try to respond to everyone as soon as possible, so there's no need to mention us or reply to others' questions.

Firstly, I want to remind you that Pupil Mobile has been deprecated; intrinsics might not get properly corrected when streaming to Capture, so you might see worse accuracy. On accuracy, keep in mind that the accuracy of your results will ultimately depend on your experimental conditions:

  • If you're using the 3D calibration feature, please remember to roll your eyes beforehand to capture a good 3D eye model. This will help ensure the most accurate results possible.

  • For the 2D calibration, it's best to use a controlled environment where there is minimal head movement.

In both cases, please ask your subjects to move their eyes and try to keep their head still during calibration.

Bonus: You can start the recording before calibration and re-run the calibration in Pupil Player to see which calibration method works best for your specific case.

user-2ce8eb 28 February, 2023, 10:54:02

Oh well. I will not mention you or reply to others' questions. What I wanted to say is that I already knew the app called Pupil Mobile has been deprecated, and I just want to run an experiment which involves having the tester use a mobile phone and using the Pupil Core to see where the tester's gaze is on the screen and how the eyes move. The phone's screen appears small in the world video. Right now, I cannot get a good calibration!

End of February archive