๐Ÿ‘ core


user-cdcab0 01 February, 2024, 05:10:52

This is a pretty unique configuration that I don't think many people, if any, have tried. Perhaps it's as simple as a missing system dependency? Can you try sudo apt-get install libglfw3 ?

user-c075ce 01 February, 2024, 08:51:03

It's already installed too

user-c075ce 01 February, 2024, 08:53:25

(Sorry, printscreen doesn't work)

Chat image

user-cdcab0 01 February, 2024, 09:20:53

Hmm. You said this is in a VM, right? Perhaps you need to install some guest drivers?

user-c075ce 01 February, 2024, 09:28:20

Yes, it is. Which ones would I need to install?

user-cdcab0 01 February, 2024, 09:29:25

What are you using to run your VM? VirtIO? VirtualBox? etc

user-c075ce 01 February, 2024, 09:34:09

I am using the UBports PDK, which is preconfigured with QEMU

user-cdcab0 01 February, 2024, 09:40:49

Looking that up, it appears to be preconfigured with OpenGL support, although they do note some known issues with NVIDIA GPUs if that applies to you.

Are you able to run any other GLFW apps? Is there an Ubuntu Touch or ubports community?

user-c075ce 01 February, 2024, 09:47:14

I get the same error when I try to use the glfw library to create a window. Yes, there is a Telegram community for UBports.

user-c075ce 01 February, 2024, 09:49:58

They suggested making the pupil app a native UT app, or packaging it as a classic or strict Snap.

user-cdcab0 01 February, 2024, 09:55:08

That first option sounds like it would be a pretty massive undertaking, and I don't know enough about snaps to know how that would actually change anything

user-c075ce 01 February, 2024, 09:56:32

I was thinking the same..

user-338a8c 01 February, 2024, 13:31:33

Hello, my world cam has started displaying upside down. Does anyone know how to resolve this?

user-a09f5d 01 February, 2024, 18:17:02

This is the player.log file from just a moment ago. I opened a recording in Player, then quit the Player, which caused it to crash.

message.txt

user-cdcab0 02 February, 2024, 00:19:01

It looks like you may be hitting the limit for file path lengths (this is an OS / filesystem limitation). Can you try moving, renaming, or otherwise restructuring your folder organization such that the file paths will be shorter?
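
If it helps to check, here is a minimal sketch (the recording folder path is hypothetical, and 260 characters is the classic Windows limit; other OSes and filesystems differ) that lists files whose full paths are too long:

import os

recording_dir = r"C:\Users\me\recordings\2024_02_01"  # hypothetical; point this at your own folder
MAX_PATH = 260  # classic Windows path-length limit

for root, _dirs, files in os.walk(recording_dir):
    for name in files:
        full_path = os.path.join(root, name)
        if len(full_path) >= MAX_PATH:
            print(len(full_path), full_path)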

user-a09f5d 01 February, 2024, 19:03:53

In addition to the problem I am having with Player crashing that I messaged about above, I have now encountered a different issue when using Player on a different computer. I open Player, switch to 'post-hoc Pupil Detection', and both eye videos load up and the process appears to run as normal, except the message "There is no pupil data to be mapped!" appears in the world view console in red. When the detection is complete both videos disappear as normal; however, in the plugin within the console eye0 still says detecting and only eye1 says completed. If I try to run the redetection again, only the recording for eye1 loads up this time and I still get the "There is no pupil data to be mapped!" message.

Finally, if I switch to 'Pupil Data From Recording' and then back to 'post-hoc Pupil Detection', I also get the following message on the console in yellow, "Aborting redundant eye process startup", in addition to the previously mentioned message in red.

Oddly, the player.log file is too large for Discord to allow me to attach it; however, 99.999% of the log file says the same thing on repeat, so I have attached a copy of the first part of the log file. The rest of it just repeats the lines "Create gaze mapper Default Gaze Mapper - [DEBUG] gaze_mapping.gazer_3d.gazer_headset: Prediction failed because left model is not fitted" and "Create gaze mapper Default Gaze Mapper - [DEBUG] gaze_mapping.gazer_3d.gazer_headset: Prediction failed because binocular model is not fitted" a LOT, until the end where the last bit of the log file reads "2024-02-01 13:40:18,100 - player - [DEBUG] glfw: (65544) b'Win32: Failed to open clipboard: Access is denied. " several times.

Chat image

user-a09f5d 01 February, 2024, 19:06:47

message.txt

user-a09f5d 02 February, 2024, 19:15:19

Thanks for your response. I tried renaming one of the folders to reduce the length of the file path and that seemed to have solved the issue of it crashing when I try to close player. Not sure about the other crashes yet but hopefully this has fixed those too. Thanks a lot!

Do you have any thoughts on the other issue I posted about above that I was having with a single recording on a different computer?

nmt 03 February, 2024, 01:23:17

Please try restarting Player with default settings. That option is in the app settings

user-f76a69 05 February, 2024, 10:28:56

Hi, is it possible to get the same output data without having to export it from the video file? Since we're looking to use a rather large data set it would be great not to have to save video for every participant.

user-d407c1 05 February, 2024, 12:20:08

Hi @user-f76a69 ! Just to be on the same page, you want to record data without recording the video, right? It's not that you already have the recordings and want to export data from them?

In that case you could use the Network API to access the data.

Nevertheless, we recommend recording everything so that you can also do post-hoc evaluations, and that you split your recordings into chunks. Check out our Best Practices.

user-f76a69 05 February, 2024, 14:33:39

Thanks for the quick response! Yes we would like to just have the data without the recording 🙂 I'm connected to the network api. This is probably a silly question but how do I access the data from there? I have so far only used it for pupil remote and had to then use pupil player to export gaze positions and dilations from the recording.

user-c075ce 05 February, 2024, 14:53:27

Hello, I'm using Pupil Mobile on Android to collect the recordings. When I do post-hoc pupil detection in Pupil Player afterwards, it shows on the screen: "Eye0: No camera intrinsics available for camera eye0 at resolution (192, 192)! Loading dummy" (and the same for Eye1), and also in red: "PLAYER: There is no pupil data to be mapped!" The output video and data from the export seem to be okay so far, but does it affect anything?

user-d407c1 05 February, 2024, 15:23:32

Hi @user-f76a69 ! You will need to subscribe to the specific topics you are interested in. Here you have an example of subscribing to gaze 3d.
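
For reference, a minimal sketch of subscribing to gaze over the Network API (this assumes Pupil Capture is running locally with Pupil Remote on its default port 50020; adjust the address for your setup):

import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the port of the IPC SUB socket
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()

# Subscribe to 3d gaze data
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://127.0.0.1:{sub_port}')
subscriber.setsockopt_string(zmq.SUBSCRIBE, 'gaze.3d.')

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.unpackb(payload)
    print(topic, gaze['norm_pos'], gaze['confidence'])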

user-d407c1 05 February, 2024, 15:26:57

Hi @user-c075ce ! Pupil Mobile is deprecated; we recommend using a laptop/SBC unit to make Pupil Core portable. That said, if you already have recordings, the software still supports them, as you can see.

The camera intrinsics are not corrected when using Pupil Mobile. Unless you use non-standard cameras, this should not affect your data much, and if you obtain good accuracy I would not worry about it.

user-c075ce 05 February, 2024, 15:31:38

Thank you for your answer. Also, the videos from the eye cameras are 7-8 seconds longer than the world camera video; I guess that is again due to Pupil Mobile. It shouldn't affect the exported data either, should it?

user-d407c1 05 February, 2024, 15:35:54

There are some quirks with Pupil Mobile. Not sure about the eye video's length, but it should not be an issue. I guess it loads fine in Pupil Player, is that right?

user-c075ce 05 February, 2024, 15:43:24

Yes, it does. But overall, the world video obtained from Pupil Capture seems to have better quality than the one obtained from Pupil Mobile, although they have the same resolution.

user-d407c1 05 February, 2024, 15:45:47

The processing on the phone might be a limiting factor; as mentioned, the current recommended workflow for Pupil Core in more dynamic environments is to use it with a laptop or an SBC unit.

user-181b57 05 February, 2024, 17:46:35

I am currently using Pupil Core, and I want to create a plot of the gaze points for a specified fixation ID. Is there a good way to do this?

user-d407c1 05 February, 2024, 17:50:13

Hi @user-181b57 ! Could you elaborate on which kind of plot you are looking for? By the way, have you seen these tutorials?

user-181b57 05 February, 2024, 17:57:20

"I want to plot the position of the gaze point on the surface onto a photo of the same size

user-d407c1 05 February, 2024, 18:00:49

Then I would recommend you take a look at https://github.com/pupil-labs/pupil-tutorials/blob/master/03_visualize_scan_path_on_surface.ipynb

user-181b57 05 February, 2024, 18:09:39

Thank you! I'll try it!

user-2bf7f8 05 February, 2024, 18:06:49

Hi Miguel, this is Enze from Pace University. If my coworker didn't put any AprilTag markers on her screen when recording with Pupil Capture, but we still want a heatmap of the surface of her laptop (an app UI) post-hoc with Pupil Player, is that possible?

user-d407c1 06 February, 2024, 12:03:14

Hi @user-2bf7f8 I saw you also asked by email; we have answered you there. Unfortunately, we don't offer a markerless solution to detect surfaces in Player. I would recommend looking at feature-matching algorithms online.

user-2bf7f8 05 February, 2024, 18:07:52

@user-d407c1

user-2bf7f8 06 February, 2024, 14:33:43

tks

user-c07f5f 06 February, 2024, 17:38:16

Hi, I need help with a project using Pupil Invisible. I would like to know the uncertainty with which the positions of the AprilTags are found. Thanks!

user-d407c1 06 February, 2024, 20:56:34

Hi @user-c07f5f ! There are no uncertainty metrics for the location of the AprilTag markers.

If you use Cloud, what you would see is a grey/purple bar on the timeline: purple means the surface was located and grey means it was not found, in a sort of boolean fashion.

If you want to see how the surfaces are located relative to the scene camera, you can do something like this. Note that this tutorial was made for Core; if you want to follow it step by step, you may want to use the Surface Tracker in Pupil Player rather than the Marker Mapper.

user-904376 06 February, 2024, 21:36:16

Hi! I am running into an issue where, after performing post-hoc gaze calibration, I am not getting any fixations within the calibrated timeframe. I was reading through older messages and saw that in order for the fixation detector to work, post-hoc calibration must be "disabled." Can I have more clarity on this please?

user-c07f5f 07 February, 2024, 07:50:11

Thanks

user-d407c1 07 February, 2024, 08:06:24

Hi @user-904376 ! Do you have the fixation detector enabled in Pupil Player?

user-904376 07 February, 2024, 13:48:58

Hi! Yes, I have the fixation detector enabled. It gathers fixations with the original gaze data, but after I do post-hoc calibration the detector fails to capture any fixations.

user-c075ce 07 February, 2024, 10:27:29

Hello, I'm using Pupil Mobile to collect the recordings (I know it's not supported anymore), and process them in Pupil Player. When loading the data, it says "No camera intrinsics available, Loading dummy" in Pupil Player. By loading the dummy, does it load the real camera intrinsics? (Since I guess the default intrinsics are known for Core glasses?) Or can I obtain the camera intrinsics by running the Capture app and then load these intrinsics into Pupil Player when processing recordings from the mobile app?

user-d407c1 07 February, 2024, 10:53:55

Hi @user-c075ce ! The default intrinsics are defined here; Pupil Mobile does not define any, so the dummy is used.

If you have another recording made on the computer, you can find the .intrinsics files and copy them to your recording (assuming the same parameters are used as in Pupil Mobile).
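
For example, a small sketch of that copy step (the folder paths are hypothetical):

import glob
import os
import shutil

capture_rec = "/path/to/capture_recording"  # recording made with Pupil Capture on the computer
mobile_rec = "/path/to/mobile_recording"    # recording made with Pupil Mobile

for intrinsics_file in glob.glob(os.path.join(capture_rec, "*.intrinsics")):
    shutil.copy(intrinsics_file, mobile_rec)
    print("copied", os.path.basename(intrinsics_file))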

user-c075ce 07 February, 2024, 11:01:35

Thank you for your answer. Also, "dummy instrinsics" are just random intrinsics? And by "Assuming same parameters are used" you mean using the same glasses?

user-d407c1 07 February, 2024, 11:20:53

The dummy camera defined here is not random, but rather assumes no intrinsic distortion. And yes! The same glasses and resolution configuration.

user-c075ce 07 February, 2024, 11:22:00

Ok, thank you very much.

nmt 07 February, 2024, 23:18:22

Hey @user-904376! Can you just confirm that both the post-hoc calibration and gaze mapping were successful, and that you have high-confidence gaze data? Without it, the fixation detector won't find any fixations. Feel free to share a screen capture showing the recording after you've run post-hoc calibration/gaze mapping!

user-904376 08 February, 2024, 19:39:57

Hi! Yes, both the post-hoc calibration and gaze mapping statuses are "successful." Where can I see the confidence in the gaze data? The validation accuracy and precision results are 2 and 0.1 degrees, respectively.

user-2c86b2 08 February, 2024, 23:10:39

Hello. I have a question regarding the recording formats of the Pupil Core. I'm using the LSL plugin to stream Pupil Core data from different computers to a single CSV file. The LSL plugin documentation says that the values passed in the LSL stream are a flattened version of the original Pupil gaze data stream. Where can I find information on the original Pupil gaze data stream so that I can label the flattened Pupil Core data in my CSV file?

Thanks

user-cdcab0 09 February, 2024, 00:34:45

Hi, @user-2c86b2 - gaze data is streamed to LSL using the format described here: https://github.com/sccn/xdf/wiki/Gaze-Meta-Data

user-2c86b2 09 February, 2024, 00:40:24

@user-cdcab0 Thank you

user-0b5cc5 09 February, 2024, 01:04:17

Hi, I'm working with some data acquired by the Pupil Core, which I process through the Pupil Capture software. However, I would like to use some functions, like fixation detection, directly in Python. Is there any easy way to achieve this?

user-5d2994 09 February, 2024, 01:06:05

Hello, team. The videos I shot have not uploaded to Cloud, even though I set them to be uploaded automatically and privately. Do you know how to fix this?

nmt 09 February, 2024, 01:49:04

Hi @user-5d2994! Double-check that your phone has internet connection and that Cloud uploads are enabled, then please try logging out and back into the Companion App. Does that trigger the uploads process?

nmt 09 February, 2024, 01:55:03

Hey @user-0b5cc5 👋. We don't have a Python script that would compute fixations from Pupil Core recordings. However, there is a community-contributed export tool that can extract some of the raw data programmatically. That might be of interest!

user-0b5cc5 09 February, 2024, 02:16:14

thanks for your reply, I will take a look.

user-5d2994 09 February, 2024, 02:06:14

Thank you, Neil, the upload has started.

user-b407ae 09 February, 2024, 18:04:44

Hello team. I have a question regarding the data processing. I've been using Pupil Core to collect pupillometry data and am currently in the preprocessing stage. The two eye cameras return pupil diameter values along with two independent timestamps for the right and left eyes, respectively. In my long-format dataset, I have one column for the right pupil values and another for the left pupil values. Due to the non-synchronization of timestamps, often a single row (corresponding to a specific timestamp) has values for one eye but not the other. I'm conducting the preprocessing in R, using the pupillometryR package. Unfortunately, all these NA values are interpreted as missing data due to poor data quality, rather than the lack of data synchronization. I'm reaching out to ask if there's a method to address this issue, or if you know of any R packages for pupillometry data processing that take this aspect into account. Any advice or guidance would be greatly appreciated. Thank you in advance!

user-3cff0d 09 February, 2024, 18:24:32

I have a hardware-related question: When the Core's eye cameras are operating at 192x192 px, I've noticed that the contrast of the images is visibly higher than when the eye cameras are operating at 400x400 px. Is this a known phenomenon? And is there any insight to be had as to why this might be?

nmt 09 February, 2024, 23:38:21

I'm afraid I don't have any insights into why this is. However, what I can say is that the 2D detector is optimised for the lower resolution. So, I'd generally recommend the lower resolution, especially as it runs at a higher sampling rate, if that's important!

nmt 09 February, 2024, 23:29:12

Hi @user-b407ae ๐Ÿ‘‹. Pupil Core's eye cameras are free-running, which means they aren't in sync and the timestamps might not match exactly for each pupil datum. This is expected. How you ultimately work with this data really depends on your research. I'm not familiar with the pupillometryR package. However, typical approaches to the pre-processing of pupillometry data include things like interpolation of missing samples.

You might want to check out this third-party Python tool for evaluating pupil size changes (which should be compatible with Core recordings), as it already has some steps implemented: https://pyplr.github.io/cvd_pupillometry/index.html

This paper has also proven useful in the past: 'Preprocessing pupil size data: Guidelines and code'. Behaviour Research Methods, 51(3), 1336โ€“1342. https://doi.org/10.3758/s13428-018-1075-y

Lastly, I'd highly recommend reading our best practices for pupillometry with Core: 'Best practices for doing pupillometry with Pupil Core': https://docs.pupil-labs.com/core/best-practices/#pupillometry

I hope these resources help!
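
As an illustration only (not part of pupillometryR), one way to pair left- and right-eye samples by nearest timestamp in Python, so that unmatched rows are not mistaken for missing data, assuming a standard pupil_positions.csv export:

import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")  # example path

# By convention, eye0 is typically the right eye and eye1 the left eye
right = df[df.eye_id == 0][["pupil_timestamp", "diameter_3d"]].rename(columns={"diameter_3d": "right"})
left = df[df.eye_id == 1][["pupil_timestamp", "diameter_3d"]].rename(columns={"diameter_3d": "left"})

right = right.sort_values("pupil_timestamp")
left = left.sort_values("pupil_timestamp")

# Match each right-eye sample to the nearest left-eye sample, at most 10 ms apart
merged = pd.merge_asof(right, left, on="pupil_timestamp",
                       direction="nearest", tolerance=0.01)
print(merged.head())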

user-b407ae 10 February, 2024, 16:41:48

Many thanks, @nmt I will follow the advice you've provided!

user-870276 10 February, 2024, 19:10:09

Hey Pupil Labs, I have a couple of recordings that are properly segmented for each task in an activity. However, one of the recordings covers the entire activity recorded non-stop. When comparing the data across recordings, this full recording contributes more data, which makes the comparison unfair. So is there any way I can precisely trim the video and get a separate output value for each trimmed clip?

user-d407c1 11 February, 2024, 20:11:07

Hi @user-870276 ! Generally you can trim using the trim marks in Pupil Player; if you need something more precise, perhaps you can set some events and use their timestamps to crop your data.
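
For instance, a minimal sketch of cropping an export between two annotation timestamps (file paths and annotation labels are just examples; the columns follow the standard Player export):

import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
annotations = pd.read_csv("exports/000/annotations.csv")

start = annotations.loc[annotations.label == "task_1_start", "timestamp"].iloc[0]
end = annotations.loc[annotations.label == "task_1_end", "timestamp"].iloc[0]

task_1 = gaze[(gaze.gaze_timestamp >= start) & (gaze.gaze_timestamp <= end)]
print(len(task_1), "gaze samples in task 1")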

user-870276 11 February, 2024, 20:35:41

If I set trim marks in Pupil Player, will it crop the data in the CSV file when exporting, or not? Or should I do it manually?

user-cdcab0 12 February, 2024, 07:14:02

Yes, setting the trim marks will exclude data from the exports that are outside of the trim marks

user-4514c3 12 February, 2024, 09:36:11

Good morning! I have a question: how do I know I am doing a good calibration? Sometimes those orange marks appear and other times they don't, even though I calibrate the same way. What could be the cause? Thank you!

Chat image

nmt 13 February, 2024, 03:49:09

Those orange lines visualise what we refer to as residual training errors. They represent degrees of visual angle between the calibration target locations and the associated pupil positions. If they're not present, it means little pupil data, or limited gaze angles, were recorded during the calibration. So that we can provide concrete feedback, it would be helpful if you could record a calibration choreography and share it with us. Enable the eye overlay plugin first so we can see the pupil detection.

user-3515b1 12 February, 2024, 20:08:50

Hello, I have a question about how to read and interpret pldata files. I noticed on your website that file_methods.load_pldata_file() should be used to read the files, but when I run the method, the output is uninterpretable to me. Could someone explain how I can translate pldata files into a human-readable format?

nmt 13 February, 2024, 04:11:07

Hi @user-3515b1 👋. Just to provide a bit of context, these are binary files that store data in an intermediate format, not designed to be directly consumed by users. You can already access the data in a human-readable format by loading recordings into Pupil Player and exporting them into .csv files. Further instructions can be found in this section of the docs. That said, if you did want to access these files programmatically, have a look at this message: https://discord.com/channels/285728493612957698/285728493612957698/1097858677207212144
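
For anyone who does want to read these files programmatically, a rough sketch (it assumes you run it with pupil_src/shared_modules on your Python path so that file_methods is importable, and uses the gaze topic as an example):

from file_methods import load_pldata_file

# Loads gaze.pldata / gaze_timestamps.npy from the recording folder
pldata = load_pldata_file("/path/to/recording", "gaze")

for datum, ts, topic in zip(pldata.data, pldata.timestamps, pldata.topics):
    # each datum behaves like a read-only dict
    print(topic, ts, datum["norm_pos"], datum["confidence"])
    break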

user-338a8c 13 February, 2024, 12:26:15

Hello, my world cam has started displaying upside down. Does anyone know how to resolve this?

user-d407c1 13 February, 2024, 16:32:57

Hi @user-338a8c ! Can you click on general settings and "Restart with default settings"?

user-338a8c 13 February, 2024, 16:44:09

Thanks Miguel. That didn't work, unfortunately.

user-d407c1 13 February, 2024, 16:47:59

Could you share some details as to when that happened? What OS are you running Pupil Capture on, which version of Pupil Capture, etc.?

user-40931a 13 February, 2024, 22:46:23

Hi! I'm wondering whether it's possible to control the timing of the frame capture, for example with a sync pulse?

nmt 14 February, 2024, 02:32:46

Hi @user-40931a! Timing of frame captures with sync pulses is not possible I'm afraid. However, we do have several ways to synchronise Pupil Core with other systems. If you can share more details of your goal, we can try to point you in the right direction.

user-40931a 14 February, 2024, 02:36:43

@nmt We're trying to record from both the Pupil Core and an Optitrack motion capture system simultaneously. Since the Optitrack works by sending pulses of IR the two are currently in conflict, but if the Core could capture frames in sync with the Optitrack IR pulses then we thought they might be able to work together.

nmt 14 February, 2024, 02:42:46

What do you mean by the two are in conflict?

user-8bce45 15 February, 2024, 09:35:33

Hi,

Suddenly eye0 is not working in Pupil Capture anymore. I get the message:

eye0 - [WARNING] video_capture.uvc_backend: Camera disconnected. Reconnecting... eye0 - [INFO] video_capture.uvc_backend: Found device. Pupil Cam1 ID0. Estimated / selected altsetting bandwith : 200 / 256. !!!!Packets per transfer = 32 frameInterval = 83263

How can I fix this?

nmt 15 February, 2024, 11:31:01

Please double check that the cable connecting the camera is secure/fully connected, then follow these steps: https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting. Does that solve it?

user-8bce45 15 February, 2024, 11:41:34

Hi Neil, reinstalling the drivers worked. Thank you so much 🙂

user-40931a 15 February, 2024, 20:17:41

We're still setting up equipment so this is just theoretical, but the Optitrack system works by flashing IR lights at a given frequency. If the Core is capturing data out of sync with that flashing, the level of IR reflecting from the participant's eyes will differ from frame to frame for reasons unrelated to the participant's eye behavior. In your view, do you think this will present a problem in reliably tracking gaze with the Core, or do you anticipate that it will be fine?

nmt 16 February, 2024, 02:25:19

Thanks for explaining. I understand what you mean now. You should be fine. Core has been used alongside optical motion tracking systems many times in the past without issue and the extra IR illumination shouldn't influence our pupil detection pipeline. That said, always best to run a quick pilot with both systems prior to data collection!

user-2cc535 16 February, 2024, 12:18:42

Hello everybody. I have some questions about installing Pupil capture. We bought the glasses for our labs. I need some info about installing the software.

nmt 19 February, 2024, 02:55:17

Hi @user-2cc535! Have you already seen our documentation for installing Pupil Capture? Also, feel free to just go ahead and ask any questions you have!

user-03c5da 17 February, 2024, 00:29:09

hey everyone! I have a bunch of Pupil Core recordings, I'm wondering if there is a utility to export all of them at once, only the csv files (pupil_position...)?

nmt 17 February, 2024, 04:39:21

Hey @user-03c5da 👋. Check out this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/1184385440388759562

user-f93379 17 February, 2024, 12:34:32

Hello! We have a problem related to static: on one device it has knocked out 2 world cameras and one tracker USB hub. We have replaced the chip on the hub and it started working, but the cameras do not start. Can you promptly advise how to connect another camera (which is not in the list of recommended cameras)? We took a standard laptop camera.

Unfortunately, we have to say that the glasses are not protected from static electricity and can break for this reason. How do you see the solution to this problem?

Chat image

user-41f1bf 18 February, 2024, 21:19:54

@user-f93379 Hi, I opened an issue (see https://github.com/pupil-labs/pupil/issues) with some notes for zadig and libusbk (I am assuming you are running on windows)

user-41f1bf 18 February, 2024, 21:20:57

Please, confirm your OS (Linux, Mac, or Windows)

user-41f1bf 18 February, 2024, 21:26:53

Also, unfortunately, Pupil Labs is not strong/powerful enough to be generous and support third-party cameras. I think they are not going to help you.

user-41f1bf 18 February, 2024, 21:31:06

As an early adopter, I can say that they changed. They used to believe more in open source.

nmt 19 February, 2024, 03:22:11

Hi @user-41f1bf! Thanks for your feedback. As I mentioned in my previous message: https://discord.com/channels/285728493612957698/285728493612957698/1201719271563202610, we're focusing on ensuring stability with the Pupil Core software for now. However, we're always pleased to hear that users are tinkering with the software and experimenting with different hardware. That's why we choose to keep the pupil repository open-source, for the community and users to build and extend according to their own requirements. With that said, we would ask you to be mindful of how you communicate on this forum, as we want it to be a constructive space and for people to be excellent to each other, referencing the https://discord.com/channels/285728493612957698/983612525176311809 🙂

nmt 19 February, 2024, 02:52:17

Hey @user-f93379 👋. I'm sorry to hear about that! If the USB hub and other integrated components have been damaged by electrostatic discharge, I'd highly recommend taking some precautionary steps before handling any electronic components inside your lab when working with the device you mention.

Regarding the cameras, the Pupil Core backend is compatible with UVC compliant cameras that meet the criteria outlined in this message: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498. So, if your laptop camera meets these criteria, then in theory they should appear in the software.

If you don't manage to get things up and running, we would need to receive the hardware at our Berlin office for inspection or repair. You can reach out to [email removed] with the original order ID in this case.

user-f93379 20 February, 2024, 10:46:45

Neil, I would love to meet you in person at your head office, but we are in another country. The trackers are really good and flexible. I wish you good luck with your developments!

user-41f1bf 19 February, 2024, 03:35:23

Thank you for your feedback @nmt . May I kindly ask you why you think I am not being mindful or being excellent?

user-6d7fe6 19 February, 2024, 13:28:20

Hi, I am using Pupil Player on Windows to do post-hoc calibration. When I calculate a new 3D gaze mapper, the process is very slow (>10 minutes for a 13 minutes video), but in Task Manager it seems that the CPU is mostly idle (pupil player requires ~1-2% CPU) and neither RAM, storage or network are being used. Is it normal that the gaze mapping is so slow? Which step would be the limiting factor? Is there a way to speed this up?

user-492619 19 February, 2024, 20:06:11

Hello, I am conducting an experiment on a transparent screen and will have gaze data from two eye trackers from both sides of the screen. However, on one side, the entire experiment (and therefore the April tags) is mirrored. How can I flip the video/recording to define the surface and extract the gaze data?

user-0e5193 20 February, 2024, 05:00:08

Hello. I am trying to describe how far the gaze is from the center in terms of angle using the gaze_point_3d_x/y/z data from pupil core. For example, I calculated the angle between the values (0, 0, 1) and the collected values (-12, 2.9, 36.9) within the same timestamp to obtain an angle of 18.5 degrees. Can I explain that the subject's eyes deviated from the central line by this angle at that timestamp?

user-cdcab0 20 February, 2024, 07:18:52

That sounds like a very interesting setup (side note: I'd love to see a picture!). To accomplish what you need, I think you'd need to write a custom plugin that extends the built-in Surface Tracker plugin so that you can flip the image prior to running detection on the AprilTag markers

user-cdcab0 20 February, 2024, 09:15:43

@user-492619 - on a second look, it's a bit more involved than I originally indicated. A much easier approach would be to simply have two sets of markers displayed - half of them being flipped

user-f93379 20 February, 2024, 10:44:40

Thank you! The camera settings have been made and everything is working. Pupil is a great tool, but it's not perfect. There should be some kind of protection against static. Perhaps some additional shielding is needed, as the kit comes with a very long wire, which apparently picks up interference. I'm not very knowledgeable, but it would be nice if this point was mentioned somewhere in the documentation. Maybe I just haven't found it.

user-c075ce 20 February, 2024, 11:23:39

Hello, I have a question regarding the exported video. The recorded world video has varying fps, which can be seen in Player, but after exporting does it have a constant fps? If yes, how is the conversion done to make it constant? Does it just take the mean fps?

nmt 21 February, 2024, 04:07:26

Hi @user-c075ce! The exported video is the same as the one recorded live, just with the addition of whatever visualisations you added in Pupil Player

user-904376 20 February, 2024, 14:32:39

Hello! I was wondering if I could please have more specifics on the accuracy and precision given during validation of a post-hoc calibration? For example, what is the origin of the degree offset, and what is the significance of the sample rate next to the accuracy/precision values? Thanks.

nmt 21 February, 2024, 04:09:44

Hi @user-904376! I can recommend first reading the Core whitepaper. It covers in detail the specifics of the calibration/validation and how the accuracy and precision metrics are computed. You can download that from here: https://arxiv.org/abs/1405.0006. Look in the 'Performance Evaluation' section 🙂

user-ca5273 20 February, 2024, 22:33:26

Hello!

I am working on the fixations.csv data and I want to calculate gaze transitions or jumps. I would like to understand whether the norm_pos_ columns are the correct values to use or the gaze_point_3d_. Which of these maps to the virtual world and would give more accurate coordinates?

hope this question makes sense..

nmt 21 February, 2024, 04:15:30

Hi @user-ca5273! Just to clarify, are you interested in saccadic eye movements, i.e. rapid shifts of gaze between fixations? If so, what outcome metrics are you interested in? Are you simply wanting to count, for example, the number of saccades, or are you delving deeper into things like saccade amplitude and velocity? I ask because this would likely determine which datastreams and approach to data analysis you'd want to use.

user-ca5273 20 February, 2024, 22:35:57

I've been looking at this documentation.. but it's not quite clear yet...

Chat image

user-ca5273 20 February, 2024, 22:36:13

https://docs.pupil-labs.com/core/terminology/#gaze-positions

nmt 21 February, 2024, 04:05:16

Hi @user-0e5193! You can say that the gaze angle deviated from the optical axis of the scene camera in this case. I'm not sure what your goal is here? But be mindful that the scene camera can technically be orientated arbitrarily with respect to the wearer's head/eyes. You might find this overview of Core's coordinate systems useful when thinking about this topic: https://docs.pupil-labs.com/core/terminology/#coordinate-system
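
For reference, the angle described in the question can be computed like this (a small numpy sketch using the example values from above):

import numpy as np

optical_axis = np.array([0.0, 0.0, 1.0])        # scene camera's forward direction
gaze_point_3d = np.array([-12.0, 2.9, 36.9])    # gaze_point_3d_x/y/z from the export

cos_angle = np.dot(optical_axis, gaze_point_3d) / (
    np.linalg.norm(optical_axis) * np.linalg.norm(gaze_point_3d)
)
angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
print(round(angle_deg, 1))  # ~18.5 degrees, as in the question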

user-0e5193 21 February, 2024, 07:55:21

Thank you so much!

user-ca5273 21 February, 2024, 05:43:22

@nmt We are more interested in dwell time on each fixation and not really the rapid gaze shifts. With the gaze data, we want to calculate the total distance covered (using Euclidean distance). Hence knowing which x, y, or z to use is very important.

nmt 21 February, 2024, 06:40:01

🤔 In that case, I'm not sure I fully understand your goals. Fixations, in the classical sense, are characterised by periods where gaze does not shift. Can you elaborate on what you mean by total distance? As this would imply a shift of gaze.

nmt 21 February, 2024, 06:40:51

You also talk about calculating 'gaze transitions or jumps', which is further confusing me 😅

user-ca5273 21 February, 2024, 16:03:43

@nmt We have this file of fixations. The data is from a user moving through a VR world and interacting with molecules. Each row is the fixation they were looking at. Take the first two rows, for instance; they capture two fixations that the user looked at. You can think of it as their gaze moving from row ID 6 to row ID 7. That gaze move is what we're considering a gaze transition. A gaze distance would be the distance (using the x, y coordinates) between these points. What we are not sure about is which coordinates give a true mapping to the coordinates of the VR world. Is it the norm_pos_x and norm_pos_y coordinates or the gaze_point_3d_x, gaze_point_3d_y, and gaze_point_3d_z? This is what we would like clarification on.

Chat image
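
Purely as an illustration of the two candidate coordinate systems (which one is appropriate depends on the follow-up questions below), consecutive-fixation distances could be computed from a standard fixations.csv export like this:

import numpy as np
import pandas as pd

fix = pd.read_csv("exports/000/fixations.csv").sort_values("start_timestamp")  # example path

# Distance between consecutive fixations in normalised scene-image coordinates...
d_norm = np.hypot(fix.norm_pos_x.diff(), fix.norm_pos_y.diff())

# ...and in the 3d scene-camera coordinate system (millimetres)
d_3d = np.sqrt(fix.gaze_point_3d_x.diff() ** 2
               + fix.gaze_point_3d_y.diff() ** 2
               + fix.gaze_point_3d_z.diff() ** 2)

print(d_norm.describe())
print(d_3d.describe())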

nmt 22 February, 2024, 04:51:08

Thanks for elaborating. I'm getting closer to understanding your aim. However, I do have some follow-up questions. Are you looking to compute this 'gaze transition' in units of degrees of visual angle? Or something else, like the Euclidean distance between two fixation locations (perhaps molecules) in your VR space?

user-a81678 21 February, 2024, 16:31:01

Hi, we have Pupil Core. We have created Heat maps with Pupil Core using the Surface Tracker plugin.

user-a81678 21 February, 2024, 16:33:09

This is from one recording (user). However, we would like to create an aggregate heatmap from a group of recordings (users) watching a shelf with products, for example. Could you tell us how to do it as soon as possible, please?

user-a81678 21 February, 2024, 16:36:42

Of course, we have used Pupil Player. If we can use another tool or the Cloud, don't hesitate to tell me about it, please!

user-a81678 21 February, 2024, 16:56:57

And also, we would like gaze metrics on Areas of Interest, like it is possible in Pupil Cloud with Pupil Invisible,

user-a81678 21 February, 2024, 16:58:17

from the group of recordings (users).

nmt 22 February, 2024, 04:29:42

Hi @user-a81678! Pupil Core software does enable the generation of heatmaps for single recordings. However, it doesn't have aggregate capabilities, meaning it doesn't generate heatmaps from multiple recordings like you can do in Cloud. Unfortunately, Core recordings are not compatible with Cloud.

If you're comfortable trying out some coding, we expose all of the raw data necessary to build custom visualisations yourself. We also have a tutorial showing how to generate heatmaps from surface-mapped data. It would be feasible to modify this to build a heatmap from multiple recordings.

As for gaze metrics, these are not generated in Pupil Player. However, the exports in .csv format would enable you to compute your own metrics of interest. For some examples of how we compute metrics, you might want to look at this Alpha Lab article.
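
As a rough sketch of the multi-recording idea (the glob pattern and surface name are hypothetical, and the histogram approach mirrors the surface heatmap tutorial):

import glob
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# One gaze_positions_on_surface_<name>.csv per recording export
files = glob.glob("recordings/*/exports/000/surfaces/gaze_positions_on_surface_Shelf.csv")

gaze = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
gaze = gaze[gaze.on_surf == True]  # keep only gaze that landed on the surface

# Pool all recordings into one 2D histogram over normalised surface coordinates
heatmap, _, _ = np.histogram2d(gaze.y_norm, gaze.x_norm,
                               bins=(40, 60), range=[[0, 1], [0, 1]])

plt.imshow(heatmap, origin="lower", extent=[0, 1, 0, 1], cmap="jet")
plt.savefig("aggregate_heatmap.png")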

user-9e2a60 21 February, 2024, 22:48:14

Hello, I have a question regarding my Pupil Core glasses. I have been using them for a while, and today when I started the Pupil Capture exe I was only able to view one eye camera. Is there any reason why the other eye camera is not showing up? How can I check whether the camera is damaged? I also noticed that, while using the software, that same left side starts to heat up a bit.

nmt 22 February, 2024, 04:21:18

Hi @user-9e2a60 👋. Double-check that the cable attaching the camera is securely connected (sometimes it can come loose) and then restart Pupil Capture with default settings. Let me know if the eye video shows after that.

user-40931a 22 February, 2024, 00:51:06

Follow up: we finally got the Core and the Optitrack powered up together and there's definitely an issue. The flickering is visible on the Core's cameras and it isn't possible to even calibrate to the eyes. Our lab has had this problem before with a different eye tracker brand in the past and was able to solve it by syncing frame capture to the optical tracking IR pulses via a TTL pulse from the Optitrack's Sync2 box. If that's not possible, do you have any other suggestions on how we may be able to sync the two?

nmt 22 February, 2024, 04:19:33

Thanks for sharing that! The IR strobe is certainly bright. I don't think it's optimal to have the OptiTrack camera pointing directly at the Core headset, but I suppose that's a constraint of the experiment. So, the strobe is definitely causing the eye image to be overexposed. However, even when the IR strobe is off, the image seems overexposed. The first thing I would try in this case is to set the eye cameras to 'auto-exposure' mode, or, reduce the exposure time in manual mode. You can access these settings in the eye windows. I still think there's a chance you could find a setting that would work.

user-8563d1 22 February, 2024, 10:33:34

I am struggling to install Pupil Capture and Pupil Player on Linux. I installed the .deb packages, but the programs do not open. Is there any guide with all the steps to follow during installation?

user-f43a29 23 February, 2024, 09:02:10

Hi @user-8563d1 👋 ! Are you on a laptop with an Nvidia card? If so and you are running Ubuntu, then do not directly open the app. Rather, right click the app icon and choose "Open with dedicated graphics card" and see if that works. If you are on a different flavor of Linux, then you will need to check how to open it with the dedicated GPU on that system. Could you also provide the logs? You will find them in your home directory, under "pupil_player_settings/player.log" and "pupil_capture_settings/capture.log".

user-c4e70b 23 February, 2024, 10:03:56

Hello! We collected 5 videos (without eye-tracking) using Pupil Core cameras with Pupil Mobile v1.2.3 (all videos lasting around 15 minutes each). However, we have been having issues with exporting some of the videos. One video exports just fine using the World Video Exporter in Pupil Player v3.5. The other 4 videos constantly get stuck on "Starting video export with pid: <number>" and cannot seem to progress. We see no obvious differences between the files linked to the one video that exports and the others that do not. We tried on several different computers, but we cannot figure out what the problem is. All videos work just fine (including audio) in Pupil Player. Could you please help us resolve this issue?

nmt 26 February, 2024, 15:50:17

Would you be able to share a recording with us such that we can take a closer look? If so, please share it with data@pupil-labs.com

user-f61999 23 February, 2024, 13:20:40

Hello @nmt I am trying to do reference image mapping and did all the steps as instructed (with an Invisible), but the processing has been stuck on "processing 0%" all day. Any help please?

user-c68c98 23 February, 2024, 16:10:26

Hello! Is there a way to disable the IR source on my eye camera? Thank you for your reply.

nmt 26 February, 2024, 15:49:26

There's no way to do this. May I ask why you would want to do so?

user-904376 23 February, 2024, 19:34:08

Hello! I have been troubleshooting with different post-hoc calibration/validation techniques and I had a question regarding including the validation calibration at the end of a recording within the trim markers for calibration. I obtained higher accuracy/precision when I did this, but I am not entirely sure what is happening in the background. Can you please clarify the technique and if this would be appropriate?

nmt 26 February, 2024, 15:49:04

Hi @user-904376. Conceptually, there's no difference between performing calibrations or validations post-hoc versus in real-time. You can read about the underlying processes in the whitepaper that I linked to in my previous message.

user-2b8918 24 February, 2024, 11:32:24

Hi, I am having no luck and no help with Pupil Cloud. I need to analyse this data for my dissertation and it's just not working. As I cannot create enrichments right now for whatever reason, is there a way to export just the data for the number of fixations and saccades? Look forward to hearing back from you.

nmt 24 February, 2024, 13:27:06

Hi @user-2b8918 👋. I'll ask the Cloud team to look into why the enrichments are taking longer than expected. In the meantime, you can right-click on your recording of interest and download the timeseries data. That has basic metrics, like fixations, blinks etc.

user-066ac9 25 February, 2024, 13:37:55

Hello everyone! I have been reading the Terminology section of the docs, and I'm a bit confused about when 2D pupil detection is used and when 3D detection is used. It seems like 2D pupil detection alone is enough to triangulate gaze location in the world image using a polynomial fit?

nmt 26 February, 2024, 15:39:14

Thanks for your query. We do indeed run two pupil detectors in our Core software: 2D and 3D. Let me explain them below:

- 2D Pupil Detection: The 2D detection algorithm uses computer vision to locate the 'dark pupil' in the infrared-illuminated eye camera images. It outputs 2D contours of the pupil that feed into a 3D model (described below). You can read more about this in our whitepaper. The 2D detector is always running and forms the foundation of Core's gaze pipeline.
- 3D Pupil Detection: We also employ a mathematical 3D eye model, pye3d, that derives gaze direction and pupil size from eyeball-position estimates. The eyeball position itself is estimated using geometric properties of 2D pupil observations. The pye3d model regularly updates to account for movements of the headset on the wearer's face, thus providing slippage compensation. You can read more about how it works in our documentation.

So, how do these relate to gaze data produced after calibration? Well, it depends on which calibration pipeline you select.

- 2D Pipeline: You're correct that 2D pupil detection is enough to compute gaze in scene camera coordinates. For this, you would choose the 2D calibration pipeline, which uses polynomial regression for mapping based on the pupil's normalised 2D position. It can be very accurate under ideal conditions. However, it is very sensitive to headset slippage and does not extrapolate well outside of the calibration area.
- 3D Pipeline: This pipeline uses bundle adjustment to find the physical relation of the cameras to each other, and uses this relationship to linearly map the pupil vectors yielded by the 3D eye model, pye3d. The 3D pipeline offers much better slippage compensation, but slightly less accuracy than the 2D mapping.

Please let me know if you have any follow-up questions!

user-93ff01 26 February, 2024, 05:32:16

Lu, C, Chakravarthula, P, Liu, K, Liu, X, Li, S, & Fuchs, H (2022) "Neural 3D Gaze: 3D Pupil Localization and Gaze Tracking based on Anatomical Eye Model and Neural Refraction Correction" In 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), (IEEE), pp. 375–383

user-93ff01 26 February, 2024, 05:33:15

In this paper, the authors propose an improved 3D eye model and demonstrate a ~1° improvement in error. It would be great if this could be added to Pupil Core?

Chat image

user-93ff01 26 February, 2024, 05:33:49

They tested using Pupil Core cameras and compared to the pupil core 3D model as I understand it

nmt 26 February, 2024, 15:30:22

Hi @user-93ff01! Thanks for sharing that. We're aware of the work, it's certainly an interesting paper with a neat experimental setup! While we don't have any plans to implement their approach, our Core software is open-source, which should enable users to do so themselves if they want to experiment with it 🙂

user-a81678 27 February, 2024, 15:59:57

[email removed] Ángel Juárez! Pupil Core

user-5c56d0 28 February, 2024, 05:39:26

Dear sir, thank you for your help. Could you please answer the following? Best regards.

Q1. Can the Pupil Core measure the angle of eye orientation (the angle of how many degrees the eyeball is tilted)?

Q2. Is the Pupil Neon the only one that can measure eye orientation? (Or is it also impossible with the Pupil Invisible?)

Q3. Can the Pupil Core be used to obtain the depth direction of gaze (such as looking at something 10 cm away or 100 cm away)?

Does the following metric mean the depth direction of gaze?

gaze_point_3d_z - z position of the 3d gaze point (https://docs.pupil-labs.com/core/software/pupil-player/)

Q4. Which indicators in the csv can be used to identify saccades? Which of Pupil Core, Neon or Invisible can be used to obtain saccades?

Chat image

nmt 28 February, 2024, 06:42:24

Hi @user-5c56d0!

1. Yes. We output eye orientation for each eye in both Cartesian and spherical coordinates. You can look at this page for an overview of each data stream. I think theta and phi would be useful for you.
2. Neon does indeed output eye state/orientation. However, Invisible does not. Invisible does provide the orientation of a cyclopean gaze ray originating at the scene camera in spherical coordinates. You'll want to look at elevation and azimuth in the gaze export.
3. Not reliably. Please see this message for further explanation: https://discord.com/channels/285728493612957698/285728493612957698/1100125103754321951
4. I already responded to your question about saccades in https://discord.com/channels/285728493612957698/1047111711230009405/1212284516077404180

user-6586ca 28 February, 2024, 09:32:30

Hello Pupil Labs team, I hope this message finds you well. I have a question regarding the precise calculation of the confidence index. In Pupil Player videos, the confidence is displayed separately for each eye per index, whereas in the export files it appears as the confidence for both eyes combined, given for each data row. Could you please provide more insight into this? Thank you

user-480f4c 28 February, 2024, 09:40:10

Hi @user-6586ca! You should be able to have one confidence value per eye. The column eye_id includes 0s or 1s for left/right eye data points, and the column confidence has the confidence value associated with each data point for a given eye. Please have a look also at this relevant tutorial that shows how you can split your dataset into left vs. right eye data and select only high confidence values for each eye.
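
For example, along the lines of that tutorial (the path and the 0.8 threshold are just examples):

import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")

high_conf = df[df.confidence >= 0.8]     # discard low-confidence samples
eye0 = high_conf[high_conf.eye_id == 0]  # one dataset per eye
eye1 = high_conf[high_conf.eye_id == 1]

print(len(eye0), "eye0 samples,", len(eye1), "eye1 samples")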

user-01bb59 28 February, 2024, 18:26:01

Is there a way I could run validation remotely, and if so, what string would I pass in? E.g., 'C' starts calibration, 'R' starts recording.

user-d407c1 29 February, 2024, 08:41:33

Hi @user-01bb59 ! Are you using the IPC backbone message format?

If so:

import zmq
import msgpack
import time

# Ask Pupil Remote for the IPC PUB port (assumes Capture runs locally on the
# default Pupil Remote port 50020; adjust the address for your setup)
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')
pupil_remote.send_string('PUB_PORT')
ipc_pub_url = 'tcp://127.0.0.1:' + pupil_remote.recv_string()

# create and connect PUB socket to IPC
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(ipc_pub_url)
time.sleep(1.0)  # give the connection a moment before publishing

# notification payload: the topic is 'notify.' + the notification's subject
topic = 'notify.calibration.should_start'
payload = {'subject': 'calibration.should_start', 'topic': topic}

# send payload using its topic
pub_socket.send_string(topic, flags=zmq.SNDMORE)
pub_socket.send(msgpack.dumps(payload, use_bin_type=True))

Here the topic 'notify.recording.should_start' starts the recording; 'notify.calibration.should_start' and then 'notify.validation.should_start' should work for calibration and validation.

user-8619fb 28 February, 2024, 21:50:54

Hi! I have Pupil Capture installed on multiple laptops. They work great on all laptops but one, a Ryzen 5 AMD OMEN HP laptop. I attached the capture.log. The weird thing is that initially it works. After 3-4 times, when I run Pupil Capture from the Desktop icon, a small window with the message "Configuring Pupil labs.." pops up with a progress bar. Once it is completed, it opens up. Later, it breaks and never opens; when I check C:/ProgramFiles(x86)/Pupil-Labs/Pupil v3.5.1/Pupil Capture v3.5.1/pupil capture.exe, it has disappeared from that folder. This is after a clean reset of the laptop, with all drivers updated. The same camera set has no issue when run on another laptop, so the issue seems to be related to the computer altogether. I also repaired and tried uninstalling then reinstalling, all with the same outcome. Finally, I also deleted the libUSBk drivers from Device Manager, then reran the system again, and the same issue still arises. It works sometimes, then stops working randomly at other times.

capture.log

user-c075ce 29 February, 2024, 09:33:39

Hello, I was wondering if it is possible to merge different calibrations? Or is there a way to set several trim marks in Pupil Player? It's for the case where the video contains several single-marker calibrations from different distances (for the post-hoc calibration).

user-b368ff 29 February, 2024, 19:58:40

Hi, I'm a beginner in the use of eye tracking devices, and I'm using Pupil Core. I am using PsychoPy 2023.2.3 and I'm trying to send triggers from PsychoPy to the Pupil Labs recorder. I am quite inexperienced in programming. I've been trying different forums, GitHub and so on, but I don't understand what I am supposed to do to make them work together. I also tried to use LabRecorder, but it is not working for me.

user-cdcab0 29 February, 2024, 23:50:16

To send events from PsychoPy to Pupil Capture, you'll need to use remote annotations. You can find a more complete example in the pupil-helpers repository
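
In outline, the approach in that repository looks roughly like this (a sketch assuming Pupil Capture runs on the same machine with Pupil Remote on its default port 50020, and that the Annotation plugin is enabled in Capture; the label is an example):

import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')

# Get the PUB port and Pupil's current time, so the trigger can be timestamped correctly
pupil_remote.send_string('PUB_PORT')
pub_port = pupil_remote.recv_string()
pupil_remote.send_string('t')
pupil_time = float(pupil_remote.recv_string())

pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(f'tcp://127.0.0.1:{pub_port}')
time.sleep(1.0)  # give the PUB connection a moment before sending

# Send one annotation ("trigger") that will show up in the recording's annotation data
annotation = {'topic': 'annotation', 'label': 'stimulus_onset',
              'timestamp': pupil_time, 'duration': 0.0}
pub_socket.send_string(annotation['topic'], flags=zmq.SNDMORE)
pub_socket.send(msgpack.dumps(annotation, use_bin_type=True))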

End of February archive