πŸ‘ core


user-cdcab0 01 May, 2023, 06:08:13

Hi, @user-6a684e 👋🏽 - the UI does require a value there, and the maximum is 4,000ms. Can you tell us a little more about what you're studying?

user-6a684e 01 May, 2023, 12:35:59

Hi @user-cdcab0. I am using Pupil Core to study interpreters' eye movements. Because interpreting is different from reading or scene cognition, interpreters sometimes need more time to process information, which may correlate with longer fixation durations. So I'm wondering if I could capture their "natural" fixation durations without a maximum limit. 🙂

user-38b270 01 May, 2023, 09:40:30

what is the difference between Pupil core and Pupil Neon?

user-cdcab0 01 May, 2023, 10:08:09

Hi, @user-38b270! 👋🏽

Both Pupil Core and NEON are wearable eye-tracking solutions that we offer (along with Pupil Invisible). Pupil Core was our original product. It's ideal for studies demanding high accuracy or pupillometry. It is fully open-source and can be extended and customized to meet research aims. A wealth of raw data is exposed to the user. Pupil Core requires calibration and a controlled environment such as a lab to achieve the best results. It connects to a laptop/desktop via USB. For more details see https://pupil-labs.com/products/core/

NEON is our newest eye-tracking product. It has a modular design that fits a variety of headsets and utilizes deep learning for calibration-free and slippage-invariant gaze estimation. This allows for long recordings in any environment, with no decrease in data quality. You can even take the glasses off and on without needing to re-calibrate. For more details, see https://pupil-labs.com/products/neon/

Pupil Core has been a reliable tool for researchers since 2014, providing accurate gaze, eye state, and pupillometric measurements. However, to achieve the best results, Core requires a controlled environment such as a lab. Neon's deep learning-powered gaze accuracy matches Core's and even surpasses it in certain conditions. This versatility makes Neon extremely powerful.

user-38b270 01 May, 2023, 10:22:33

And what is the price difference?

user-cdcab0 01 May, 2023, 10:42:54

The prices are listed at the top of each product page. Right now, NEON is listed at €5900 and Core is €3440. Academic discounts are available if you're affiliated with an academic institution.

It should be noted that some import fees, tariffs, taxes, or other charges may be applicable depending on your region. You'll want to reach out to us via email at info@pupil-labs.com if you need more specifics about that for your region.

user-38b270 01 May, 2023, 10:44:03

Does that include the software and license? what would be the total for each?

user-cdcab0 01 May, 2023, 10:52:36

NEON comes with a companion device (included in the price) and uses an app that we make available for free. Recordings and data can be exported from the app and/or uploaded to our Pupil Cloud service (included).

Pupil Core requires an active connection to a PC running our Pupil Core software. We do not provide the PC, but we do provide the software which is free and open source.

user-cdcab0 01 May, 2023, 12:40:29

What type of visual stimuli will your interpreters be viewing? It seems like you may be more interested in gaze duration over different areas of interest, rather than the specific fixations within those AOIs.

user-6a684e 01 May, 2023, 12:41:18

Their notes, audience and the environment.

user-6a684e 01 May, 2023, 12:45:39

You are right, AOIs will be a major part of my research interest but I still would like to see, for example, their first fixation duration within one of these AOIs. 🙂

user-cdcab0 01 May, 2023, 12:51:14

Ah, I see. I believe there is research showing that 4000ms is something of an upper limit for fixation duration. I don't have any references readily available, but I think you may find it worthwhile to review the literature in this regard.

user-6a684e 01 May, 2023, 13:09:22

Thank you very much @user-cdcab0. I have another question on calibration and validation. The website says "Using the 3D Gaze Mapping you should achieve 1.5-2.5 deg of accuracy." Does it mean that the angular accuracy I achieve each time through calibration or validation - whether with screen marker, single marker, or natural features - should fall into this 1.5-2.5 range? And that if it doesn't, the calibration or validation was not that successful?

user-cdcab0 01 May, 2023, 13:16:25

Yes - if you have a good calibration you should expect that type of accuracy regardless of the method. The best calibration method depends on the environment and situation. Importantly, calibration values can vary based on distance. If your participants are looking at notes that are very close to their face as well as audience members who are potentially quite far away, you will likely experience reduced accuracy at whichever distance you did not calibrate at.

user-6a684e 01 May, 2023, 13:21:55

So I suppose these three calibration/validation types all fall under the 3D Gaze Mapping mechanism? I'm not really sure what the 2D Gaze Mapping is... I thought it was related to screen marker calibration/validation...

user-cdcab0 01 May, 2023, 13:32:50

All three calibration methods (screen marker, single marker, natural feature) are available under either of the pupil detection methods (2d gaze mapping and 3d gaze mapping).

The 2D gaze mapper works by analyzing 2d image data available from the pupil cameras. The 3D gaze mapper generates a 3D model of each eye and updates the model based on images from the pupil cameras.

The 3D mapper is generally more accurate and more robust to things like headset slippage. It's the default setting and what we recommend for most scenarios.

user-6a684e 01 May, 2023, 14:27:14

Thank you very much @user-cdcab0! I still have a few questions and hope you could help me with, thank you very much in advance!

  1. If the validation is not satisfactory, how do I perform the manual post-hoc calibration? I can see there are demonstration videos on your website that start with "Post-Hoc Gaze Calibration" in Pupil Player, but I haven't figured out how to manually locate participants' fixation points afterwards. Can you give me some instructions please?

  2. The "Fixation Detector" section in Pupil Player presents an example of combining 2 consecutive fixations of 300 ms each into a single fixation of 600 ms. How does it work? And will this new 600 ms fixation be located at the centroid of the two 300 ms fixations?

user-cdcab0 01 May, 2023, 15:21:41

  1. In Pupil Player, after you change the Gaze Data source to "Post-hoc Gaze Calibration", you'll see 3 settings groups - the first one is "Reference Locations". Just to the left of the words "Reference Locations" is an expand icon (it looks like a downwards-pointing triangle with an overline). If you click on that, it'll expand the "Reference Locations" menu. I think you'll find what you're looking for there.

  2. I think that explanation about the two consecutive fixations has given you a slightly incorrect understanding of how the offline fixation detector works. It does not work by examining existing fixations for candidates to be merged. Rather, it works by examining gaze data frame-by-frame. When a fixation is detected (according to the minimum duration and maximum dispersion settings), the algorithm then employs a binary-search method to determine the end-time of the fixation. The initial bounds of the search are the start time of the fixation and the start time + maximum duration setting. These search bounds are refined until the duration is determined. The location of the fixation is defined by all of the gaze points that are included in the duration of that fixation.
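
(For anyone who wants to play with this outside of Pupil Player, below is a minimal sketch of a dispersion-based detector in the same spirit. It is not the actual Pupil Player implementation - the thresholds, the greedy window growth, and the data layout are all illustrative.)

```python
import numpy as np

def detect_fixations(t, x, y, max_dispersion=0.02, min_duration=0.1):
    """Greedy dispersion-based fixation detection.

    t: timestamps in seconds; x, y: normalized gaze positions (numpy arrays).
    Returns a list of (start_index, end_index) pairs, one per fixation.
    """
    fixations = []
    start = 0
    while start < len(t):
        end = start + 1
        # Grow the window while the gaze stays within the dispersion limit
        while end < len(t):
            wx, wy = x[start:end + 1], y[start:end + 1]
            dispersion = (wx.max() - wx.min()) + (wy.max() - wy.min())
            if dispersion > max_dispersion:
                break
            end += 1
        if t[end - 1] - t[start] >= min_duration:
            fixations.append((start, end - 1))
            start = end  # continue after the detected fixation
        else:
            start += 1
    return fixations
```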

user-e91538 02 May, 2023, 11:48:51

Hi, I'm using Pupil Invisible for an eye-tracking project. The goal is to have a stream of the video, as well as the x and y coordinates of the gaze, in ROS as two individual messages. The best way to do this would be to have an Android application on the phone that is connected to the glasses that already sends this data in the right format. Are there any resources from your side that could assist me in developing such an app? Thanks in advance!

user-cdcab0 02 May, 2023, 12:52:48

Hi @user-e91538 👋 - since you're using Invisible, do you mind posting this to the 🕶 invisible channel instead?

user-e91538 02 May, 2023, 13:58:27

sure

user-ded122 02 May, 2023, 14:05:45

Hello, is it possible to retrieve the real-time video stream from Pupil Core using the network API only? Or do I need to tinker with the source files here: https://github.com/pupil-labs/pupil/tree/master/pupil_src/launchables

user-cdcab0 03 May, 2023, 12:40:54

Hi, @user-ded122 - yes, you can stream video with the network api. Here's an example: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
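
For a sense of what that helper does, here is a condensed sketch of the same pattern (see the linked script for the complete version):

```python
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote's default port

# Ask Capture to start publishing world frames in BGR format
notification = {"subject": "start_plugin", "name": "Frame_Publisher", "args": {"format": "bgr"}}
pupil_remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.packb(notification, use_bin_type=True))
pupil_remote.recv_string()

# Subscribe to world frames on the port Pupil Remote reports
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("frame.world")

while True:
    topic, payload, *extra = sub.recv_multipart()
    meta = msgpack.unpackb(payload, raw=False)
    img = np.frombuffer(extra[0], dtype=np.uint8).reshape(meta["height"], meta["width"], 3)
    # img is now a regular BGR frame, ready for OpenCV, saving to disk, etc.
```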

user-074e3a 02 May, 2023, 14:46:04

Hey there, I am using Pupil Core for a quantitative research study and I am still quite inexperienced with the software, so apologies in advance if the questions are unclear. Within my study I am interested in the number of fixations on a certain AOI, the duration of these fixations, and the pupil dilation/diameter of the participants. Participants will be divided into two groups, and they need to watch a one-minute advertisement where the AOI remains constant. My struggle right now is 1) creating these AOIs using the AprilTags, and 2) exporting the data. Since I will have at least 60 participants, my plan was to compare the average values of the number of fixations, duration of fixations, and pupil diameter to examine possible differences between the two groups. However, I am unsure how to obtain these average values based on the raw extracted data that Pupil Player provides.

user-cdcab0 03 May, 2023, 12:45:30

Hi, @user-074e3a 👋 - let's tackle your challenges one at a time. I believe you first mentioned struggling with defining AOIs using AprilTags. Have you followed the guide at https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking ? If not, please take a look. If you have followed that and are still struggling, can you tell us more about which part of the surface tracking process you're stuck at?

user-786f38 03 May, 2023, 12:33:46

Hi, I am Stefano Lasaponara from Sapienza University. I purchased Pupil Core a couple of months ago. Today, for ethics purposes, I need a certificate of conformity with EU standards. Does anybody know where to find it?

user-13fa38 03 May, 2023, 16:24:17

Another question, do you know which CPU on the market runs best with the eye trackers? Disregarding cost issues and we want to achieve 200 Hz performance if possible.

user-4c21e5 03 May, 2023, 19:45:44

We prefer Apple silicon - e.g. M1 with 16GB unified memory - we easily max out the sampling rate of Core with these

user-1ed146 03 May, 2023, 18:13:56

Hi there, I have some questions about data analysis. We have collected gaze data using Pupil Core from 10 participants. We want a heat map on a reference image for each participant's data, like what the reference image mapper enrichment does. I found this example here: https://docs.pupil-labs.com/alpha-lab/multiple-rim/, but it is provided with Pupil Invisible. So can you please point me to any repositories/examples if possible?

user-4c21e5 03 May, 2023, 19:44:22

Hi @user-1ed146 👋. Reference Image Mapper is only available for Invisible and Neon recordings unfortunately, as it runs in Cloud. For Core recordings, you can of course use the Surface Tracker plugin + AprilTag markers to generate heatmaps: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker

user-1ed146 04 May, 2023, 20:41:08

Thank you, Neil. I will check it out.

user-13fa38 03 May, 2023, 21:24:11

Are there Windows options?

user-d407c1 04 May, 2023, 06:18:46

Hi @user-13fa38 👋 ! Just stepping in here for Neil! To answer your question, yes! Any recent-generation Intel i7 CPU with 16GB of RAM (if possible DDR4/DDR5 for transfer speeds) should also be able to capture 200Hz. Key specs are CPU and RAM; the better they are, the better your chances of reaching 200Hz. If buying new hardware, also consider potential bottlenecks - a PCIe 4.0/5.0 bus may avoid them.

Additionally, you may want to disable unnecessary plugins in Pupil Capture; some of them consume processing power you may not need to spend, since you can run them post-hoc in Pupil Player.

user-eeecc7 05 May, 2023, 08:31:42

Hi @user-4c21e5 @user-e3f20f Can you please let me know where I can find the reasoning behind why the bivariate polynomial is used? Is there a paper or study that I could refer to for this?

user-9e9b86 05 May, 2023, 09:06:20

Hi @user-d407c1 @user-4c21e5 - we want to use the glasses in shop-alongs and soon after sit down with the participant to play back and show them where/what they were looking at. Is this easy to play back instantly? Or do we need to create a scanpath following the steps listed on the website?

user-c2d375 05 May, 2023, 09:44:02

Hi @user-9e9b86 👋 I just replied to you by email. Please feel free to reach out if you have any further questions.

user-89d824 05 May, 2023, 10:17:05

Hi,

This is a new problem I've encountered in Unity - I'm unable to start the recording from Unity (last week, I still could) and I'm wondering if this could be the culprit. I haven't touched any of my scripts in my projects for the past two months, so it couldn't be that.

Please could you help me out with this? Thanks!

Chat image

user-074e3a 05 May, 2023, 12:04:19

Hey Dom, thanks for the reply! I figured out how to create the AOIs and it works well now! The only thing that I am still working on is finding the duration and number of fixations on these AOIs in the extracted data. Based on the names I expected them to be in the 'fixations_on_surface_Surface 1' file, but that file only contains the column headers and no data?

user-cdcab0 05 May, 2023, 13:18:56

You're looking in the right place. As long as fixations occurred on the defined surface, they should be enumerated in that data file. Does the heatmap display (in Pupil Player) work in that recording?

user-074e3a 05 May, 2023, 15:15:38

ahh that was the issue, I forgot to add them after adding the surface! Thank you! One last thing: does the duration column display the duration of each fixation in nanoseconds, or which unit is used for this?

user-cdcab0 05 May, 2023, 15:43:23

Should be milliseconds - same as the general fixation export (https://docs.pupil-labs.com/core/software/pupil-player/#fixation-export)

user-c828f5 05 May, 2023, 20:23:24

Hi all, I want to know if the add-ons (especially the HoloLens add-ons) have been spec'ed for gaze tracking accuracy and 3D pupil location accuracy. I understand that we don't have a "ground truth" 3D pupil location on real imagery, but has anyone attempted to spec it out on CGI images? I also posted this on vr-ar, but since the platform is Core, I didn't know which channel to post this question to.

user-b2f430 07 May, 2023, 02:28:45

Hi sir, I have a problem using the camera in pupil_src. I can't use the camera to detect the image and calibrate the point of view. Please help me. Thank you so much

user-93ff01 07 May, 2023, 05:11:33

I didn't get an answer previously, so here are my questions again:

user-93ff01 07 May, 2023, 05:11:35

Hi all, we just got some uncased Pupil Core systems that we want to adapt for use in non-human primates. We need to custom-build our own helmet, and some 3D models of the eye and world cameras would be helpful for prototyping. Is that possible? Secondly, the pye3d model must have some constants that are based on human eye anatomy; would it matter given the substantially smaller eye diameters of non-human primates like macaques? Could we tune these parameters somehow? The interpupillary distance is also much smaller in monkeys, and I suspect this may affect some of the 3D estimations that use binocular data? And finally, as a user of Eyelink and Tobii Spectrum Pro hardware, I really appreciate the beautiful hardware and great software design coming from Pupil Labs, congratulations!!!

user-93ff01 07 May, 2023, 05:13:35

TLDR: 1) do you have 3D models of the eye and world cameras we can use to build a custom headset? 2) can the pye3d model apply to non-human subjects given the constants derived from human eyes? 3) does inter-pupil distance have any impact on your algorithms?

user-b2f430 07 May, 2023, 05:42:57

ERROR eye0 - video_capture.uvc_backend: Could not connect to device! No images will be supplied. (uvc_backend.py:133) - meanwhile my device has UVC support and is not being used by any other software

user-cdcab0 08 May, 2023, 06:11:01

Hi @user-b2f430 👋🏽 - were there any other messages besides that one? What device are you using and on what operating system?

user-6cf287 07 May, 2023, 13:59:34

Hi there, I understand that the Core tracker needs to be connected to a PC or laptop via USB, but is there any possibility for it to be connected to a data logger that would allow it to be mobile? Do you have any recommendations? Thank you.

user-cdcab0 08 May, 2023, 06:12:27

Hi, @user-6cf287 - if you're looking for a truly mobile solution, you really should consider Neon over Core

user-4c21e5 08 May, 2023, 07:44:41

Hey @user-6cf287! Just to expand on @user-cdcab0's response - Neon provides a more robust solution to tracking in more dynamic situations in which the wearer is typically mobile. Neon is definitely the recommended solution there. That said, some of our customers have made their existing Core systems more portable by using a small form factor tablet-style PC in a backpack.

user-6cf287 08 May, 2023, 18:50:37

Hi @user-4c21e5, thanks for the response. At this point, since we have already bought Core, we would like to see how to use it in a mobile manner, so we will explore the tablet solution as mentioned. Thank you.

user-d407c1 08 May, 2023, 07:55:48

Hi @user-93ff01 👋 ! Thanks for the positive feedback! And sorry that we missed the first time you asked. Here are the 3D models to help you prototype your own custom mount: https://github.com/pupil-labs/pupil-geometry

Yes! pye3d is open source, so you can go to the pye3d repository https://github.com/pupil-labs/pye3d-detector, fork it, and modify it to your needs. As a small guide, you can find the physiological parameters, such as the ranges for the location of the eyeball, the eyeball diameter, phi and theta, etc., at https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/detector_3d.py#L575

You would probably not need to modify the pupil diameter range, but rather the sphere diameter (_EYE_RADIUS_DEFAULT) and potentially the circle_3d_center (x, y, z) values at which the model expects the eyeball to be.

This matters especially if you are interested in binocular data, as the algorithms estimate the distance from the eyeball's centre and the gaze vector to infer the binocular data.

Alternatively, and depending on what data you are looking for, you can use the 2D detector, although it would not be able to correct for slippage.

In short, yes, to all the questions. 😉
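
(To make this concrete, the kind of edit described above could look like the sketch below in a fork of pye3d. The macaque value is a placeholder, not validated physiology - check the literature for your species.)

```python
# pye3d/constants.py (in your fork)
# The default eyeball radius is tuned for human eyes (~10.39 mm in the
# current source). For a macaque you would substitute a smaller value:
_EYE_RADIUS_DEFAULT = 9.0  # mm - hypothetical, for illustration only
```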

user-93ff01 08 May, 2023, 10:32:49

Thanks Miguel. As I'm not a Python coder and there aren't too many comments in the code, it is a bit hard for me to grok exactly, but I'll have a go. I also found this file: https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/constants.py - it would be great if all relevant physiological constants could be collated in a single place. I've opened an issue on pye3d about this.

user-93ff01 08 May, 2023, 10:35:35

We may be able to get away with the 2D detector, but monkeys can be like kids and can move quite a bit, and we are making a prototype helmet, so we may certainly get some slippage - which is why the 3D detector looked more robust, if it is applicable to non-human primates.

user-93ff01 08 May, 2023, 10:42:22

I am still a bit unsure of which parameter aligns with the inter-pupil (or inter-center) distance?

user-dd1963 08 May, 2023, 18:30:04

I am using Pupil Core. Currently, eyes cannot be detected. Are you familiar with this issue and any solution for this? 👁 core @user-cdcab0 @user-92dca7 @user-4a6a05

Chat image Chat image Chat image

user-cdcab0 08 May, 2023, 19:45:55

Hi @user-dd1963 👋🏽 - are you using a Core headset purchased from Pupil Labs or a DIY unit?

If it's a unit from us, it seems the drivers weren't installed properly. There are some troubleshooting steps here: https://docs.pupil-labs.com/core/software/pupil-capture/#windows

If it's a DIY unit, can you tell us more about the cameras you're using? The first two images indicate an issue with the drivers you're using or the hardware itself. That will need to be resolved before it'll ever work with Pupil Capture

user-dd1963 08 May, 2023, 19:46:17

From Pupil Labs

user-cdcab0 08 May, 2023, 19:47:15

Good to know! When you have a chance, please try the steps listed for Windows at the link I provided

user-1ed146 08 May, 2023, 20:43:08

Hi there, I have a question on the exported fixation data. In the data frame, I have values larger than one and values less than zero in the "norm_pos_y" column. However, the bounds should be x:[0, 1] and y:[0, 1] in 2D normalized space (the coordinate system). Can you please help me figure out what is going on here?

user-4c21e5 09 May, 2023, 05:46:51

Gaze/fixation coordinates can exceed the bounds of the scene camera's FoV, since the eye cameras are able to capture a bigger range of eye movements. Note that such data can also be of low quality, so it's worth checking the associated confidence values.
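
If you want to drop such samples, a minimal pandas sketch might look like this (assuming the standard Pupil Player fixation export; the 0.8 confidence threshold is a common heuristic, not a hard rule):

```python
import pandas as pd

df = pd.read_csv("exports/000/fixations.csv")

# Keep fixations that land inside the scene camera's field of view...
in_bounds = df["norm_pos_x"].between(0, 1) & df["norm_pos_y"].between(0, 1)
# ...and that the pipeline is reasonably confident about
confident = df["confidence"] >= 0.8
clean = df[in_bounds & confident]
```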

user-dd1963 08 May, 2023, 21:51:20

When I pressed the wire connection part on the back of the left and right eye cameras, the issue was solved. Thank you! Have a nice day!

user-cdcab0 08 May, 2023, 21:52:40

Glad you sorted it out! 🙂

user-93ff01 09 May, 2023, 09:04:40

Hi, some of our experiments will take place in a dim room: does the world camera have an IR-cut filter, or could we use an IR illuminator in the room to help the world camera? Any other advice?

user-93ff01 09 May, 2023, 09:21:52

Question 2: With the Eyelink 1000 we start and stop recording on every trial (with additional trial/sync markers to time-sync properly); with our Tobii Spectrum Pro we were recommended to keep recording for the whole block and only send trial sync markers. For Pupil Core, what is the preference? Is it OK to start/stop recording before/after each trial giving the same session name (I didn't see a pause/unpause command), or should we record the whole session and use trial markers? Given that we can record all camera data raw, the latter might grow the files with lots of useless inter-trial video, etc.?

user-4c21e5 09 May, 2023, 16:34:17

How long are your trials going to be? We recommend to split your Pupil Core experiment into chunks to avoid slippage errors accumulating, and for managing the size of recordings, like you mention. Read more about that here: https://docs.pupil-labs.com/core/best-practices/#split-your-experiment-into-blocks

user-b2f430 09 May, 2023, 12:32:11

Hi QG Trần Tiến 1665 👋🏽 were there any

user-6b1efe 09 May, 2023, 13:14:24

@user-4c21e5 Hi! If I want to define a rectangular area of interest, should the tags added to the four corners of the rectangle be the same or different?

user-c2d375 09 May, 2023, 13:42:41

Hi @user-6b1efe 👋 it is crucial to use unique AprilTag markers, so please make sure to add different markers to each corner of the area of interest

user-4c21e5 09 May, 2023, 16:41:02

For synchronisation, I'd recommend sending events over the network api. This example script shows how you can do that using Python: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py. Or use our LSL relay: https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture#pupil-capture-lsl-plugins
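
Condensed from the linked remote_annotations.py helper, the core of the annotation pattern looks roughly like this (the 'trial_start' label is just an example):

```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote

# Use Pupil Capture's own clock so events line up with the recording
pupil_remote.send_string("t")
capture_time = float(pupil_remote.recv_string())

# Annotations are published on the IPC backbone via the PUB port
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")

annotation = {
    "topic": "annotation",
    "label": "trial_start",   # example label - use your own event names
    "timestamp": capture_time,
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.packb(annotation, use_bin_type=True))
```

Note that the Annotation plugin needs to be running in Capture for these events to be stored with the recording - the full helper script starts it for you via a notification.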

user-4c21e5 09 May, 2023, 16:44:01

The scene camera records visible light. What are your participants going to be looking at and how low is the ambient illumination?

user-93ff01 10 May, 2023, 00:33:53

We use a transparent LCD to allow two subjects to look at each other face-to-face along with shared stimuli on screen. The problem is the glass also reflects the subject's own face (which we consider a distractor), so we have to carefully drop the illumination to the bare minimum. I haven't yet measured light intensity in the room, but our other USB3 cameras (low-light Sony IMX291 CMOS) struggle somewhat, and I expect your world camera is not optimised for low light.

user-4c21e5 09 May, 2023, 16:45:42

I've responded to your question in the software-dev channel (it'd be great not to duplicate Qs in future 🙂)

user-93ff01 10 May, 2023, 00:29:05

Thanks Neil. Our experiments are lab based. With the Eyelink and Tobii systems we tend to do drift corrections/validations by pausing the experiment (you could consider these blocks, I suppose, but they are chosen dynamically). I have an API that allows me to use the same task design with either the Eyelink or Tobii, and ideally I'd like to use the same design for the Pupil Core. So ideally I could start and stop the recording but still use the same session file. It seems this is not possible?

user-4c21e5 10 May, 2023, 11:44:08

Starting/stopping a recording will generate new recordings (and thus files) each time. It's also possible to validate and/or re-calibrate mid-recording. In fact, we'd recommend recording the calibration at the beginning of your trial regardless. This enables post-hoc calibrations (and also the transfer of calibrations to different recordings). However, I would still recommend splitting very long trials into separate recordings - how long do you plan to record?
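
For completeness, starting and stopping recordings remotely uses plain-string Pupil Remote commands (the session name below is an example):

```python
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote

pupil_remote.send_string("R block_01")  # start a recording named 'block_01'
print(pupil_remote.recv_string())

# ... run the trial, send annotations, etc. ...

pupil_remote.send_string("r")  # stop the recording
print(pupil_remote.recv_string())
```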

user-93ff01 10 May, 2023, 00:35:25

Using IR illuminators would solve this problem for us, but we are worried about the world camera (and any possible side effect on the eye cameras?)

user-4c21e5 10 May, 2023, 11:50:32

It's difficult to make concrete recommendations here. I've used Core down to about 3 lux ambient illumination and could see things in the world video. Suggest just piloting and seeing what you get. The scene camera has autoexposure, but you can manually increase the exposure time in very low light. IR illumination isn't picked up by the scene camera - we know this first-hand - but it does show up on the eye cameras. If it's consistent (i.e. not strobing), that's usually not a problem - the eye cameras also have autoexposure and manual adjustment options.

user-93ff01 11 May, 2023, 00:22:43

OK, it sounds as if the world camera does have an IR-cut filter (most normal cameras do), and we will just bite the bullet and see what happens. Thanks for the feedback!

user-660f48 10 May, 2023, 17:23:24

Hi Pupil Labs, just checking if this problem has been encountered by anyone: https://discord.com/channels/285728493612957698/1093713310261723217/1104725246440902658 Thanks in advance:)

user-f77049 10 May, 2023, 20:17:35

Hello, I'm trying to use a Core device with Pupil Capture 3.5.1, but I need to add audio and sync it with my pupillometry. I found in an old request that this is possible using LSL. I'm really not familiar with coding etc. (I'm in my medical fellowship), so I'm trying to follow every step... Unfortunately, the plugin seems to be installed correctly in Pupil Capture, but I always have the same problem, which is: see the screenshot. I tried to comment out the flush function with "#" (sorry, first time for me); the error is caught from the resample function and all audio frames are still dropped

I think I missed something... if you can help me... did someone succeed in recording audio?

Thanks a lot

Chat image

user-93ff01 11 May, 2023, 00:21:10

Thanks, and yes I love the idea of recording the calibration/validation (very different to Eyelink/Tobii). Our overall session will probably be around 30 mins, but each trial will be 4-20 seconds, and we can revalidate every X number of trials. (1) For Eyelink/Tobii we have "drift" trials where we can check the change in accuracy as the session progresses; do you recommend something similar? (2) If I pass the same session name with R rec_name, I assume Capture will append some modifier, or should my code always ensure no name collision? (3) Can Pupil Player knit together several different recordings into one?

user-4c21e5 11 May, 2023, 08:53:43

May I ask how much accuracy you actually need? The 3d pipeline is more robust to slippage but less accurate, whilst the 2d pipeline can be very accurate (e.g. <0.6 degrees) but is more affected by slippage.

Further responses:

  1. Running the validator (press T) will get you something similar - a target will appear that participants should follow with their eyes - accuracy and precision are reported

  2. Suggest having a naming scheme in this regard

  3. This isn't possible

user-fcd05e 11 May, 2023, 00:35:57

Hi, I am working with pupil data that was recorded using Pupil Labs hardware about 8 years ago. Does Pupil Player allow me to load videos from the past, or does the recording have to be recent/live for me to analyze pupil dilation? If that is the case, does anyone have any recommendations for software that allows analysis of pupils based on past video recordings?

user-d407c1 11 May, 2023, 09:04:55

To expand on @user-4c21e5's answer: Over the last few years, there have been a lot of updates and improvements to Pupil Capture/Player. If your recording was made in an old version of Capture, it might not be supported by the latest Player versions.

To get all the latest fixes and improvements, you will need to upgrade the recording format. To do this, please follow these steps. Note that it's worth trying this with only one recording first to see if it works.

  1. Back-up data: To avoid any data loss, create a copy of the recording(s)
  2. Upgrade recording format: this operation is only possible by opening the recordings in Pupil Player 1.16 (see release notes at the link for further information)
  3. Get the upgraded recording folder; you should now be able to open it in recent versions of Player

user-4c21e5 11 May, 2023, 08:55:21

Hey @user-fcd05e! It's a bit difficult to say without knowing the specific version the recordings were made with. In the first instance, I would make a backup of the recordings, and then just try to open them in Pupil Player. If it doesn't work, feel free to report back here 🙂

user-59397b 11 May, 2023, 10:09:14

Hello everyone,

I've broken a piece (see photo). Do you know where I can order another, or where I can find the file for the 3D printer?

Thanks

Chat image

user-cdcab0 11 May, 2023, 10:16:28

Hi, @user-59397b - I believe you'll find those here: https://github.com/pupil-labs/pupil-geometry

user-59397b 11 May, 2023, 10:18:26

Thank you !!

user-934716 11 May, 2023, 10:18:01

Hey there!

I want to use the export functionalities of the Pupil Player in a script for batch processing. I need to include marker/surface detection and gaze on surface mapping. Does anyone have any suggestions on what could be a good approach to build this pipeline using the Pupil repository? I'd also be happy if anyone has some scripts to share.

user-4c21e5 11 May, 2023, 11:33:19

Hey @user-934716! There is a community-contributed batch exporter, which looks like it handles existing surface data: https://github.com/tombullock/batchExportPupilLabs

user-d407c1 11 May, 2023, 10:37:24

To add on top of @user-cdcab0's reply, here you might find the part ready to order: https://www.shapeways.com/shops/pupil_store

user-59397b 11 May, 2023, 10:46:56

Thank you

user-52c504 11 May, 2023, 20:28:57

Hi, I used Pupil Core to collect data during sight interpreting tasks (in which participants read text printed on paper and interpret orally simultaneously) and want to calculate saccade lengths moving from one fixation to another, to see if there are various reading processes, e.g., scanning with relatively longer saccades versus typical reading with relatively shorter saccades. I searched the chat history and found a similar discussion. I'm not sure if the velocity-based approach would be fit for purpose, so I intend to use the equation attached. I wonder if head movements will affect the results and if there's a way to account for them? Many thanks!

Chat image

user-480f4c 12 May, 2023, 15:03:49

Hey @user-52c504 👋. This equation calculates the distance between consecutive 2D coordinates. If you are not interested in angular metrics (e.g., angular displacement, velocity, etc.) this is fine. Now, regarding your question about the effects of head movements, this depends on whether you have surface-mapped fixations or not:

  • If you do have surface-mapped fixations, using this equation could be useful, even in a head-free condition.
  • If you do not have surface-mapped fixations, and instead the fixations are only in scene camera coordinates, the distance between consecutive fixations calculated using this equation will be more sensitive to head movements/VOR and smooth pursuit.

I hope this helps!
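
As a concrete example of the surface-mapped case, here is a minimal pandas sketch (it assumes the standard Surface Tracker export for a surface named 'Surface 1' - double-check the column names against your own export):

```python
import numpy as np
import pandas as pd

df = pd.read_csv("exports/000/surfaces/fixations_on_surface_Surface 1.csv")
df = df[df["on_surf"] == True]

# The export can repeat a fixation across world frames - keep one row each
df = df.drop_duplicates(subset="fixation_id").sort_values("start_timestamp")

# Euclidean distance between consecutive fixation centroids in normalized
# surface coordinates; scale by the surface's physical size for real units
dx = df["norm_pos_x"].diff()
dy = df["norm_pos_y"].diff()
saccade_lengths = np.sqrt(dx**2 + dy**2).dropna()
```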

user-52c504 14 May, 2023, 09:47:48

Hi Nadia. Thanks very much! Can I just clarify: by "surface-mapped fixations" do you mean those singled out by drawing AOIs with the surface markers?

I'd like to know the distance traveled to see what kind of reading process is involved, so it'd be fine as long as I can compare all the saccades in this data set. I wonder if there's any alternative that is less sensitive to head movements and can measure the "distance", given that I'm looking at raw data freshly exported from Pupil Player with no AOIs drawn?

Many thanks!

user-f77049 12 May, 2023, 16:45:19

Hey, can anyone using LSL help me? https://discord.com/channels/285728493612957698/285728493612957698/1105951795756421272 thank you 🙂

user-4c21e5 12 May, 2023, 20:10:05

Hey @user-f77049 👋. Apologies, your message slipped off the radar!

It looks like you're using the audio-rec branch of the Pupil Capture LSL repo. Unfortunately, this is a dev branch and may or may not work - in your case it's the latter, and I don't have an obvious fix.

One workaround is as follows:

  1. Run the 'pupil_capture_lsl_relay' plugin from the master branch: https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture – this will relay pupillometry over the LSL network
  2. Use the LSL AudioCapture app to capture and stream audio data over the LSL network: https://github.com/labstreaminglayer/App-AudioCapture
  3. Record everything in LSL LabRecorder: https://github.com/labstreaminglayer/App-LabRecorder

That method is a bit cumbersome, but will get you a unified collection of a raw audio waveform and pupil size. Let us know how you get on!

user-480f4c 15 May, 2023, 09:00:24

Hey @user-52c504! Thanks for following up. I understand that you have already collected the data, is that right? If the data you have is just pilot data, you could consider using markers and defining surfaces for your main data collection. Note that even though you have not defined AOIs in your recording, you can define them post-hoc as long as markers were included in the environment that was recorded.

If you already have the full data set (without any markers present), you can still use this equation to calculate the distance, but it depends on the head movements of the participants, and it's difficult to say how much they would affect your data without knowing how much "freedom" participants had in moving their head while reading. I'd assume, though, that people do not move their head a lot while reading text printed on paper.

user-52c504 15 May, 2023, 21:07:05

Hi Nadia. Thanks! Yes, I already have the full data set, but luckily I did have the markers for all trials. I'll use the markers then. Thanks again for your patience!

user-e2fd67 15 May, 2023, 11:44:13

Hello everybody! I want to use the Core tracker for my research project; however, I'm facing a problem. When I open the Pupil Capture software and try to record, I'm not getting any recordings - not from the world camera and not from the eye cameras. The software says that it has recognized the world camera and the pupil cameras. When I record, it says that no world video has been captured (and I can't find any pupil recordings in the folder that was created automatically). It's probably a very basic problem or even something hardware-related, but I'm completely new to the system and would love some help from you guys. Thank you in advance 🙂

user-4c21e5 15 May, 2023, 12:24:57

Hey @user-e2fd67! What operating system are you using on your computer running Pupil Capture?

user-1c31f4 15 May, 2023, 20:47:53

Hey folks! Tomorrow I'm starting a project that will use a Pupil Core to control some servomotors. I will need to get the gaze position in real-time and send it to an Arduino. I just want to make sure I'm starting in the right place -- should I be using the Network API (https://docs.pupil-labs.com/developer/core/network-api/)?

user-cdcab0 15 May, 2023, 20:55:22

Hi, @user-1c31f4 👋🏽! That sounds like a pretty neat project! The Pupil Core Network API which you linked to is exactly what you want for that 🙂
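
For reference, subscribing to real-time gaze boils down to something like this (a minimal sketch following the Network API docs; forwarding the coordinates to the Arduino, e.g. over pyserial, is left out):

```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote's default port

# Ask for the SUB port and subscribe to all gaze topics
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("gaze.")

while True:
    topic, payload = sub.recv_multipart()
    gaze = msgpack.unpackb(payload, raw=False)
    x, y = gaze["norm_pos"]  # normalized scene-camera coordinates
    print(f"{x:.3f}, {y:.3f}")  # e.g. map these to servo angles
```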

user-1c31f4 15 May, 2023, 20:57:09

Sweet! Thanks!

user-cdcab0 15 May, 2023, 21:03:23

I'd be interested in hearing more about your project if you're able and willing to share

user-e2fd67 16 May, 2023, 09:19:09

Hi @user-4c21e5, thanks for answering, but I already solved the problem by opening the software in the terminal via the sudo command. I'm not sure why it worked (maybe something to do with administrative rights), but it did. Edit: And I'm using a Mac btw 🙂

user-4c21e5 16 May, 2023, 09:22:27

Hey @user-e2fd67! Yes, for macOS Monterey and later, you'll need to do that - that's why I was asking which OS you're on.

user-4c21e5 16 May, 2023, 09:22:36

Glad to hear you're up and running now!

user-1c31f4 16 May, 2023, 21:20:32

Hey again folks -- is it possible to run the Pupil Core software on a Raspberry Pi 3?

user-1c31f4 16 May, 2023, 21:21:02

Or at least, receive the data stream from the Core Glasses?

user-cdcab0 16 May, 2023, 21:28:20

Hi, @user-1c31f4 - yes, we have had users report success running Core on Raspberry Pis. You can find various discussions using Discord's search feature

user-1c31f4 16 May, 2023, 21:33:22

Great, thank you!

user-93ff01 17 May, 2023, 08:56:57

Wow, this is pretty amazing, but it raises a question for me. I'm running Pupil Service (latest GitHub source + Python 3.11) on a beefy workstation (32GB RAM, RTX 2080 GPU, 8-core/16-thread i7), and Pupil Capture or Service maxes out all 16 threads of my workstation. Does the Pupil Core software scale to the hardware? How can we know if the Pupil Core software cannot keep up?

user-cdcab0 17 May, 2023, 09:25:57

Indeed performance on a Raspberry Pi will be reduced. Most people will probably want to limit the Pi's responsibility to just creating the recording. You can do a calibration sequence during the recording and then run pupil detection and the actual calibration routine afterwards on stronger hardware.

user-cdcab0 17 May, 2023, 09:28:58

Another option is to use the Raspberry Pi to stream the video to another PC where Pupil Capture is running. Check out https://github.com/Lifestohack/pupil-video-backend

user-37b6ea 18 May, 2023, 16:40:50

Hello, I am new here. ☺️ Glad to see you! I hope someone is willing to help me with a problem I am having with Pupil Core. I'm doing some aviation research, and I made a recording with the eye tracker on an air traffic controller. Now I want to transfer the information to the Blickshift software for a detailed analysis. But at the time of export I have no fixations, and I don't know how to fix this problem. Can anyone help me with some information or anything else?

user-4c21e5 19 May, 2023, 07:26:35

Greetings @user-37b6ea 👋. Make sure you've enabled the fixation detector plugin in Pupil Player before you export: https://docs.pupil-labs.com/core/software/pupil-player/#fixation-detector

user-00cc6a 18 May, 2023, 17:08:55

Can we use the Reference Image Mapper with Pupil Core data?

user-c5ca5f 18 May, 2023, 23:40:54

Hi, so I am using the Pupil Mobile app and Pupil Player. I want to know what the sampling frequency would be for each camera for data collected with Pupil Mobile. I suspect it is 30 fps, but is there a way to increase it?

user-cdcab0 19 May, 2023, 08:12:21

Hi, @user-c5ca5f - it looks like you can change the framerate on Pupil Mobile for each camera. Open the pupil cam, then hit the hamburger button ☰ at the top left. Scroll down to frame rate and tap on it to see the available options.

user-00cc6a 19 May, 2023, 06:02:10

@user-cdcab0

user-cdcab0 19 May, 2023, 06:14:10

Hi, @user-00cc6a - sorry, but the Reference Image Mapper Enrichment is only available on Pupil Cloud for Neon and Pupil Invisible recordings

user-00cc6a 19 May, 2023, 10:50:05

Then what could be an alternative?

user-cdcab0 19 May, 2023, 10:54:21

There really isn't a direct alternative. Probably the closest you could get with existing tools is the surface tracker https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

user-2dcf9f 20 May, 2023, 22:33:42

Sorry for a basic question, but is this something I would be able to use to build an eye-tracker-to-speech system, or to let my aunt, who has locked-in syndrome, use a computer? Her insurance is denying the one she needs, so I'm seeing if I can use something open source.

user-d407c1 22 May, 2023, 06:59:30

Hi @user-2dcf9f 👋 ! Yes, you might be able to use Pupil Core for that, but you would need an additional layer of software, which you'd have to implement yourself, to do some of these things.

Other users have written routines to control the computer's cursor with their gaze https://github.com/emendir/PupilCore-CursorControl or to communicate using simple signals.

Nonetheless, it might be preferable to go for a calibration-free system like Neon to avoid the nuisance of frequent calibration for your aunt.

If you need further information, do not hesitate to contact info@pupil-labs.com

PS. We will soon release an article on our blog that might be of interest to you.

user-8e2030 22 May, 2023, 13:50:21

Hello, I would like to know the differences between Neon and the Tobii Pro Glasses 3 with a 100Hz sampling rate. Specifically, what kind of data can be captured, whether there is any paid software for analyzing the data, and any other relevant information. Thank you.

user-4c21e5 22 May, 2023, 16:26:01

Responding in 👓 neon 🙂

user-1c31f4 22 May, 2023, 19:15:17

Hey guys, I'm having a very bad time trying to run from source on a Raspberry Pi. I've worked through a bunch of 'collecting packages' errors, but now I'm getting errors at the 'Building wheels for collected packages: ndsi, pupil-detectors, pyglui, opencv-python, pupil-labs-uvc' stage.

user-1c31f4 22 May, 2023, 19:15:27

Here is the pastebin of the errors I'm getting: https://pastebin.com/SZs9nf2x

user-cdcab0 22 May, 2023, 21:12:50

Pip requires some build dependencies to install some of those packages. You probably just need to run sudo apt install libglew-dev libeigen3-dev libturbojpeg0-dev to get what you need.

Alternatively, you may be able to find pre-built wheels for those packages for Raspberry Pi

user-1c31f4 22 May, 2023, 19:15:57

If anybody could point me in the right direction, it would be very helpful. This is the first time I've tried to build any program from source, so these errors seem quite impossible to me.

user-1c31f4 23 May, 2023, 05:50:14

This was it. Still no luck getting it running on the pi, but that fixed those errors!!

user-d407c1 23 May, 2023, 07:12:56

Hi @user-1c31f4 👋 ! I think that running on an RPi is a nice project (although quite demanding), and I don't want to discourage you from trying it. But I just wonder if it is really necessary for you, and whether it would not be easier to just use it as a proxy to stream the cameras using the previously mentioned library: https://discord.com/channels/285728493612957698/285728493612957698/1108325281464332338

user-04c6a4 23 May, 2023, 09:21:15

Dear author

I keep getting this error while running the code, even though my environment is configured. Is there any way to solve this problem?

Chat image

wrp 23 May, 2023, 10:19:40

Looks like this was responded to here: https://discord.com/channels/285728493612957698/446977689690177536/1110509528082026566

user-04c6a4 23 May, 2023, 09:26:18

@user-d407c1

user-ded122 23 May, 2023, 11:47:52

Hello, is there a way to access the Pupil Core API without keeping the Pupil Capture GUI application running in the background?

user-4c21e5 23 May, 2023, 15:27:46

Hey @user-ded122! Just to expand on Miguel's response, Pupil Service has no functionality for scene video, so you won't be able to calibrate and obtain gaze data (unless you're doing calibration in some VR scene). If you're just interested in pupillometry and eye state obtained from the eye cameras, Service will record those things.

user-d407c1 23 May, 2023, 11:53:33

Hi @user-ded122 ! That's what Pupil Service is meant for, depending on your needs, as it does not have a scene video feed. https://docs.pupil-labs.com/core/software/pupil-service/

user-1c31f4 23 May, 2023, 15:39:59

Unfortunately, running Core on a single-board computer is necessary for my project. I have decided that the RPi will not work, though, and will be looking into alternative computers.

user-cdcab0 23 May, 2023, 20:46:47

I'm curious (and it might be helpful for others here in the future) to know what your showstopper was with the Raspberry Pi

user-1c31f4 23 May, 2023, 21:15:19

I was using a Pi 3B. I was almost done installing dependencies. I had to install some of the pupil dependencies by hand, because I had issues when trying to install them using 'python -m pip install -r requirements.txt'. I had just got to the point of building the wheel for OpenCV -- I left it for an hour, and while it was running, I read some posts on here saying that the RPi was not powerful enough to run the pupil detection (https://discord.com/channels/285728493612957698/1039477832440631366/1061964270285361193). In addition, I found online that Pupil Capture needs 8GB RAM and at least a 1.6 GHz CPU, which the RPi 3B isn't even close to. Streaming the data to another computer is not an option for me, as I need real-time data and the device I am creating needs to be portable. So I abandoned the RPi and have begun researching options for other SBCs. I am currently looking at the LattePanda Sigma (x86), Khadas Edge 2 (ARM), and the Jetson Orin Nano (ARM). I'm hesitant to move forward with an ARM SBC because I have read about dependency issues with cchardet (https://discord.com/channels/285728493612957698/446977689690177536/1078292253413474324).

user-cdcab0 23 May, 2023, 21:17:41

Thanks for sharing and for including links!

user-1c31f4 23 May, 2023, 21:16:36

I hope that's helpful to people in the future!

user-cdcab0 23 May, 2023, 21:17:56

Keep us posted on how your efforts go!

user-1c31f4 23 May, 2023, 21:18:32

Will do! Has anybody here had success running Pupil Capture from source on an ARM chip?

user-93ff01 24 May, 2023, 02:35:57

No, but we are probably going to buy some https://wiki.radxa.com/Rock5/5B (or another RK3588 board) soon, as the CPU+GPU are way faster than the RPi 4's and hopefully have MESA support, so we can test and report back

user-93ff01 24 May, 2023, 02:36:27

The LattePanda Sigma looks great too, but is much more expensive...

user-6e2996 24 May, 2023, 07:35:44

Hi Pupil Labs, why can't I find the "forgot password" button on the current Pupil Cloud login page? Any solution for this? Thanks.

user-4c21e5 24 May, 2023, 09:13:45

Looks like this was responded to in https://discord.com/channels/285728493612957698/633564003846717444/1110839688383692820

user-b83651 24 May, 2023, 13:47:27

Hello. I bought a Core product. I want to directly receive and output data from the IR camera using Python on Ubuntu. However, even with reference to the related documentation, it is difficult to find a way to connect to a serial port to receive input data, convert it, and output it. Where can I check this?

user-cdcab0 24 May, 2023, 19:08:46

Hi, @user-b83651 - have you tried using the Network API https://docs.pupil-labs.com/developer/core/network-api/ ?

You may find some helpful samples in this repo as well: https://github.com/pupil-labs/pupil-helpers/tree/master/python

user-b83651 26 May, 2023, 04:47:10

Thank you for your answer. I have already checked all the resources you shared. I want to directly connect to the camera in the local environment, without using the provided interface, to receive raw data and display it on the screen. Is that possible? And I'm not sure, so I'm asking: can this camera only deliver data through remote access?

user-9a0705 25 May, 2023, 00:21:25

Hi everyone, I'm having an issue while running the code, as it seems there is a compatibility problem with the version of Python I have; I keep getting this error: AttributeError: module 'packaging.version' has no attribute 'LegacyVersion'. The code in particular is the fixation detector I downloaded from the GitHub associated with Pupil Labs: https://github.com/pupil-labs/pupil/tree/318e1439b9fdde96bb92a57e02ef719e89f4b7b0/pupil_src/shared_modules I'm wondering if I can fix this by downgrading the Python version - which Python version was used when writing this code?

user-eb6164 25 May, 2023, 16:10:30

Hi, I have a question. I was reading about the best practices for using the markers, and it was mentioned that they should always be set upward - is that correct? I am using several markers to create a square surface on the TV screens so that I can capture everything inside. I assigned some markers downwards and some to the left - is that an issue?

user-eb6164 25 May, 2023, 16:11:19

Chat image

user-9f7f1b 25 May, 2023, 16:13:57

Hello, team! I have a question about 3D calibration. 2D gaze calibration/mapping works via polynomial regression, and 3D calibration is calculated using bundle adjustment. As we can see, the pye3d model outputs circle_3d_normals - why not map them to 3D ref points (we can fix the ref points' depth, so we can map [x, y, z] normals to [X, Y] ref points), instead of using the complex bundle adjustment algorithm?

user-a7d513 25 May, 2023, 17:22:38

Hi! I was wondering how can I configure a surface and obtain its related data (fixations_on_surface, gaze_positions_on_surface, etc.) via API or without directly using Pupil Capture and Pupil Player. Thanks a lot!

user-b49db3 25 May, 2023, 23:13:01

Hi there. We have managed to break a wire that goes into the eye camera plug. Is it possible/easy to do a DIY fix? E.g., is the white plug a standard size/type? Any help appreciated - this is a Pupil Core headset

user-93ff01 26 May, 2023, 02:29:10

Have there been any updates regarding running Pupil Core on arm64 Macs natively? The instructions suggest using an Intel Python, which requires quite a bit of extra setup (I use pyenv or conda-forge, and getting them to build Intel binaries isn't straightforward).

user-cdcab0 26 May, 2023, 03:42:47

Hi, @user-eb6164 - no, I don't think the orientation of the markers makes any difference. You can arrange and orient them however you want - even mix and match, as long as you don't change that after you define the surface. If you do change the orientation or position of any of the markers, you'll want to delete and redefine the surface

user-eb6164 29 May, 2023, 20:27:01

thank you!

user-cdcab0 26 May, 2023, 03:50:04

Hi, @user-a7d513, I'm actually working on a project right now which has accomplished this. In short, the Surface Tracker class can be used without Pupil Capture, but the API isn't really made to be used stand-alone, so getting it done is a little bit hack-ish. I'll be wrapping up some of the harder bits into a separate Python package and we'll have a couple of demos to show how to use it.

We're still probably a couple of weeks out from making it publicly available if you can wait just a bit. Otherwise if you have more specific questions I can probably help answer them sooner.

user-a7d513 26 May, 2023, 15:46:16

Hi Dom! Thank you for your reply.

My project deadline is in three weeks 😅.

From my app I can currently record videos via Pupil Remote. What I would need now is to be able to configure a surface and obtain its data in the recordings folder directly after recording, without having to export it from Pupil Player.

If there's any way I could get help before those two weeks, I would really appreciate it.

Thanks a lot 🙂

user-cdcab0 26 May, 2023, 06:54:49

You'll want to look at the pyuvc package to get started with that: https://github.com/pupil-labs/pyuvc/
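
For a quick start, grabbing frames with pyuvc looks roughly like this (a sketch based on the pyuvc README; attribute names can differ slightly between versions):

```python
import uvc

# List connected UVC cameras - Core's cameras show up as 'Pupil Cam...'
devices = uvc.device_list()
print(devices)

cap = uvc.Capture(devices[0]["uid"])
cap.frame_mode = cap.available_modes[0]  # pick a (width, height, fps) mode

for _ in range(60):
    frame = cap.get_frame_robust()
    print(frame.img.shape)  # numpy image; frame.gray for grayscale

cap.close()
```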

user-b83651 26 May, 2023, 08:27:00

Thank you, I'll check it out and get back to you if I have any further questions.

user-1aa180 26 May, 2023, 09:09:18

Hello, I'm experiencing frequent crashes in the Pupil Capture app. I am using the Pupil Core device, and during recording, the application often gives me the warnings in the screenshot. Furthermore, the eye camera windows often freeze - in this case, I can 'unfreeze' them by manually selecting the camera source; if I don't do that in a timely manner, the window crashes and the Capture app will not let me open it again. I thought the issue could be related to high CPU usage (I'm running the app on an 11th-gen i5 laptop), but it also occurs when the CPU is at low usage. Previous messages in this server hint at a low-quality cable causing similar issues, but we are using the original cable that came with the glasses. Do you have any suggestions to prevent this issue?

Chat image

user-1aa180 29 May, 2023, 09:27:38

Hello, I am still experiencing this issue. Could anyone please help me out?

user-d99595 26 May, 2023, 10:51:26

Hi everyone, for my thesis we are using the Pupil Core. I have two questions regarding something my thesis buddy and I can't seem to figure out:

  1. Does the Core also record audio, and if so, which setting do I enable? In Pupil Capture I have it set to 'sound only' in the general settings. However, I cannot hear any audio when I play back the recordings.

  2. We're specifically interested in pupil size measurements. After creating an export file we're looking into the pupil_positions CSV file. After separating the text into columns we get a whole bunch of data. Is pupil size among these columns?

Sorry in advance lol... we're complete eye tracking noobs and my supervisor has no clue as well 😅

user-d407c1 26 May, 2023, 10:58:58

Hi @user-d99595 👋 !

  1. Pupil Core has no microphone, so it can't record audio by default. However, if you rely on audio, you can use iMotions software or the Lab Streaming Layer framework to record audio and sync it with the data.

  2. Yes, you should look for the diameter_3d column. Check out this link for a description of all fields on pupil_positions.csv https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv

user-d99595 26 May, 2023, 11:23:33

Hi Miguel,

thanks for your swift response. So just to clarify, the diameters for both eyes are displayed in the same column?

user-d407c1 26 May, 2023, 11:57:26

You would need to filter by the eye_id column - where 0 stands for the left and 1 for the right eye
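
For example, a minimal pandas sketch of that filtering (assuming the standard Pupil Player export layout):

```python
import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")

# diameter_3d (in mm) is only produced by the 3d detector; the plain
# 'diameter' column is in image pixels, uncorrected for camera distance
df = df[df["method"].str.contains("3d", na=False)]

# Average pupil size per eye camera (eye_id distinguishes the two eyes)
for eye_id, group in df.groupby("eye_id"):
    print(f"eye {eye_id}: mean diameter_3d = {group['diameter_3d'].mean():.2f} mm")
```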

user-cdcab0 27 May, 2023, 03:19:20

Are you avoiding Pupil Player because it's a manual process or for some other reason? This may be of interest to you: https://github.com/tombullock/batchExportPupilLabs

user-a7d513 05 June, 2023, 10:29:21

Thank you! I wanted to avoid it so the customer doesn't need to use Pupil Capture and Pupil Player. Can you let me know when you publish the feature? Thank you 🙂

user-a246bc 28 May, 2023, 17:56:47

Hello, does Pupil Labs work for DIY headset units that only have one camera (eye camera with IR reflections)?

user-a246bc 28 May, 2023, 17:57:13

I understand that would assume static head position, but I would like to experiment nonetheless.

user-9f7f1b 29 May, 2023, 05:14:22

Hi, team! Does anyone have time to reply to this question? "Hello, team! I have a question about 3D calibration. 2D gaze calibration/mapping works via polynomial regression, and 3D calibration is calculated using bundle adjustment. As we can see, the pye3d model outputs circle_3d_normals - why not map them to 3D ref points (we can fix the ref points' depth, so we can map [x, y, z] normals to [X, Y] ref points), instead of using the complex bundle adjustment algorithm?"

user-d407c1 29 May, 2023, 08:52:53

Hi @user-a246bc ! Depends, how do you plan to relate (calibrate) it to the environment without scene camera?

user-a246bc 29 May, 2023, 11:58:28

I would just calibrate it relative to the monitor, in a specific head position.

user-d407c1 29 May, 2023, 09:07:54

Hi @user-9f7f1b, I just want to make sure I understand: do you want to map the circle_3d_normal output of pye3d onto a sphere of fixed diameter (at a given depth) using, for example, some polynomial regression? Is that right? Note that this approach may have issues, as it will be limited to only one distance, and polynomial mappers may not work so well with slippage.

To estimate gaze with the 3D mapper, we use a geometric method that involves intersecting the visual axes. This requires knowing the eyeball positions and gaze normals in SCENE camera coordinates. Bundle adjustment is used to determine the relative poses of the eye cameras with respect to the SCENE camera, since on Pupil Core the eye camera arms are adjustable.
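
As a rough illustration of that geometric idea (a simplified sketch with made-up eyeball positions, not Pupil Labs' exact implementation): since the two visual axes rarely intersect exactly, the 3D gaze point can be estimated as the midpoint of the shortest segment between them.

```python
import numpy as np

def nearest_point_between_rays(c0, n0, c1, n1):
    """Midpoint of the common perpendicular between rays c0 + t*n0 and c1 + s*n1."""
    n0 = n0 / np.linalg.norm(n0)
    n1 = n1 / np.linalg.norm(n1)
    d = c1 - c0
    b = n0 @ n1
    denom = 1.0 - b * b
    if denom < 1e-9:  # rays are (nearly) parallel
        return c0 + (d @ n0) * n0
    t = (d @ n0 - b * (d @ n1)) / denom
    s = (b * (d @ n0) - d @ n1) / denom
    return (c0 + t * n0 + c1 + s * n1) / 2

# Hypothetical eyeball centers (mm) and gaze normals in scene coordinates
c_left, n_left = np.array([-30.0, 0.0, 0.0]), np.array([0.05, 0.0, 1.0])
c_right, n_right = np.array([30.0, 0.0, 0.0]), np.array([-0.05, 0.0, 1.0])
print(nearest_point_between_rays(c_left, n_left, c_right, n_right))
```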

user-9f7f1b 29 May, 2023, 11:51:13

Thank you! I got it.

user-d407c1 29 May, 2023, 09:34:30

Hi @user-1aa180 ! Would you mind checking whether there is any physical damage to any of the cables connecting the cameras? Also, would you mind writing to info@pupil-labs.com so we can better assist you?

user-1aa180 01 June, 2023, 10:40:27

I think I figured out what the problem was - it was the auto exposure setting that kept causing crashes. I turned it off and the software seems much more stable now.

user-1aa180 29 May, 2023, 09:36:54

The cameras don't seem damaged. I'll inspect them in more detail and send an email to the address you provided.

user-1aa180 29 May, 2023, 09:36:57

thank you!

user-eb6164 29 May, 2023, 20:28:42

I am having an issue today with Pupil Capture: it is not allowing me to set the fixation threshold. When I watched the video of the recorded eye tracking, everything I fixated on was wrong, so I discovered that the minimum fixation duration seems empty or too low, and I cannot change it. I do not want to reinstall the software because I have already defined my surfaces. What could be done? I attached an image.

user-eb6164 29 May, 2023, 20:31:08

Chat image

user-eb6164 29 May, 2023, 20:35:08

Also, if I need to reinstall everything, is there a way I can back up my surfaces and use them again? Is there any clear guide for this?

user-cdcab0 30 May, 2023, 04:33:28

You'll find your settings in C:\Users\<username>\pupil_capture_settings, including surface definitions. You can back up those files if you want to be safe
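
If you'd rather script the backup, here is a minimal sketch, assuming the default settings location mentioned above (the backup folder name is made up):

```python
import shutil
from pathlib import Path

settings_dir = Path.home() / "pupil_capture_settings"
backup_dir = Path.home() / "pupil_capture_settings_backup"  # hypothetical destination

# copytree refuses to overwrite, so remove any stale backup first
if backup_dir.exists():
    shutil.rmtree(backup_dir)
shutil.copytree(settings_dir, backup_dir)
print(f"Backed up {settings_dir} -> {backup_dir}")
```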

user-eb6164 30 May, 2023, 20:06:46

thank you

user-5fbe70 30 May, 2023, 08:17:33

pupil-tutorials

user-74c615 30 May, 2023, 10:12:50

Hi!! I am wondering if Pupil Invisible can record pupil size data? I can't find this data in the exported CSVs. Thank you!!!!!

user-d407c1 30 May, 2023, 10:15:27

Hi @user-74c615 ! Pupil Invisible does not perform pupillometry. The eye cameras are off-axis and for some gaze angles, the pupils aren't visible. This is not a problem for the gaze estimation pipeline as higher-level features in the eye images are leveraged by the neural network, but it does make it difficult to do pupillometry robustly

user-74c615 30 May, 2023, 10:13:05

πŸ‘€ πŸ‘

user-74c615 30 May, 2023, 10:13:28

@user-cdcab0

user-74c615 30 May, 2023, 10:22:00

thank you for your reply!!!

user-74c615 30 May, 2023, 10:31:27

I have another question: does Core have this ability?

user-d407c1 30 May, 2023, 10:33:51

Yes, Pupil Core and Neon (in the future) can do pupillometry.

user-74c615 30 May, 2023, 10:35:20

ok ! thank you !

user-740d7c 30 May, 2023, 12:59:36

Hi everyone, I'm pretty new to Unity and Pupil Core, so I'd be glad if someone could help me! Is it possible to use a totally custom-made calibration instead of the one that pops up when I connect my Pupil Core glasses and start the Unity project? How can I separate the calibration from all the other scripts that I need? Is there an efficient way to do it? Thanks in advance!

user-4c21e5 30 May, 2023, 19:12:57

Hi @user-740d7c! The demo calibration scene can be modified, i.e. it's possible to change the number and positions of the reference targets: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#calibration-settings-and-targets. Would that be sufficient for your needs?

user-744321 30 May, 2023, 13:16:18

Hello, I'm trying to find out the accuracy of the calibration for a recording, but I can't find it in the data. Can I access it directly? Thank you!

user-5fbe70 30 May, 2023, 13:17:42

Hi @user-e3f20f ! This is a set of experimental data. I am using "Aggregating Fixation Metrics Inside AOIs"; how do I export the sections.csv file? Thank you!

Chat image

user-4c21e5 30 May, 2023, 19:22:39

Hi @user-5fbe70! Would you be able to clarify what Aggregating Fixation Metrics Inside AOIs is? Are you referring to this Alpha Lab content: https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/#define-aois-and-calculate-gaze-metrics ?

user-4c21e5 30 May, 2023, 19:19:04

Hey @user-744321! You'll need to recompute the accuracy and precision values. Here's a tool you can use for this purpose: https://github.com/papr/pupil-core-pipeline

user-744321 31 May, 2023, 08:27:04

Thank you !

user-908b50 30 May, 2023, 21:51:48

What do negative values in the absolute time cell mean? This happened for 4 of my participants. I am using those values to calculate duration in my code.

user-908b50 30 May, 2023, 22:03:32

For these participants, the gaze timestamps, base data, and gaze point (x and y), as well as eye centre (xyz) coordinates are all negative.

user-5fbe70 31 May, 2023, 01:56:02

@user-4c21e5 Yes! I found that Define AOIs and Calculate Gaze Metrics only works with the products listed: Pupil Invisible, Neon, and Cloud. I'm using Pupil Core, so I can't get the sections.csv file. 😭

user-4c21e5 31 May, 2023, 10:35:15

Yes, that tutorial is meant for post-Reference Image Mapper enrichments using Neon or Invisible. That said, you can use the Surface Tracker plugin in Pupil Core to map your gaze to flat surfaces in the environment: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker. The plugin gives you the option to define different AOIs. Metrics would then need to be computed yourself, but it shouldn't be too difficult πŸ™‚ (see the sketch below)
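
A minimal sketch of such metrics (not an official script), assuming pandas, a surface named "Screen", and the standard Surface Tracker export columns (fixation_id, duration in ms, on_surf); adjust the path and filename to match your export:

```python
import pandas as pd

fix = pd.read_csv("exports/000/surfaces/fixations_on_surface_Screen.csv")

# Rows repeat once per world frame, so deduplicate by fixation id,
# and keep only fixations that actually landed on the surface.
on_surf = fix[fix["on_surf"] == True].drop_duplicates(subset="fixation_id")

print("Fixation count:", len(on_surf))
print("Total fixation time (s):", on_surf["duration"].sum() / 1000)  # duration is in ms
print("Mean fixation duration (ms):", on_surf["duration"].mean())
```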

user-4c21e5 31 May, 2023, 10:32:38

Pupil Time can indeed be negative. Read more about that here: https://docs.pupil-labs.com/core/terminology/#timestamps
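
In short, Pupil Time runs on an arbitrary-epoch clock, so individual timestamps can be negative; only differences between them are meaningful. A minimal sketch, assuming a standard export:

```python
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")

# Duration of the exported section, independent of the sign of the timestamps
duration_s = gaze["gaze_timestamp"].max() - gaze["gaze_timestamp"].min()
print(f"Export spans {duration_s:.2f} s")

# To map Pupil Time onto the system clock, apply a constant offset measured
# at recording time (see the Terminology docs linked above):
#   system_time = pupil_time + offset
```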

user-908b50 31 May, 2023, 16:51:45

Okay, great, I did convert. Does this apply to the absolute timestamp values in the export info doc? I am using them to calculate the duration of the exported clip. Would it be valid to use absolute values?

user-7a517c 31 May, 2023, 11:10:12

Hello! Under the support webpage, I see that an Onboarding Workshop is included with Core / Invisible. May I ask if anyone knows how to register for this workshop? Many thanks πŸ™‚

user-4c21e5 31 May, 2023, 11:12:52

Hey @user-7a517c! Please reach out to info@pupil-labs.com with your original order id and someone will assist you with getting that scheduled!

user-7a517c 31 May, 2023, 11:18:21

Thank you @user-4c21e5 πŸ™‚

user-6cf287 31 May, 2023, 15:09:52

Hi, I have a question regarding eye tracker calibration. We run studies where the screen is projected onto a wall about 2-2.5 meters away from the participant. How should we calibrate the eye tracker for this situation? Do we just use the card, or do we also have to place the calibration points on the screen digitally or manually? The second question: do we need to calibrate the eye tracker for every participant, or is it enough to calibrate it once and save the settings? Thank you.

user-4c21e5 31 May, 2023, 16:38:49

Hey @user-6cf287 πŸ‘‹. Either would work. The key point is to make sure you capture sufficient gaze angles to cover the screen as part of the calibration choreography. You'll need to calibrate each wearer. We recommend a new calibration whenever they put on the headset.

user-740d7c 31 May, 2023, 15:39:35

Thanks! I've already modified some things using the documentation, but I think I need something more complex. Moreover, I've already made a calibration project in Unity that I would like to interface with Pupil Core. I was wondering if I could do it without heavily rearranging the script flow.

user-4c21e5 31 May, 2023, 16:33:55

It would be possible, although custom Unity projects aren't something we can really provide assistance with. The recommended workflow would be to build on top of the demo calibration Unity scene.

user-1c31f4 31 May, 2023, 17:24:07

Hey folks, very quick question -- what version of Python do you recommend I use for the network API without running into dependency issues?
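
For reference, the basic Network API pattern only needs pyzmq and msgpack and should run on any recent Python 3. A minimal sketch, assuming Pupil Capture is running locally with Pupil Remote on its default port 50020:

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to gaze data and read one datum
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

topic, payload = subscriber.recv_multipart()
gaze = msgpack.loads(payload)
print(topic.decode(), gaze["confidence"])
```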

user-908b50 31 May, 2023, 23:16:20

Hi, just a quick question: you smoothed the data after interpolating, right? Also, would you smooth just pupil-size-related data, or is it typical to smooth gaze data as well?

user-98789c 01 June, 2023, 08:29:08

Hello, yeah I smooth after the interpolation. I have never worked with gaze data, maybe others can guide you on that.

End of May archive