core


user-75ca9b 01 May, 2021, 10:11:58

@papr I have issues trying to make pupil capture recognize the pupil

Chat image

papr 01 May, 2021, 15:03:44

You do not seem to have an IR filter installed. It makes detecting the pupil a lot easier. Also, in your case, the reflection is occluding the pupil.

user-75ca9b 01 May, 2021, 10:12:50

Is there a guide that covers all aspects, like which image resolution to adjust, etc.?

user-10fa94 01 May, 2021, 20:57:55

How are the world_timestamps.npy files in the primary recording folder and in the exports folder related if I am exporting the entire video? For some reason mine have different lengths (the npy file in the exports folder has more timestamps).

EDIT: I found the difference but have a follow-up question. I realized the difference is because the main process stops during calibration, but during the Pupil Player export, gray frames are included at these timepoints. Since the main process stops during calibration, does the pupil process continue? In the exported files, it seems that there are still pupil readings.

papr 01 May, 2021, 21:35:44

The pupil processes continue. It is possible that, if the world process stops for too long, the pupil data receiving queue fills up and drops data temporarily.

papr 01 May, 2021, 21:34:48

Player interpolates gaps with gray frames. These are also exported and therefore you have more exported timestamps than in the original
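A minimal sketch of how one could check this against the exported data (the file paths are illustrative, and it assumes the real frames keep their original timestamps in the export):

import numpy as np

# paths are illustrative; point them at your recording and its export folder
orig = np.load("recording/world_timestamps.npy")
export = np.load("recording/exports/000/world_timestamps.npy")

# timestamps that only appear in the export belong to the gray frames
# Player interpolated into the gaps
gray = np.setdiff1d(export, orig)
print(f"original: {orig.size}, exported: {export.size}, interpolated: {gray.size}")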

user-b772cc 02 May, 2021, 05:11:00

@papr my markers did not work well during the experiment, is there a way I could still add surface post hoc using the pupil player?

papr 03 May, 2021, 07:03:37

You can add and detect surfaces post-hoc in Player. But it uses the same algorithm for detection, i.e. it is likely that the markers will not be recognized either. Have you ensured that there is sufficient whitespace around the markers? It is essential for good detection.

user-0b60a4 02 May, 2021, 05:11:27

I have one question. If I want to output Eye Camera (both eyes) and World Camera video at 120Hz, what kind of PC specs (CPU, GPU, RAM) do I need?

user-7daa32 03 May, 2021, 04:29:34

After validation, the visual space covered was reduced. Would you expect the angular accuracy and precision from the validation step to be smaller than those of the calibration?

papr 03 May, 2021, 07:01:21

Angular errors for accuracy and precision usually go up in the validation because it maps new data. The covered area is independent of that. It is OK if the validation area is not the same as the calibration area.

user-af5385 03 May, 2021, 07:30:39

It's App-Pupillabs 2.0, so it's not too old. I'll try and update and see if that makes a difference

I have 2 other streams (3 in total), where I can see both of their timestamps are running between 0 and 300 seconds, whereas the Pupil Capture stream's timestamps are running between 6000 and 6300 seconds. One LabRecorder starts recording the three streams at the same time; the first two streams are running on one PC and the third (Pupil Capture) is running on another PC (same network ofc). One of the first two streams is continuously pushing data at 100 Hz.

papr 03 May, 2021, 07:59:28

You are comparing the timestamps recorded by LSL Recorder, correct?

user-af5385 03 May, 2021, 07:59:35

Yes

user-af5385 03 May, 2021, 08:01:05

I'm currently doing a side-by-side comparisons of the versions. I tried on two different machines and saw no offset, those were running the 2.1 plugin

papr 03 May, 2021, 08:01:42

In 2.0 and later, App-PupilLabs syncs Pupil time to pylsl.local_clock(). LSL time sync is then applied on the receiving side (LSL Recorder).

user-7daa32 03 May, 2021, 08:29:29

Thank you. What's now the essence of validation? Just to see how much error has accumulated?

papr 03 May, 2021, 09:15:46

It is a concept from machine learning: We need to calibrate because we do not know the relationship (mapping()) between the pupil positions (pupil) and the subject's gaze location (ref).

ref = mapping(pupil)

During calibration, ref and pupil are known, mapping is unknown. mapping is inferred based on ref and pupil.

Afterward, when mapping is known, the calibration error (training error) is calculated based on the same data:

training_ref = mapping(pupil)
error = angle(ref, training_ref)

The question is, how big is the error for unknown data, or better: How well does mapping generalize. For that, we need the validation. We collect new ref_new and pupil_new data, but do not change mapping. Afterward, we perform the same math to calculate the error:

validation_ref = mapping(pupil_new)
validation_error = angle(ref_new, validation_ref)

This error is more meaningful, as mapping was inferred from independent data. It tells you how well mapping works for new data.
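A minimal numerical sketch of the same idea (toy data; a linear least-squares fit stands in for the real polynomial/3d models, and Euclidean error stands in for the angular error):

import numpy as np

rng = np.random.default_rng(0)
true_map = np.array([[1.2, 0.1], [-0.05, 0.9]])        # unknown in practice

# calibration: pupil positions and reference locations are both known
pupil = rng.uniform(0, 1, size=(50, 2))
ref = pupil @ true_map + rng.normal(0, 0.01, size=(50, 2))

# "calibration": infer mapping from (pupil, ref)
mapping, *_ = np.linalg.lstsq(pupil, ref, rcond=None)
training_error = np.linalg.norm(pupil @ mapping - ref, axis=1).mean()

# "validation": collect new data, keep mapping fixed, evaluate again
pupil_new = rng.uniform(0, 1, size=(20, 2))
ref_new = pupil_new @ true_map + rng.normal(0, 0.01, size=(20, 2))
validation_error = np.linalg.norm(pupil_new @ mapping - ref_new, axis=1).mean()

print(training_error, validation_error)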

user-af5385 03 May, 2021, 08:30:37

I tried replicating the issue without luck. I suppose that is good, but still concerning that I don't know the source of the issue.

papr 03 May, 2021, 08:59:23

You have not been running other plugins or scripts, e.g. Time Sync, in parallel, have you?

user-af5385 03 May, 2021, 09:00:06

Accuracy Visualizer and Pupil Remote only

papr 03 May, 2021, 09:00:36

Which commands have you sent via Pupil Remote?

papr 03 May, 2021, 09:00:57

The T command conflicts with the LSL time sync.

user-af5385 03 May, 2021, 09:02:14

It seems I am actually sending 'T' somewhere in my app!

papr 03 May, 2021, 09:03:34

I guess I will add that as an example conflict here https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture#lsl-clock-synchronization

papr 03 May, 2021, 09:02:36

ok, this explains the offset then.

user-af5385 03 May, 2021, 09:14:40

Okay, thank you! You should make it bold, because it can cause a subtle but quite devastating problem if the timestamps are incorrect. Perhaps it's possible to record "T" events, and compensate for the reset when you're pushing the samples on the LSL stream

papr 03 May, 2021, 09:19:35

This sounds like you are attempting to sync Pupil time to two separate clocks. This is not supported. Instead, I suggest following the Pupil clock in your script to avoid conflicts. See our new time sync example here: https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py
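The linked helper follows roughly this pattern - a sketch, assuming Pupil Remote on its default port and using only the read-only 't' request (never 'T'):

import time
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote, default port

def pupil_time():
    remote.send_string("t")              # ask Capture for its current time
    return float(remote.recv_string())

# estimate the offset between the local clock and the Pupil clock
t_before = time.monotonic()
t_pupil = pupil_time()
t_after = time.monotonic()
offset = t_pupil - (t_before + t_after) / 2

def to_pupil_time(local_ts):
    # convert a local (time.monotonic) timestamp to Pupil time
    return local_ts + offset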

user-af5385 03 May, 2021, 09:15:46

Somehow accumulating the "T" reset timestamps to get an accurate local_clock()

user-af5385 03 May, 2021, 10:09:05

Well, the intention is to sync LSL time to two separate clocks, which is partially the purpose of that platform, and I was under the impression that the LSL plugin for Pupil Capture would support this.

papr 03 May, 2021, 10:22:38

I was not clear. The purpose of the LSL clock is to sync multiple independent, local clocks to one target clock (LSL Recorder). For that to work, the sending software needs to use the local clock provided by the lsl framework.

Pupil Capture uses a fixed clock uvc.get_time_monotonic, and adjusts to other clocks by applying an offset g_pool.timebase.value. https://github.com/pupil-labs/pupil/blob/master/pupil_src/launchables/world.py#L237-L241 g_pool.get_timestamp() is used to calculate timestamps all over the software.

App-PupilLabs sets this offset on init g_pool.timebase.value = g_pool.get_now() - lsl.local_clock() https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/pupil_capture_lsl_relay.py#L34-L35

Sending T x will change g_pool.timebase.value such that g_pool.get_timestamp() returns time since x.

x and lsl.local_clock() behave like two separate clocks. And this is what I meant by

This sounds like you are attempting to sync Pupil time to two separate clocks. This is not supported.
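To illustrate that arithmetic with plain numbers (stand-in functions and made-up values, not the real uvc/g_pool objects):

import time

def get_time_monotonic():
    return time.monotonic()          # stand-in for uvc.get_time_monotonic()

timebase = 0.0                       # stand-in for g_pool.timebase.value

def get_timestamp():
    return get_time_monotonic() - timebase

# App-PupilLabs on init: make get_timestamp() follow the LSL local clock
def fake_lsl_local_clock():
    return time.monotonic() - 5.0    # made-up clock, offset by 5 s for illustration

timebase = get_time_monotonic() - fake_lsl_local_clock()
# now get_timestamp() ~= fake_lsl_local_clock()

# A later "T x" command would instead set
#   timebase = get_time_monotonic() - x
# so get_timestamp() counts from x and no longer matches the LSL local clock.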

user-af5385 03 May, 2021, 12:16:12

Ah right okay. Anyways, removing the section sending the "T" command did solve the issues, everything is synced up nicely now. Thank you for the assistance!

user-7daa32 03 May, 2021, 13:37:39

Thanks

So the calibration accuracy is what I will use when calculating the stimulus size, right? While the difference in accuracy between the C and V steps just tells us the error accrued? Or how well the mapping works for new data?

papr 03 May, 2021, 13:39:30

No, you should use the validation accuracy because it is more meaningful for the new data recorded during your experiment.

"how well the mapping works for new data" - That is exactly why you want to know V error

user-7daa32 03 May, 2021, 13:47:47

I decided on an angular accuracy error of 1.5 degrees and a distance of 75 cm. I used these to calculate the stimulus size, which is 1.96 cm.

Calibration Accuracy: 0.96, Precision: 0.082

Validation

Accuracy: 1.76, Precision: 0.083

The angular accuracy after validation has gone above 1.5.

My understanding is that it must be equal or lower than 1.5.

Another way I thought I could solve it: create surfaces that satisfy the new accuracy and precision, or even add enough margin using discretion.

papr 03 May, 2021, 13:52:24

Whatever value you choose, the V error should be equal to or below it. Beware that subjects usually are not used to the calibration procedure, so it is possible that their C/V errors are higher than yours.

papr 03 May, 2021, 13:49:47

"My understanding is that it must be equal or lower than 1.5." Correct!

"Another way I thought I should solve it: create surfaces that satisfy the new accuracy" Correct. You could go for 2 degrees instead and increase the stimulus size accordingly.
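For reference, the geometry behind these numbers - a sketch assuming the stimulus size is the full extent subtended by the chosen angular error at the viewing distance, which reproduces the 1.96 cm figure above:

import math

def stimulus_size_cm(angle_deg, distance_cm):
    # full extent subtended by angle_deg at distance_cm
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

print(stimulus_size_cm(1.5, 75))  # ~1.96 cm
print(stimulus_size_cm(2.0, 75))  # ~2.62 cm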

user-7daa32 03 May, 2021, 13:54:29

Thank you

I actually need to keep the stimulus size constant. What other things could easily be varied among subjects?

user-7daa32 03 May, 2021, 13:55:10

I read a paper that varied 60-65 cm distance

papr 03 May, 2021, 13:55:20

Do you mean constant during one experiment session or constant as in it needs to be 1.96cm?

user-7daa32 03 May, 2021, 13:57:04

Constant throughout the experiment for all participants, whatever distance I decide on... I will likely use 2 degrees instead of 1.5.

user-7daa32 03 May, 2021, 13:59:43

I mean all participants get the same stimulus size throughout the experiment

papr 03 May, 2021, 14:00:57

just run some test calibrations with friends or colleagues and check if they can reach 2 degrees consistently.

Reducing the distance if the error is higher is possible of course, but this only works to some degree. If the viewing distance is 10cm, it might become uncomfortable for the participant

user-7daa32 03 May, 2021, 14:10:45

Thank you. I will try that. To my understanding, I might keep 2 degrees, the distance, and the size fixed for varying numbers of participants, as long as the validation goes well, as we said.

A quick question please.

What do you think might have gone wrong when a video in Player blanks out?

papr 03 May, 2021, 14:11:51

This happens during disconnects or if the world process is busy calculating, e.g. shortly after the calibration is over

papr 04 May, 2021, 14:21:07

@user-ae6ade Please see this conversation ☝️

user-7daa32 03 May, 2021, 14:12:17

That's right !

user-7daa32 03 May, 2021, 14:12:33

Thanks. I will get back later

user-7daa32 03 May, 2021, 14:14:11

The AOIs are almost hit perfectly but I will still need to add a little margin

user-2fc6b2 04 May, 2021, 12:37:45

The new version of Pupil player (3.2.20) doesn't seem to read fixations from Core recordings. Drag & drop of the recording loads it, but I get the "Fixation detection - [WARNING] fixation_detector: No data available to find fixations" warning. I would appreciate your help. Thanks.

papr 04 May, 2021, 12:38:45

The fixation detector requires gaze data. Are you using Gaze from recording or post-hoc calibration to create gaze data?

user-2fc6b2 04 May, 2021, 12:40:13

I'm using gaze from recording, so directly the raw recording. This worked perfectly a few days ago with v2.6

papr 04 May, 2021, 12:43:31

Also, please be aware that you need to calibrate to get gaze data.

papr 04 May, 2021, 12:42:49

Can you see the green gaze circle when playing back the recording?

user-2fc6b2 04 May, 2021, 12:45:43

No. It was there when I played the recording in the older player. Calibration was always done before the recording.

papr 04 May, 2021, 12:46:43

Ok, so this is a recording made with v2.6, that you are now opening in v3.2.20, and the gaze is no longer displayed?

user-2fc6b2 04 May, 2021, 12:49:08

It was actually 2.3 (recording software version). Exactly, I'm opening them with 3.2.0 and fixations are not detected. "fixations.pldata" and "gaze.pldata" are all there.

papr 04 May, 2021, 12:51:01

Can you check the file size of gaze.pldata?

user-2fc6b2 04 May, 2021, 12:51:48

For most recordings around 10 Mb.

papr 04 May, 2021, 12:52:19

Are all of your recordings no longer showing gaze? Or only one specifically?

user-2fc6b2 04 May, 2021, 12:53:44

unfortunately, all of them

papr 04 May, 2021, 12:54:58

ok, could you please share one of them with [email removed] such that we can try to reproduce the issue? In the meantime, you should be able to open the recordings with your previous Pupil Player version.

user-2fc6b2 04 May, 2021, 12:57:27

I'd be happy to. Can I still find the older versions on GitHub?

papr 04 May, 2021, 12:58:03

The old version should still be installed, unless you uninstalled it explicitly. You can find all releases here https://github.com/pupil-labs/pupil/releases

user-2fc6b2 04 May, 2021, 13:02:59

I'll install it again. Thanks a lot for the support. I'll send a recording so that the (presumably small) bug can be solved.

user-ae6ade 04 May, 2021, 13:54:26

I had the same issue and thought I must be the problem

papr 04 May, 2021, 13:59:00

Thanks for letting us know! Feel free to share a recording as well, such that we can check if it is indeed the same underlying issue.

user-427730 04 May, 2021, 13:58:18

Hi people, I have some questions about the exported data. I am using AprilTag markers to create a surface and it seems like 2 out of 4 were visible and detected during the session. Now when I export the gaze data I can't tell if any of the gaze positions x and y are in the reference frame of the tags.

papr 04 May, 2021, 13:59:47

The exported data should have a column indicating if the gaze is within the surface or not (on_surf I believe)

user-427730 04 May, 2021, 13:59:45

Do I expect to see a specific variable name for the gaze position in the reference frame of the surface?

user-427730 04 May, 2021, 14:01:41

ah!! I see that it is FALSE for the entire dataset

user-427730 04 May, 2021, 14:01:53

I wonder if that is because only two tags were visible

papr 04 May, 2021, 14:02:33

To improve marker detection, please ensure a sufficiently large white border around them. Also make sure the scene is sufficiently illuminated.

user-427730 04 May, 2021, 14:03:53

They are actually on a screen with a black background. Do you recommend that I add a white frame to the images of the markers? Also, do you recommend that I sharpen the markers (by repeating the pixels) or use them blurry?

papr 04 May, 2021, 14:06:56

The edges should be as sharp as possible. The recorded contrast between the white area/border and the black area should be as strong as possible. If you printed the markers without the white border I highly recommend including it. The detection (green border around detected markers) should be very stable in the preview.

user-427730 04 May, 2021, 14:07:40

got you. Thanks for your help.

user-ae6ade 04 May, 2021, 14:20:26

Is anybody else encountering gray parts in the world video? As if the video wasn't recorded - just the whole screen gray. The fixations and the eye videos work nevertheless. Is this just due to a very slow device, or does it have other causes? I encountered it with the newest version (where the fixations don't work for me) and with v3.1.16 (which I now use because of the missing fixations in v3.2.20).

user-ae6ade 04 May, 2021, 14:23:04

It's mostly around 1.5 minutes in. But the calibration part also takes about 45 seconds because it's so slow, so that might be the cause here, too. Thank you.

user-d230c0 04 May, 2021, 15:58:22

Hi, I wanted to know: in which coordinate space are the gaze point and gaze direction data provided by gazedata.cs?

papr 04 May, 2021, 16:00:34

Gaze is mapped into the Unity main camera

papr 04 May, 2021, 16:01:14

See https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#map-gaze-data-to-vr-world-space on how to transform it into unity world coordinates

user-d230c0 04 May, 2021, 16:25:46

Thanx!!

user-7daa32 05 May, 2021, 00:06:00

I have an issue using two computers. The Annotation plugin doesn't work when two computers are used.

papr 05 May, 2021, 07:10:48

Keypresses are only recorded in the Pupil Capture instance in which the key was pressed. I am not sure if you are expecting the keypress to be transferred to the other computer. Please provide more details about the setup, the performed actions, and the expected outcome.

user-42a6f7 05 May, 2021, 08:50:29

Hey, i am using the pupil labs wearable eye tracker on windows, and when i use the pupil capture program, it tells me that it cannot connect to device

user-42a6f7 05 May, 2021, 08:51:45

And when i open the device manager of the device, it tells me that the installed driver is the best one

wrp 05 May, 2021, 09:51:59

@user-42a6f7 What version of Windows? What version of Pupil Capture?

Device manager: you should see Pupil Cam devices listed under libusbK devices. Do you see any Pupil Cam devices listed in Cameras or Imaging Devices category?

user-42a6f7 05 May, 2021, 09:52:40

I am using windows 8

user-42a6f7 05 May, 2021, 09:53:02

And pupil capture v3.2.20

wrp 05 May, 2021, 09:53:16

Pupil Core software is only supported on Windows 10. Please try to use a machine with Windows 10.

user-42a6f7 05 May, 2021, 09:53:42

Okaaay great

user-42a6f7 05 May, 2021, 09:54:09

I will test that, thanks @wrp

user-42a6f7 05 May, 2021, 09:54:33

But libusbK does not appear in the device manager on windows 8

user-42a6f7 05 May, 2021, 09:55:00

Is it due to windows 8 also?

wrp 05 May, 2021, 09:55:17

Driver installer was compiled/programmed for Win 10 if I recall correctly.

user-42a6f7 05 May, 2021, 09:56:04

Okay great

user-7daa32 05 May, 2021, 13:00:27

I have a laptop and a desktop, both connected by an HDMI cable. There were key presses during the exercise. The annotation commands in Capture and Player are the same. The issue is that it shows 0 annotations in Player, but there actually were annotations; the keys were pressed.

papr 06 May, 2021, 09:15:11

So, to clarify, you are using an external display, not two separate computers, correct? It is important that Capture World window is focused when pressing the keys. If it is not, the keys cannot be recorded.

user-7daa32 05 May, 2021, 18:28:09

Please, do you have any idea why one can end up with zero annotations even when the hotkeys were pressed?

user-8e5c72 06 May, 2021, 08:36:13

Hi, we have a problem with the single marker calibration.

nmt 06 May, 2021, 09:13:25

Hi @user-8e5c72. Please can you explain your problem in more detail so we can be in a better position to offer advice?

user-0cc7c5 06 May, 2021, 10:16:58

Following up on susi5's comment: we use the single marker calibration but it doesn't turn blue when fixated; only the border stays highlighted green - is that normal? Also, the calibration is very bad. Compared to previous studies, the gaze is still flickering around after calibration. Can that be due to the camera used? This time we are using the fisheye camera - last time it was the other one... Thank you!

papr 06 May, 2021, 10:18:22

Is it possible that you have been using the "Manual marker" calibration previously?

user-0cc7c5 06 May, 2021, 10:19:19

No, last time we used the screen marker calibration. but it's not possible this time.

papr 06 May, 2021, 10:20:14

ok, thanks for letting me know. And you are using the physical marker mode in the Single Marker calibration, correct?

user-0cc7c5 06 May, 2021, 10:20:37

jup.

papr 06 May, 2021, 10:24:14

It is normal that the world window shows a green circle around the detected marker. I am not sure why you would expect it to become blue. Did you see this effect somewhere else before or in a video tutorial?

user-0cc7c5 06 May, 2021, 10:26:06

Ah yes, I saw it in a video - that's why I asked myself whether the marker might not be working properly. But obviously it works the way it should, thanks for clarifying! But do you think the bad calibration results are because the single marker calibration is just not as good as the screen marker calibration, or because of the different camera, or what else could the problem be?

papr 06 May, 2021, 10:29:45

It is important to understand that the single marker calibration requires a different procedure than the screen marker calibration. The latter expects the subject to fixate the different targets while keeping their head still. The former has two options:

1. Fixate a still target while moving the head around
2. Keep the head still and follow a moving gaze target

Which of those two are you employing?

user-0cc7c5 06 May, 2021, 10:33:57

we tried both.

papr 06 May, 2021, 10:35:58

ok, could you please share a Capture recording of you or someone else performing the single marker calibration with [email removed] This way we will be able to give concrete feedback.

user-7daa32 06 May, 2021, 13:53:16

Thanks

At what resolution would you make an image, like a 3x3 square, appear clearly in Player? Images in Player are usually small and/or blurry.

user-0cc7c5 06 May, 2021, 10:56:54

done. Appreciate your help!

nmt 06 May, 2021, 12:57:04

Hi @user-0cc7c5. I have responded by email with feedback about using the single marker to calibrate (and post-hoc calibration and gaze mapping).

user-0cc7c5 06 May, 2021, 12:57:51

Yes I just saw it. Thank you very much!

papr 06 May, 2021, 13:54:59

The display resolution of your stimuli is not primarily relevant. Instead, check the scene camera resolution and make sure it is focused correctly.

user-7daa32 06 May, 2021, 14:31:03

Focused, with a resolution of 1080x720.

user-7daa32 06 May, 2021, 16:17:42

Is there a recommended chinrest for head mounted eye tracker?

user-7daa32 06 May, 2021, 16:20:21

What about ideas on how to get the participant to press the space bar or use some other device, like the mouse, when they find the target... clicking to align with the annotations?

user-f36813 06 May, 2021, 16:48:24

Hello, the right eye camera of my Pupil Core (Pupil w120 e200b) does not respond. I have measured impedance at S1 and S2 on both and you can see the difference in the photograph. I also measured pins at the SN9C5256AJG (camera controller) with low resistances (3.3 ohm, 3.1 ohm). Could you help me with this?

Chat image

papr 10 May, 2021, 08:12:20

Please contact info@pupil-labs.com in this regard

user-f36813 06 May, 2021, 16:51:42

I want to keep using it monocularly, but Pupil Player is not working. Is there a way to solve that?

user-2be752 07 May, 2021, 02:27:02

Quick question about the fixation report CSV from Pupil Player: is start_frame_index-end_frame_index an inclusive or an exclusive range? Thank you!

papr 10 May, 2021, 08:14:01

inclusive 🙂

user-07abaf 07 May, 2021, 13:57:33

hey, a few questions - however we/I are really new to eye-tracking and procedures

  • we are planning a mirror gaze experiment, however I have noticed that so far only the screen marker calibration yields the best angular accuracies -> we tried to calibrate with a single physical marker and with natural features, both resulting in gazes that are really far off
  • we are aware of the implications if we use screen calibration - such as keeping movement as minimal as possible (however, our participants will stand in front of the mirror, so there will be natural head movement)

  • Is there any possibility to improve the single marker calibration? Average accuracies right now are always >2.5 at least. Would "natural features" be more promising (e.g. calibrating on body features)?

  • However, we noticed that in every recording there is a slight left shift, such that even with a good calibration + validation (accuracy <1) the fixations are systematically shifted left relative to the intended positions

  • Is this a solvable (known) problem or does it stem from the experimental design (mirror reflection, angle)?

I would be very grateful for suggestions!!

papr 10 May, 2021, 08:27:29

1) Single marker calibration is very dependent on how one performs the procedure. Feel free to share a Pupil Capture recording of you performing it with [email removed] such that we can give feedback. You should be able to reach comparable accuracies with screen and single marker calibration. After any calibration, it is important to avoid slippage. But you can use the screen marker calibration and then let the subject look somewhere else.

Natural features calibration is less reliable as it requires good communication between subject and operator.

2) Usually the offset is systematic but not in a linear fashion. So it is difficult to correct it. Nonetheless, the post-hoc calibration in Pupil Player allows you to recalculate the calibration using a manual (linear) offset.

papr 10 May, 2021, 08:12:45

Could you let us know what is not working for you specifically?

user-f36813 10 May, 2021, 16:54:06

When I drop the folder into Pupil Player, it opens and closes immediately. I copied the data from the camera that is OK and renamed it as if it were the missing camera, so that I could see the data without using post-hoc calibration. But I cannot run the calibration and validation in a monocular way (selecting just one eye's data); is there a way to do that?

user-597293 10 May, 2021, 16:24:12

Hi, I have two questions regarding using Pupil Core with the newest release of Pupil software on Win10.

1) I’m using a second monitor for calibration, as my experiment involves visual stimuli on a monitor. Once the calibration is done, it remains frozen with a black color. Any idea what this could be? It stays this way until Pupil capture is closed and makes me unable to perform the experiment as I cannot show anything on the screen.

2) Are there any recommended settings in regards to frame rate and resolution on both world and eye camera? Couldn’t seem to find anything about this in the docs.

Thanks!

Edit: The frozen black screen does not happen when running calibration on the primary screen on the laptop.

papr 19 May, 2021, 08:17:59

We are testing a workaround for the black-screen-on-exit issue https://github.com/pupil-labs/pupil/pull/2144

Did you know that you can run the calibration in window mode, too? Is this an acceptable solution for now? I am trying to assess how quickly we need to release this bugfix.

papr 15 May, 2021, 20:03:06

I was able to reproduce the issue. I will let you know on Monday when we will be able to fix it

papr 10 May, 2021, 16:57:19

Player supports monocular recordings. Just don't copy the video; leave it as it is. Should you continue having issues with it, please share it with data@pupil-labs.com such that we can try to reproduce the problem.

user-f36813 11 May, 2021, 17:29:09

👍 I will try again, thanks

papr 10 May, 2021, 16:58:38

Thanks for reporting the issue and the detailed context information. We will try to reproduce the issue.

Re 2) we recommend the lowest resolution and highest frame rate that your computer allows without dropping frames

user-597293 10 May, 2021, 17:15:27

Thank you for your quick answer. This happened in v. 3.0.7 as well, but I updated today to see if it worked with a newer version - without luck. Nothing in the .exe terminal implies that there is an error, and once I calibrate again or validate the black screen goes away and the choreography is launched.

user-b772cc 11 May, 2021, 04:01:52

@papr how do we validate the calibration result? I noted it indicates the % of samples dismissed below 0.8 confidence. How much % dismissed would mean a failed calibration? Thanks.

papr 11 May, 2021, 07:00:59

The most important metric is the accuracy. Which threshold to use, depends on your experiment/setup.

user-d8879c 11 May, 2021, 23:11:35

How do I delete an annotation within Pupil Player? (I annotated a section of my video with the wrong annotation, so I need to delete and correct it.)

user-d8879c 11 May, 2021, 23:14:06

I have Invisible eye-tracking glasses. I had an individual wear them in an arena, and the signs that were 150 feet away are blurry; you cannot see what the signage says. Any way to fix this to make the video footage sharp?

user-b772cc 11 May, 2021, 23:29:58

Thank you @papr. Where can I see the accuracy metric? I can only see the % of samples dismissed below 0.8 confidence.

papr 12 May, 2021, 07:28:13

Check out the accuracy visualizer menu. It shows the accuracy and precision for the latest calibration/validation.

user-370594 12 May, 2021, 09:06:46

Hello! We have tried recording eye movements in paragliders and skiers. The problem is that, due to the sun or snow, the eyes light up and the pupils are not visible. Is it possible to somehow change the settings to fix this? Or is the eye tracker unusable in bright-light conditions?

Chat image

papr 12 May, 2021, 09:10:35

Hi, unfortunately, there is nothing one can do post-hoc in Pupil Player to fix the overexposed eye images. 😕 We highly recommend checking camera exposures during the recordings.

user-370594 12 May, 2021, 10:11:36

Thanks, I see. So the problem is just the exposure of the eye cameras, and it's possible to change it in the software before recording, right? I didn't pay attention to such a function before.

papr 12 May, 2021, 10:12:04

Correct. See the "Video source" menu of the eye windows.

user-370594 12 May, 2021, 10:12:49

Thank you! I'll check it.

user-0b60a4 12 May, 2021, 11:44:15

I have one question. I am doing research using Pupil Core. Is there a setting to disable gaze measurement in real time?

papr 12 May, 2021, 11:44:57

If you do not calibrate, gaze will not be estimated. If you are talking about pupil detection, that can be disabled, too.

user-0b60a4 12 May, 2021, 12:03:32

Thank you for your reply. I have an additional question. Currently, I have the World/Eye cameras set to output at 120 Hz, but the frame rate of the cameras does not reach 120 Hz. I'm looking for a solution to this problem. Other than increasing the specs of my PC, is there any other way to set the frame rate of the video (to lighten the processing)?

papr 12 May, 2021, 12:05:05

Disabling the pupil detection (without closing the eye window) should solve the issue. Unfortunately, the UI for disabling the pupil detection does currently not work. We will release an update shortly that fixes this issue.

user-0b60a4 12 May, 2021, 12:13:10

Let's try the experiment without calibration. Thank you very much.

user-0b60a4 12 May, 2021, 12:40:53

A red dot appears on the screen even though I have not calibrated. Does this mean that it is estimating the line of sight? If so, how can I stop it from estimating gaze?

papr 12 May, 2021, 12:41:34

This comes from the fact that Capture has restored the previous calibration. You can start with default settings in the general settings to reset it.

user-c5a9dc 12 May, 2021, 12:44:58

Hi folks, one question. Is information regarding an analysis done in Pupil Player (i.e. specifically intensity range) stored somewhere in an export/offline_data? Just wondering if this is available somewhere (to ensure e.g. reproducible analyses). Thanks!

papr 12 May, 2021, 12:46:05

No, the pupil detection configuration is not stored as part of the data.

user-c5a9dc 12 May, 2021, 12:48:39

Is there any way to do it – or would you recommend taking a screenshot?

papr 12 May, 2021, 12:49:31

Either that, or you can use the real-time pupil detector API to read out the configuration and store it programmatically. https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_detector_network_api.py

papr 12 May, 2021, 12:50:25

You can use the API to set the properties, as well.

user-acc963 12 May, 2021, 14:23:09

Hi, just wondering what sort of filtering is done to the pupil confidence signal to detect blinks? I see it turns drops into positive peaks and gains in confidence into negative peaks. Is this some sort of median filter? Thanks!

nmt 12 May, 2021, 14:31:27

Hi @user-acc963. We run the 2D detector on every frame of eye data and assign a confidence value to show how well the pupil was detected. Our blink detector leverages the fact that the confidence drops rapidly in both eyes during a blink. It processes both eyes' confidence values by convolving it with a filter whose resulting values (called filter response or activity) spike the sharper the confidence drop/increase is. The onset of a blink is defined when the filter response rises above the onset confidence threshold (which corresponds to a drop in the 2D pupil detection confidence). The offset of a blink is defined when the filter response falls below the offset confidence threshold (which corresponds to a rise in the 2D pupil detection confidence).
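A minimal sketch of that filtering step (the filter length and thresholds here are illustrative, not the values the shipped detector uses):

import numpy as np

# toy confidence trace: high, a short drop (the "blink"), high again
confidence = np.r_[np.ones(100), np.zeros(20), np.ones(100)]

half = 20
# kernel ordered so that, after np.convolve's flip, a confidence drop yields a
# positive peak in the response and a confidence rise a negative one
kernel = np.concatenate([-np.ones(half), np.ones(half)]) / half
response = np.convolve(confidence, kernel, mode="same")

onset_candidates = np.flatnonzero(response > 0.5)    # confidence dropping -> blink onset
offset_candidates = np.flatnonzero(response < -0.5)  # confidence recovering -> blink offset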

user-7daa32 12 May, 2021, 15:04:02

Please, do you mean we can use the screen marker calibration but look elsewhere outside the screen ?

papr 12 May, 2021, 15:04:33

Correct

user-7daa32 12 May, 2021, 15:09:02

I have chosen (changed) to have the stimulus inside the screen for similar reasons to those @user-597293 pointed out. Our original plan was to have it placed on the wall. I never thought it was reasonable to calibrate that way. It still looks unreasonable to me because the subject must look at the calibration marker.

papr 12 May, 2021, 15:18:59

I think we misunderstood each other. Of course the subject needs to look at the screen during the calibration. But afterwards, the subject can also look somewhere else without losing the validity of the calibration. Remember: Gaze is calibrated in the coordinate system of the scene camera, not the coordinate system of the screen or the wall.

user-597293 12 May, 2021, 18:51:55

Any news on this @papr? Have you been able to reproduce the bug, or may it be a local problem with my computer?

If a (potential) bug fix is far away I might implement single marker calibration by showing the markers directly in the stimuli software.

Thanks a lot ☺️

papr 13 May, 2021, 10:50:44

Unfortunately, no news yet. I will be able to update you by tomorrow.

user-a98526 13 May, 2021, 10:44:27

Hi @papr, I would like to ask whether the head pose tracking plugin needs the camera parameters and AprilTag parameters to be adjusted. When I was using AprilTag, the detection result was related to the camera intrinsics and the size of the AprilTag.

papr 13 May, 2021, 10:52:12

Apriltag detection in the head pose tracker should work out of the box. Are you having issues with the marker detection?

user-a98526 13 May, 2021, 10:54:10

I care about whether, if I adjust the resolution of the camera, the detection result will be the same (RealSense 1280x720 and 640x480).

papr 13 May, 2021, 11:07:13

I do not know that for sure. I assume apriltag detection will be better at higher resolution.

user-a98526 13 May, 2021, 11:34:50

Thanks a million.

user-44bf4f 13 May, 2021, 12:21:35

Hi, I'm part of a group of students that is meant to use a Pupil Core headset for a small university project. We currently have the problem that Pupil Capture does not recognize the world camera of our hardware. Both eye tracker cameras work fine, and the PC also recognizes the camera (my colleague can use it like a normal webcam), but the Pupil Capture software only shows three "unknown @ Local USB" devices that cannot be selected (trying to do so results in the error message "WORLD: The selected camera is already in use or blocked."). We are running the software under Win 10 and have already tried uninstalling and reinstalling the drivers once or twice. Do you guys have any idea what could be wrong?

papr 13 May, 2021, 12:25:12

Sounds like the drivers are indeed not being installed correctly. To debug the driver installation, could you please do the following:

1. Unplug the Pupil Labs hardware
2. Open Device Manager
   2.1. Click View > Show Hidden Devices
   2.2. Expand the libUSBK devices category and the Imaging Devices category within Device Manager
   2.3. Uninstall/delete the drivers for all Pupil Cam 1 ID0, Pupil Cam 1 ID1, and Pupil Cam 1 ID2 devices within both the libUSBK and Imaging Devices categories
3. Restart the computer
4. Start Pupil Capture
   4.1. General menu > Restart with default settings
5. Plug in the Pupil headset - please wait, the drivers should install automatically (you might be asked for admin privileges)

user-44bf4f 13 May, 2021, 12:45:07

Hm, we did that now (twice, we also deleted camera drivers on a second attempt), but the problem still seems to persist...

papr 13 May, 2021, 12:46:07

Could you please check the Cameras and Imaging Devices categories in the device manager? What camera names do you see there?

user-44bf4f 13 May, 2021, 12:51:00

Cameras:

Integrated Camera
Intel(R) RealSense(TM) 3D Camera (R200) Depth
Intel(R) RealSense(TM) 3D Camera (R200) Left-Right
Intel(R) RealSense(TM) 3D Camera (R200) RGB
Microsoft(R) LifeCam VX-2000
Pupil Cam1 ID0
Pupil Cam1 ID1

Imaging Devices:

WSD Scan Device

libusbK Devices:

Pupil Cam1 ID0
Pupil Cam1 ID0

papr 13 May, 2021, 12:53:48

Please delete the Pupil Cam drivers from the Cameras category. Leave the libusbk category as it is. ~~Please be aware that you need to use a Pupil version with pyrealsense support or the third-party backend plugin to access the realsense camera as scene camera https://gist.github.com/romanroibu/c10634d150996b3c96be4cf90dd6fe29~~

You will need to use a very old Pupil version if you want to use that R200 as scene camera

user-44bf4f 13 May, 2021, 14:09:19

Ah i see.... where can i get that? 😄

papr 13 May, 2021, 14:09:42

You can find all releases at https://github.com/pupil-labs/pupil/releases

papr 13 May, 2021, 14:11:33

v1.9 might be a working version

user-44bf4f 13 May, 2021, 14:18:40

Okay, we will see if that works and keep you updated on how it's going 🙂

user-7daa32 13 May, 2021, 14:44:38

Hello ,

Capture is crashing after I open it. The world window and one of the eye camera windows close suddenly after I open Capture.

papr 13 May, 2021, 14:44:59

Please share the capture.log file

user-7daa32 13 May, 2021, 14:46:59

Sorry, where is the file location?

papr 13 May, 2021, 14:47:19

Home -> pupil_capture_settings -> capture.log

user-7daa32 13 May, 2021, 14:48:25

Chat image

papr 13 May, 2021, 14:48:50

The capture file is the correct one.

user-44bf4f 13 May, 2021, 14:54:30

We got it to work on version 1.2.7 now, thanks a lot for the help!

user-555c9d 14 May, 2021, 01:59:27

Does anyone know the meaning of Pupil_timestamp? Can it be converted to UTC?

nmt 14 May, 2021, 07:27:24

Hi @user-555c9d. You can read about pupil's timing conventions here: https://docs.pupil-labs.com/core/terminology/#timing

At recording start, the current time of the pupil clock is written into the info.player.json file of the recording under the “start_time_synced_s” key. Additionally, the current system time is saved as well under the “start_time_system_s” key.

You can convert Pupil timestamps to System time with some quick conversions. Here is an example of how to do this with Python using the system/synced start times and an example first Pupil timestamp:

import datetime
start_time_system = 1533197768.2805 # unix epoch timestamp
start_time_synced = 674439.5502 # pupil epoch timestamp
offset = start_time_system - start_time_synced
example_timestamp = 674439.4695
wall_time = datetime.datetime.fromtimestamp(example_timestamp + offset).strftime("%Y-%m-%d %H:%M:%S.%f")
print(wall_time)
# example output: '2018-08-02 15:16:08.199800'

user-518ca2 14 May, 2021, 10:51:46

Hi,

We are conducting an experiment with Pupil Labs using the Pupil Mobile app. When I first plugged it in there was an error: "error: shoutAttachedsensor is not available KeySensor..." After that I unplugged and plugged it in again, and there were no errors. I started the experiment.

There was a recording approximately 40 minutes long. But when I copy the data and analyze it with Pupil Player, there is data for only 3:56 minutes. Any ideas? Where can I look for a log/error history etc. on the phone? The cable connection might be the problem, because the metal surrounding of the cable seems damaged. When trying again to connect with Pupil Capture, it cannot connect to the device and does not show the eye images. Using a Samsung Galaxy S9+ with Android version 10.

user-d1072e 14 May, 2021, 13:02:24

Hello, I'm writing a report for my project and since I use Pupil Core, I also have to document some of its key components. I have a question about the pupil detection method Pupil Core uses. In the docs as well as Pupil Player, it's apparent that the device runs 2 methods at the same time: 2D detection and 3D detection. The 2D detection is quite clear to me, as it detects the pupil location in the camera image and the algorithm is based more or less on the one mentioned in the section "Pupil Detection Algorithm" in this paper (https://arxiv.org/pdf/1405.0006.pdf). But what about 3D detection? It uses a 3D model of the eye(s) that updates based on observations of the eye (https://docs.pupil-labs.com/core/software/pupil-capture/#pupil-detection), but what kind of 3D eye model is it talking about here? How is it going to detect the pupil based on this model? Are there any papers or docs where I can find more information about this?

papr 14 May, 2021, 13:06:37

Please see https://docs.pupil-labs.com/developer/core/pye3d

user-d1072e 14 May, 2021, 13:26:04

Thank you so much. Also, one more thing: where can I find more info about the gaze mapping function? In the same paper I sent earlier, it says "The user specific polynomial parameters are obtained by running one of the calibration routines", and there are 4 of them: Screen Marker, Manual Marker, Natural Features and Camera Intrinsic. So, I guess each routine has its own way of doing gaze mapping, right? Or maybe there's still an underlying function/method and, depending on the routine, there will be some additional thing built on top? I can only find this code for it on GitHub (https://github.com/jeffmacinnes/mobileGazeMapping), but it seems not enough and I still struggle to find the docs on the website.

papr 14 May, 2021, 13:56:25

Please note, the camera intrinsics estimation is not a gaze calibration. It is just meant for estimating camera specific parameters, e.g. focal length. Capture comes with a set of default intrinsics such that running the estimation is usually not necessary.

Screen/single marker/natural feature calibration only refers to the choreography which defines how reference locations are being collected. The actual calibration (2d or 3d) happens afterward, based on the collected reference and pupil data.

2d calib. uses polynomial regression. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_2d.py#L72-L73

3d calib uses bundle adjustment to estimate the physical relationship/transformation between eye and scene cameras. The result are two matrices (for left and right eye) that you can use to transform eye to world coordinates. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L207-L209
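As a sketch of what those matrices let you do (the 4x4 matrix below is a placeholder; the real values come out of the 3d calibration, one per eye):

import numpy as np

# placeholder for the eye-to-world transform estimated by the 3d calibration
# (bundle adjustment); in practice there is one such 4x4 matrix per eye
eye_to_world = np.eye(4)

gaze_normal_eye = np.array([0.0, 0.0, 1.0])   # gaze direction in eye camera coordinates
eye_center_eye = np.array([0.0, 0.0, 30.0])   # eyeball center in eye camera coordinates (mm)

R, t = eye_to_world[:3, :3], eye_to_world[:3, 3]
gaze_normal_world = R @ gaze_normal_eye       # directions: rotation only
eye_center_world = R @ eye_center_eye + t     # points: rotation + translation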

user-d1072e 16 May, 2021, 17:56:19

thank you so much, that's very helpful for me 👍

user-8e2443 14 May, 2021, 16:20:19

Hi, can I use core with glasses?

nmt 15 May, 2021, 08:35:15

Hi @user-8e2443, the eyeglasses would need to go on top of the Core headset, and the eye cameras adjusted such that they capture the eye region from below the glasses frames. This is not an ideal condition but does work for some people but is dependent on physiology and eye glasses size/shape. The goal is to get a direct image of the eye (not through the lenses).

user-7daa32 14 May, 2021, 18:52:43

It has not worked well for me using glasses

user-7daa32 14 May, 2021, 18:51:06

I still don't know why I am getting zero Annotations. Even when I was clicking the right hotkeys

user-b96e8e 14 May, 2021, 19:00:45

Hello. We've been running two instances of Capture (two headsets) on one computer for about 12 months. We've been using the Groups plugin and Time Sync to mirror recording on both headsets. Two days ago, though, we ran into the issue where Groups no longer mirrors activity and we are forced to hit Record on both instances of Capture. Any solutions to this? Or an explanation as to why this has suddenly become an issue?

papr 15 May, 2021, 08:03:06

This is a known bug that we fixed in our latest v3.3 release. :)

user-7daa32 15 May, 2021, 17:47:07

This might seem impossible but I have a recording of it. While clicking the annotation hotkeys with the world window in view, the loaded annotations show up in Player; but when I switch the scene to that of the visual stimulus, the annotations made at that time don't get loaded and won't show in Player.

For example, if I click the annotation hotkey during recording with the world scene in view, I will have loaded annotations in Player. But the annotations I make when I look at the stimulus instead of the world camera scene won't be loaded when viewed in Player.

papr 15 May, 2021, 19:20:45

Again, the Capture application window needs to be in focus/active. The operating system is designed such that keyboard input can only go to the active/focused window. For example, right now, Discord is in focus for me as I type this message. Once I focus a different app, e.g. Spotify, I can no longer type into Discord. Windows highlights the active app in the task bar. When you switch to your stimulus software, you are taking the focus away from Capture and therefore it does no longer receive your keyboard input and therefore cannot record any annotations.

user-7daa32 15 May, 2021, 17:47:16

I don't know what's happening

user-7daa32 15 May, 2021, 19:24:12

The stimulus is just a picture... a JPEG file... Do you mean I don't need to minimize the Capture window? I don't think I do. I just start recording and then click the JPEG stimulus picture on the task bar to bring it onto the screen.

papr 15 May, 2021, 19:24:49

Exactly. The app showing the picture takes the focus in this case!

user-7daa32 15 May, 2021, 19:24:52

I'm using two computers and both are in duplicate mode. What appears on the desktop also appears on the laptop.

user-7daa32 15 May, 2021, 19:25:54

Then how would I bring the stimulus into view...? I have not been having this trouble.

papr 15 May, 2021, 19:30:14

I do not know your requirements. I guess one screen is for the subject and one for the operator? One idea would be to not mirror the screens. Instead, show the stimulus on the subject-screen and Capture on the operator screen. Make sure to refocus Capture such that it can process the keyboard input after you opened the stimulus

user-7daa32 15 May, 2021, 19:33:56

I would like to do the calibration on the stimulus screen (desktop). The calibration doesn't work and the markers don't come up during calibration.

user-7daa32 15 May, 2021, 19:43:21

Chat image

user-7daa32 15 May, 2021, 19:43:47

Both computers are connected by an HDMI cable. Whatever is displayed must be on or come from the laptop; the screen can either be duplicated or transferred. The desktop can't function on its own. When I was in duplicated mode, the calibration worked but not the annotation. Now I am trying to use the other mode to check the annotation, and the calibration refuses to work. I don't know if you get this.

papr 15 May, 2021, 19:46:12

Have you chosen full-screen or window mode? If full-screen mode, make sure that the display is correctly set. If it is window mode, please check if the calibration window is hidden behind another window. Also, tip: From the top bar of that window, you can see that this window is not active/focused. It is greyed out.

user-7daa32 15 May, 2021, 19:51:17

Both computers are connected by an HDMI cable. Whatever is displayed must be on or come from the laptop; the screen can either be duplicated or transferred. The desktop can't function on its own. When I was in duplicated mode, the calibration worked but not the annotation. Now I am trying to use the other mode to check the annotation, and the calibration refuses to work. I don't know if you get this.

papr 15 May, 2021, 19:53:05

I understand that the calibration is not showing. I am asking for these setting such that we can figure out possible causes for the markers not showing.

Screen marker calibration menu:

Have you chosen full-screen or window mode?

user-7daa32 15 May, 2021, 19:55:31

Under screen marker calibration, there is no "full screen or window mode " written here

user-7daa32 15 May, 2021, 19:56:36

Chat image

user-7daa32 15 May, 2021, 19:57:32

WOW

user-7daa32 15 May, 2021, 19:57:48

I switched it to the Generic PnP Monitor (1) option and it worked. I have to check the annotations now.

papr 15 May, 2021, 19:58:29

ok, great. Make sure the Capture window is active

user-7daa32 15 May, 2021, 20:02:32

The Capture window can't be in focus at the same time as the stimulus.

papr 15 May, 2021, 20:03:45

Correct. That is what I was saying before. Capture needs to be in focus.

user-7daa32 15 May, 2021, 20:04:25

How would you do that ?

papr 15 May, 2021, 20:05:20

You can put the world window into focus by clicking on it. You should see how the text in the window bar at the top changes colors.

user-7daa32 15 May, 2021, 20:08:52

Meaning the stimulus should not be in full screen while the Capture window is in full screen. What about the user viewing natural features without using the screen?

papr 15 May, 2021, 20:14:17

I feel like I do not know enough about your setup to be able to help you here. I can only tell you what the requirements are if you want to create annotations via the keyboard: The world window needs to be in focus.

Also, the stimulus is not shown during the calibration, correct? Because the subject needs to focus the markers, not the stimulus. I do not know if natural feature calib would solve your problem.

If you are having trouble with the full-screen mode in the screen-marker calibration, you can turn it off. See your last picture -> disable Use fullscreen. This way you get a window that you can position where ever you like.

user-7daa32 15 May, 2021, 20:16:07

I'm not saying the stimulus is not showing... The stimulus doesn't need to show during calibration. I am concerned about having zero annotations even though I pressed the hotkeys during recording.

user-7daa32 15 May, 2021, 20:18:38

You said for me to have annotations, capture window must be in focus.

Then I asked how can it be in focus when the stimulus should be in focus during recording?

papr 15 May, 2021, 20:19:38

Maybe we are having different terminology here? Multiple windows can be visible but only one can be in focus.

papr 15 May, 2021, 20:20:24

Also, focus does not mean that the subject needs to be looking at it. Focus is a technical term that determines to which window the operating system sends the keyboard input.

user-7daa32 15 May, 2021, 20:32:28

@papr I just sent my capture file to your email address since it is crashing again

papr 15 May, 2021, 20:33:17

ok, I will have a look at it on Monday.

user-b772cc 16 May, 2021, 00:49:30

Thanks @papr. Where can I see the results from the accuracy visualiser? I don’t seem to see any from the player. Thanks again

user-d1072e 16 May, 2021, 17:58:05

I think it's in Capture, but you need to enable this plugin

Chat image

user-7daa32 16 May, 2021, 23:06:45

@papr I went over all your responses and thought about them. This made me understand what is meant by Capture being in focus. I know what went wrong. I wasn't having issues getting annotations before: I used to calibrate using the single marker, and when I switched to screen marker calibration, I had both the Capture window and the stimulus on the desktop. So if I minimize Capture on the desktop to have the stimulus on the screen, it won't be in focus anymore.

Here is how I solved it.

I moved the Capture window from the laptop to the desktop so I could calibrate. After calibration, I moved it back to the laptop so that Capture stays in focus (active). The desktop has the stimulus in view and I don't need to minimize Capture (it is already showing on the laptop).

I hope you get it? Thanks for your responses.

papr 17 May, 2021, 08:03:47

Yes, sounds like you got it right! 🙂

papr 17 May, 2021, 08:01:58

@user-d1072e is right. I was referring to Capture. You can get these values in Player, too, if you do post-hoc calibration. Each gaze mapper entry has a submenu validation that calculates the accuracy/precision for the corresponding section.

user-b772cc 17 May, 2021, 12:53:34

Thank you @user-d1072e and @papr. If I have enabled the Accuracy Visualizer in Capture, can I still get the results from Player? Thanks again.

papr 17 May, 2021, 08:02:58

I had a look at the file. There is no indication for a crash 😕 Do you continue having this issue?

user-7daa32 19 May, 2021, 13:41:46

I have experienced it twice, and it went back to normal after I switched it or my computer off for a while.

Quick question on the fixation detection algorithm:

I saw some recommendations on the max duration on the Pupil Labs website, and I understand that whatever one chooses depends on a lot of factors.

Please, do you have any resources that talk about best practices for a variety of tasks?

I am doing a visual search task, for example.

user-ae6ade 17 May, 2021, 09:25:53

Maybe a weird question: I will use two eye trackers on different computers. Are the timestamps of those two recordings then comparable? How do I synchronize them when they are not in the same room? I want to find out if the subjects look at the same AOI at the same time (on a screen).

papr 17 May, 2021, 09:30:33

Check out the Capture Time Sync plugin: https://docs.pupil-labs.com/core/software/pupil-capture/#network-plugins

papr 17 May, 2021, 12:56:36

Yes, see this message https://discord.com/channels/285728493612957698/285728493612957698/843760243611140116

user-b772cc 21 May, 2021, 06:27:04

Thanks @papr How do I perform post hoc calibration? Thanks again.

user-a09f5d 17 May, 2021, 20:32:18

Hi

I have started data collection with the pupil core. While playing around with some early data I plotted the results (i.e. the angle moved by the eye within a period of time) from the SAME recording based on data obtained from: 1) real time pupil detection in capture v2.5, 2) posthoc pupil detection in player v2.5, and 3) posthoc pupil detection in player v3.2. While I thought there might be some small differences between the results from these different sources I was not expecting them all to be so different. I expected the data processed by v3.2 to differ somewhat since it uses the new pye3d method but there was even a difference between 1 and 2 which I found very surprising given that I would assume they should yield the same, or very similar, results.

To try and figure out if the difference between 1 and 2 was because I might have used different settings when running the post-hoc pupil detection, I decided to run the post-hoc pupil detection in Player v2.5 on a separate recording twice with exactly the same settings and exported the data from each. Although the results were more similar, there were still some large differences that would definitely impact my study.

Do you know why there is such a large amount of variability in the data from the same recording but from repeated posthoc pupil detection? Any help getting to the bottom of this would be amazing as this has really cast some doubt on the reliability of using the pupil position data for my current study.

Many thanks.

papr 18 May, 2021, 14:43:48

Which data are you looking at exactly? And would it be possible for you to share a raw data export for each version that exhibits this high variability? The email for sharing would be [email removed]

user-7ff310 17 May, 2021, 21:39:09

Hi Pupil Community, I have a question very specific to my use case. I need to be able to calibrate the eyes individually, that is: 1) calibrate the left eye while the right eye is covered, then 2) calibrate the right eye while the left eye is covered. When performing gaze mappings, I need to have mappings based on step 1) for the left eye and based on step 2) for the right eye. Is there a plugin for that? I know there is this one: https://gist.github.com/papr/5e1f0fc9ef464691588b3f3e0e95f350 It does monocular mappings, but the eyes are not calibrated individually. Am I missing something here? Many thanks in advance!

papr 17 May, 2021, 21:39:49

You should talk to @user-a09f5d 🙂

papr 17 May, 2021, 21:40:03

I will have a look at this tomorrow

user-a09f5d 17 May, 2021, 21:56:45

Thanks @papr

user-a09f5d 17 May, 2021, 21:56:12

Hi @user-7ff310 I needed to do exactly the same thing in the past, so I'd be happy to chat with you. In a nutshell (and @papr please correct me if this is no longer the case), as far as I am aware it is not currently possible to calibrate each eye individually within the same recording. In the end I found a way to use the raw data from the eye cameras to avoid the calibration step altogether. Still a work in progress, but hopefully it will work out.

user-7ff310 19 May, 2021, 20:59:49

Thanks! I was thinking of doing the same thing, basically developing my own calibration procedure outside of Pupil Core. Unfortunately, I am very limited in time and don't have enough of it to dive into development.

user-a09f5d 18 May, 2021, 17:01:49

I use the mean of Circle_3d_normal_x,y,z at time 1 and the mean at time 2 and calculate the change in eye position (in degrees) between the two time points

Absolutely, happy to share. Should I send just the exported files or do you need the whole recording?

papr 18 May, 2021, 17:24:37

The exported files should be sufficient for a first look. 🙂 When you say the mean, what values are you averaging? The normals from both eyes, or do you average multiple normals from one eye in a given time period?

user-a09f5d 18 May, 2021, 17:53:48

I do the latter. For the left eye I average the normals in a 2 sec time window when both eyes are viewing the target (time 1 - binocular viewing) and compare it to the mean of the normals during a 2 sec window when the right eye is covered (time 2 - monocular viewing). I then calculate the angle between the position at time 1 and 2. I then do the same for the right eye (except time 2 is when the left eye is covered).
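
A minimal sketch of that computation, assuming a pupil_positions.csv export (with pupil_timestamp, eye_id and circle_3d_normal_x/y/z columns) and two self-chosen time windows; the file path and window values below are placeholders:

import numpy as np
import pandas as pd

def mean_normal(df, t_start, t_end, eye_id):
    # Average the circle_3d normals of one eye within a time window.
    win = df[(df["eye_id"] == eye_id)
             & (df["pupil_timestamp"] >= t_start)
             & (df["pupil_timestamp"] <= t_end)]
    n = win[["circle_3d_normal_x", "circle_3d_normal_y", "circle_3d_normal_z"]].mean().to_numpy()
    return n / np.linalg.norm(n)

def angle_deg(n1, n2):
    # Angle between two unit vectors, in degrees.
    return np.degrees(np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))

df = pd.read_csv("exports/000/pupil_positions.csv")  # placeholder path
t1 = (100.0, 102.0)  # placeholder: 2 s window, binocular viewing
t2 = (160.0, 162.0)  # placeholder: 2 s window, right eye covered
print(angle_deg(mean_normal(df, *t1, eye_id=1), mean_normal(df, *t2, eye_id=1)))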

papr 18 May, 2021, 17:58:43

ok, please include the timestamp ranges for time 1 and 2 in the shared recording.

user-a09f5d 18 May, 2021, 17:59:48

They are all listed in the annotation file.

user-a09f5d 18 May, 2021, 18:01:32

I'll try to get that sent over to you ASAP. Thanks @papr.

papr 18 May, 2021, 18:02:22

Since it is fairly late over here, I won't have time to look at it today. So no hurry 🙂

user-3bb37c 18 May, 2021, 20:30:14

Reading the docs and having little experience with Pupil Core, I am wondering what the red overlay on the eye camera (a red circle and a dot) means, in comparison with the blue overlay described here https://docs.pupil-labs.com/core/software/pupil-capture/#pupil-detector-2d-settings

nmt 19 May, 2021, 06:28:27

Hi @user-3bb37c, the red overlay represents output of the 3d pupil detector; the blue overlay represents output of the 2d detector. You can read more about that here: https://docs.pupil-labs.com/core/terminology/#pupil-positions (with links to detection methods)

user-3bb37c 18 May, 2021, 20:30:46

maybe I am missing more detailed documentation

user-189029 18 May, 2021, 20:30:48

Hello. I am working with the gaze report (the recording was binocular). Gaze positions from both eyes are included in the gaze report, aren't they? Is it possible to get eye positions for each eye separately?

user-1256ca 19 May, 2021, 00:56:36

Hi, the Pupil Capture world video window frequently fails to appear. How do I fix this issue on Mac? Thank you

nmt 19 May, 2021, 06:51:31

Hello @user-189029, gaze_positions does contain metrics from both eyes, yes. You might be interested in the gaze_normals. These describe the visual axis that goes through the centre of each eyeball and the object that’s looked at. But there are other possibilities. We report two kinds of data with Core recordings: Pupil data (this refers to any data that is relative to the eye camera coordinate system); and Gaze data (this refers to data that has been mapped to the world camera coordinate system following calibration). You can read more about the data made available here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv

user-189029 19 May, 2021, 11:39:23

Thank you! I have another question. I am analyzing timestamps in the gaze report for one eye and see that the lag between successive observations sometimes deviates from the sampling interval (it is not approximately 5 ms as I expected). Is it because samples are missing, or because the samples are delayed/shifted?

papr 19 May, 2021, 07:31:32

Hi :) Just to clarify, the whole window is missing? Or only the video feed?

user-1256ca 19 May, 2021, 13:15:44

Thank you. I've solved that problem. I have another question: there is no red ring around the pupil, and the green outer ring sometimes appears and sometimes disappears.

papr 19 May, 2021, 13:17:20

The cameras do not have a fixed sampling rate. In other words, it is expected that the samples are not recorded exactly every 5 ms. In addition, if the computer does not have sufficient computational resources, it is possible that Capture drops frames in order to keep up with the real-time data. As the last point, I would like to note that the gaze positions are composed of three different streams: 1. Monocular left 2. Monocular right 3. Binocular

Whether a gaze datum is mapped monocularly or binocularly depends on the confidence of the underlying pupil data.

So, in total, the gaze data usually has a higher frequency than 200 Hz.

user-189029 19 May, 2021, 15:58:07

Thank you again! Is it possible to get information about how the mapping takes place (monocular left / monocular right / binocular) for a particular recording?

papr 19 May, 2021, 13:17:37

Which Capture version do you use?

user-1256ca 19 May, 2021, 13:26:39

3.3.0

papr 19 May, 2021, 13:30:50

Green and red circle represent the pye3d pupil result. Green is the eye model outline, red the estimated pupil detection. For this to work well, you need to fit the model. This can be done by simply rolling the eyes. Make sure that the blue ellipses (pye3d input) fit the pupil well, first though.

Once the model is stable, you should see the green circle around the eye ball and the red and blue circles should match for various gaze angles

user-1256ca 19 May, 2021, 13:36:04

Thank you for your patience. I will try again

papr 19 May, 2021, 13:42:16

https://docs.pupil-labs.com/core/best-practices/#fixation-filter-thresholds

user-7daa32 19 May, 2021, 13:58:41

I have actually read that page multiple times before today. It seems this depends largely on the researcher's discretion, taking the nature of the task into consideration. Researchers might also check the literature, because you need to know that the chosen duration/dispersion filter is appropriate. The dispersion threshold might be hard to determine, and the literature on chess is not something one can grasp in a day's reading.

user-7daa32 19 May, 2021, 13:47:04

I have been thinking about getting the values too. Why post-hoc calibration?

Would this work if calibration was not included in the recording file?

papr 19 May, 2021, 13:50:16

No, Player needs reference data in order to calculate the accuracy post-hoc.

user-1256ca 19 May, 2021, 13:57:32

Please take a look at this picture. I'm in this situation right now. I can't align my pupils

Chat image

user-7daa32 19 May, 2021, 13:59:30

The eye camera is adjustable

papr 19 May, 2021, 13:58:10

I see. Are you using a Pupil Labs Core headset?

user-1256ca 19 May, 2021, 13:59:16

Yeah. That was not the case last night, but it's been the case today 😅

papr 19 May, 2021, 13:59:54

@user-7daa32 is right. Check this example camera adjustment video https://www.youtube.com/watch?v=7wuVCwWcGnE&t=13s

user-7daa32 19 May, 2021, 14:14:19

This is a paper that used the Pupil Labs eye tracker, and here is what they said

Chat image

nmt 19 May, 2021, 14:32:44

Maximum dispersion of 3 degrees would probably suit an experiment whereby the wearer is moving around a lot. Such movements would elicit larger eye rotations in order to fixate static regions of the environment. We don't specifically recommend any particular parameters in general. Many researchers examine their respective datasets and tweak the thresholds until reasonable fixations are evident. Relying on existing literature to define thresholds can be problematic due to experimental/participant differences.

user-7daa32 19 May, 2021, 14:39:51

That's true. In my case participants will use a chinrest and the stimulus is a static 2D image; it's a visual search task. I will definitely read more of the literature to see if anyone gave a rationale for their selection. Eye tracking research is hard to replicate: many studies hide a lot of details, and some just state the manufacturer's recommendations or whatever they assumed, for example regarding tracker accuracy and precision.

user-3bb37c 19 May, 2021, 14:49:02

So, can I register safely with only a stable red overlay? I am running experiments with head movements.

In addition, I have trouble setting the auto-brightness option of the world camera. There are 4 options, but only "Manual" can be set in my case; the other options fail with an error saying "Could not set values...". Because of this, when my subject enters or exits an area, it seems the world camera is not adjusting brightness correctly. There are several options not listed in the online documentation. Is there a more detailed manual so I don't have to ask too much in here?

papr 20 May, 2021, 13:34:26

The scene camera performs auto-exposure in its default setting (see screenshot). If you have trouble with that functionality, please share two short recordings where you change from a bright to a dark environment, and vice versa. Please make sure the exposure is ok during the start and that the exposure mode is set up as indicated in the screenshot (you can use the Restart with default settings button in general settings to get these settings, too).

Chat image

user-1256ca 19 May, 2021, 15:01:23

If different people view the same experimental material, can the resulting data be superimposed and analyzed statistically together?

user-3bb37c 19 May, 2021, 15:04:19

This seems to be the key. If the blue overlay depends on the 2d model, then the output of the 2d model is the input to the 3d model? Should I adjust the fixation filter thresholds first of all? The weird thing is that I have a lot of data with a confident red overlay but the blue one missing (or hidden behind the red overlay, if they match?).

I am running vehicle driving experiments.

nmt 19 May, 2021, 15:12:25

If the 3d model is well-fitted, characterised by a robust red ellipse overlaying the pupil and a green circle that outlines the eyeball, you likely will not notice the blue ellipse as it will match the red one (like in this video: https://www.youtube.com/watch?v=7wuVCwWcGnE&t=13s).

papr 19 May, 2021, 16:03:17

Yes, this information is included in the base_data column. It includes 1-2 entries per row, where each entry has the format <eye timestamp>-<eye id>. Two entries indicate a binocular mapping.
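
If it helps, here is a small sketch of that check, assuming the base_data entries in gaze_positions.csv are space-separated "<eye timestamp>-<eye id>" strings as described above (the file path is a placeholder):

import pandas as pd

def mapping_type(base_data):
    # One entry -> monocular mapping, two entries -> binocular mapping.
    entries = str(base_data).split()
    if len(entries) == 2:
        return "binocular"
    eye_id = entries[0].rsplit("-", 1)[1]  # "0" = right eye, "1" = left eye
    return "monocular eye" + eye_id

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # placeholder path
print(gaze["base_data"].apply(mapping_type).value_counts())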

user-189029 20 May, 2021, 09:29:31

Thank you

user-7daa32 19 May, 2021, 17:04:23

I have a question please..

Still on Annotations

The participants are going to press a hotkey when they find the target. Are there any USB-connected devices I can use instead of the keyboard? For example, a device like a mouse where a left-click means start and a right-click means end.

If using two hotkeys would be confusing to the users because they might make mistakes when clicking, the spacebar will be used (how can we set it to the spacebar?), or just one side of the mouse.

Using letter keys can be distracting.

user-3bb37c 19 May, 2021, 19:55:12

With the natural features calibration, can the subject move the head between fixations? Or should they keep the head still while fixating for this type of calibration?

papr 20 May, 2021, 09:45:20

Head movement between fixation points is allowed. This way you can perform a choreography similar to the single-marker calibration, where you only have one static target and the subject looks at it from different angles.

user-7ff310 19 May, 2021, 21:22:34

I would also be willing to fund development of a plugin that would do this. Is there someone in this community interested in doing that? I would pay for development and code would be released back to the community under open source license. Please message me directly if interested.

Plugin should enable calibrating eyes individually, that is 1) Calibrate left eye while right eye is covered, then 2) Calibrate right eye while left eye is covered. Both 3d and 2d models should be possible to use. Thanks software-dev core

papr 20 May, 2021, 09:57:33

I have an idea on how to implement that. What is your time frame?

user-7ff310 20 May, 2021, 14:25:03

couple of weeks to a month would be perfect

papr 20 May, 2021, 10:01:14

There are external USB devices that have only a few buttons, made specifically for experiments. They behave like a keyboard. I think one can configure which letter they generate. Unfortunately, I do not have a link to a specific product right now.

user-7daa32 20 May, 2021, 12:35:19

Thanks. I just checked online and noticed there are mini keyboards; I saw ones with two and three keys. These have no letters on them and are completely blank. They are programmable for something called OSU and for gaming. I saw one with ON and OFF written on it. The issue is how I can program them to work with Capture. Maybe one that has letter or number keys will work just like the numeric keypad.

user-3bb37c 20 May, 2021, 11:56:09

I am starting to get things working now. Thanks for your answers; I will soon reduce the frequency of my questions.

Now I'd like to know 2 things regarding screen calibration. When does the center of the marker turn green? Is it when the world cam detects it correctly? I have noticed that too much exposure saturates the world camera, and only by reducing screen brightness (so that it matches the brightness outside the screen) can I get the center of the marker to turn green.

I know that theoretically, if I am interested in natural world viewing, I should try the natural features calibration, but sometimes the screen calibration extrapolates the gaze outside of the screen quite precisely.

papr 20 May, 2021, 12:31:09

I would always use the screen marker calibration choreography. The natural features one is mostly for the case when you do not have access to a marker. And yes, it turns green when the marker is being detected.

papr 20 May, 2021, 12:37:23

As long as they behave like a keyboard, you should have no problem in Capture. Just program them to emit the letters u and i for example, and set up the annotations to use these keys.
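
As an alternative (not a replacement for the keyboard approach above): if a button device is easier to read from a script than to remap as a keyboard, annotations can also be sent over the network, along the lines of the pupil-helpers remote annotations example. A rough sketch, assuming Pupil Remote on its default port and the Annotation plugin enabled in Capture (the label is a placeholder):

import time
import msgpack
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote

req.send_string("PUB_PORT")           # ask Capture where to publish
pub = ctx.socket(zmq.PUB)
pub.connect("tcp://127.0.0.1:" + req.recv_string())
time.sleep(0.5)                       # give the PUB socket time to connect

req.send_string("t")                  # current pupil time, used as the timestamp
annotation = {
    "topic": "annotation",
    "label": "target_found",          # placeholder label
    "timestamp": float(req.recv_string()),
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.dumps(annotation, use_bin_type=True))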

user-7daa32 20 May, 2021, 12:39:11

Since we don't know how we can do that, I am going to look for the ones that have letter or numeric keys just like the keyboard

user-7daa32 20 May, 2021, 12:45:08

4-key USB Keyboard Mini Keyboard DIY Custom Shortcuts Keyboard

I think this means we can easily program it.

papr 20 May, 2021, 14:25:42

Ok, I think we can provide that in this time frame.

user-7ff310 20 May, 2021, 16:35:02

that's great, thanks!

user-7890df 20 May, 2021, 15:15:39

Hi, we have a quick question regarding versioning, algorithm improvements, and older recordings. Can we think of the raw video recordings made by 2.6 and 3.x as essentially the same (i.e. the video hardware capture is constant), with the later Pupil Player versions applying improved algorithms in post-hoc analysis? For example, a session recorded by 2.6 and analyzed by the 3.3 player is the same as one captured by 3.3 and analyzed by 3.3? And one recorded by 3.3 but post-hoc analyzed by player 2.6 would be the same as one captured by 2.6 and post-hoc analyzed by player 2.6? Thanks!

papr 20 May, 2021, 15:34:42

Everything is correct, except for this detail:

> And that one recorded by 3.3 but posthoc analyzed by player 2.6 would be the same one captured by 2.6 and posthoc analyzed by player 2.6?

Recordings are not backwards compatible, only forward compatible.

user-7890df 20 May, 2021, 16:28:56

Thanks papr! Is the backwards incompatibility inherent to the video recording itself, or to the supporting files? For example, if for some scientific reason we wished to use the 3.3 video recordings with an older algorithm, it would theoretically be possible, but not practical because manually editing the supporting files would be too onerous?

papr 20 May, 2021, 16:29:30

The supporting files, correct.

user-597293 21 May, 2021, 15:44:41

Thanks for looking into this. I also realized that a quick fix is to use the primary screen for the participant. That way the fullscreen screen marker calibration can be launched on top of the fullscreen stimulus software without getting the black screen that occurs on the second monitor 👨🏼‍💻

papr 21 May, 2021, 15:46:42

For me the issue triggers independently of primary/secondary displays. It triggers for me if the fullscreen window is on a different screen than the world window.

papr 21 May, 2021, 15:47:14

https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection

user-b772cc 23 May, 2021, 14:48:34

Thank you!!

user-597293 21 May, 2021, 16:06:45

Weird. I actually changed computer today, from a laptop to a more powerful desktop PC. And the problem actually seems to be gone, and full screen calibration can be done on both monitors 🤷🏼‍♂️ Regardless of where the world window is.

papr 21 May, 2021, 16:07:54

Maybe it is dependent on the windows version? Can you check which one you have on your machines?

papr 21 May, 2021, 16:08:38

I am on 20H2 (OS Build 19042.985)

user-597293 21 May, 2021, 16:13:23

Yes. I am on the same version and build as you - for both PCs. The only difference is that the laptop where the issue originated runs Win10 Home, whereas the desktop PC runs Win10 Education.

papr 21 May, 2021, 16:14:41

ok, thanks for letting us know.

user-b14f98 21 May, 2021, 19:52:17

Hey folks, can you describe to me how pupil confidence influences the pye3d model fit during pupil detection? Is there a hard threshold? Are individual frames weighted by confidence?

papr 21 May, 2021, 19:56:56

I am actually not quite sure about that. Let me come back to you on this on Monday

user-b14f98 21 May, 2021, 21:22:22

Ok. Thanks!

user-a98526 24 May, 2021, 12:02:30

Hi @papr, could you tell me which is the latest Pupil software version that supports RealSense?

user-7daa32 24 May, 2021, 15:33:55

"Perform another validation. This can give you an estimate of how much slippage error you accumulated during the experiment. If you are recording multiple blocks with a non-fixed calibration setup, you can also re-use the calibration session of the following block as post-validation for the previous block." I'm trying to understand this statement. How do you re-use a calibration setup?

nmt 25 May, 2021, 06:51:27

Hi @user-7daa32 , I would highly recommend you replicate the steps in these videos: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection.

user-d1ed67 25 May, 2021, 04:02:19

Problem with pupil-video-backend on Raspberry Pi

Hi. I bought a Pupil Core with 120 Hz eye cameras and would like to use it for capturing remotely, so I connected my Pupil Core to a Raspberry Pi 4 Model B. I want to use the pupil-video-backend to stream the videos of the 3 cameras from the Raspberry Pi to another computer where Pupil Capture is running.

Things work well when I only stream the world camera. When I only stream one eye camera, the FPS in Pupil Capture is unstable (staying around 100-120). When I try to stream 2 eye cameras, the FPS for both eyes in Pupil Capture drops significantly (one around 60-70, the other around 30-40). When I stream all 3 cameras, the FPS for the 2 eye cameras is around 30-40, and the FPS for the world camera is less than 10 in Pupil Capture.

I also noticed 2 abnormal things. Firstly, when I stream just one camera, the upload speed of my Raspberry Pi already reaches 9 MB/s, but neither the world camera nor the eye cameras should take so much bandwidth. Secondly, when I stream a single eye camera, the CPU usage of the capturing computer already reaches 50%, but processing video with 192*192 resolution at 120 Hz should not take so much CPU. Does anyone know where the problem is? Thanks for any help in advance.

Pupil Capture is run on Win10. The Raspberry Pi is installed with Ubuntu 20.10 64-bit. The Linux kernel is 5.8.1-1020-raspi aarch64, and MATE is 1.24.1. I installed the dependencies and ran the pupil-video-backend through PyCharm.

papr 25 May, 2021, 13:13:05

It is possible that the backend sends the raw bgr buffer which indeed could generate high bandwidth usage. The limited bandwidth would also explain the lower fps when streaming more cameras. I suggest contacting the backend's author directly.

user-d1ed67 25 May, 2021, 04:46:58

BTW, the frame read latency and frame send latency were fine on my Raspberry Pi.

user-d1ed67 25 May, 2021, 05:16:52

Also, sometimes the pupil-video-backend will just crash and say TypeError: None does not provide a buffer interface. Usually after such a crash, the code cannot recognize an eye camera.

user-af5385 25 May, 2021, 11:59:00

Hey all, I have generated some extremely long Pupil Capture recordings by mistake (4-5 hours long). Is there any way to cut such a long recording into smaller pieces, including the video?

papr 25 May, 2021, 13:13:26

Unfortunately, there is currently no tool to do that.

user-98789c 25 May, 2021, 12:22:08

What does "at the population level" mean here: Py3d detector is able to provide gaze angle independent pupil size estimations at the population level.

user-b14f98 25 May, 2021, 14:24:34

To be fair, I haven't had a good chance to go through the pye3d code. I'm not afraid to, but I was checking to see if you had an easy/quick answer.

papr 25 May, 2021, 14:38:10

I had a quick look and was not able to find a place where we filter for confidence when it comes to model fitting. It might make sense to do that though. We will have a closer look and discuss this.

papr 25 May, 2021, 14:28:11

pye3d requires some physiological parameters to work, e.g. eyeball radius, refraction parameters, etc. Since it is difficult to measure them for each subject, it uses average values that have been measured by other researchers in the past. Therefore, the correction is not correct on the subject level (per subject) but should be correct on average (population level). In other words, the correction only works well if you aggregate the data from multiple subjects. As an example, pye3d might overestimate the required correction for one subject but underestimate it for another.

user-98789c 25 May, 2021, 16:16:22

very interesting, thanksss 👍

user-b14f98 25 May, 2021, 14:38:29

In a meeting now, but we should talk! It must be filtering blinks, at least.

papr 25 May, 2021, 14:42:49

Found it. It is a bit hidden. This is done internally by the observation storage classes, e.g. https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/observation.py#L127 These are the used thresholds:

    threshold_short_term=0.8,
    threshold_long_term=0.98,

user-b14f98 25 May, 2021, 14:39:44

I have reason to think this is the case - we've been trying to implement our own segmentation method, and we see it explode during blinks. In part, this is because all our samples seem to produce a high confidence value.

user-b14f98 25 May, 2021, 14:39:56

I'll check in again later so I can answer questions etc.

user-d1ed67 25 May, 2021, 15:39:18

Hi @papr, I found the same issue last night. The raw bgr buffer really takes a lot of bandwidth. A single eye camera already takes 13 MB/s. All 3 cameras combined take 33.45 MB/s, which is quite high. I'm considering sending out compressed images to reduce the bandwidth requirement to a reasonable level. Does Pupil Capture support any frame format other than bgr?
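
(Sanity check on those numbers: a raw BGR eye stream is roughly 192 px × 192 px × 3 bytes × 120 fps ≈ 13.3 MB/s, which matches the observed 13 MB/s and is consistent with uncompressed frames being sent.)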

papr 25 May, 2021, 15:45:16

Apologies, I confused this with the frame publisher. In theory yes, the backend can be extended to compress the image buffers before sending them. But this is purely up to the backend. Capture does not have any influence on it.

papr 25 May, 2021, 15:43:07

~~Yes, it supports sending the jpeg payloads that it receives from the cameras.~~

user-d1ed67 25 May, 2021, 15:46:39

If the backend compresses the image, then Capture must know how to decompress it into the original image. Does Pupil Capture support decompression of images?

papr 25 May, 2021, 15:47:51

The backend has two components. The OpenCV camera reader running on the PI and the "backend" plugin running in Capture. Capture just receives the raw buffer from the backend. How the backend gets the buffer is up to the backend. You would have to implement the compression on the PI side and implement the decompression on the backend side.

user-d1ed67 25 May, 2021, 15:47:01

Or do I have to write my own decompressor and pass the decompressed image to the Pupil Capture?

user-d1ed67 25 May, 2021, 15:50:23

OK. So I need to write the compressor on the Raspberry Pi that sends out the image, and write the decompressor on the computer where Pupil Capture is running, right?

user-d1ed67 25 May, 2021, 15:51:24

Could you please tell me where I can find the "backend" plugin running in Capture?

papr 25 May, 2021, 16:01:29

Apologies, I just noticed that it uses the built-in HMD_Streaming_Source backend. It does not support any compression. But you should be able to extend it by extending FRAME_CLASS_BY_FORMAT with your compressed frame class. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/hmd_streaming.py#L121

Capture will load all python files in ~/pupil_capture_settings/plugins/. Create a python file in there with the following code:

from video_capture.hmd_streaming import FRAME_CLASS_BY_FORMAT, Uint8BufferFrame

class CompressedFrame(Uint8BufferFrame):
  ...  # TODO implement decompression

FRAME_CLASS_BY_FORMAT["compressed"] = CompressedFrame

See the Uint8BufferFrame class for details.

On the sending side, adjust the following:
- pass "compressed" as the format for the payload here https://github.com/Lifestohack/pupil-video-backend/blob/e185aee94818b6fddb472ffc7c3359980ff4436b/video_backend.py#L149
- pass the compressed buffer instead of the image here https://github.com/Lifestohack/pupil-video-backend/blob/e185aee94818b6fddb472ffc7c3359980ff4436b/video_backend.py#L166
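
For anyone picking this up later, here is a rough sketch of what such a frame class could look like on the Capture side, assuming a JPEG payload. The interpret_buffer/_buffer names are assumptions based on the Uint8BufferFrame class in hmd_streaming.py; check that file for the actual interface before relying on this:

import cv2
import numpy as np

from video_capture.hmd_streaming import FRAME_CLASS_BY_FORMAT, Uint8BufferFrame

class JpegFrame(Uint8BufferFrame):
    def interpret_buffer(self, buffer, width, height):
        # Decode the JPEG byte buffer into a BGR image instead of
        # reinterpreting it as a raw uint8 array (assumed hook, see note above).
        return cv2.imdecode(np.frombuffer(buffer, dtype=np.uint8), cv2.IMREAD_COLOR)

    @property
    def depth(self):
        return 3

    @property
    def bgr(self):
        return self._buffer

    @property
    def gray(self):
        return cv2.cvtColor(self._buffer, cv2.COLOR_BGR2GRAY)

FRAME_CLASS_BY_FORMAT["jpeg"] = JpegFrame  # the sender would then pass "jpeg" as the format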

user-d1ed67 25 May, 2021, 16:12:08

I see. Thank you!

user-b14f98 25 May, 2021, 18:29:34

Another, related question. We've found that the confidence values for our custom segmentation plugin are uniformly high when using the most recent build of Pupil Player. However, when using a Python port of your confidence metric we made about two years ago, the confidence values cover a much greater range, and more accurately reflect the quality of the pupil fit. Any insight into why confidence is uniformly high when using the new version, but not the old?

papr 25 May, 2021, 18:31:09

The 2d confidence measurement did not change in years. Are you talking about the 3d confidence?

user-b14f98 25 May, 2021, 18:30:37

Did pye3d introduce a landmark change to the algorithm used to calculate pupil fit confidence? ...or perhaps there have just been changes in some parameters? It seems like we're going to have to dig into the code to understand, but I thought I would hit you up for any general advice.

user-b14f98 25 May, 2021, 18:31:08

By uniformly high, I mean all >.98, which is why I asked the question before. A blink destroys the 3d fit.

user-b14f98 25 May, 2021, 18:31:44

No, I'm talking about 2D confidence, which I assume is the confidence used to threshold out which samples affect the 3D eye model fit.

papr 25 May, 2021, 18:32:03

Correct. That one did not change.

user-b14f98 25 May, 2021, 18:31:54

...as you were so helpful to point out before.

user-b14f98 25 May, 2021, 18:32:28

Interesting. So, this could be an issue with the masks we're sending, or...? That's a very helpful answer, even if it leaves me scratching my head. 🙂 Thanks.

user-b14f98 25 May, 2021, 18:32:47

FWIW, I suggest you guys document the role of confidence thresholds in the portion of the docs dealing with pupil plugins.

user-b14f98 25 May, 2021, 18:32:53

It's a pretty critical point.

user-b14f98 25 May, 2021, 18:33:48

I also suggest that you guys open that parameter up via slider 🙂

user-b14f98 25 May, 2021, 18:34:25

I wonder if that used to be the mysterious slider that determined the frequency of model switching, but is now missing?

user-b14f98 25 May, 2021, 18:34:35

The long-term threshold.

papr 25 May, 2021, 18:35:54

You should be able to set the thresholds via the network pupil detector api. I do not think that exposing all parameters via the UI is a good idea. The UI is already too complex as it is. 🙂 Since pye3d changed its approach to slippage compensation, there is no need for the slider anymore.

user-b14f98 25 May, 2021, 18:36:31

Ok, that's reasonable, as long as this is documented. Should I open an issue for you?

papr 25 May, 2021, 18:38:31

Feel free to create a PR to pye3d repository to include the info in the README. I guess it could use a flow diagram documenting what happens in which order.

user-b14f98 25 May, 2021, 18:39:05

Ok, thanks. I also suggest adding a mention / link in the pupil plugin portion of the docs.

papr 25 May, 2021, 18:39:36

This is not really related to the plugins but pye3d specifically.

user-b14f98 25 May, 2021, 18:39:10

I'll try and do this later in the week.

user-b14f98 25 May, 2021, 18:40:28

I understand. The issue is that a plugin that is trained to produce pupil-shaped estimates (as most machine learning approaches will be) is generally going to score high when using your metric, which really tests for contiguous contours of an ellipse.

user-b14f98 25 May, 2021, 18:41:13

I believe the metric is: number of pixels that form contiguous contours (support pixels) / ellipse circumference, in pixels

papr 25 May, 2021, 18:41:17

Well, if the pupil is partially occluded, the metric should result in lower values.

user-b14f98 25 May, 2021, 18:41:39

Not if your ML network is robust to occlusion, and produces masks that are full ellipses despite occlusion.

papr 25 May, 2021, 18:44:50

This has nothing to do with how the ellipse is fitted. If your contours are a square and you fit an ellipse to it, it will only partially overlap. If the pupil is occluded, only parts of the ellipse will overlap with the contour. Or am I misunderstanding you?

user-b14f98 25 May, 2021, 18:41:57

Give us a partial ellipse, and we'll produce the rest of it 🙂

user-b14f98 25 May, 2021, 18:42:31

Consequently, when there is a blink, confidence remains high, and our samples are used to fit the 3D model even if they are poor.

papr 25 May, 2021, 18:45:23

Maybe, it could make sense to use a different confidence metric in your case?

user-b14f98 25 May, 2021, 18:43:02

Now, this is the behavior when using the confidence metric bundled in with the most recent version of PL, but not the old version.

user-b14f98 25 May, 2021, 18:43:12

You're telling me theres no difference, and I believe you.

user-b14f98 25 May, 2021, 18:43:27

So, this means that something with our "old version" is off, or something with the way we're sending our data to the new version is off.

user-b14f98 25 May, 2021, 18:43:31

We have work to do.

user-b14f98 25 May, 2021, 18:44:37

https://arxiv.org/abs/2007.09600 if you're interested. The figures should help where my description might falter.

user-b14f98 25 May, 2021, 18:45:56

It may. I think you've helped as much as you can until we figure out what the differences are between the old / new metrics are.

user-b14f98 25 May, 2021, 18:46:22

Changes in parameters? A mistake in the way we are preparing and sending our mask for ellipse fitting?

user-b14f98 25 May, 2021, 18:46:54

Our approach is to convert the eye image into a mask by passing it through our network before passing it off to detector_2d_plugin.py.

papr 25 May, 2021, 18:47:52

Ah, I thought your detector generated 2d ellipses on its own.

user-b14f98 25 May, 2021, 18:47:19

Using our network almost as a pre-processing step. This allows a more direct comparison between the native algorithms and our own.

user-b14f98 25 May, 2021, 18:48:14

It does, in the form of a mask. We are not using an ellipse fitting algorithm that generates the centroid and axes. Instead, we are letting detector_2d_plugin.py do that on the output of our mask.

user-b14f98 25 May, 2021, 18:48:27

We CAN regress the ellipse using our network, but we've turned that off for now to allow for a more direct comparison between algorithms.

user-b14f98 25 May, 2021, 18:48:51

Right now, our network is really replacing only the histogram bisection you guys do to find the pupil.

papr 25 May, 2021, 18:50:23

And you are saying that for blinks, the 2d detector generates ellipses with high confidence using your masks?

user-b14f98 25 May, 2021, 18:50:30

Yes

papr 25 May, 2021, 18:50:52

Can you share an example 2d detection of a mask during a blink?

user-b14f98 25 May, 2021, 18:51:01

In a few days, yes 🙂

user-b14f98 25 May, 2021, 18:51:47

I'm very willing to share, but it seems like I'm asking you to do work that we should handle first. It's the obvious next step.

user-b14f98 25 May, 2021, 18:52:22

If you want to see examples once we're sure eveyrthing is working well, I can DM you some.

user-b14f98 25 May, 2021, 18:52:29

but I don't want to waste your time and send them before that.

user-b14f98 25 May, 2021, 18:52:41

I'm more convinced there's a bug now than before the start of this conversation.

papr 25 May, 2021, 18:53:27

ok. I will sanity check my claim tomorrow.

user-b14f98 25 May, 2021, 18:53:55

that the metric hasn't changed?

papr 25 May, 2021, 18:54:10

that one

user-b14f98 25 May, 2021, 18:54:00

or a different claim?

user-b14f98 25 May, 2021, 18:54:16

Ok, thanks!

user-b14f98 25 May, 2021, 18:55:01

I wouldn't do much more than ask your peers if you're forgetting something. My graduate student, Kevin, will also begin comparing the new vs old code this week. It will be good for him.

user-b14f98 25 May, 2021, 18:55:58

Ok, my dog says I've spent too much time here. Thanks again! I'll let you know if we learn anything interesting, and I'll send a few masks your way if they are interesting.

papr 25 May, 2021, 18:56:22

Would be nice to see them. But I can wait a few days, too.

user-597293 25 May, 2021, 20:43:11

Anyone having trouble with camera intrinsics calibration? I've covered most of the FOV from different angles (see image), but get the following warnings:

2021-05-25 22:36:29,510 - world - [WARNING] camera_intrinsics_estimation: Camera calibration failed to converge!
2021-05-25 22:36:29,510 - world - [WARNING] camera_intrinsics_estimation: Please try again with a better coverage of the cameras FOV!

This is followed by an endless stream of OpenGL errors:

2021-05-25 22:36:35,706 - world - [DEBUG] root: No more errors found in OpenGL error queue!
2021-05-25 22:36:35,741 - world - [ERROR] root: Encountered PyOpenGL error: GLError( err = 1281, description = b'invalid value', baseOperation = glOrtho, cArguments = (-0.0, 0.0, 0.0, -0.0, -1, 1)

Thanks 🙂

Chat image

user-a98526 26 May, 2021, 02:39:56

Hi @papr, I have a question related to pixel coordinates: how do I convert norm_pos_x and norm_pos_y to pixel coordinates?

user-26fef5 26 May, 2021, 06:09:29

@user-a98526 Multiply them by the resolution of the camera

user-a98526 26 May, 2021, 06:27:12

Is it like this:

fix_x = fix_x * 1270
fix_y = (1.0 - fix_y) * 720

nmt 26 May, 2021, 07:09:56

@user-26fef5 is right. You might find this tutorial helpful: https://discord.com/channels/285728493612957698/285728493612957698/771023287555326012
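
For reference, the general form of that conversion (just a sketch; width and height must match the scene camera resolution actually used):

def norm_to_pixels(norm_x, norm_y, width, height):
    # Normalized coordinates have their origin at the bottom-left of the image,
    # pixel coordinates at the top-left, hence the vertical flip.
    return norm_x * width, (1.0 - norm_y) * height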

user-a98526 26 May, 2021, 06:28:29

The resolution of my camera is 1270*720, @user-26fef5.

user-a98526 26 May, 2021, 08:06:22

It's helpful, thank you @nmt.

papr 26 May, 2021, 12:38:56

For which resolution did you try to calibrate? And which distortion model did you use?

user-597293 26 May, 2021, 14:48:24

It happened with many of the uncalibrated resolutions. But I realize now it was due to me not setting distortion model to Radial before calibration, as it should be for the narrow FOV lens. Maybe one could add something along the lines of "Have you selected the right distortion model?" in the warning? It might be helpful if one should forget it prior to calibration - like myself 🤷‍♂️

By the way, while cycling through the different resolutions, I found a potential bug. When resolution is set to (1024,768) it seems like auto exposure doesn't work. It is "stuck" with the auto exposure from when it first was selected. See images. Does not matter to me, as I won't be using this resolution. Auto exposure works for all other resolutions.

Thanks again for your helpful and quick support 👍

Chat image

user-597293 26 May, 2021, 14:48:33

Chat image

user-9b5cc8 26 May, 2021, 19:46:26

Newbie question - what's the max FPS the current release of the eye tracking code supports? Looking at prototyping an app that may require up to 400 FPS. Thanks, Mike Brown

nmt 27 May, 2021, 06:59:16

Hi @user-9b5cc8 👋 . Pupil Core can sample at up to 200 Hz (read the tech specs here: https://pupil-labs.com/products/core/tech-specs/)

user-e0d63b 27 May, 2021, 08:32:27

Hi. Is there a way to use Pupil Core over prescription glasses?

nmt 27 May, 2021, 08:46:02

Hi @user-e0d63b 👋, you can try putting on the Pupil Core headset first, then eyeglasses on top. The eye cameras should be adjusted such that they capture the eye region from below the glasses frames. This is not an ideal condition, but it does work for some people, depending on physiology and eyeglasses size/shape. The goal is to get a direct image of the eye (not through the lenses).

papr 27 May, 2021, 08:42:14

@user-b14f98 @user-3cff0d This is the line that calculates 2d confidence. As you can see, this has not been changed in 2 years https://github.com/pupil-labs/pupil-detectors/blob/master/src/pupil_detectors/detector_2d/detect_2d.hpp#L222

user-e0d63b 27 May, 2021, 08:46:53

thanks!

user-9541d1 27 May, 2021, 10:16:29

Hey guys! I got really excited about the Pupil Core DIY set and wanted to get started building one myself, but was wondering what the scope of the statement "The Pupil DIY Kit is not for commercial use or commercial clients." is. So of course, just double checking, if I work for a startup, which I understand is a commercial client, but which does not intend to ship anything eye tracking related and I would use it just to see what it feels like to use eye tracking with the startup's technology (a clicker), would that still be an improper use of the open source kit? I'm aware the startup is a "commercial client" but just double checking!

wrp 27 May, 2021, 10:26:21

Hi @user-9541d1 👋 Yes, startup = commercial use. We would request that Pupil Core DIY is not used within commercial context.

BTW - what is a clicker? Maybe we can get in touch via email - info@pupil-labs.com - to discuss the use case further and provide you with some feedback. Or we can continue the conversation here if desired.

user-9541d1 27 May, 2021, 10:42:04

Thanks for confirming @wrp ! Sure thing, I'll shoot that e-mail a message :)

user-7bd058 27 May, 2021, 11:48:38

Hello, I have the following situation: I have the exact same videos twice, at home and at work (duplicated). The thing is that the videos at work already have surfaces added, which makes our 30-minute videos so demanding to compute that my laptop at home cannot handle them.

In addition to surfaces, we want to annotate manually. As I said, at home I have the same videos. If I annotate them, I get an annotation_player.pldata file. Is it enough to copy this file over later in order to transfer the annotations? This would help me a lot, as I'm often at home at the moment.

Thank you.

papr 27 May, 2021, 12:41:15

You need to copy annotation_player.pldata and annotation_player_timestamps.npy, but yes, that should work.

user-7bd058 27 May, 2021, 12:55:45

thank you 🙂

End of May archive