šŸ‘ core


user-f195c6 01 April, 2021, 07:59:26

Hello! I submitted an article about my project where I used Pupil Core for some experiments, but the journal replied asking for permission to use one of the pictures. I guess it is the figure that I took from your technical aspects section (the one with the glasses and corresponding components)... I included a reference to the website, but it seems that is not enough. Is it possible to get written permission from you? Thank you!

papr 01 April, 2021, 08:00:32

Please contact [email removed] in this regard.

user-f195c6 01 April, 2021, 08:01:13

Thank you! I need this today 😄

papr 01 April, 2021, 08:02:14

I will make sure that you will get a response within the next 2 hours.

user-f195c6 01 April, 2021, 08:02:37

THANK YOU! I'm writing right now

papr 01 April, 2021, 08:07:32

We have received your email.

user-f195c6 01 April, 2021, 08:51:38

šŸ™

user-a09f5d 01 April, 2021, 17:44:04

Hi @papr When processing the pupil_positions data exported from Pupil Player, I exclude data that has a pupil confidence or model confidence below 0.6, as your website says that data with confidence below 0.6 is unreliable. However, this has resulted in a big loss of data, because for some of my trials the model confidence for one eye (or sometimes both eyes) is consistently low (<0.5, often <0.4-0.3) even though the pupil confidence is high (>0.9). I know that a low pupil confidence means that the pupil tracking was poor, and it is often possible to see and fix this in real time using Pupil Capture during the experiment, but what causes the model confidence to be low and how can I prevent this problem from happening? Many thanks!
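For reference, my filtering step looks roughly like this (a sketch assuming pandas and the confidence / model_confidence column names from the Player raw data export):

```python
import pandas as pd

# path is an example; point this at your Player export
df = pd.read_csv("exports/000/pupil_positions.csv")

# keep only samples where both confidence measures are at least 0.6
mask = (df["confidence"] >= 0.6) & (df["model_confidence"] >= 0.6)
df_reliable = df[mask]

print(f"kept {len(df_reliable)} of {len(df)} samples")
```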

papr 01 April, 2021, 17:46:45

Hi, low model confidence means that the 3d eye model is not well fit. Its main requirement is high 2d pupil confidence. Secondly, it requires pupil data from different viewing angles in order to triangulate the eye model correctly, e.g. by rolling your eyes

user-a09f5d 01 April, 2021, 17:50:06

Okay, so in my case, given that the pupil confidence is high, the issue is probably caused by a lack of data at different viewing angles?

papr 01 April, 2021, 17:50:47

Correct. Does your experiment require you to fixate a single point for longer periods?

user-a09f5d 01 April, 2021, 17:51:53

This pilot study does yes.

papr 01 April, 2021, 17:52:31

Does it follow a repeated measure design? And if yes, how long are your trials?

user-a09f5d 01 April, 2021, 17:59:05

The subjects are asked to fixate on a fixed target under different viewing conditions with breaks at set intervals. Each trial (and thus recording) lasts about 2 mins, but the whole experiment takes about 40 min depending on the number and length of breaks the subject chooses to take.

papr 01 April, 2021, 18:01:49

ok, great. I suggest asking the subjects to roll their eyes before each trial, making sure the 3d eye model fits well, and freezing the model for the duration of the trial. You can freeze/unfreeze programmatically using the network api.
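A rough sketch of the freeze/unfreeze call over the Network API. The ZMQ/msgpack plumbing is the standard Pupil Remote pattern; the exact notification subject and property name for freezing the 3d model differ between Capture versions, so please verify them against the pupil-helpers examples before relying on this:

```python
import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port


def notify(notification):
    """Send a notification dict to Pupil Capture and return its response."""
    topic = "notify." + notification["subject"]
    payload = msgpack.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()


def set_model_frozen(eye_id, frozen):
    # ASSUMPTION: subject and property name; check the pupil-helpers
    # examples for the notification your Capture version expects
    notify(
        {
            "subject": "pupil_detector.set_properties",
            "values": {"is_long_term_model_frozen": frozen},
            "eye_id": eye_id,
        }
    )


# roll-the-eyes phase: leave the model unfrozen, then freeze it for the trial
set_model_frozen(eye_id=0, frozen=True)
set_model_frozen(eye_id=1, frozen=True)
```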

papr 01 April, 2021, 18:03:05

Freezing the eye model makes it susceptible to slippage. But if your subject is keeping their head still during the trial, slippage probability is low

user-a09f5d 01 April, 2021, 18:13:22

Cool! Thanks! How do I check the model's fit in real time? I might be missing something, but I can't see the model confidence displayed in Capture?

and what is the advantage of freezing the model once it has a good fit (i.e. I thought it only updates if slippage occurs)?

papr 01 April, 2021, 18:21:35

If you look straight ahead, triangulation is difficult, causing model estimation inaccuracy in the z-dimension. For the 3d detector this can look like slippage, causing it to adjust the model. Freezing the model prevents any model changes.

model confidence is not displayed explicitly. You can either subscribe to model confidence and plot in realtime using e.g. python, or inspect the eye model visually. https://docs.pupil-labs.com/core/#_3-check-pupil-detection

A green circle around the eye ball and a red circle around the pupil with a red dot in the center. Additionally, starting with Capture 2.0, the red and blue circles should overlap as much as possible for as many viewing angles as possible

user-a09f5d 01 April, 2021, 19:18:49

Thanks very much @papr.

user-049674 02 April, 2021, 06:18:01

could Pupil enable my father to use his eyes to control a Windows PC and/or Android device? Or is there something you folks who understand this tech recommend for someone who has paralyzed arms? I'm sorry if this is the wrong channel to be asking this.

nmt 06 April, 2021, 09:52:09

Hi @user-049674. Pupil Core is a head-mounted system that requires set-up and calibration each time it's worn. It's possible to control a mouse via gaze collected by the headset, but it requires some technical knowledge to get set up, and is more for research than an end-user accessibility tool.

user-1a6a43 03 April, 2021, 18:57:56

Hey, I'm trying to use Pupil Core on a Windows computer and I keep getting a "device has malfunctioned" pop-up

user-1a6a43 04 April, 2021, 18:28:12

Never mind, it worked

user-1a6a43 06 April, 2021, 02:40:13

@papr hello, I was wondering how we can limit the information we are trying to gather from Pupil Core to the dilation of both eyes, their azimuth, and their vergence (if that is calculated by the software).

papr 06 April, 2021, 10:18:25

Are you referring to real time data collection or data recording by Capture?

For the former, you can specify which data you are interested in by subscribing to specific topics https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py

For dilation of both eyes and their azimuth you need to subscribe to pupil. Vergence requires calibrated gaze data, which you can get by subscribing to gaze and successfully calibrating.

The recorder does not offer many customization options for which data is recorded. You can turn off eye video recordings and disable active plugins to reduce the recorded data. Generally, I recommend recording as much raw data as possible to allow post-hoc processing should that become necessary.
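For example, a minimal subscription sketch (same ZMQ pattern as filter_messages.py; the datum keys at the end follow the documented pupil datum format):

```python
import msgpack
import zmq

ctx = zmq.Context()

# ask Pupil Remote for the subscription port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# subscribe to pupil data from both eyes
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")  # "gaze." would additionally require calibration

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    # diameter_3d is in mm (3d detector); theta/phi describe the eye orientation
    print(
        datum["topic"],
        datum.get("diameter_3d"),
        datum.get("theta"),
        datum.get("phi"),
    )
```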

user-f1866e 06 April, 2021, 20:22:56

can I apply a post-hoc calibration from Pupil Player to the next Pupil Capture recording session?

papr 06 April, 2021, 20:40:18

Hey, this is not possible.

user-f1866e 06 April, 2021, 20:40:54

ok thanks!

user-14d189 07 April, 2021, 06:12:57

Hi, could I get your advice please, I need to record 2 world videos simultaneously with different recording frequencies. Then I want to match times of the videos and present them next to each other. Is there something that I can use for the time matching?

user-14d189 07 April, 2021, 07:05:55

I use pupil capture 3.1.16 on a win10 system.

papr 07 April, 2021, 07:22:37

I am assuming that you are talking about recording two different scene cameras.

The easiest way (if you do not need eye video or pupil data), would be to select the second scene camera as an eye camera and use Player's Eye Video Overlay.

If you need these recordings to be complete, i.e. including pupil detection and eye video, use two Capture instances (they need to be on the same computer, as Time Sync is currently not working). Afterward, you can use Player's Video Overlay plugin to render one video into the other.

user-af5385 07 April, 2021, 09:18:04

Hi all, I'm trying to set up Pupil Labs 3.x with my VR application using LSL. I'm well aware that Pupil Capture needs to be calibrated before any data will be put out on the streams. However, I'm not interested in the gaze data - only pupil diameter, which as far as I know doesn't need depth gaze calibration to work. Are there any plans to allow the LSL plugin to output pupil diameter data without calibration in the future?

I know I can include a calibration procedure in my VR application but it is already quite taxing to begin with so I'd like to avoid this.

papr 07 April, 2021, 09:22:25

Hi, currently we are not planning any changes to the plugin. But technically, it is possible to implement your requirements.

This is the line that accesses the gaze data. The first step would be to access "pupil" data. https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/pupil_capture_lsl_relay.py#L48

Afterward, you only need to adjust the setup-channel and extract-data functions to match the pupil format.
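As an illustration only (not the plugin's actual method names), the extracted sample for pupil datums could look something like this; the datum keys come from the documented pupil datum format:

```python
import math


def pupil_sample(datum):
    """Build a flat LSL-style sample from one pupil datum (illustrative)."""
    return [
        float(datum["confidence"]),
        float(datum.get("diameter_3d", math.nan)),  # mm; only present for the 3d detector
        float(datum["norm_pos"][0]),
        float(datum["norm_pos"][1]),
    ]


channel_labels = ["confidence", "diameter_3d", "norm_pos_x", "norm_pos_y"]
```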

user-af5385 07 April, 2021, 09:23:58

Thank you for your prompt response papr! Right now I've rolled back to 1.23 but I feel like I'm missing out on new juicy updates. I'll see what I can do with implementing it myself - Thank you!

user-7bd058 07 April, 2021, 11:32:59

Hello, Pupil does not save any annotations I made. The annotation.pldata file does not save or change even though the annotations pop up when I look through the video. Any ideas?

nmt 07 April, 2021, 11:39:00

@user-7bd058 If your annotations pop up when you look through the recording, they are saved. Run the exporter to view the annotations in a .csv file: https://docs.pupil-labs.com/core/software/pupil-player/#annotation-export

user-7bd058 07 April, 2021, 11:51:19

Thank you for your rapid answer. Pupillabs crashed and therefore the annotation file did not update. That was the problem...

papr 07 April, 2021, 12:08:58

Should you be able to reproduce the crash, please share the player.log file (directly after; without restarting Player) with us such that we can check what caused it.

user-da621d 07 April, 2021, 16:58:22

hi, does the output data mean normalized data?

papr 07 April, 2021, 18:32:07

I am sorry, but I do not fully understand your question. Could you try to elaborate on what you want to know?

user-da621d 07 April, 2021, 16:58:33

@papr

user-7daa32 07 April, 2021, 18:48:22

Chat image

papr 08 April, 2021, 08:20:11

See also my response to your previous question as reference https://discord.com/channels/285728493612957698/285728493612957698/806941783175069706

nmt 08 April, 2021, 08:19:56

Hi @user-7daa32 In the debug window, all three pye3d model outlines are rendered. The lines correspond to: Green: short-term model; Yellow: mid-term model; Dark red: long-term model

user-7daa32 07 April, 2021, 18:49:11

Please, what do these lines mean? Only eye 0 has the red circular line

user-7daa32 07 April, 2021, 19:24:22

Chat image

user-7daa32 07 April, 2021, 19:24:43

The calibration doesn't cover the whole screen

user-14d189 08 April, 2021, 00:09:31

I need the latter. Would you know how I can use the PL world camera with any other recording software? I'll record the picoflexx video with Pupil Capture.

papr 08 April, 2021, 08:26:24

I am able to record the scene camera with Quicktime Player on macOS. On windows, you might need to disable the libusbk drivers in order to make the camera accessible to other recording software.

user-14d189 08 April, 2021, 02:53:26

hi, little update: somehow I got 2 sessions of pupil labs capture running on my laptop. one recording world view with your high speed cam and one with eye-tracking and picoflexx plugin

user-14d189 08 April, 2021, 02:54:54

so I can use the timestamps to map both videos. matching field of view.... maybe adobe will sort that for me.

papr 08 April, 2021, 08:27:59

If you want to post-process the videos in Adobe, I highly recommend exporting the videos with Player first, as the intermediate video files do not set the media-internal timestamps correctly. Adobe won't know anything about our external timestamps.

nmt 08 April, 2021, 07:57:55

Did you perform a screen-based calibration, or did you use the physical marker (in the right of your picture)? If you did a screen-based calibration and the physical marker was also visible, this will throw off your calibration.

user-7daa32 08 April, 2021, 11:50:08

I realized that and I recalibrated. I did a single marker calibration

user-ecbbea 08 April, 2021, 21:59:04

Hey, I'm just wondering if the LSL plugin has changed substantially. I went to use the most recent version today (2.1 I think?) with the newest version of Pupil Capture and the newest version of Lab Recorder and found that it is not recording any pupil position data, but the outlet still is open. It passes all of the metadata about the outlet, but there are no time stamps or timeseries data

papr 08 April, 2021, 22:00:26

You need to calibrate first šŸ™‚

user-ecbbea 08 April, 2021, 22:01:03

Oh, I don't think I had to do that before

user-ecbbea 08 April, 2021, 22:02:08

Even if I just want pupil location data - not gaze?

papr 08 April, 2021, 22:02:56

Yes, see this message for reference https://discord.com/channels/285728493612957698/285728493612957698/829283882300997674

user-ecbbea 08 April, 2021, 22:04:04

Ok thanks

papr 08 April, 2021, 22:10:39

@user-ecbbea @user-af5385 I will note this down, though. Maybe we can create a pupil-data only LSL plugin since there seems to be demand for it.

user-af5385 09 April, 2021, 09:24:40

Very cool. thank you. I will keep an eye out for this.

user-ecbbea 08 April, 2021, 22:13:20

yeah, I'm confused if I was just dreaming of it, but i'm almost positive I was able to get pupil position data out before. was this changed?

I would super appreciate it - in the meantime I'll try to hack together a pupil only version

papr 08 April, 2021, 22:14:10

It was changed

user-ecbbea 08 April, 2021, 22:16:10

ah ok. yeah maybe either a separate version - or just split the gaze and pupil data into two streams. that way you can access both streams (or just one) and still be synchronized to LSL's clock

papr 08 April, 2021, 22:19:16

The design was based on https://github.com/sccn/xdf/wiki/Gaze-Meta-Data which requires pupil and gaze data to be matched.

user-ecbbea 08 April, 2021, 22:20:44

ah I see. seems like a compromise could be to push pupil data as normal and fill in nans for gaze data if a good calibration is not available. but maybe I'm thinking too simply

papr 08 April, 2021, 22:24:49

Well, pupil data is not matched by default. But I guess we can make something work in that sense. Unfortunately, I won't have time to look at that in the next 3-4 weeks. Probably early May again. I noted it down and will let you know once we have any relevant changes for you.

An easier workaround might be a dummy gaze plugin that only matches pupil data for that purpose šŸ¤”

user-ecbbea 08 April, 2021, 22:20:52

it just wasn't clear I had to calibrate beforehand

user-ecbbea 08 April, 2021, 22:26:15

yeah no worries - I know you're probably busy. in the meantime - do you have any quick tips for generating the dummy gaze values? can I just hardcode them into the original script?

papr 09 April, 2021, 08:32:24

Not sure what you mean. This is what gaze data looks like internally: https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format You can instantiate such a datum as a Python dictionary in your script and pass it in this line https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/pupil_capture_lsl_relay.py#L49

Is this what you were looking for?
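For illustration, a minimal monocular dummy datum could look like this (field values are placeholders; see the gaze datum format docs for the full set of keys):

```python
def dummy_gaze_from_pupil(pupil_datum):
    """Return a minimal, hard-coded gaze datum wrapping a real pupil datum."""
    return {
        "topic": "gaze.3d.{}.".format(pupil_datum["id"]),  # monocular topic
        "norm_pos": [0.5, 0.5],      # placeholder gaze position in the scene image
        "confidence": 0.0,           # flags that this is not a real gaze estimate
        "timestamp": pupil_datum["timestamp"],
        "base_data": [pupil_datum],  # keep the real pupil data attached
    }
```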

user-83ea3f 09 April, 2021, 06:32:03

Until two days ago, the glasses were working well, but now they are not. The two cameras that track the eyes are fine, but the camera on top is not recognized. I just can't figure out why. Please help me. (PS: I am using Windows)

Chat image

papr 09 April, 2021, 08:29:46

Hey, this looks like a hardware issue with your realsense camera. Please contact [email removed] in this regard.

user-83ea3f 09 April, 2021, 06:32:04

Chat image

user-ae6ade 09 April, 2021, 11:47:00

Is there a possibility to find out how long someone looked into someone else's eyes when doing the analysis? It's a bit complicated to attach a marker on each side of the eyes šŸ˜…

And how high is the precision of measurement of a Pupil Core? I.e. how big is the deviation between the actual gaze and the measured gaze point, normally given in degrees?

papr 09 April, 2021, 11:49:04

The latter depends on your calibration. See https://docs.pupil-labs.com/core/best-practices/#calibrating

The former requires a custom post-processing. You could run a face-detection algorithm on the scene video that also annotates the eye region. Afterward you could check if the gaze is mapped on that area for a given frame or not.
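As a rough sketch of that idea (using OpenCV's Haar cascade as one possible face detector; the "eye region" heuristic and thresholds are assumptions you would need to tune):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def gaze_on_face(frame_bgr, gaze_norm_pos):
    """Return True if the mapped gaze falls on a detected face's eye region."""
    h, w = frame_bgr.shape[:2]
    gx = gaze_norm_pos[0] * w
    gy = (1.0 - gaze_norm_pos[1]) * h  # flip y: norm_pos origin is bottom left

    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, fw, fh) in faces:
        # crude "eye region": upper half of the face bounding box
        if x <= gx <= x + fw and y <= gy <= y + fh * 0.5:
            return True
    return False
```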

user-331121 09 April, 2021, 16:01:29

Hi @papr , I have one query regarding the data output from PL software. 'circle_3d_normal's and 'gaze_normal's. They both are normals originating from the sphere center or eyeball center. 'circle_3d_normal' originates at sphere center and passes through pupil center in 3D and refers to the optical axis whereas 'gaze_normal's refer to visual axis but this also originates from sphere center. Is this correct?

papr 09 April, 2021, 17:16:48

Correct. They are equivalent in their meaning; the representation of the visual axis in two different coordinate systems (eye vs scene camera; the latter requires calibration and is therefore less accurate)

user-331121 09 April, 2021, 17:38:13

Is there any way to get the matrix that converts eye coordinate space (ECS) to scene coordinate space (SCS) from the export files only? Is this transformation stored anywhere or do I need to do the calibration again?

papr 09 April, 2021, 17:39:36

It is not being exported but should be part of the notify.pldata file if it was made before or during the recording. Post-hoc calibrations are stored in the offline_data folder

user-331121 09 April, 2021, 18:40:27

Are you referring to the calibration points or the matrix?

papr 09 April, 2021, 18:45:32

I must have mistaken the points for the gaze mapper parameters. But you could pass the calibration data to a new Gazer3D instance https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L327 (inherits __init__() from below)

This will trigger a "calibration" https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_base.py#L204

and announce the result using a notification https://github.com/pupil-labs/pupil/blob/eb8c2324f3fd558858ce33f3816972d93e02fcc6/pupil_src/shared_modules/gaze_mapping/gazer_base.py#L325-L336 (It looks like this notification is being recorded since v2.0)

user-331121 09 April, 2021, 18:40:43

I only see the calibration points in these files

papr 09 April, 2021, 18:46:46

It would probably be easiest to write a Capture plugin that loads the calibration data from a given recording, initializes the gazer and writes the captured calibration result to disk.

papr 09 April, 2021, 18:50:04

Actually, you do not even have to write a plugin. This works with an external script and the network API alone.

user-331121 09 April, 2021, 19:31:23

Thank you @papr. I will have a look at it.

user-a9e72d 10 April, 2021, 09:54:28

I want to know what information the output data (like gaze_positions.csv) can deliver after you use Pupil Capture to record.

papr 10 April, 2021, 09:55:15

Please see our documentation for reference https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter

user-292135 12 April, 2021, 00:22:59

Hi, I am using Pupil Mobile + Pupil Player 3.2. I found that at one timestamp, Player crashes, and the world_lookup PTS reset at that point.

papr 12 April, 2021, 07:41:33

Hey, the PTS are extracted from the video when you open the recording in Player. Usually, PTS need to be monotonically increasing. Therefore, it looks like something is wrong, but it is difficult for me to tell which part. Would it be possible for you to share the recording with [email removed] such that we can have a look?

user-292135 12 April, 2021, 00:23:15

Chat image

user-292135 12 April, 2021, 00:23:41

error message is like this. Any help?

user-292135 12 April, 2021, 00:23:44

2021-04-12 09:09:22,727 - player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "video_capture/file_backend.py", line 535, in seek_to_frame
  File "video_capture/file_backend.py", line 185, in seek
  File "av/stream.pyx", line 146, in av.stream.Stream.seek
  File "av/container/core.pyx", line 166, in av.container.core.ContainerProxy.seek
  File "av/utils.pyx", line 105, in av.utils.err_check
av.AVError: [Errno 22] Invalid argument

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "launchables/player.py", line 690, in player
  File "video_overlay/plugins/eye_overlay.py", line 57, in recent_events
  File "video_overlay/workers/overlay_renderer.py", line 51, in draw_on_frame
  File "video_overlay/workers/frame_fetcher.py", line 34, in closest_frame_to_ts
  File "video_overlay/workers/frame_fetcher.py", line 43, in frame_for_idx
  File "video_capture/file_backend.py", line 537, in seek_to_frame
video_capture.file_backend.FileSeekError

2021-04-12 09:09:22,727 - player - [INFO] launchables.player: Process shutting down.

user-7daa32 12 April, 2021, 15:33:34

Please I have questions

The green lines must be within the ROI, right? And should be smaller? I am still trying to find the range of values that are ideal for accuracy and precision.

After calibration, must the field cover all the stimulus areas? I am going to send a picture to explain this last question

Chat image

papr 12 April, 2021, 17:17:32

The green lines must be within the ROI right? No, the green eye model outline should be slightly smaller than the eye ball. The ROI only informs the pupil detection where to look for the pupil. Your models look fairly well fit.

user-7daa32 12 April, 2021, 16:38:22

Must this cover the whole range of the visual stimulus ? The participant will not move the head during the tracking

Chat image

papr 12 April, 2021, 17:20:29

Gaze estimation will be most accurate in that region. When the subject is free to move their head, calibrating the subject's field of view is sufficient, as humans tend to move their head rather than making extreme eye movements. If the subject's head is fixed, e.g. using a head-rest, you should make sure the calibration area covers your stimulus, yes.

user-7daa32 12 April, 2021, 18:08:46

Thanks. Is this possible when the visual stimulus is very large, like 95 inches? The visual stimulus is placed on the wall. Is single marker calibration good for such a setup?

nmt 13 April, 2021, 08:05:13

You should calibrate using the viewing angles you expect to record in your experiment. Yes, you can use the single marker, as it is able to calibrate at small or large viewing angles depending on how you use it.

user-94f03a 13 April, 2021, 04:00:32

Hi! I have the following issue with the surface mapper and I would like your advice. I have an A4 paper with 4 april tags in the corners. I have defined a surface on pupil player, with all 4. However, it does not detect the surface when only one is within view. I thought the surface works even with a subset of the apriltags visible. Is there some kind of rule of thumb? 2/4 minimum or so?

edit: to clarify, when there are 2-3 markers visible the surface is detected, and has the annotation <surface name> 3/4

Thanks

papr 13 April, 2021, 14:26:03

Please see our (recently added) note in the documentation:

Note that the markers require a white border around them for robust detection. In our experience, this should be at least equal to the width of the smallest white square/rectangle shown in the Marker. Please ensure you include a sufficient border around when displaying markers on screen or printing the markers.

user-6b3ffb 13 April, 2021, 13:07:53

Can I ask how to get the gaze coordinates of the calibration points? Is that possible? I want to draw a bounding box as a limit for my experiment.

papr 13 April, 2021, 14:29:52

That is possible. When stopped, the calibration choreography starts the selected gazer plugin using a notify.start_plugin message. It includes the pupil and reference data necessary to calibrate.

user-6b3ffb 14 April, 2021, 10:29:18

Thank you very much for your response! Can you please give me some directions on how to do that? Can I use the pupil player to do it after the recording session?

user-7daa32 13 April, 2021, 17:19:18

Can Pupil Core record audio? E.g. so participants are able to indicate when they actually see the target? This would help us know what is correct and incorrect

nmt 14 April, 2021, 07:35:33

Since Capture v2, Pupil Core does not record audio. You could use the annotation plugin for this purpose, whereby participants press a hotkey when they see a stimulus if that is what your experiment requires: https://docs.pupil-labs.com/core/software/pupil-capture/#annotations
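If you prefer to trigger annotations from your own experiment script rather than a hotkey, here is a rough sketch based on the pupil-helpers remote annotations example (the label is a placeholder; the Annotation plugin must be enabled in Capture):

```python
import time

import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

# open the PUB port so we can publish annotation messages
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB socket a moment to connect

# ask Capture for its current clock so the annotation timestamp matches
pupil_remote.send_string("t")
capture_time = float(pupil_remote.recv_string())

annotation = {
    "topic": "annotation",
    "label": "target_seen",  # placeholder label
    "timestamp": capture_time,
    "duration": 0.0,
}
pub_socket.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub_socket.send(msgpack.dumps(annotation, use_bin_type=True))
```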

user-7daa32 13 April, 2021, 17:26:29

Do we have a filter for setting thresholds for what should be considered fixations or saccades?

nmt 14 April, 2021, 07:39:52

We implement a dispersion-based fixation filter, with the exact procedure differing dependent on whether fixations are detected in an online or offline context. Read more about that here: https://docs.pupil-labs.com/core/terminology/#fixations The filter does not classify saccades. This community example implements a saccadic filter which might be of interest to you: https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing

papr 14 April, 2021, 10:30:43

Ah, my apologies, I was assuming you wanted to do that in real time. You wanted to do that post-hoc only?

user-6b3ffb 14 April, 2021, 10:31:50

If that is possible yes

papr 14 April, 2021, 10:32:19

I am not 100% sure on that.

user-6b3ffb 14 April, 2021, 10:33:29

I'm using the recorded gaze data.

papr 14 April, 2021, 10:32:36

Where do you want to draw the calibration area?

papr 14 April, 2021, 10:32:56

And do you use post-hoc calibration? Or the recorded gaze data?

user-6b3ffb 14 April, 2021, 10:33:59

Regarding the real time do I have to enable a plugin?

papr 14 April, 2021, 10:34:48

No, you would need to run a script in parallel, that is subscribed to the notification and extracts the relevant data from it. But the information might be stored in the recording. Let me check.

papr 14 April, 2021, 11:39:36

Check out this quick notebook I made https://nbviewer.jupyter.org/gist/papr/ad50c1146d297deef9a1738a4731eb45

user-6b3ffb 14 April, 2021, 14:07:30

Many thanks!!!

user-94f03a 14 April, 2021, 13:12:51

Thanks, that's not the issue. The individual marker is detected fine, but when it detects only 1 out of 4 (the others are out of view) it does not 'show' the surface.

papr 14 April, 2021, 13:13:33

Ah, ok. Pupil Player requires at least 2 visible markers to detect a surface successfully.

user-94f03a 14 April, 2021, 13:13:46

i see!

user-94f03a 14 April, 2021, 13:14:24

in another setup we tried (successfully I think) to define a surface from a single marker. this seems to work fine

papr 14 April, 2021, 13:15:22

Is it possible that this setup was made in Pupil Cloud?

user-94f03a 14 April, 2021, 13:16:38

no, it also worked on the pupil player

user-94f03a 14 April, 2021, 13:16:44

I can pm you some examples

papr 14 April, 2021, 13:17:32

I was just able to reproduce it. My apologies.

papr 14 April, 2021, 13:18:10

So, it seems like one-marker surfaces need to be set up as such. Surfaces defined with more than one marker require at least 2 visible markers to be detected.

user-94f03a 14 April, 2021, 13:19:16

any chance we can tweak this in the plugin?

user-94f03a 14 April, 2021, 13:19:29

it would make detections somewhat smoother

papr 14 April, 2021, 13:20:16

I am not exactly sure why and how the software differentiates these cases. We will have to look into that.

user-94f03a 14 April, 2021, 13:20:25

ok

user-94f03a 14 April, 2021, 13:20:40

let me know if you find out anything šŸ˜‰

user-7daa32 14 April, 2021, 15:08:38

Thanks again! Because Pupil Core records binocularly, does it mean we will have two data streams? One for the right eye and the other for the left eye? If yes, how can we decide which data to choose? The right eye, or an average of the two?

user-7daa32 14 April, 2021, 19:14:39

For the offline fixation detection what does this statement mean? " Instead of creating two consecutive fixations of length 300 ms it creates a single fixation with length 600 ms. Fixations do not overlap"

Is the minimum duration threshold of 100-200 ms proposed by Salvucci for online fixation detection?

nmt 15 April, 2021, 08:05:10

You can read about choosing fixation filter thresholds here: https://docs.pupil-labs.com/core/best-practices/#fixation-filter-thresholds. Choosing the optimum threshold is dependent on your experimental setup.

user-7daa32 15 April, 2021, 05:14:04

https://www.frontiersin.org/articles/10.3389/fnhum.2020.589502/full#h3

In this paper, the authors used screen marker calibration for an experiment that doesn't involve the use of a computer screen. I want to be sure whether it is possible to use the 5 point screen marker calibration for an experiment where the visual stimulus is placed on the wall?

user-347b27 15 April, 2021, 06:30:32

Hello! I am using the Pupil Core eye tracker and since my last recording the world camera does not work anymore. When I start Pupil Capture the screen for the world camera stays grey. I tried to reinstall the cam driver but it didn't help. I guess the problem could be a loose connector which connects the world camera to the USB hub. The other 2 cams for the eyes work fine. I am grateful for every tip!

nmt 15 April, 2021, 08:00:22

@user-7daa32 that depends on what outcomes you are interested in. If you simply want to know where people are looking, the gaze point is estimated from both eyes automatically (binocular). If you want to examine the specific gaze direction for each eye individually, you can also do that by looking at 'gaze_normal*_x,y,z': https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter

user-7daa32 16 April, 2021, 00:06:10

For annotation data, annotations usually come twice or multiple times for a single click of a hotkey. For example, when the participant fixates on a particular AOI and presses the E hotkey once, we expect a single annotation displayed in Player and one annotation for the AOI in the export file

papr 15 April, 2021, 08:02:50

Please contact [email removed] in this regard.

user-347b27 15 April, 2021, 08:18:05

ok thanks!

nmt 15 April, 2021, 08:18:37

@user-7daa32 here are a few example scenarios: 1) Your stimulus is 2x2 meters in size. The participant stands 2 meters away from the stimulus. In order to look at the whole stimulus, the participant will 'point' their head towards where they are looking. Therefore, the degree of eye rotation in the head won't be big. 2) The same conditions. However, you instruct the participant to keep their head still. Therefore, in order to look at the whole stimulus, they will need bigger eye rotations in the head. The calibration procedure you choose should replicate the eye rotations. So if your experiment aligns with scenario 1 (i.e. the participants are free to move their head) a screen-based calibration will probably be sufficient. If your experiment aligns with scenario 2 (i.e. the participant's head must be still) you might need to use a single marker in order to calibrate at the bigger eye rotations.

To be sure, you should run a pilot study. You should try out both calibration methods. Then confirm which one was optimal for your experimental setup. Test them by instructing the participant to look at a specific region and checking that the gaze point is in the expected place.

user-7daa32 16 April, 2021, 03:12:50

Great! Thanks again. The visual stimulus is on the wall, and I thought we should use single marker calibration (physical marker display) since we are not using a computer screen. We have not decided yet whether there will be head movements or the head will remain still during visual search. But I have been learning the experiment with the head kept still. During the single marker calibration (physical), a spiral head movement was made while looking at the physical calibration marker placed at the center of the stimulus. I'm just thinking that the calibration method (screen display) used in this paper shouldn't be single marker calibration (screen display), because the computer wasn't used to display the stimulus. Most papers don't describe the methodology in detail.

I want to ask: during single marker calibration (physical marker display mode), the subject must look at the marker while moving the head, right?

user-0c4111 15 April, 2021, 11:40:34

Hi, a couple of questions. I have a Pupil Labs 200 Hz Core eye camera.

I would like it to be positioned pointing straight at the eye and quite close (60 mm away or so), but I am having some issues getting a good image - perhaps the software is not expecting a head-on view of the eye? I had a brief look but I don't really understand the plug-ins and whether or not an algorithm optimised for a head-on view of the eye exists

My user will have glasses on, so perhaps the glasses are causing issues?

Perhaps the fixed focus distance of the 200 Hz camera is too far away - what is the design focal distance? I know it's not recommended, but could I adjust the lens focal position (or change the lens)? I assume it is a standard M12 mount lens, and the glue fixing the lens could be scraped off? (I accept all risk of destroying the lens)

What is the optimum situation for maximum contrast (i.e. dark pupil and white iris) - to have a high level of uniform illumination on the eye to increase the contrast? (while keeping eye safety)

Does the software modulate the LEDs at all for glint detection (i.e. to distinguish between the glints from the two LEDs?)

I assume the LEDs are at 850 nm, but I can still see a red glow from the LEDs, which surprises me (perhaps the lower wavelength tail of the LED?)

I assume the Pupil Labs camera has an IR-pass + visible-block filter in it, and hence any replacement M12 lenses would need this too?

Thanks!

nmt 15 April, 2021, 13:47:14

Hi @user-0c4111, pointing the eye camera straight at the eye should not be a problem. However, the eye cameras should have an unobstructed view of the eyes. If they are pointed through external glasses lenses, that is likely to be one reason why you can't detect the pupils. However, 60 mm could also be too far. The standard headset positions the eye cameras around 25 mm from the pupils. The focus is fixed at 15 mm - 80 mm.

We employ 'dark pupil' detection that doesn't require corneal reflection or glints. The IR illumination is not modulated. The best situation is a uniformly illuminated pupil. The standard positioning of the cameras is usually optimal in my experience.

I would try again without glasses obstructing the camera.

I do not recommend changing the eye camera lens. But perhaps @papr or @user-755e9e is better placed to advise on that.

user-189029 16 April, 2021, 10:36:43

Hello! Is it possible to run the Fixation Detector from the console (not via the interface of Pupil Player)? If yes, please tell me how (I cannot find the scripts in the pupil repo)

nmt 16 April, 2021, 12:00:15

A single press of a hotkey will only generate one annotation. Are you sure the hotkey wasn't held down or pressed twice? Incidentally, I would avoid using 'E' as a hotkey, as 'e' is actually used to trigger a data export in Pupil Player

user-7daa32 21 April, 2021, 04:38:29

Just an example. I am not using the "e".. I think the hotkey was held down. Keyboard keys are sensitive enough that one may not realize when a key is held down.

nmt 16 April, 2021, 12:08:16

When using a single marker, yes you can move the head, e.g. in a spiral pattern. Alternatively, you could move the marker around whilst keeping the head still. The most important thing is to cover a similar area of the visual field that you will record in your experiment. The 5-point screen-based calibration can only cover a relatively small area that encompasses the computer screen. With the single marker, you can cover a much larger range of the visual field as you can move it or your head as much as needed.

user-7daa32 21 April, 2021, 04:39:13

Thanks

user-10fa94 17 April, 2021, 06:10:27

It looks like my camera intrinsics calibration may have been off during data recording. The displayed image looked correct (all parallel lines parallel) but when I go to undistort the frames in post processing, the undistortion fails. If I load the default intrinsics I am able to somewhat undistort the image, while if I load the calibrated intrinsics I just get a black image when I display the undistorted image. If I recalibrate, is there a way I can still use the gaze projections? Or will these now be off for all data collected? (I am only interested in 2d gaze data for the moment)

papr 17 April, 2021, 08:55:36

2d gaze data is not affected by the intrinsics as it is mapped into the distorted camera image.

user-10fa94 18 April, 2021, 04:14:29

Thank you very much. Another question I have come across during my debugging: when I get good coverage of the entire field of view during my intrinsics estimation, I then get a blank image when I go to perform undistortion (sample coverage when taking the 10 images is shown in the attached image). However, if I use the default intrinsics, or an estimation where the 10 images were all fairly centered, I am able to get a result from the undistortion but the edges are quite unclear. Is there a reason why re-projection of points may be failing when the coverage of the 10 images includes the edge points? Thank you very much for your time and help

Chat image

papr 19 April, 2021, 08:01:17

Hey, which camera model have you chosen for the intrinsics estimation? Radial or fish eye? Please use the latter for the 1080p resolution.

user-10fa94 28 April, 2021, 08:32:51

Hi, yes I am using fisheye for the 1080p resolution. I have gotten it working better by increasing the number of images taken (from 10 to 15), but am still not able to get good images taken with the calibration pattern at the edges of the frame. However, the new camera intrinsics calculated seem to work ok (images included below)

Chat image

user-25404b 20 April, 2021, 10:13:25

Hi, does anyone have the wiring diagram of the Core? I need to change the connector of the right camera.

papr 20 April, 2021, 11:35:55

Please contact info@pupil-labs.com in this regard.

user-25404b 20 April, 2021, 13:59:04

Thanks!

user-8d541e 20 April, 2021, 12:29:18

Hi all, we're using the core to run an experiment in the light and in complete darkness, but in darkness we are struggling to get a reasonable confidence level, and we are missing a lot of data. Our hunch is that the pupil is probably too large, but I thought it would be a good idea to see if there are recommended settings for collecting data in darkness in case we are doing something silly! In the light we have nice data, so the only difference is in darkness and we're a bit stumped. Any advice would be very much appreciated, thanks.

papr 20 April, 2021, 12:32:40

Please see this issue for reference https://github.com/pupil-labs/pupil/issues/1370

user-8d541e 21 April, 2021, 14:38:51

Thanks for this info. If I understood correctly, this refers to changing the 2D/3D pupil detection (i.e., ROI, intensity range etc.), and adjusting the resolution of the camera? We have been playing with these settings, but unfortunately there doesn't seem to be a setting that will get a reasonable detection when in total darkness. Is it just a case of playing with the settings, or is there something obvious I need to select when setting up the recording?

user-057596 20 April, 2021, 13:09:35

Hi is there a link to latest update for Pupil Core? Thanks Gary.

papr 20 April, 2021, 13:39:30

Here you go https://github.com/pupil-labs/pupil/releases/latest

user-057596 20 April, 2021, 14:11:57

Thanks Papr, hope you are all well.

papr 20 April, 2021, 14:13:13

Thanks, I am. I hope the same for you. šŸ™‚

user-057596 20 April, 2021, 15:16:25

Yes just trying to get my brain working again after the long gap šŸ˜‚

user-a9e72d 20 April, 2021, 18:05:03

Sorry to bother you! But how can I get concrete bounding boxes on the images from the gaze data, pupil data and other data?

wrp 21 April, 2021, 03:52:18

@user-a9e72d what are you looking to achieve? Are you looking to map gaze data to surface/image in the real world? If so, you might want to see: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

user-a9e72d 21 April, 2021, 08:11:40

@wrp Sorry to bother you again. After I got the gaze position and pupil position, I want to use the data to get the area which has been looked at by users. (Can the position be mapped to the screen?)

papr 21 April, 2021, 08:12:52

Yes, that is possible! šŸ™‚ But for that you need to setup your screen as a "surface". See the linked documentation on how to do that.
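Once the surface is set up, you can receive gaze mapped onto it in real time. A small sketch (assuming you named the surface "screen"; fields as in the documented surface datum format):

```python
import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("surfaces.screen")  # "screen" is the assumed surface name

while True:
    topic, payload = subscriber.recv_multipart()
    surface_datum = msgpack.loads(payload, raw=False)
    for gaze in surface_datum["gaze_on_surfaces"]:
        if gaze["on_surf"]:
            # normalized surface coordinates (0-1); multiply by your
            # screen resolution to get pixel coordinates
            print(gaze["norm_pos"])
```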

user-a9e72d 21 April, 2021, 08:15:03

I did not find the related document! 🄲

papr 21 April, 2021, 08:15:32

Please follow this link https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

user-a9e72d 21 April, 2021, 08:17:29

Thank you for your kind help

user-60f500 21 April, 2021, 08:32:24

Hi all,

user-60f500 21 April, 2021, 08:37:41

I was wondering if someone has already had trouble with the sampling rate of the Pupil glasses? I use LSL to record pupil and EEG data at the same time, but the sampling rate of my Pupil Core is changing constantly (from 180 to 260 Hz!), while the EEG sampling rate stays constant (around 512 Hz). Has someone already had this kind of problem?

papr 21 April, 2021, 08:39:20

Which version of the LSL plugin are you using?

user-60f500 21 April, 2021, 08:43:41

Good question, I'm not sure but I use an old version of pupil Capture (version 2.1 or 2.2).

papr 21 April, 2021, 08:44:33

LSL calculates the sampling rate based on the data timestamps. In the first plugin version, these were not measured at frame exposure but after performing pupil detection. The variable duration between frame exposure and completed computation caused the uneven timestamp distribution, and therefore the variable "effective" sampling rate. The latest LSL plugin attempts to use the frame exposure timestamp instead, yielding a much stabler sampling rate calculation.

The Capture version is secondary here. The plugin version is more important. But given that Capture version, I am assuming that you are using the old plugin as well.

user-60f500 21 April, 2021, 08:52:09

Ok, I understand, thank you for the explanation ! Where can I download the latest version of the plugin ?

papr 21 April, 2021, 08:53:39

https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture Please note, that this version requires a valid calibration before it can publish data to LSL.

user-60f500 21 April, 2021, 08:58:24

Yes, my participants perform a calibration before starting the recording. But what is a valid calibration? Is there a threshold that indicates if the calibration is good or bad?

papr 21 April, 2021, 08:59:03

I meant valid in the sense of "successful" (no errors reported by Capture).

user-60f500 21 April, 2021, 08:58:29

Thanks for the link

user-60f500 21 April, 2021, 09:01:58

Ah yes, I see, thank you. I have another question regarding the "NaN"s. In the time series data I sometimes have "NaN"; does it mean that the glasses cannot capture the pupil at that particular moment? What is the impact of the calibration on the quantity of NaNs in the data?

papr 21 April, 2021, 09:08:18

The pipeline is: Pupil detection -> pupil data matching -> gaze mapping (requires calibration). NaN values come from the first two steps.

papr 21 April, 2021, 09:06:39

Capture attempts to map gaze based on the data from both eyes. Sometimes this is not possible. In these cases, the values are set to NaN. The calibration does not have an impact on the number of NaN values.

user-60f500 21 April, 2021, 09:10:06

Ok, this is the reason why I have NaN values in the diameter values. Probably because the device cannot detect the pupil.

user-7daa32 21 April, 2021, 09:12:00

I have a quick question about the dispersion filter to be fed into the fixation detection algorithm. I have seen a lot of papers that talk about the duration threshold, which depends on the research questions, nature of the task, stimulus design, field, literature, or the researcher's discretion, but I have not found an explanation for the dispersion threshold. I read that the degree of visual angle depends on the tracker resolution, but I don't know which ranges of spatial dispersion should be taken as fixations or saccades, especially for Pupil Core.

For example, in chemistry education, most visual stimuli have multiple representations required to solve a task, and so the duration threshold is usually set at 100 ms.

user-1bfcbc 21 April, 2021, 11:06:35

Do detect eye0 and detect eye1 need to be checked while recording? Or can they just be used to position the eye cameras correctly?

papr 21 April, 2021, 11:10:26

You can turn them off once you have verified that pupil detection works correctly. You can do the detection post-hoc in Pupil Player.

user-1bfcbc 21 April, 2021, 12:18:51

No eye recordings seem to have been made when they are unchecked.

papr 21 April, 2021, 12:23:40

Apologies, I need to correct myself. detect eye0/1 need to be enabled to process and record the eye videos. Turning off pupil detection via the UI is currently not working, but will be fixed in our next release.

user-1bfcbc 21 April, 2021, 12:27:08

When would the next release be? Are there any releases that I would be able to use in the meantime? the laptop that we are using is maxed out when they are on.

papr 21 April, 2021, 12:29:53

We are planning the next release for the first half of May. @user-764f72 Could you please check which is the latest release without this issue?

user-1bfcbc 21 April, 2021, 12:27:19

Thank you for your help! šŸ™‚

user-1bfcbc 21 April, 2021, 13:13:34

Hi @papr,

What would be the minimum specs for running the software?

user-1bfcbc 21 April, 2021, 13:14:49

We have an i7-7500U with 8GB ram, but we are having issues with this

papr 21 April, 2021, 13:21:30

Main factor for good pupil detection speed is CPU frequency. We are also aware of calculation speed issues on 3.x that we are actively working on.

papr 21 April, 2021, 14:44:12

The coarse detection is part of the 2d detector. It is disabled at the lowest eye camera resolution (eye window -> video source menu -> resolution). If that does not help, please share an example recording with [email removed] such that we can have a look.

user-8d541e 21 April, 2021, 14:53:36

Thanks! We are already recording at the lowest resolution, so I'm not sure that's the issue. I'll have another look tomorrow and may well take you up on the offer of having a look at an example. Thanks so much for your help.

user-94f03a 21 April, 2021, 15:19:55

Is there any update on this by any chance? It turned out that participants look at the stimulus from closer up, so some tags are out of the field of view; there is at least one visible though, and it would be great if we could still use it to detect the interaction with the surface

papr 21 April, 2021, 15:20:43

There is no update in this regard yet

user-2be752 21 April, 2021, 18:22:29

Hello! I have a question about the frame rate. I exported the fixation data, and I believe the frames referenced there are at 30 Hz, is that correct? Could you briefly explain how it goes from the 120 Hz sampling rate of the world camera and 200 Hz sampling rate of the eye cameras to a fixation frame rate of 30 Hz? And, is that modifiable? Thank you so much!

papr 21 April, 2021, 18:23:50

I exported the fixation data, and I believe the frames referenced there are at 30 Hz, is that correct? Could you elaborate on how you came to this conclusion? I am not exactly sure what you mean by that.

user-2be752 21 April, 2021, 18:26:10

I'm trying to do some analysis on the video image statistics at each fixation. Since I have the start frame and end frame of the fixations, I can just pull the frame from a simple video split function. However, I need to know what is the sampling rate of the frames indicated in the fixation file. Does that make sense?

papr 21 April, 2021, 18:35:17

See this conversation for reference https://discord.com/channels/285728493612957698/446977689690177536/814228154364985414

papr 21 April, 2021, 18:33:58

The frame indices in the exported file refer to frames in the scene video. These are independent of the recording frame rate. I highly recommend using this tutorial to extract single frames https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb or pyav to access the frame data directly https://github.com/PyAV-Org/PyAV/blob/main/examples/numpy/barcode.py

It is possible that frames are skipped/dropped during extraction if the software is not configured correctly. In this case, the scene frame indices would be invalid
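For example, a minimal pyav sketch that pulls a single scene frame by index (assuming the recording's world.mp4 and the start_frame_index/end_frame_index columns of the fixation export):

```python
import av

TARGET_INDEX = 4  # e.g. a frame within the fixation of interest

container = av.open("recording/world.mp4")
for index, frame in enumerate(container.decode(video=0)):
    if index == TARGET_INDEX:
        image = frame.to_ndarray(format="bgr24")  # numpy array, OpenCV layout
        break
```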

user-2be752 21 April, 2021, 18:45:34

great! I will look into what you have sent me!

user-2be752 21 April, 2021, 22:28:42

Okay, so if I understand correctly if I extract frame index 4 through the process outlined here: https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb, and then get the fixation xy coordinates at frame 4 in the fixation file, then I should have a good corresponding frame to fixation right?

papr 22 April, 2021, 07:06:53

I have the feeling I am missing something. Is "frame to fixation" a metric? A fixation can span multiple frames, i.e. it is possible the frame content of the fixation can change.

user-2be752 22 April, 2021, 20:04:49

Yeah, it is true a fixation can last many frames, but let's say that for the sake of argument, a fixation happens in frame 4 and only in frame 4. Then, I can use the method described in that tutorial to pull that frame from the video correct? Thank you so much for helping me figure this out šŸ™‚

papr 23 April, 2021, 06:57:12

That is correct šŸ™‚

user-7daa32 23 April, 2021, 00:09:41

Hello

Thanks again for the help.

  1. I have another question about calibration. Can I just have 4 to 6 calibration markers placed on the wall instead of moving a single marker around during calibration?

  2. How big can a surface (AOI) be in order for fixations to actually fall within it, and not in whitespace or uninteresting areas? Participants may claim they are looking at A, for example, but the fixations seen in Player are on B or in between A and B.

  3. If we have the alphabet as the stimulus, for example, and participants are to search for a particular letter as commanded, how big do you think each letter should be? How big with respect to the tracker resolution?

user-ef03be 23 April, 2021, 08:01:25

Hi everyone, I am currently working with the Surface Tracker plugin to detect AprilTags. To detect AprilTags it is mentioned that you need a grayscale image of type numpy.uint8. I took a look at the code of the plugin and it appears to receive a "frame" which must be of numpy.uint8 type. So I'm wondering where this "frame" is coming from and what its real type is.

papr 23 April, 2021, 08:17:10

Hey, I am not sure about the context. Are you using the surface tracker as part of the Core desktop applications or are you attempting to run the marker detection (a subcomponent of surface tracking) using the corresponding python package? (if so, please link to the code that you are referring to)

user-ef03be 23 April, 2021, 08:02:15

Please could someone help me with that ?

user-ef03be 23 April, 2021, 08:23:36

Thanks for answering quickly. Sorry if I was unclear, I am attempting to run the marker detection. I saw it is using this code to detect April tags : https://github.com/pupil-labs/apriltags/blob/master/src/pupil_apriltags/bindings.py line 400 for the detection part

papr 23 April, 2021, 08:28:20

ok, so you need this in realtime?

user-ef03be 23 April, 2021, 08:24:43

And I saw that the needed image ultimately comes from here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface_tracker_online.py but I don't know where this frame comes from

user-ef03be 23 April, 2021, 08:25:39

So I'm trying to get the image from the glasses and directly give it to the detector.

user-ef03be 23 April, 2021, 08:30:12

Yes that would be better

user-ef03be 23 April, 2021, 08:30:32

Thanks ! I'll take a look at it

user-98789c 23 April, 2021, 12:50:43

This happens a lot lately with my recordings: for some time during my experiment, the cameras don't record anything and they suddenly need to reinit. How can I prevent this?

Chat image

user-45f652 24 April, 2021, 11:40:34

Hi, I'm new to pupil. Is there any webcam support?

papr 26 April, 2021, 08:14:52

Capture supports cameras fulfilling these requirements: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498

Please note that Pupil Core is a head-mounted eye tracker and its software cannot be used for remote eye tracking.

user-ae6ade 25 April, 2021, 09:39:29

Am I the only one who has a problem with Pupil Player not responding - even though CPU is only at 13.1%? All other programs are closed and this always happens. I managed to export data once. The video is only 2.5 minutes long (a test video) but even this is too much? Help please 😟

Chat image

papr 26 April, 2021, 08:15:50

This is likely the issue that we fixed in v3.2-16 https://github.com/pupil-labs/pupil/releases/tag/v3.2

user-ae6ade 25 April, 2021, 09:41:40

I use Pupil Player v3.2.15

user-ae6ade 25 April, 2021, 09:50:23

funnily, I just opened v3.1.16 and there it works šŸ¤”

papr 26 April, 2021, 08:11:50

Looks like a physical disconnect of the cable since all three cams disconnected at the same time.

user-98789c 26 April, 2021, 12:20:22

True, I had connected Pupil Core through an extended USB cable and that was the cause.

user-75ca9b 26 April, 2021, 08:24:28

Hi everyone, is logitech c270 compatible with Pupil Capture v3.2.20?

papr 26 April, 2021, 08:26:12

If it was compatible with older versions of Capture, it will be compatible with this release, too.

user-75ca9b 26 April, 2021, 08:28:13

It's the first time I try it, it doesn't see the webcam at all

papr 26 April, 2021, 08:30:58

On which operating system are you?

user-75ca9b 26 April, 2021, 09:06:53

@papr Win10

papr 26 April, 2021, 09:07:48

See these instructions (steps 1-7) then https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md

user-75ca9b 26 April, 2021, 09:09:53

Yes, I am following those instructions; as soon as I solve an error with Python in the last step I will have my answer

papr 26 April, 2021, 09:10:42

You do not need to run the last step (8) if you run Pupil Capture from bundle.

user-75ca9b 26 April, 2021, 09:18:53

I'm running the compiled version named "Pupil Capture v3.2.20"

papr 26 April, 2021, 09:21:12

Yes, that is what I referred to as "bundle". šŸ™‚ So there is no need to run the python-related changes from step 8

user-75ca9b 26 April, 2021, 09:36:47

After following all the steps in the guide for a custom setup, the world and eye0 cameras can't seem to recognize the C270; it says "unknown" 3 times

I'm sure I've installed the drivers manually using Zadig

user-75ca9b 26 April, 2021, 10:03:42

Ok, I figured out what I did wrong:

When using Zadig, once I found the correct composite devices I did not manually set the driver to install to "libusbK (v3.0.7.0)" (my bad). To everyone having issues with drivers resulting in 3 "unknown" cameras: this could be the mistake.

user-75ca9b 26 April, 2021, 10:06:57

Now I have another issue: I have 2 C270s to use, is this a problem (given that the 2 cameras are basically the same)? Because both World and Eye0 use the same one, pointed at my eye, and the one looking forward is not used

papr 26 April, 2021, 10:11:23

Unfortunately, there is no good way for the software to differentiate between the two. The cameras should be listed twice if Manual camera selection is enabled. In the world window, try selecting the cameras until the correct one is selected.

user-75ca9b 26 April, 2021, 10:26:09

That would not be a problem, but in my case both world and eye0, or eye0 and eye1, open only 1 of them. Basically they take only one of the 2 I have. I have installed the correct drivers on both (I've checked to be sure)

papr 26 April, 2021, 10:32:16

It is possible that the cameras change order in the listing. I know this is far from optimal šŸ˜•

user-75ca9b 26 April, 2021, 10:35:42

I understand, but I've tried everything. Anyway, the front camera is only used for the eye tracking feature, right? I am more interested in a pupillometer

user-7daa32 27 April, 2021, 00:54:39

Please, I have trouble with the gaze not hitting the AOI that the viewer is looking at. For example, let's say we have the letters A to Z. Each letter is an AOI. Each AOI is a letter surrounded by a 5" Ɨ 5" black square outline and, around that, a 1.2" Ɨ 1.2" band of whitespace. I guess you get it!

All are placed side by side in alphabetical order. That way, the whitespace between two 5" Ɨ 5" squares is 2.4 inches.

The question I want to ask is this: how big or small can an AOI be for it to be hit, and not a neighbouring AOI or the whitespace? What size can account for the tracker error?

user-7daa32 27 April, 2021, 04:26:44

Capture turns off immediately after opening... I don't know what to do, please help.

papr 27 April, 2021, 07:08:53

Correct, you do not need a scene camera if you want to estimate pupil size.

user-75ca9b 27 April, 2021, 14:55:06

ok, thank you for the help.

papr 27 April, 2021, 07:50:28

You can calculate the stimulus size s using your calibration accuracy E and the target distance d.

Chat image

user-7daa32 27 April, 2021, 07:56:08

Thank you. You mean the size of the AOI?

papr 27 April, 2021, 07:57:10

Whatever you want to identify reliably. šŸ™‚ In your case, target stimulus = AOI = letter.

user-7daa32 27 April, 2021, 08:00:43

Thanks again. I will try it out

user-7daa32 27 April, 2021, 08:04:00

Sorry, is d the size of the whole visual stimulus, like the size of the plasma screen?

papr 27 April, 2021, 08:04:23

No, d is the distance from the subject to the aoi

papr 27 April, 2021, 08:04:48

The further the subject sits away from the stimulus, the bigger it needs to be.

user-7daa32 27 April, 2021, 08:45:12

I am asking a lot of questions... Is the angle twice E? Or does the 2 in the formula already account for that?

papr 27 April, 2021, 08:53:21

E is the accuracy as reported by the Accuracy Visualizer plugin. You need to multiply by 2, else the result is the AOI radius, not its diameter.

user-7daa32 27 April, 2021, 15:36:29

Thanks. I am currently considering having the stimulus on the screen instead of on the wall. The AOI size we are talking about from that calculation, is it the size of each feature on the stimulus, or of the surface to be created using the AprilTags in Player? I don't like creating surfaces in Capture.

papr 27 April, 2021, 16:17:19

The equation above does work for both use cases, stimulus on wall and stimulus on screen.

The equation above gives you the size of the gazed-at stimulus that you want to identify. For example, let's say you present a simple sentence to the subject:

Hello, nice to meet you.

Now it depends on your research question: (a) are you interested in which single letter the subject is looking at, or (b) are you interested in which word the subject is looking at?

For (a), the equation gives you the size of each letter. For (b), the equation gives you the size of each word. In case (b), the letters can be smaller because you don't care whether the subject is looking at the H or the o in Hello. You are only interested in whether they are looking at Hello or at nice.

Regarding the surface setup, there are two possible setups:
1. One surface including all visible stimuli (easy to set up, more difficult to analyse)
2. One surface for each stimulus (more difficult to set up, easier to analyse; see the analysis sketch below)
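
For setup 2, Pupil Player writes one gaze export per surface. Here is a minimal analysis sketch, assuming the file and column names match recent Pupil Player surface exports; the surface name "Hello" and the export path are placeholders:

```python
import pandas as pd

# Placeholder path: Pupil Player writes one CSV per surface into
# <recording>/exports/<n>/surfaces/ (adjust to your recording).
df = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Hello.csv")

# The on_surf column may be parsed as strings, so normalise it first.
on_surf = df["on_surf"].astype(str).str.lower() == "true"
on_aoi = df[on_surf & (df["confidence"] >= 0.6)]

# Fraction of gaze samples that actually hit this AOI with decent confidence.
print(f"{len(on_aoi) / len(df):.1%} of gaze samples landed on the 'Hello' surface")
```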

user-7daa32 27 April, 2021, 15:37:43

There is the actual physical stimulus size, and there is the AOI size placed on the stimulus using the AprilTags.

user-7daa32 27 April, 2021, 16:26:33

Thank you. Is it the size of the tangible word on the screen or wall before data collection, or the size of the word in Capture or Player created by the AprilTags? I am talking about creating a surface.

Are you referring to the size of the created surface?

Or to the size of the individual representations that form part of the stimulus?

papr 28 April, 2021, 08:23:26

The equation refers to the real-world size of the stimulus. So assuming a distance of 1 m = 100 cm (d = 100), and an angular error of 1 degree (E = 1), the stimulus needs to be ~3.5 cm in diameter.

The surface creation is independent of that equation. The surface should simply cover the stimulus.

"I have trouble with the gaze not hitting the AOI being looked upon" - this sounded like the real-world stimulus size was too small, which is why I posted the equation. šŸ™‚
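
For reference, a tiny sketch of that back-of-the-envelope calculation. It assumes the formula from the screenshot is s = 2 * d * tan(E), which reproduces the ~3.5 cm example above:

```python
import math

def min_aoi_size(distance: float, accuracy_deg: float) -> float:
    """Minimum AOI diameter (same unit as distance) for a viewing
    distance d and an angular gaze accuracy E in degrees."""
    return 2 * distance * math.tan(math.radians(accuracy_deg))

# papr's example: d = 100 cm, E = 1 degree -> ~3.5 cm
print(round(min_aoi_size(100, 1.0), 2))  # 3.49
```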

user-a44f8d 28 April, 2021, 08:25:15

Hello, could anybody tell me what kind of connector is used to connect the eye camera to the USB cable coming out of the Pupil Core frame?

papr 28 April, 2021, 08:26:58

You might be referring to the JST connector posted in this message: https://discord.com/channels/285728493612957698/285728493612957698/701705600979042336

user-a44f8d 28 April, 2021, 08:28:25

Thanks!

user-7daa32 28 April, 2021, 08:27:00

Thanks. I guess the angular error is already known based on the tracker used, right?

papr 28 April, 2021, 08:28:38

It is known after the calibration and validation (https://docs.pupil-labs.com/core/best-practices/#calibrating). I recommend assuming a worst-case angular error that defines the size of the stimulus. Should the validation yield a larger angular error, the subject needs to calibrate again.

user-7daa32 28 April, 2021, 08:30:07

I refer to the whole setup as the stimulus, and I refer to a relevant feature of the stimulus where a surface will be created as the AOI. For example, the whole A-Z is the stimulus; A is an AOI if a surface will eventually be created on A.

So is "your stimulus" the whole of the area of interest?

papr 29 April, 2021, 08:12:22

OK, thanks for defining the terms. The stimulus I was referring to is a single letter, a single AOI in your case. So to correct myself:

"I have trouble with the gaze not hitting the AOI being looked upon" - it sounds like the real-world size of your AOI is too small.

user-7daa32 28 April, 2021, 08:31:36

After the calibration and validation, do I change the size based on the error?

What if the size is fixed and can't be changed?

papr 29 April, 2021, 08:14:15

I recommend assuming a worst-case angular error that defines the size of the stimulus. Should the validation yield a larger angular error, the subject needs to calibrate again. This way you do not need to vary d. If the error reported after calibration and validation is smaller than assumed, you are fine to continue.

user-7daa32 28 April, 2021, 08:33:23

Maybe if it is fixed, then I will have to vary d, right?

Would the calibration remain valid?

user-10fa94 28 April, 2021, 08:36:34

Another question I came across: I have noticed that in the pupil and gaze exports, some frames are occasionally skipped (e.g. I have an instance where I jump from world index 5460 to 5462, and a similar occurrence happens ~10 times in my 11-minute sample collection). The world frames are correctly captured (e.g. frame 5461 exists in my frames export, but no pupil data is associated with that frame). Is this expected behavior, or could something be going wrong with my system?

papr 29 April, 2021, 08:09:23

Pupil data is associated by time. Can you check the exported world_timestamps.csv for the frames 5460–5462 and check how much time lies between them?
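
A quick way to check those frame intervals (a sketch; the export path is a placeholder, and the exact column header of world_timestamps.csv can differ between Pupil Player versions, so the first column is used):

```python
import numpy as np
import pandas as pd

# Placeholder path: adjust to your Pupil Player export folder.
ts = pd.read_csv("exports/000/world_timestamps.csv").iloc[:, 0].to_numpy()

# Intervals between consecutive scene frames around the "missing" index 5461.
dt = np.diff(ts)
print(dt[5459:5462])

# Flag unusually long gaps, e.g. anything > 1.5x the median frame interval.
long_gaps = np.where(dt > 1.5 * np.median(dt))[0]
print("long gaps after world indices:", long_gaps)
```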

user-10fa94 29 April, 2021, 18:51:26

They all have similar times between them, ~0.0335 s. For my recordings, I include the calibration in them as per guidance I saw in the documentation. I notice that when the calibration is taking place, gray frames appear in the world camera, during which the world timestamps continue to be collected, but pupil data stops recording for a few frames.

user-f122fa 29 April, 2021, 09:23:48

Hi, I have issues installing the dependencies: "Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: 'c:\program files\python36\lib\site-packages\pip-18.1.dist-info\entry_points.txt' Consider using the --user option or check the permissions." Is there a solution?

papr 29 April, 2021, 09:37:02

Generally, on Windows, it is recommended to run python -m pip ... instead of pip ... directly.

papr 29 April, 2021, 09:35:58

Which command were you executing?

user-f122fa 29 April, 2021, 09:39:46

Chat image

papr 29 April, 2021, 09:43:41

Just out of interest, what are your reasons for running from source instead of from the bundle? Installing the dependencies on Windows is quite tricky, and you can do 99% of customizations via plugins, i.e. running from source is often not necessary.

papr 29 April, 2021, 09:42:21

Yeah, please run python -m pip install -r requirements.txt

user-f122fa 29 April, 2021, 09:45:16

I want to get data from pupil core for my app

papr 29 April, 2021, 09:57:13

Good news, in this case you do not need to run the application from source. You can download a bundled release here: https://github.com/pupil-labs/pupil/releases/latest#user-content-downloads

user-f122fa 29 April, 2021, 09:45:33

gaze data

user-7daa32 29 April, 2021, 15:50:49

@papr thanks

wrp 30 April, 2021, 05:29:32

šŸ“£ Friendly reminder to Pupil Core users & developers about some useful Github repositories:

  • Pupil Community: This repository contains community contributed projects, forks, plugins, scripts using Pupil Core. If you have made anything and would like it to be added to this repo, please make a pull request directly to this repo. šŸ™ https://github.com/pupil-labs/pupil-community
  • Pupil Tutorials: This repository contains a series of Jupyter notebooks that demonstrate how to visualize and analyze Pupil Labs eye tracking data. https://github.com/pupil-labs/pupil-tutorials/
  • Pupil Helpers: This repository contains utility scripts and concise code examples that demonstrate how to communicate and develop your own tools with Pupil. https://github.com/pupil-labs/pupil-helpers

user-af5385 30 April, 2021, 14:44:08

Hey, I'm seeing a pretty big offset of ~60 seconds between timestamps made on two different PCs on the same network, using LSL. One PC is running a custom LSL outlet which uses liblsl.local_clock() and the other has Pupil Capture with the LSL plugin running. Is this a known issue? I don't believe any interfering Pupil Capture plugins are running. Data is recorded with LabRecorder and parsed with Matlab's open_xdf.
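
One way to sanity-check the clock offset between the two machines is to query LSL's own clock-offset estimate from the receiving side. A sketch using pylsl; the stream name "pupil_capture" is an assumption (adjust it to whatever your streams are actually called):

```python
from pylsl import StreamInlet, local_clock, resolve_byprop

# Look for the gaze stream published by the Capture LSL plugin on the other PC.
streams = resolve_byprop("name", "pupil_capture", timeout=5.0)
if not streams:
    raise RuntimeError("No LSL stream named 'pupil_capture' found")

inlet = StreamInlet(streams[0])

# time_correction() estimates the offset between the remote stream clock and
# this machine's local_clock(); it should be in the millisecond range, not ~60 s.
offset = inlet.time_correction(timeout=5.0)
print(f"local_clock: {local_clock():.3f} s, estimated remote clock offset: {offset:.4f} s")
```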

papr 30 April, 2021, 15:07:45

Which version of the capture plugin are you using?

papr 30 April, 2021, 15:08:31

Also, what reference do you use to know that they are out of sync?

user-ecbbea 30 April, 2021, 17:39:56

OK, this is driving me insane: no matter how I position the left eye camera, I get pupil dropout for no discernible reason... I've played with the intensity / min / max settings and it doesn't seem to make a difference. What am I doing wrong?

papr 30 April, 2021, 17:42:02

Please try reducing the exposure time in the video source menu. It looks like it is quite dark on the right side of the eye1 video

user-ecbbea 30 April, 2021, 17:42:22

I'll give that a try - I've set them both to auto, which I thought would fix that

papr 30 April, 2021, 17:43:10

It is possible that one of the IR LEDs is no longer working, hence the uneven illumination

user-ecbbea 30 April, 2021, 17:43:42

I think that may be the case, as reducing the exposure seems to make it worse

user-10fa94 30 April, 2021, 17:43:59

Apologies for the second message; I was just wondering if you might have any further tips for debugging this issue. Thank you so much for your help.

papr 30 April, 2021, 17:51:34

Feel free to share the recording with [email removed]; we can have a look next week.

papr 30 April, 2021, 17:47:39

So, dropped scene images during calibration are expected, as the calibration calculation is blocking the main process.

user-ecbbea 30 April, 2021, 17:45:13

I took a picture, and both of the IR LEDs seem to be working... ignore the mocap marker

Chat image

user-ecbbea 30 April, 2021, 20:46:28

I've been playing around with this for hours, using different computers and trying all of the angles I can, and I keep getting the same problem... I'm not even really going into the extremes of my eyes. It looks like the IR LEDs are working, but I'm not 100% sure...

nmt 03 May, 2021, 08:25:02

Hi @user-ecbbea. Please contact info@pupil-labs.com in this regard, and include your original order id.

End of April archive