core


user-d1bcb6 01 April, 2022, 11:11:38

Thank you, Papr, I will explore if it is doable and workable for me.

I understand that sound recording has not been supported for a while now (since 2.0), but do you have any recommendation on which system to use if I intend to use a pre-2.0 version of Pupil Capture with sound recording?

papr 01 April, 2022, 13:33:36

I do not remember anything that worked reliably, unfortunately.

papr 01 April, 2022, 13:32:44

In that case, we will need a pupillometry-only version that does not require calibration. I will be able to work on that within the next 2-3 weeks.

user-cdb028 01 April, 2022, 15:57:15

Thanks a lot papr, it would be really helpful

user-ced35b 01 April, 2022, 18:08:19

Hi all, apologies for the repeat question, but is there any example of how to display screen markers on the client side? (I'm not sure how to implement the screen marker choreography in dual_display_choreography_client.) Thank you

papr 01 April, 2022, 19:09:54

This is the default screen marker choreography https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/calibration_choreography/screen_marker_plugin.py The original screen marker calib. will not know where in the scene camera coordinate system the markers will be displayed. Therefore, it needs to draw something that can be recognized in the scene video.

But keep in mind, with the dual-hmd calib you do not calibrate in the scene camera coordinate system but in the coordinate systems of the mirrored screens. Your client needs to send the target location in mirrored screen coordinates. As a result, you can use anything as a gaze target in your case! e.g. a cross.

user-ced35b 04 April, 2022, 16:59:27

Hello, how do I get the actual reference locations that I can implement in the client? I'm presenting visual targets (just a video I've created) that display on both monitors; however, I'm not sure how I get the exact target locations (where/which coordinates the target is presented at on screen).

user-ced35b 01 April, 2022, 20:47:30

ok great thank you! and is it correct to be displaying the visual stimulus via dual_display_choreography_client or can I just present my own stimulus using a separate software (as long as I include those target locations in the dual display client)? Also, do I use the surface tracker to get the actual reference locations to include in the client?

user-1abb3f 01 April, 2022, 20:49:30

Hi, we have a Pupil Invisible model and we want to check the pupil detection, but when we connect the eye tracker to the computer, the computer doesn't detect the eye tracker and there is nothing in Pupil Capture when I open it.

user-9429ba 04 April, 2022, 07:52:29

I have replied to this message over in the invisible channel 🙂

user-1abb3f 01 April, 2022, 20:50:37

Do you know how I can fix it? Thank you so much

user-64deb6 02 April, 2022, 01:23:48

Hi guys, when using the program filter_gaze_on_surface.py to get the gaze position in the surface coordinate system, each loop iteration returns more than one gaze position data point. Can I get only one point per loop?

user-6338f3 04 April, 2022, 07:56:50

Hello, I have a question for you about wearing Pupil Core. The wearers will be children between 8 and 14 years old. Is pupil detection possible even for children?

nmt 04 April, 2022, 10:55:43

Hi @user-6338f3 👋. Yes, our pupil detection algorithms work well for children in that age range and below (see my own research: https://pupil-labs.com/blog/research-digest/developmental-coordination/)

user-6338f3 04 April, 2022, 23:59:57

Thank you for answer. I'll check the references.

user-df0510 04 April, 2022, 13:34:40

Hello, I'm trying to record EEG using BrainVision Recorder and pupil size data using pupil capture and it seems to me that LSL is the only option.

If I have BrainVision and Pupil Capture running on the same PC, is there an easy way to sync both data sources? e.g. sending a trigger

papr 04 April, 2022, 13:37:53

You could run the LSL Recorder plugin for Pupil Capture to record the EEG data in Pupil Time, alongside the Pupil Core recording. The data would be stored as CSV. https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/pupil_capture_lsl_recorder.py

user-df0510 04 April, 2022, 13:45:50

Thanks for the quick reply! Is it possible to do it the other way around? i.e. send pupil size data to the Brainvision recorder? My EEG data has a higher sampling rate than what is possible with pupil capture and I don't want to downsample it to the pupil capture sampling rate.

edited: alternatively maybe I could just send a brainvision trigger to pupilcapture via lsl?

papr 04 April, 2022, 13:51:34

> just send a brainvision trigger to pupilcapture via lsl

If the BrainVision LSL app supports sending such triggers to LSL, that would work!

papr 04 April, 2022, 13:50:14

We have not tested the recorder plugin to its limits yet, but it would attempt to record the data at the sampling rate that the BrainVision LSL app provides. Storing eye tracking data in BrainVision time is most likely not doable. The idea would be to let LSL take care of the time sync for us/you.

papr 04 April, 2022, 17:01:39

The coordinates are the Pixel locations within the video, assuming that you are playing the video fullscreen

user-ced35b 04 April, 2022, 17:02:45

It doesn't need normalized coordinates?

papr 04 April, 2022, 17:12:24

Right! I forgot about that. In this case, you can easily convert the pixel locations like this:

norm_pos_x = pixel_x / total_width
norm_pos_y = 1.0 - (pixel_y / total_height)
user-ced35b 04 April, 2022, 17:12:42

Awesome thanks so much!

user-64c4d3 05 April, 2022, 09:04:05

Hello, I want to record saccade amplitude & saccade duration with Invisible or Core, but I cannot find these data in the export file... @nmt @user-9429ba

user-027014 05 April, 2022, 09:29:36

Hi guys, quick question here: is there any page with information on the exact labels and meta info of the LSL streams obtained from Pupil Labs? I'm wondering about the coordinate system being used for the xyz gazepoint data. When I train a network to predict pupil position based on the normalized xyz per eye (6 inputs mapped onto az-el output), I get something very similar to the mathematical conversion from the gazepoint_xyz data, except that it's flipped along the vertical axis (so left is right and right is left).

papr 05 April, 2022, 09:32:56

Hi, see https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture#data-format and https://docs.pupil-labs.com/core/terminology/#coordinate-system for reference. The xdf file contains labels for each channel.

user-027014 05 April, 2022, 09:33:19

ty!

user-9429ba 05 April, 2022, 10:09:13

Hi @user-64c4d3! We don't offer a turnkey solution for classifying saccades. Although, with the new fixation detection algorithm available in Pupil Cloud for Invisible recordings, the inter-fixation intervals (gaps) can implicitly be regarded as saccades. See documentation here: https://docs.pupil-labs.com/invisible/explainers/data-streams/#fixations

Otherwise there would be two options:
1. Manual annotation of saccades that occur between fixations.
2. Implement a saccade filter using the raw data exposed.

user-64c4d3 05 April, 2022, 10:45:32

Thank you for answer. I'll try it.

user-e29c16 05 April, 2022, 13:23:03

Does anyone have any idea how to plot norm_pos_x and norm_pos_y using Matlab?

papr 05 April, 2022, 13:24:34

You should be able to plot it in the same way as every other time series, too. For general information on plotting in Matlab see https://de.mathworks.com/help/matlab/ref/plot.html

user-e29c16 05 April, 2022, 13:36:27

Thank you for responding @papr. This is the gaze data that I got after extracting the .xdf file. I was wondering if you could tell me how to find the norm_pos_x and norm_pos_y in this in order for me to plot. Thank you kindly.

Chat image

papr 05 April, 2022, 13:55:25

Out of interest, can you tell us a bit more about what type of data you are looking to record and why you chose LSL and Matlab for your tooling?

papr 05 April, 2022, 13:47:58

Hey, unfortunately, I do not have access to a Matlab instance to provide a working example. Based on the xdf loader documentation, it returns a cell array with a struct for each stream. Read more about structs here https://de.mathworks.com/help/matlab/ref/struct.html

Check out the info field to verify that you are looking at the correct stream. Based on the number of channels in the time_series object (7 channels, 91 samples), you are likely looking at the fixation stream. It has the following channels:
1. fixation id
2. confidence
3. norm_pos_x
4. norm_pos_y
5. dispersion
6. duration
7. method
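For readers doing this inspection in Python rather than Matlab, here is a minimal sketch using the pyxdf package. The file name is illustrative, and the channel-label layout assumes the stream description was populated by the recording app:

```python
import pyxdf

# Load all streams from the XDF file (hypothetical file name).
streams, header = pyxdf.load_xdf("recording.xdf")

for stream in streams:
    info = stream["info"]
    name = info["name"][0]
    n_channels = int(info["channel_count"][0])
    n_samples = len(stream["time_series"])
    print(f"{name}: {n_channels} channels, {n_samples} samples")

    # Channel labels live in the stream description, if the app provided them.
    desc = info.get("desc")
    if desc and desc[0] and "channels" in desc[0]:
        labels = [ch["label"][0] for ch in desc[0]["channels"][0]["channel"]]
        print("  labels:", labels)
```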

user-e29c16 05 April, 2022, 13:53:57

Hi Papr but I don't have the following channels you mentioned above

Chat image

papr 05 April, 2022, 13:56:07

This info belongs to a different stream, the gaze data stream. Checkout the desc struct listed in that screenshot. It should contain the channel labels.

user-e29c16 05 April, 2022, 13:58:51

Yes, it contains 22 channel labels

papr 05 April, 2022, 13:59:53

Then these map to the 22 dimensions in the top-level time_series field 👍

user-af1bd9 05 April, 2022, 14:24:46

Excuse me, could I get some help with setting up the pupil core glasses? I am getting that the automatic driver installation is not working and to consider manually installing the drivers.

papr 05 April, 2022, 14:28:10

Hey, please follow steps 1-7 for manual driver installation https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md

user-e29c16 05 April, 2022, 14:26:05

Are you familiar with any visualization method to visualize the eye tracker data? Thank you.

papr 05 April, 2022, 14:30:21

Visualizations always depend on what you want to show/on what your research question is. We might be able to provide more concrete suggestions if you tell us about your project 🙂

user-af1bd9 05 April, 2022, 14:28:29

okay, thank you!

user-af1bd9 05 April, 2022, 14:28:46

also, is the installation the same for windows 11?

papr 05 April, 2022, 14:30:37

We have not tested the procedure on Windows 11 yet.

user-af1bd9 05 April, 2022, 14:33:49

thank you again

user-e29c16 05 April, 2022, 14:34:25

My project is all about testing the Pupil Core eye tracker, making it work with LSL, and recording the data to Matlab in order to extract it. The final part is the visualization: I have to visualize the gaze and fixation data.

papr 05 April, 2022, 14:53:12

Simple visualizations would include:
- gaze/fixation position over time (line plot with x=timestamps, y=norm_pos_x or norm_pos_y)
- spatial gaze/fixation distribution (scatter plot with x=norm_pos_x and y=norm_pos_y)
- histogram of fixation durations
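A minimal matplotlib sketch of these three plots. The dummy arrays are purely illustrative; in practice they would come from the recorded gaze and fixation data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative dummy data; replace with your recorded values.
gaze_ts = np.linspace(0, 60, 6000)                    # timestamps in seconds
gaze_x = 0.5 + 0.1 * np.sin(gaze_ts)                  # norm_pos_x
gaze_y = 0.5 + 0.1 * np.cos(gaze_ts)                  # norm_pos_y
fixation_durations = np.random.gamma(2.0, 150, 200)   # durations in ms

fig, axes = plt.subplots(1, 3, figsize=(15, 4))

# 1. Gaze position over time
axes[0].plot(gaze_ts, gaze_x, label="norm_pos_x")
axes[0].plot(gaze_ts, gaze_y, label="norm_pos_y")
axes[0].set(xlabel="time [s]", ylabel="normalized position")
axes[0].legend()

# 2. Spatial gaze distribution
axes[1].scatter(gaze_x, gaze_y, s=2, alpha=0.3)
axes[1].set(xlabel="norm_pos_x", ylabel="norm_pos_y", xlim=(0, 1), ylim=(0, 1))

# 3. Histogram of fixation durations
axes[2].hist(fixation_durations, bins=30)
axes[2].set(xlabel="fixation duration [ms]", ylabel="count")

plt.tight_layout()
plt.show()
```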

user-e29c16 05 April, 2022, 16:58:22

Thank you for the information. I was able to get the plot; now I just want it to overlap with the recording that I made with Pupil Capture, so I can compare the movements.

user-e29c16 05 April, 2022, 16:59:42

simply a way to overlay the plot on top of the video.

papr 05 April, 2022, 17:16:30

I do not think that can easily be done. The video is not transmitted via lsl but stored in Pupil time. To overlay your gaze data, you would need to somehow load the video into Matlab, transform the time from pupil to lsl time, and then overlay your plots. This exceeds my Matlab knowledge.
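For anyone attempting this in Python instead, a rough sketch of the idea with OpenCV. It assumes the world video and per-frame timestamps come from the native Pupil Capture recording and that the gaze samples are already on the same clock (the pupil-to-LSL time offset handling is exactly the part you would need to solve for an LSL-only workflow); file paths and the fixed output frame rate are illustrative:

```python
import cv2
import numpy as np
import pandas as pd

# Native recording files (world video + per-frame timestamps) and a Player export.
frame_ts = np.load("world_timestamps.npy")
gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze_ts = gaze["gaze_timestamp"].to_numpy()

cap = cv2.VideoCapture("world.mp4")
writer = None

for ts in frame_ts:
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter("world_with_gaze.mp4", fourcc, 30, (w, h))  # 30 fps is an example

    # Pick the gaze sample closest in time to this frame.
    i = np.argmin(np.abs(gaze_ts - ts))
    x = int(gaze["norm_pos_x"].iloc[i] * w)
    y = int((1.0 - gaze["norm_pos_y"].iloc[i]) * h)  # norm coords have their origin bottom-left
    cv2.circle(frame, (x, y), 20, (0, 0, 255), 3)
    writer.write(frame)

cap.release()
writer.release()
```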

user-b9005d 05 April, 2022, 18:26:45

Quick question: in the gaze_positions.csv export, what are the units of the gaze_timestamps?

papr 06 April, 2022, 07:18:34

It's seconds 🙂 See https://docs.pupil-labs.com/core/terminology/#timestamps for details

user-1abb3f 05 April, 2022, 19:35:40

Hi, we have a Pupil Invisible model and we want to check the pupil detection, but when we connect the eye tracker to the computer, the computer doesn't detect the eye tracker and there is nothing in Pupil Capture when I open it. Do you know how I can fix it? Thank you so much

papr 06 April, 2022, 07:18:06

Hi, the Pupil Invisible glasses are not designed to work with Pupil Capture. Please use the Pupil Invisible Companion device and app instead.

user-e18fd4 05 April, 2022, 20:07:02

Hello, can you please help me fix this issue

Chat image

user-ced35b 05 April, 2022, 20:27:07

Hello, I am consistently getting a pretty high angular accuracy number (2.379 degrees) during calibration. I have attached my calibration targets that the participant looks at through the mirror, as well as the reference locations I've implemented in the dual display choreography client. Do you think it's my visual target configuration? Should it span more of the display?

Chat image Chat image

user-e18fd4 05 April, 2022, 21:03:11

1. In Device Manager (System > Device Manager), View > Show Hidden Devices.
2. Expand the libUSBK Usb Devices, Cameras, and Imaging Devices categories.
3. For each Pupil Cam device (even hidden devices) click Uninstall, check the box agreeing to Delete the driver software for this device, and press OK.
4. Unplug the Pupil headset (if plugged in) and plug it back in.
5. Right click on pupil_capture.exe > Run as administrator. This should install drivers automatically.

papr 06 April, 2022, 07:12:39

Hi, it looks like you have reached out to us via email, too. I have responded to your question there.

user-e18fd4 05 April, 2022, 21:03:19

I tried this but it didn't work

papr 06 April, 2022, 07:16:53

I recommend increasing the spread of the dots such that they fill the subject's field of view. Since you are using the 2d gaze estimation method, you might want to add more points, e.g. a 3x3 grid. I would also add an inner "center" dot for each gaze target to ease fixating the target for your subjects.

user-ced35b 06 April, 2022, 16:23:27

Ok thanks i'll try that!

user-882915 06 April, 2022, 20:24:01

Hello, when I open my Pupil Player I don't see the fixation detector plugin. Any idea why?

papr 07 April, 2022, 07:21:12

Hi, you seem to have opened a Pupil Invisible recording in Pupil Player. Please note that its fixation detector is only designed for Pupil Core recordings and is therefore not available for Invisible recordings.

user-882915 06 April, 2022, 20:41:46

Chat image

user-a09f5d 07 April, 2022, 17:51:17

Could you please tell me what the visual angle range for the Pupil Core is, e.g. the visual angle between the furthest left and the furthest right (or any two opposing directions) at which the eye tracker can reliably measure pupil position? I apologize if this is reported online, I couldn't find it.

papr 07 April, 2022, 18:17:43

What is your definition of reliability here? The field of view is reported in the documentation as part of the camera intrinsics estimation. The area where gaze is estimated accurately depends on the calibration.

user-b9005d 07 April, 2022, 18:08:53

Is the pupil capture software not compatible with Mac OS Monterey? I've plugged in the headset and it doesn't seem to register in the software. We have another laptop on Catalina that it works fine on though

papr 07 April, 2022, 18:16:23

Monterey requires admin privileges to access the cameras :-/ see the release notes where you downloaded the app for details :)

user-b9005d 08 April, 2022, 13:48:59

I'm not seeing the pupil labs cameras under the privacy tab of my system preferences. I'm currently logged in as an admin. Is there a certain place where I'd have to give privileges to the cameras?

user-a09f5d 07 April, 2022, 18:27:22

Sorry, I am referring to only the range for 2d tracking by the eye camera and not the gaze estimate. To explain, I am using circle_3d_normal_[x,y,z] to calculate the angle between the position of the eye at time point 1 and the position of the eye at time point 2. I would like to know the max limit that this angle could possibly be (i.e. the largest measurable angle).

papr 07 April, 2022, 18:35:16

That depends on the actual eye camera position. The more oblique the angle onto the pupil the more difficult it becomes to detect. This issue increases in difficulty due to the distortion of the cornea. At some angles you can see the Pupil twice!

user-a09f5d 07 April, 2022, 19:36:37

That makes sense and is kinda what I had been assuming, but I was wondering if there is a max angle that can be assumed under optimal conditions/camera position, or if it's truly a case-by-case basis?

papr 07 April, 2022, 19:40:37

The 3d eye model allows +- 80 degrees. Gaze estimations outside of that range will result in low model confidence. Thinking about it, it actually also depends on the actual pupil size. So this is a very difficult question to answer. I also know that the cornea refracts light slightly different for everyone.

user-a09f5d 07 April, 2022, 19:52:33

Thanks Paplo. That alone is really helpful. Just knowing (or rather confirming) that the few values I have over 80 deg are likely problematic is useful. Interesting that you mention pupil size because I have found that can make a huge difference for my data. I have had excellent tracking for some people while the light is on, but as soon as I switch the light off (which I have to do for my experiment) and pupil size increases, the tracking becomes terrible. This happens a lot for people with contact lenses.

papr 07 April, 2022, 19:54:28

tracking might become worse due to lighting, not necessarily due to the size (causation vs correlation). Might be that the 2d pupil max parameter is set too low for the dark setting, too.

user-a09f5d 07 April, 2022, 20:24:36

I've actually found that the main cause is that when the pupil gets bigger the edges of pupil become much paler and harder to distinguish from the iris (even after playing with the min and max pupil intensity and pupil size max), hence the poorer tracking. This mostly only happens for people with contacts so I think it is to do with how the light reflects off/is transmitted through the contact lens as you get further from the center of the lens. So I think you are probably right about the lighting.

papr 07 April, 2022, 20:25:27

This is an interesting finding!

user-6338f3 08 April, 2022, 01:56:08

Hello, could I get the exact drawing dimensions for this Pupil Core component? I want to change the length of the component and recreate it, so I need the exact blueprint values.

Chat image

papr 08 April, 2022, 06:45:22

Hi, check out https://github.com/pupil-labs/pupil-geometry

papr 08 April, 2022, 13:50:40

This is the section that I was referring to before:

Due to new technical limitations, Pupil Capture and Pupil Service need to be started with administrator privileges to get access to the video camera feeds. To do that, copy the applications into your /Applications folder and run the corresponding command from the terminal:

Pupil Capture: `sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture`

https://github.com/pupil-labs/pupil/releases/latest

user-b9005d 08 April, 2022, 14:10:52

Ah thank you! I wasn't able to find this

user-fa42c9 10 April, 2022, 18:32:49

Hello! I have a quick question to confirm about the pupil gaze coordinates. The coordinates are in reference to the size of the world video frame, right? So if we have gaze coordinates with positive x and positive y values, those gaze coordinates correspond to a pixel in the image?

nmt 11 April, 2022, 15:21:15

Hi @user-fa42c9 👋. Gaze positions (norm_pos_x, norm_pos_y) are provided in normalised scene camera coordinates. Details available here: https://docs.pupil-labs.com/core/terminology/#coordinate-system
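As a small illustration of what that means in practice, this sketch converts normalized gaze coordinates back to pixel coordinates. The frame size is an example value; the actual scene camera resolution depends on your capture settings:

```python
# Convert normalized gaze coordinates to pixel coordinates in the scene image.
# Normalized coordinates have their origin in the bottom-left of the frame,
# while image pixel coordinates usually have their origin in the top-left.
frame_width, frame_height = 1280, 720  # example scene camera resolution

def norm_to_pixel(norm_pos_x: float, norm_pos_y: float) -> tuple:
    pixel_x = int(norm_pos_x * frame_width)
    pixel_y = int((1.0 - norm_pos_y) * frame_height)  # flip the y axis
    return pixel_x, pixel_y

print(norm_to_pixel(0.5, 0.5))  # -> roughly the image center, (640, 360)
```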

user-98789c 11 April, 2022, 10:36:51

Hello,

I am recording pupil diameter in a cognitive task that involves learning and decision-making, so ambient light and target distance are fixed and only pupil's psychosensory response is important for me. As stated by Sebastiaan Mathot in his very new preprint (https://www.biorxiv.org/content/10.1101/2022.02.23.481628v1, page 11, "trials should ideally be slow-paced"), such responses ideally need 2 to 3 seconds to show up on pupil diameter.

I'm concerned that this would make my experiment so long.

Based on your experience, do you think this 2 to 3 seconds is really necessary?

user-20283e 11 April, 2022, 16:35:21

Hi again, apparently we still have the issue... IT asked me the following: "Please contact the vendor and have them send us the Anti-Virus exclusion list for their software so we can verify we are up to date." Could you provide me this info? Thanks!

papr 12 April, 2022, 08:04:41

The list has not changed. Be aware that the path contains the version number, i.e. depending on which version is installed the path might be slightly different.

user-cc6154 11 April, 2022, 17:21:40

Hi everyone, I am trying to load videos in Pupil Player but I can't. It was working fine until recently, but now when I drag the folder to Player the video doesn't work. It works fine on Pupil Cloud. Please help. Thank you.

nmt 12 April, 2022, 07:48:47

Hi @user-cc6154. Can you share the player log after trying to load an affected recording? Search on your machine, pupil_player_settings. Inside that folder is the player.log file. Feel free to post here or send to [email removed]

user-1bda7f 11 April, 2022, 21:14:51

Hi, I'm currently getting this issue when building Pupil Capture from source, is there any way to fix this?

Chat image

papr 12 April, 2022, 08:02:46

Hi, if I had to guess it looks like your ffmpeg libraries are not in your system path, i.e. pyav fails to find them. You can use this tool https://github.com/lucasg/Dependencies on the pupil\venv\lib\site-packages\av\_core.*.pyd file to find out what is missing.

user-99bb6b 12 April, 2022, 03:09:41

Does anyone know if it is possible to have the world camera used in 2 places at once, similar to this https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md, i.e. on in Pupil Capture and available as a camera device at the same time?

papr 12 April, 2022, 07:50:13

Hi, this is not possible to my knowledge.

user-027014 12 April, 2022, 12:56:43

@papr Hi Papr, got a small question for you again. When I compute the azimuth and elevation (using the method you previously described: "r = sqrt(x^2 + y^2 + z^2) el = acosd(y/r) az = atan2d(z,x)") from the xyz coordinates, I get rather large offsets of roughly 80-100 degrees in both directions. Do you know where these offsets come from? (So points where the participant is looking "straight ahead" are then at (90, 90ish) rather than (0, 0).)

papr 12 April, 2022, 13:07:45

if you pass [0, 0, 1] through this equation, you should find out what the "straight-ahead" el/az should be. If this value is not desirable for your use case, you can simply rotate the coordinate system by subtracting fixed values from el/az
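A quick numeric check of that suggestion, using the same equations as above (numpy is only used for the trigonometry; the rotation offsets are whatever this reference direction evaluates to):

```python
import numpy as np

def az_el(x, y, z):
    # Same equations as in the question: r = sqrt(x^2+y^2+z^2), el = acosd(y/r), az = atan2d(z, x)
    r = np.sqrt(x**2 + y**2 + z**2)
    el = np.degrees(np.arccos(y / r))
    az = np.degrees(np.arctan2(z, x))
    return az, el

# "Straight ahead" in this coordinate system is roughly [0, 0, 1]:
print(az_el(0.0, 0.0, 1.0))  # -> (90.0, 90.0), which explains the ~90 degree offsets

# Subtracting these reference values re-centres the data at (0, 0):
az, el = az_el(0.1, 0.0, 1.0)
print(az - 90.0, el - 90.0)
```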

user-027014 12 April, 2022, 13:40:31

Thank you. 🙂

user-a51aed 13 April, 2022, 17:07:30

~~@papr Everything was working well for me, but things suddenly started returning an error. I haven't changed any of the code.~~

Error calling git: "Command '['git', 'describe', '--tags', '--long']' returned non-zero exit status 128." 
 output: "b"fatal: unsafe repository ('/home/ubuntu/Eye/pupil' is owned by someone else)\nTo add an exception for this directory, call:\n\n\tgit config --global --add safe.directory /home/ubuntu/Eye/pupil\n""
Traceback (most recent call last):
  File "main.py", line 36, in <module>
    app_version = get_version()
  File "/home/ubuntu/Eye/pupil/pupil_src/shared_modules/version_utils.py", line 81, in get_version
    version_string = pupil_version_string()
  File "/home/ubuntu/Eye/pupil/pupil_src/shared_modules/version_utils.py", line 57, in pupil_version_string
    raise ValueError("Version Error")
ValueError: Version Error

~~I checked earlier recommendations. I downloaded from github (git clone) and built from source~~ Nevermind. Running the recommended command fixed it. Leaving this up though because I'm not sure how this happened, and I'm interested in knowing if this can be prevented

papr 13 April, 2022, 17:46:02

Hi, when you download the source code as a zip it does not contain the required git version information. You only get it by cloning the repository

user-99bb6b 14 April, 2022, 02:33:45

When you're recording in Capture or watching a recording in Player, what data is used to make the dot indicator?

papr 14 April, 2022, 05:21:20

It displays gaze with a minimum confidence of 0.6 (by default)

user-99bb6b 14 April, 2022, 05:29:07

Yes, but I guess I should have asked which gaze position value? gaze_point_3d? gaze_normal? How do I get the values that make up that dot? Sorry, I should have been more specific.

papr 14 April, 2022, 05:29:55

The norm_pos_* values 🙂

user-b74779 14 April, 2022, 05:49:02

Hi, I am recording my data with the notification system into a CSV. I would like to have the position of the center of the pupil (to compute eye gaze, saccades, etc.). What data should I use from the pupil notification? I see that there is "circle_3d center_x/y/z" data; I believe I should use that, yet I am wondering: is the value in pixels? mm?

user-b74779 14 April, 2022, 05:49:58

Also, as I have 2 Pupil Core devices (different generations), I noticed that the resolutions of the eye cameras are not the same. Will this impact how I should handle the center position?

user-b74779 14 April, 2022, 05:51:10

Thanks for your work and the quick help that you provide

nmt 14 April, 2022, 06:18:15

You can see a description of pupil and gaze data made available here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv circle_3d_center* is the centre of the pupil in 3D camera space (mm). For saccades, I'd recommend working with theta and phi (spherical coordinates) or their cartesian representation circle_3d_normal* Note that these values are enabled by the pye3d eye model, so make sure you fit the model well: https://docs.pupil-labs.com/core/best-practices/#pye3d-model. After that, resolution shouldn't affect how you handle the data
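As a hedged sketch of the kind of computation described here, the angle between consecutive circle_3d_normal vectors from pupil_positions.csv gives the eye rotation between samples, which can feed a simple velocity-based saccade filter. Column names are those of the Pupil Player export; the file path, confidence cutoff, and velocity threshold are illustrative:

```python
import numpy as np
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")

# Keep one eye and reasonably confident 3d samples only.
pupil = pupil[
    (pupil["eye_id"] == 0)
    & (pupil["confidence"] > 0.8)
    & (pupil["method"].str.contains("3d"))
]

normals = pupil[["circle_3d_normal_x", "circle_3d_normal_y", "circle_3d_normal_z"]].to_numpy()
ts = pupil["pupil_timestamp"].to_numpy()

# Angle between consecutive unit normals, in degrees, then angular velocity.
dots = np.clip(np.einsum("ij,ij->i", normals[:-1], normals[1:]), -1.0, 1.0)
angles = np.degrees(np.arccos(dots))
velocity = angles / np.diff(ts)  # deg/s

# Very naive saccade candidate detection: velocity above an illustrative threshold.
saccade_samples = velocity > 300.0
print(f"{saccade_samples.sum()} samples above threshold")
```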

user-b74779 15 April, 2022, 04:09:23

Hi, I tried my prototype Hololens 2 + Pupil Core device today and found that the pye3d model has some issues fitting well, as you can see in this video https://www.youtube.com/watch?v=7QSXckZkfhI. I don't understand why it won't fit; I did what is advised in the docs (moving the head around a fixed point), yet the model won't fit. Do you have any idea of what I could do?

user-f51d8a 14 April, 2022, 07:59:12

Hi, can anyone help me with how to resolve this problem? The left camera is not connected.

Chat image

papr 14 April, 2022, 08:00:33

Hi, please check if the small connector on the left eye camera is connected correctly.

user-f51d8a 14 April, 2022, 08:04:49

I have checked the small connector and now it's connected to the software. However there's a lag when recording.

papr 14 April, 2022, 08:46:31

Looks like the connection is quite loose. The lag is caused by the camera disconnecting.

user-b74779 15 April, 2022, 04:11:09

(If the 3d model is not fitted, I have trouble calibrating and I am receiving weird pupil sizes (23 mm). For now I am forced to use the 2d diameter, which is in pixels (not very good for my experiment).)

user-b74779 15 April, 2022, 04:14:23

(and as you can see the 2d model is working pretty well)

user-b74779 15 April, 2022, 04:22:28

I am wondering if it is not the camera position?

papr 15 April, 2022, 06:39:53

Hey, looks like the eye model is being fit 8 cm away from the eye camera, which is a lot. Also, the model looks very stable even though it is not frozen. This leads me to believe that the intrinsics for the eye cameras might be incorrect, specifically the focal length. What cameras are you using, and can you check the log to see which intrinsics are loaded?

user-ef3ca7 15 April, 2022, 16:07:22

@papr Hello, do I have the option of calibrating the Pupil Core for a subject with a visual impairment like age-related macular degeneration? Due to the scotoma, they are not able to fixate on the calibration points.

user-b74779 15 April, 2022, 17:36:45

I am using the cameras of the Pupil Core mounted on the Hololens as a prototype. I'm not sure how I can tune the focal length, and where can I see the log related to the intrinsics?

papr 15 April, 2022, 17:42:31

Could you please share an example recording with [email removed]

papr 15 April, 2022, 17:41:55

In that case, the intrinsics should be fine. You should find that information in the capture.log file

user-b74779 15 April, 2022, 17:46:28

Chat image Chat image

user-b74779 15 April, 2022, 17:47:06

here you can see how it is mounted, I am recording and sending the image through network by the way

papr 15 April, 2022, 19:32:59

Oh, right. Then this is it! I do not think that the pupil-video-backend is transmitting the intrinsics for the eye cameras correctly, but I would need to check.

user-b74779 15 April, 2022, 17:47:19

yeah of course I will send you a recording

user-b74779 15 April, 2022, 19:34:08

What kind of intrinsics are we talking about ? What kind of parameter should I tune in it ?

papr 15 April, 2022, 19:37:36

There is no need to tune anything. It is just a matter of sending the information. Specifically, the camera matrix (they are resolution dependent). These are the values used by Pupil Capture when the cameras are connected via USB: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L193-L210

The video backend allows sending custom matrices via https://github.com/Lifestohack/pupil-video-backend/blob/e185aee94818b6fddb472ffc7c3359980ff4436b/payload.py#L16

user-b74779 15 April, 2022, 19:34:34

Should I send you the code so you check ?

user-b74779 15 April, 2022, 19:38:53

Yes, so I should look in the backend code to send an appropriate matrix for this resolution?

papr 15 April, 2022, 19:38:59

correct

user-b74779 15 April, 2022, 19:39:07

Cause I guess it is sending the wrong matrix right now

user-b74779 15 April, 2022, 19:39:37

Okay, and where can I find/compute the right matrix for the right camera?

user-b74779 15 April, 2022, 19:39:52

Oh my bad, you sent me the link

user-b74779 15 April, 2022, 19:40:12

Thank you so much for your quick answer, you guys are helping me so much 😁

user-b74779 15 April, 2022, 19:55:48

That solves the issue, thank you SO much

user-b74779 15 April, 2022, 23:46:46

Hey, last question: I noticed that with these cameras (I have 2 Pupil Core devices) I sometimes get a weird artifact in the eye video (not only related to video streaming, because I also had the issue with a USB connection). Do you have an idea of what could cause that? (see the video):

papr 16 April, 2022, 08:41:42

How often does that happen? Restarting the camera/changing the frame rate or resolution should fix it. If this issue appears frequently please contact info@pupil-labs.com

user-b74779 15 April, 2022, 23:47:06

https://www.youtube.com/watch?v=ApCg0XbLofY

user-b74779 18 April, 2022, 16:19:12

50% of the time, I would say. Usually unplugging and replugging the USB fixes the issue, but I have to do this 50% of the time 😅

wrp 19 April, 2022, 07:42:08

As @papr said, please get in touch via email. It could be a cable issue as well

user-045271 19 April, 2022, 03:05:36

Unrelated to the above, but just a question about pupil measurements. For Eye 0 and Eye 1 we are getting pupil measurements that differ by about 1-2 mm between the eyes for every frame. Is this a form of parallax error / could it be due to the pupil cameras being at different distances from each eye? Would this affect the pupil diameter measurements? For example, if Eye 0's camera is 3 cm from one eye and Eye 1's camera is 6 cm from the other eye, is this expected to produce different measurements between the two pupillary diameters, and would it be expected that Eye 1 reads a smaller diameter as a result of the camera being farther away? Thanks!!

user-045271 19 April, 2022, 04:07:40

Apologies, I have just read the following: "Pupil size in pixels is dependent on the eye-camera to pupil distance and is not corrected for perspective. Pye3d, on the other hand, accounts for differences in eye-camera to pupil distances and corrects for perspective. It thus more accurately reflects pupil size and will be preferable for most users."

user-af5385 19 April, 2022, 12:41:49

Hey Pupil Labs! Are there any updates on this or is it still not the plan to support the Vive Pro 2 in the future?

papr 19 April, 2022, 12:42:52

Hi 🙂 The linked message is still up-to-date.

user-ef3ca7 19 April, 2022, 17:49:18

Hello. I am not sure if I got an answer to this question I asked a few days ago. Do I have the option of calibrating the Pupil Core for a subject with a visual impairment like age-related macular degeneration? Due to the scotoma, they are not able to fixate on the calibration points.

nmt 20 April, 2022, 08:11:55

Hi @user-ef3ca7 👋. Thanks for the question. During a Pupil Core calibration, the wearer is required to fixate a target (reference location), while our software records and matches pupil data to these locations. This is a necessary step to obtain gaze data, i.e. where the wearer is looking within Pupil Core's scene camera field of view.

However, it is possible to record eye movement and pupil size data without the calibration step. This is provided by default relative to the eye camera coordinate system (the calibration is used to map pupil data from eye camera to scene camera coordinates).

What is it exactly you intend to do? For example, you can work with the pupil data to calculate metrics like magnitude of eye movements. You can find a full overview of the data made available without calibration here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv

user-ee70bf 20 April, 2022, 10:34:42

Hi there, I have a question regarding saccades please. I am using Pupil Core to record eye movements and am interested in the saccade count (number of saccades) and saccade duration. I have read that I can consider the time between two fixations to be a saccade - but what about the blinks? How can I take those into account / out of the equation to calculate my saccades? Thanks in advance. Joanna

user-c94ac3 21 April, 2022, 01:05:27

Hi all, I am writing to ask if anyone has had experience 3d printing their own headsets. This researcher has uploaded .STL files https://www.lukaszkedziora.com/en/2019/02/22/eye-tracker-elements-ready-for-workshops/ , but I am not sure what filament to use, and the settings, such as the % of infill, etc. Also, I would like to know if there are other homebrewed solutions available for people without the budget to purchase the real thing. Thank you in advance!

user-6338f3 21 April, 2022, 08:13:59

Hello, is it possible to remove the green dot that displays Pupilcore data information on the screen? This is the checked part in the photo.

Chat image

papr 21 April, 2022, 08:19:37

Hi, yes. The dot is visualized by the default plugins Vis Circle and Vis Polyline. Turn them off to remove the visualization.

user-6338f3 21 April, 2022, 08:25:09

thank you

user-ee70bf 21 April, 2022, 10:10:14

Hello, sorry to bother you with another question - when I export data from pupil player, I do not seem to have a "gaze.csv" file, I just have a "gaze.positions" file (as well as pupil positions, fixations and blink). The Raw data exporter and gaze positions exporter are activated. Is there something I am missing ? Thanks in advance for your help.

papr 21 April, 2022, 10:33:53

That is the corresponding gaze file exported by pupil Player, yes. Note, that file names and format differ from Pupil cloud exports, if this is what you are comparing it with.

user-ee70bf 21 April, 2022, 10:42:13

Oh, ok, thank you ! I am missing the "id fixation" and "id blink" columns in the gaze.position file, that's why I thought it was weird.. Is that normal ?

papr 21 April, 2022, 11:39:24

Yes, gaze_positions.csv only contains gaze data. You need to load the data from the other files and assign fixation and blink ids based on their timestamps.
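A rough sketch of that idea in Python/pandas (the same logic is equally doable in R), assuming the standard Pupil Player export files and their documented columns; the complete, authoritative example is the tutorial papr links further below in this thread:

```python
import pandas as pd

# Pupil Player export files (the "000" export folder name is an example).
gaze = pd.read_csv("exports/000/gaze_positions.csv")
fixations = pd.read_csv("exports/000/fixations.csv")
blinks = pd.read_csv("exports/000/blinks.csv")

def assign_event_ids(gaze_ts, events, start_col, end_col):
    """For each gaze timestamp, return the id of the event it falls into (NaN otherwise)."""
    ids = pd.Series(float("nan"), index=gaze_ts.index)
    for _, event in events.iterrows():
        within = (gaze_ts >= event[start_col]) & (gaze_ts <= event[end_col])
        ids[within] = event["id"]
    return ids

# Fixation durations are exported in milliseconds, so derive an end timestamp.
fixations["end_timestamp"] = fixations["start_timestamp"] + fixations["duration"] / 1000.0

gaze["fixation_id"] = assign_event_ids(gaze["gaze_timestamp"], fixations, "start_timestamp", "end_timestamp")
gaze["blink_id"] = assign_event_ids(gaze["gaze_timestamp"], blinks, "start_timestamp", "end_timestamp")
```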

user-ee70bf 21 April, 2022, 13:02:16

Thanks very much for your answer. Okay, so I need to code that manually then I imagine !

papr 21 April, 2022, 13:05:02

Yes, this requires some programming. Manual annotation is not needed.

user-ee70bf 21 April, 2022, 13:06:13

I will get coding on R then ! Thank you. Do you know if any programs are already available on github for example ?

papr 21 April, 2022, 13:10:02

We have some python examples on how to load and visualize exported data https://github.com/pupil-labs/pupil-tutorials I might be able to create a short example for your use case this afternoon.

user-ee70bf 21 April, 2022, 14:41:12

Thanks again, it's very helpful.

papr 21 April, 2022, 14:46:07

You can find a preview of the new tutorial here https://nbviewer.org/github/pupil-labs/pupil-tutorials/blob/apply-blinks-fixations/10_apply_fixation_and_blink_detection_to_gaze.ipynb

You can follow its review here: https://github.com/pupil-labs/pupil-tutorials/pull/18 Feel free to contribute to the review with questions and notes!

user-5b9b5d 21 April, 2022, 15:51:15

Hi! I'm writing to you for advice on building my set-up. I am not sure that the way I created the connection is the most functional, so I'll describe my set-up as briefly as possible. My experiment, written in Matlab (on Ubuntu), is connected to Pupil Capture (on Windows) over TCP via ZMQ. Everything works perfectly, except for the timestamp, which is sometimes accurate to the last millisecond and sometimes not (we are on a LAN, and on Windows I have disabled any external processes like anti-virus) [I attach a picture of the function; Matlab is integrated with Python]. The function applies the mean-offset variable because I previously created a function that sends a request and receives a response 20 times between the two PCs to estimate the average delay.

- Could the UDP protocol be better to avoid delays? I am willing to lose packets given the large number of trials/conditions.
- Could using the LabStreamingLayer recorder in Matlab be a better method for synchronization? I have already installed the plugins in Pupil Capture, but I am not sure that using Matlab and having only eye tracking data is the right choice. If it is, given the plugin is already in Pupil Capture, would Matlab still need the ZMQ library to receive the annotations from the stimuli in Matlab and the timestamps from Pupil time?

Sorry for the unclear questions. Thanks for your attention!

user-5b9b5d 21 April, 2022, 15:54:17

Chat image

user-7780ee 21 April, 2022, 17:04:56

Can we modify the Pupil Core to be wireless, e.g. with any Bluetooth accessories?

wrp 22 April, 2022, 03:26:27

I would love to have truly untethered/wireless eye tracking. To make that possible you would need to put a battery and enough computing power in the headset. Doing so would add significant additional weight and more heat on the user's face/head. For both Pupil Core and Pupil Invisible we decided to go the route of having a cable to a computing device in order to keep the frames and glasses lightweight and to offload some of the engineering work to pre-existing consumer electronics - which enables us to keep costs reasonable and to focus efforts on advancing core eye tracking algos/pipelines.

While not "wireless" I would suggest looking into small form factor tablets running windows or ubuntu that would enable image capture from all cameras using Pupil Core and potentially real-time gaze estimation.

@user-7780ee what are your goals? What are you looking to achieve?

user-f93379 21 April, 2022, 19:24:58

Hello colleagues! Our team has a question about testing new mobile app designs. How is image scrolling synchronized with the observation process? How are heat maps built? Are there any ready-made solutions from the Pupil team to handle such tasks? Thank you 🤪👍

wrp 22 April, 2022, 05:35:23

Hi 👋

There are no fully turn-key solutions that we can offer you for purely screen-based work. That being said, we do have marker-based tracking algos built into the Pupil Core software that would enable you to track the mobile device screen and get gaze relative to the screen. This would also let you aggregate the gaze of multiple participants on the screen (but not aggregate onto dynamic content).

If you want to aggregate gaze of multiple respondents/subjects/participants relative to dynamic content on screen, then you would need to build your own tooling to capture screen content and then aggregating onto that screen content.

Are you looking to do only eye tracking on mobile or do you have other use cases in mind? I ask because there might be alternative solutions to wearable eye tracking that might be more purpose-built to your research/application area.

user-7780ee 22 April, 2022, 10:31:40

@wrp Thanks for your solutions! For instance, we want to have people do a navigation task to look for a specific object in a room. It will not be convenient for a person to hold a laptop while navigating freely. Do you have any solutions for this scenario?

wrp 22 April, 2022, 10:47:24

Pupil Invisible 👓 was designed exactly for this 😸 - if you already have Pupil Core, then I would suggest a small form factor tablet that can be carried in a pocket/bag.

user-7780ee 22 April, 2022, 10:49:24

😀 great, so the tablet can only run Windows/Ubuntu but not iOS, right?

wrp 22 April, 2022, 10:50:18

The tablet form factor computer would need to have enough compute power to run Pupil Capture (windows 10 or ubuntu - unfortunately no macos options I know of)

user-7780ee 22 April, 2022, 10:51:00

btw, do you have any recommendations on tablets?

user-9429ba 25 April, 2022, 14:45:50

Hi @user-7780ee you would need a laptop-style tablet/pc. The key specs are CPU and RAM. We suggest at least a recent generation Intel i7 CPU with 16GB of RAM

user-f93379 22 April, 2022, 22:42:25

Thanks for the reply! We are interested in tracking the view on dynamic content that is user-driven.

Can you tell me if there is a semi-ready solution, or perhaps a suggestion for a screen capture solution using LSL, maybe? Any and all help would be greatly appreciated as we make the decision to purchase two Core devices.

user-9429ba 25 April, 2022, 15:20:31

Hi @user-f93379 👋 I am not aware of any software solutions that are able to capture the mobile screen and send it to LSL. If you are purely interested in eye movement research on mobile phones, hawkeye might be better suited: https://www.usehawkeye.com/

user-868d47 23 April, 2022, 05:37:36

Howdy howdy all

user-868d47 23 April, 2022, 06:02:06

Just want to make sure this product (Pupil Core) will work for what I want, and hopefully play nice with the others. I am going to do an academic study where I am going to introduce (what I believe) is a new stressor. Participants will come in, do a survey on a computer, do one of the manipulations (control, experimental condition 1, experimental condition 2). I want to do a manipulation check on the manipulations (prior research already shows ET/HR/GSR response), but I need to do it because one of them is new and I need to test it. Then the individuals will do another survey. I think some of the questions might actually trigger a stress response, so would like to keep check on that. I am looking for an eye tracking (ET) hardware that can do pupillometry (I think the Pupil Core can do that: https://pupil-labs.com/products/core/); additionally, at the same time (and timed together so I can match up ET/ pupil response to text and images on the screen), I would like to collect: galvanic skin response (GSR), photoplethysmography (PPG) to capture heart rate (HR) and oxygen saturation (SpO2)(this equipment: https://verisense.net/verisense-pulse). I'm assuming I will need some sort of software for this so I was wondering what would work, for example Shimmer software (https://shimmersensing.com/wearable-sensor-products/software/)?

I'm a novice, so I might not even be asking the right questions. Will this work?

user-9429ba 26 April, 2022, 14:56:21

Hi @user-868d47 👋 Pupil Core can indeed be used for pupillometry research; it was designed with this use case in mind. Be aware however that reliably detecting changes in pupil size requires a controlled set-up, because pupil dilation is highly sensitive to lighting and other factors. You might be interested in iMotions and Teaergo, who provide turnkey solutions to integrate Pupil Core with other biosensors for multi-sensor fusion.

user-ace7a4 24 April, 2022, 08:28:09

Hi folks. I have a really hard time getting good confidence values with the Pupil Core eye tracker. Is there any advice on how to set up the glasses in a way that they capture the pupil even at difficult angles? I worked it out to the point that I can achieve confidence values > 0.6 on myself. However, when participants use the glasses, the set-up is way harder, takes a lot of time, and the values are not great at all. I even used the YouTube videos as a guide, but I can't seem to figure it out. Maybe I am missing one essential element when setting up the eye tracker? Would appreciate any help!

user-ace7a4 24 April, 2022, 08:37:49

Oh, and is there any way to convert the timestamps within the CSV files into relative time? So I can work with minutes and/or seconds instead of the absolute time values

papr 25 April, 2022, 06:55:50

the info.player.json file contains the start timestamp ("synced"). If you subtract it from all other timestamps you get relative timestamps in seconds. 👍
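A small sketch of that conversion in Python. The key name is taken from a typical info.player.json; double-check it against your own recording, and the file paths are examples:

```python
import json
import pandas as pd

# Read the recording start time (in Pupil time) from the recording folder.
with open("recording/info.player.json") as f:
    info = json.load(f)
start_ts = info["start_time_synced_s"]

# Make all exported timestamps relative to the recording start, in seconds and minutes.
gaze = pd.read_csv("recording/exports/000/gaze_positions.csv")
gaze["relative_time_s"] = gaze["gaze_timestamp"] - start_ts
gaze["relative_time_min"] = gaze["relative_time_s"] / 60.0
print(gaze[["gaze_timestamp", "relative_time_s", "relative_time_min"]].head())
```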

papr 25 April, 2022, 06:54:54

Hi, would it be possible for you to share a hard case, i.e. a case where you think that you have positioned the eye cameras well but the confidence is not as high as expected? This would allow us to give more concrete feedback.

user-ace7a4 25 April, 2022, 11:11:14

I will next time I am in the lab. Thank you so much, I know that my request was indeed vague

user-7b683e 26 April, 2022, 07:11:26

Hello, could I learn the FOV of the eye camera and the wavelength (in nanometers) of its infrared filter? Have a good day!

papr 26 April, 2022, 07:16:11

Hey, you can find the camera FOV values here https://docs.pupil-labs.com/core/software/pupil-capture/#camera-field-of-view-fov

user-7b683e 26 April, 2022, 07:27:52

So could you tell me the wavelength of the infrared filter in nanometers?

papr 26 April, 2022, 07:32:16

Yes, I have relayed the question to the corresponding team 👌

papr 26 April, 2022, 07:32:48

It's 850nm

user-7b683e 26 April, 2022, 07:57:43

Yeah, I got it. Nice to know. Thanks and see you again.

user-ee70bf 26 April, 2022, 08:51:38

Hi there. Can anybody think of a reason why blinks would not be detected on pupil player even though the eye0 video shows blinks (or rather... half-blinks, the participant half closes their eye each time) ? Fixations and pupil dilation are being detected and data is of good confidence. Thanks in advance for your help !

user-7b683e 26 April, 2022, 08:54:24

Because Pupil works with traditional methods to detect the pupil area. Because of this, if a pupil circle is not suitable for a polygon fit, the method marks this as a blink.

user-ee70bf 26 April, 2022, 08:55:25

Hi, thanks for your answer. So, if I understand correctly, as the participant is not "traditionally" blinking, Pupil Labs can't detect it?

user-7b683e 26 April, 2022, 09:00:42

I can suggest using machine learning methods for your case if you want to detect the eye opening size.

papr 26 April, 2022, 08:56:47

Not sure one can say it like that. I would rather say: Because the subject does not fully close their eyes, the confidence does not drop as much as the software would expect. By adjusting the thresholds, you should be able to compensate for that.

papr 26 April, 2022, 08:58:52

What @user-7b683e is referring to is that the pupil detector will always try to fit an ellipse instead of a polygon. If one had a polygon, one might be able to estimate by how much the pupil is occluded by the eye lid. But our blink detector does not work like this.

user-027014 26 April, 2022, 10:34:32

Hi guys, here in our labs we would like to 'unmatch' the data if still possible. However, we can no longer find documentation on the pupil matching algorithm. Do you still have that somewhere online? Gr. Jesse

papr 26 April, 2022, 10:37:14

The previous matching docs have moved to https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching

papr 26 April, 2022, 10:36:22

Hi, what do you mean by unmatch?

user-027014 26 April, 2022, 10:43:04

Well, currently when we look at the data it isn't exactly clear to us, for each idx, from which frame it is being computed (and if this is consistent for each channel). Ideally we would like the eye tracker to give us just one data point for every sample it gets for the left and the right eye at the set sampling rate. But it seems you get twice the sampling rate in data points, and there is some logic deciding which frame/img is being used to compute the parameters from (usually alternating, but sometimes repeating and duplicating, or filling in NaNs). What I would like in the end is 200 samples per second per eye to compute azimuth and elevation from the gaze_norm0/1_x/y/z, with no duplications and a matching confidence per eye.

papr 26 April, 2022, 10:48:17

With some coding, you can also load the gaze mapping config from the recording, create a custom matching, and pass the new pairs through the same gaze mapper that was used during the recording.

papr 26 April, 2022, 10:45:36

The gaze data includes a base_data field that identifies which pupil data was used to estimate which gaze sample

user-027014 26 April, 2022, 10:48:47

Ahhh! That's very handy. I believe base data is not available in LSL, however i think we can then use some xcorr to synch the offline basedata with our lsl data and then add another channel with the frame idx, right?

papr 26 April, 2022, 10:50:36

Right, base_data is not included in the LSL stream. What do you mean by offline basedata? The native Pupil Capture recording?

user-027014 26 April, 2022, 10:53:21

Well, I haven't used anything other than LSL data yet, but I believe if we drag and drop the recordings into Pupil Player, we can then extract the data to some delimited format and import that into our programming environment again, right? If we then find a channel that is in both LSL and the extracted file, we can sync the two together and superimpose the index of the frame as a 23rd channel or so on our LSL data.

papr 26 April, 2022, 10:58:05

Yes, the native recording (the one you drop onto Pupil Player) has some overlap with the LSL recorded data. So your approach would work but would leave you with the too-high-sampling-rate that you are trying to fix.

user-027014 26 April, 2022, 11:08:38

Cool. I'll have my student play around with it for a bit. Thanks for the help!

user-7e4e46 26 April, 2022, 11:38:16

Hi all, just had a (perhaps silly) couple of questions regarding connectivity between Pupil Capture and Pupil Core: is an active USB connection required at all times during recording, or is there local memory on the Core device which you upload via USB after finishing the recording? If the former is true, is there a wireless means of data streaming? Thanks.

papr 26 April, 2022, 11:41:36

Hi, without third-party software, Pupil Capture requires an active USB connection, yes. There are projects like https://github.com/Lifestohack/pupil-video-backend/ that can run on a raspberry PI connected to the headset and stream the video to a computer running Pupil Capture. But that has some limitations. If you want to be mobile I recommend having a look at our Invisible product: https://pupil-labs.com/products/invisible/

user-7e4e46 26 April, 2022, 11:42:27

Thanks a lot for your reply. That's great, I will take a look!

user-7e4e46 26 April, 2022, 11:54:24

Hi again, sorry I now have a further two questions: what is the performance of the Invisible in terms of accuracy and precision and is the Pupil Capture software free with the Pupil Core or is it a supplementary purchase?

papr 26 April, 2022, 11:57:31

See https://arxiv.org/abs/2009.00508 re Invisible performance. Pupil Capture is free and open source. Please be aware that Pupil Invisible runs with a different software, the Pupil Invisible Companion app, which is also free.

user-7e4e46 26 April, 2022, 12:01:02

Great, thanks again

user-2bd5c9 26 April, 2022, 14:00:48

What steps do I need to follow to do a post-hoc calibration in Pupil Player v.3.5.1?

papr 26 April, 2022, 14:02:09

Hi! Open the Gaze Data menu and select Post-hoc Calibration. Afterward, follow the instructions from our documentation https://docs.pupil-labs.com/core/software/pupil-player/#gaze-data-and-post-hoc-calibration

user-2bd5c9 26 April, 2022, 14:05:23

Okay, but I don't have the option Post-hoc Calibration, only Gaze Data from Recording. How do I get to that option?

papr 26 April, 2022, 14:07:25

This sounds like you have a Pupil Invisible recording opened. Is that correct? Pupil Player does not support post-hoc calibration for this type of recordings.

user-2bd5c9 26 April, 2022, 14:08:26

Yes that is right. So it is not possible for the Pupil Invisible?

papr 26 April, 2022, 14:09:21

Correct. Post-hoc calibration requires pupil detections to perform the gaze mapping. Pupil Invisible does not provide these.

papr 26 April, 2022, 14:10:10

@user-2bd5c9 what were you trying to achieve with the post-hoc calibration?

papr 26 April, 2022, 14:11:40

If you are trying to correct for a constant offset, you should have a look at this Pupil Player plugin https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433

user-2bd5c9 26 April, 2022, 14:11:51

I want to shift the calibration to the correct form, so it matches with the calibration we've done beforehand

user-2bd5c9 26 April, 2022, 14:14:14

Oh yes, correct for the offset is the purpose. Where should I put in this plugin? Do I have to download python?

papr 26 April, 2022, 14:15:23

No need to download Python, just the gaze_with_offset_correction.py file. See https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin for where to put it. Once installed correctly and having restarted Player, it should appear next to Gaze From Recording.

user-84a678 26 April, 2022, 14:22:13

Does anyone know how to combine the data from PsychoPy and Pupil Labs? I made it to the point where Pupil Labs is recording from the experiment in PsychoPy, but I still get the gaze position data separately from the timing of the stimuli.

papr 26 April, 2022, 14:28:00

If you have not seen it yet we have a new integration since the last PsychoPy release early this year https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html#pupil-labs-core

papr 26 April, 2022, 14:23:03

Hi, how are you starting the recording?

user-84a678 26 April, 2022, 14:28:06

there is an option in psychopy where you can define the recording start point in one of the routines

papr 26 April, 2022, 14:30:21

This sounds like you are using the integration already. To record gaze in psychopy time and coordinates, check out on how to use the IOHub https://psychopy.org/api/iohub/index.html

user-f93379 26 April, 2022, 14:34:58

Hello)

Can I ask you for photos and dimensions of the shipping box for the core kit?

user-84a678 26 April, 2022, 14:37:13

Amazing! Do I need to set it up with code? I'm not sure what the process requires

papr 26 April, 2022, 14:43:49

https://raw.githubusercontent.com/wiki/pupil-labs/pupil/media/images/eye-tracker-properties.png

In the Data tab, there should be an entry for Store HDF5 or something similar. This is required for iohub to store the eye tracking data in psychopy time+coordinates.

user-84a678 26 April, 2022, 14:48:32

I'm sorry, I'm confused. The picture shows how to activate Pupil Labs in the experiment; is the Data tab supposed to be there as well?

papr 26 April, 2022, 14:48:52

See the top right of the screenshot

user-84a678 26 April, 2022, 15:24:15

OK, and for the pupil lab exports - pupil_gazepositions, what is the time unit at pupil_timestamp ?

papr 26 April, 2022, 16:50:12

It's seconds. But the clock has an arbitrary start.

user-e7240e 26 April, 2022, 17:07:36

Hi, I recorded data using Pupil Invisible and now I am trying to analyze the data. I am not sure how I should start with the data analysis. I came across a video explaining how to use Pupil Player to create a heatmap, but when I try to create a surface it shows that no marker is found, and I am not sure how to define the markers.

user-e7240e 26 April, 2022, 17:07:54

I would appreciate it if someone could help me.

user-7daa32 26 April, 2022, 18:28:55

Hello everyone

I noticed that if the eye resolution is 192 x 192, the highest frame rate will be 200 Hz. At that setting it is not easy to fine-tune pupil detection in algorithm mode. The eye model (the blue circle) is blue but not centrally placed (I guess not an issue), since the pupil was well detected (red circle and dot). An eye resolution of 400 x 400 gives a max frame rate of 120 Hz and makes it easy to fine-tune pupil detection in algorithm mode.

I just want to be sure I am safe using either resolution. What do you think should influence my choice? We are currently using powerful computers.

Another question

2D and 3D gaze mapping pipelines

We have been using the 3D pipeline and have gotten good results, as long as we don't have to worry about accuracy errors larger than 1.5 degrees.

Since we are using a chin rest, do you think using the 2D pipeline would give the best results? It has no slippage compensation, but better gaze accuracy.

user-7b683e 27 April, 2022, 07:09:25

Hello,

The frame rate can be chosen according to the work to be done; some experiments require a high sampling rate. If you don't need 200 Hz, 400 x 400 will probably give the best results. In addition, you could apply a filter to smooth out jumpy data, such as a median, Savitzky-Golay, or Kolmogorov-Zurbenko filter (see the sketch after this message).

As for the pipelines, I can suggest that 2D should give more accuracy when using a chin rest. You could also adjust some settings, such as the distance between the headset and your surface.
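A sketch of the filtering idea mentioned above, applied to an exported gaze_positions.csv (the file path, window size, and polynomial order are illustrative assumptions, not recommendations):

```python
# Smooth exported gaze coordinates with a median or Savitzky-Golay filter.
import pandas as pd
from scipy.signal import medfilt, savgol_filter

gaze = pd.read_csv("exports/000/gaze_positions.csv")

# Median filter: robust against single-sample spikes
gaze["x_median"] = medfilt(gaze["norm_pos_x"], kernel_size=5)

# Savitzky-Golay filter: fits a low-order polynomial over a sliding window
gaze["x_savgol"] = savgol_filter(gaze["norm_pos_x"], window_length=11, polyorder=3)
```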

user-7daa32 28 April, 2022, 14:15:19

Thanks

user-7780ee 27 April, 2022, 10:19:03

Hi! We are going to use a projector screen to display stimuli to participants and collect their eye movements. Can we use the apriltag markers on the projector screen as well? And do we need to ask participants not to wear mascara or eyeshadow? Thanks!

user-7daa32 28 April, 2022, 14:15:09

Not that I am an expert, but you can use a picture of the markers instead of printing them.

I think you should not discriminate based on mascara. The criterion for inclusion should be data quality, not mascara itself. That said, mascara can deceive the pupil detection algorithm.

papr 27 April, 2022, 12:21:59

Hi!

"Can we use the apriltag markers on the projector screen as well?" If it is bright enough, it should work. I suggest giving it a try and checking the detection using Pupil Capture.

"And do we need to ask participants not to wear mascara or eyeshadow?" Mascara especially can cause issues with pupil detection. You can work around that, but you will get better results out of the box if your subjects do not wear mascara.

user-ee70bf 27 April, 2022, 12:05:55

Hi there. I have a question: when I load certain files in Pupil Player, this message pops up and Player crashes. Is that data not usable? Thanks in advance for your help!

Chat image

papr 27 April, 2022, 12:27:00

Hi, looks like an issue with Player's blink detection algorithm. Let me have a look.

user-ee70bf 27 April, 2022, 12:28:04

Thanks very much! I am available on Discord or via email [email removed] for any information!

papr 27 April, 2022, 13:54:47

Hey, I think I was able to find the cause and a possible workaround. Please follow these steps:
1. Stop Player if it is running
2. Go to the pupil_player_settings folder
3. Delete the user_settings_* files (this resets the application to its default state)
4. Download this Python script and save it to the plugins sub-folder https://gist.github.com/papr/c02bf229ac9a94e9fbee633cd53113db
5. Start Pupil Player (will start in default settings)
6. Instead of enabling Blink Detector, enable Blink Detector (Fixed)

user-7780ee 27 April, 2022, 14:12:28

Thanks! @papr šŸ˜€

user-ee70bf 28 April, 2022, 08:04:16

Thanks very much, I will try that!

papr 28 April, 2022, 09:06:15

Please let me know if the plugin works for you or if you continue encountering the issue.

user-ee70bf 28 April, 2022, 11:42:19

Hey @papr, I have tried the new plugin and Pupil Player crashed when I wanted to export the data (not before, which is progress). This is what appeared; I don't know if it's linked to the same issue?

user-ee70bf 28 April, 2022, 11:42:58

@papr

Chat image

papr 28 April, 2022, 12:31:16

I have updated the gist with a workaround for the second issue. šŸ‘ Please redownload the Python script, restart Player, and try again.

user-ee70bf 28 April, 2022, 11:45:51

So what I have noticed is: with the original blink detector, only certain data files make Pupil Player crash. However, with this new fixed blink detector, every file crashes.

user-ee70bf 28 April, 2022, 11:49:30

I have now switched back to the classic blink plugin but it is still crashing: 'Offline_Blink_Detection' object has no attribute 'timeline'. I would really appreciate a rapid response to this issue please, as I have all my data to process very rapidly (PhD student with a deadline haha). Thanks in advance

papr 28 April, 2022, 12:01:04

I am looking into it now. Edit: I am able to reproduce the issue.

user-85065c 28 April, 2022, 12:31:17

Hello! I'm new here and new to Pupil Labs. We are using the Pupil Labs VR/AR Binocular Add On (but in a non-VR application). We have a stereoscopic calibration sequence. It seems to work fine, but once it's complete, it shows all the gaze data shifted to the upper left corner of the Pupil Capture screen. Almost like there's a fixed offset between the world center and what the software thinks is the center. Suggestions?

papr 28 April, 2022, 12:32:25

Hi and welcome to the community! Are you using hmd-eyes for the calibration?

user-85065c 28 April, 2022, 12:33:05

Hello! I need to ask our programmer to be sure, but I don't believe we are. We have a custom calibration sequence in Unity.

papr 28 April, 2022, 12:42:10

hmd-eyes implements a Unity plugin that you might be using. Please let us know if you are using the plugin with the default calibration, and if not, how you modified the calibration sequence.

user-ee70bf 28 April, 2022, 13:14:03

Thanks so much, it's fixed!

user-ee70bf 28 April, 2022, 13:42:42

Sorry for spamming the forum... @papr, you had shared a tutorial for fixation and blink detection (https://nbviewer.org/github/pupil-labs/pupil-tutorials/blob/apply-blinks-fixations/10_apply_fixation_and_blink_detection_to_gaze.ipynb) but the link now shows 404 Not Found. Has the tutorial been deleted?

user-7780ee 28 April, 2022, 17:13:19

I use macOS but the eye cameras don't work. Where can I switch the camera?

papr 28 April, 2022, 18:02:47

Hey, just the eye cameras? If you don't see the eye windows, you can open them from the world window's general settings.

user-7780ee 29 April, 2022, 10:41:47

The world window has the same problem.

papr 02 May, 2022, 06:54:54

In this case, it sounds like you are running on macOS Monterey, correct? You will need to start the application with administrator rights:
sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture
See the release notes for details: https://github.com/pupil-labs/pupil/releases/tag/v3.5

user-04dd6f 30 April, 2022, 10:21:39

Hi, I have a question regarding the accuracy of the Pupil Core. Below is a screenshot in which the participant is looking at the screen. The red dot is the gaze point detected by the Pupil Core. Suppose the accuracy is 0.06°; does that mean the actual gaze point will be within the circle (dashed line) with a radius of 0.06°? I would appreciate it if you could help clarify the question.

Chat image

papr 02 May, 2022, 07:14:20

Accuracy is approximated as the average angular error across the scene camera's field of view. That means that the gaze estimate will be on the dashed line on average, but could be outside or within the circle, too. In practice, the estimate is often biased towards a specific direction and is not distributed normally around the actual gaze point.
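As a back-of-the-envelope illustration of what an angular error means on a screen (the 60 cm viewing distance and 1-degree error are assumed example values; the same formula applies to any figure, including 0.06°):

```python
# Convert an angular gaze error into an on-screen distance:
#   error_on_screen = viewing_distance * tan(angular_error)
import math

viewing_distance_cm = 60.0   # assumed eye-to-screen distance
angular_error_deg = 1.0      # assumed example accuracy figure

error_cm = viewing_distance_cm * math.tan(math.radians(angular_error_deg))
print(f"{angular_error_deg} deg at {viewing_distance_cm} cm ~ {error_cm:.2f} cm on screen")
# -> 1.0 deg at 60.0 cm ~ 1.05 cm on screen
```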

user-6b3d8c 30 April, 2022, 14:47:02

Hi! We're starting a project that requires doing pupillometry (we want to measure pupil diameter changes across conditions) with patients who cannot comply with instructions, so calibration is not possible. We're not doing gaze tracking, just pupillometry. Would this be doable under these conditions with the Core?

End of April archive