πŸ‘ core


user-b91cdf 01 December, 2021, 15:23:33

Hi guys, how can I get the camera intrinsics stored in the recordings? E.g. world.intrinsics - I can't open it properly.

Greetings,

Korbi

user-b91cdf 01 December, 2021, 15:24:11

cool thanks

user-b91cdf 01 December, 2021, 15:47:45

I have another question: I want to track driver's gaze behavior in real-world driving. Currently I am thinking of a single marker calibration under real conditions. I want to place the marker about ~85 m away and carry out the calibration. Now I want to calculate the necessary marker size the software needs to detect the marker. That's why I wanted the focal length of the world intrinsics. What are your suggestions about this approach? Is this even worth a try?

nmt 01 December, 2021, 15:57:29

When using the 3d pipeline, the calibration does extrapolate outside of the calibration area. Attempting to calibrate at a distance of 85 metres isn't recommended, nor is it needed. My advice would be to calibrate at the standard distance, e.g. 1–2.5 metres, and then instruct the wearer to fixate on objects in the distance such that you can validate that gaze is located on the objects. You could also use the natural features calibration, but this is more dependent on good communication between the wearer and operator. Read more about that here: https://docs.pupil-labs.com/core/software/pupil-capture/#natural-features-calibration-choreography

user-b91cdf 02 December, 2021, 08:28:26

thanks for your answer !

user-4f89e9 01 December, 2021, 18:17:45

Any known reason why the eye video won't show but it's still inputting data?

Chat image

papr 01 December, 2021, 18:18:53

Do I see it correctly that the gaze estimation is still working correctly?

user-4f89e9 01 December, 2021, 18:18:57

yes lol

papr 01 December, 2021, 18:19:54

then this is likely an openGL error

user-4f89e9 01 December, 2021, 18:19:56

I think it found my last recording's data, which is why it's working, but I'm about to have a different wearer

papr 01 December, 2021, 18:20:16

I suggest rebooting the machine.

user-4f89e9 01 December, 2021, 18:20:33

Alright, should I uninstall the device first?

papr 01 December, 2021, 18:20:42

No (if you are referring to the device driver)

user-4f89e9 01 December, 2021, 18:24:39

Rebooted, same issue. They're showing as hidden in Device Manager though.

Chat image

papr 01 December, 2021, 18:29:10

Do you see any other entries with the same name in the other categories, e.g. cameras or imaging devices?

user-4f89e9 01 December, 2021, 18:30:18

Yeah I saw them in a USB drop down so I uninstalled them and am restarting again

papr 01 December, 2021, 18:31:07

I do not think this was a driver issue but a display issue, to be honest.

user-4f89e9 01 December, 2021, 18:31:46

Chat image

user-4f89e9 01 December, 2021, 18:33:08

still no cam view

papr 01 December, 2021, 18:33:47

But only the eyes?

user-4f89e9 01 December, 2021, 18:33:52

correct

user-4f89e9 01 December, 2021, 18:34:10

and it is still tracking correctly (mostly)

papr 01 December, 2021, 18:34:40

The terminal is not showing anything unexpected? No error messages that could be related?

user-4f89e9 01 December, 2021, 18:34:56

I can shoot you my log file real fast

papr 01 December, 2021, 18:39:10

OK, nothing unusual at first glance. Could you please try restarting with default settings from the general settings menu?

user-4f89e9 01 December, 2021, 18:40:34

I hate that this worked

papr 01 December, 2021, 18:41:57

yeah, I feel you. Would be better to know what caused the issue in the first place.

user-4f89e9 01 December, 2021, 18:42:13

thanks as always though <:LULW:852663634835406920>

user-04dd6f 01 December, 2021, 18:43:32

@papr got a quick question here, is there any way to adjust the deviated fixation position (x-axis and y-axis) afterwards (maybe using the residual method)?

papr 01 December, 2021, 18:46:52

You need to adjust the gaze using a manual offset (either via post-hoc calibration or custom plugin [1]) and then recalculate fixations.

[1] https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433
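
A minimal offline sketch of the same idea, assuming an exported gaze_positions.csv and a manually determined offset in normalized coordinates (the file path and offset values below are placeholders; the linked plugin applies the offset inside Capture/Player so that fixations can then be recalculated there):

import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # placeholder export path
dx, dy = 0.02, -0.01  # placeholder offset in normalized scene coordinates
gaze["norm_pos_x"] += dx
gaze["norm_pos_y"] += dy
gaze.to_csv("gaze_positions_offset.csv", index=False)
# Note: fixations are derived from gaze, so they must be recalculated after shifting.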

user-04dd6f 01 December, 2021, 23:00:15

Many Thanks~

user-6e3d0f 02 December, 2021, 18:19:05

Hi, can someone help me with the formula to convert accuracy (degrees) to cm when all measured points are at the same distance? E.g., given an accuracy of 1.1° on a plane at a distance of 1.40 m, I want to calculate how many cm that corresponds to.

user-61e409 02 December, 2021, 21:23:48

Hello, I am getting the following error message:

world - [WARNING] video_capture.uvc_backend: Updating drivers, please wait...
powershell.exe -version 5 -Command "Start-Process 'C:\Program Files (x86)\Pupil-Labs\Pupil v3.5.1\Pupil Capture v3.5.1\PupilDrvInst.exe' -Wait -Verb runas -WorkingDirectory 'C:\Users\VISION~1\AppData\Local\Temp\tmpexxwja1m' -ArgumentList '--vid 1443 --pid 37426 --desc \"Pupil Cam1 ID2\" --vendor \"Pupil Labs\" --inst'"
world - [ERROR] video_capture.uvc_backend: An error was encountered during the automatic driver installation. Please consider installing them manually.
powershell.exe -version 5 -Command "Start-Process 'C:\Program Files (x86)\Pupil-Labs\Pupil v3.5.1\Pupil Capture v3.5.1\PupilDrvInst.exe' -Wait -Verb runas -WorkingDirectory 'C:\Users\VISION~1\AppData\Local\Temp\tmpfnw5lsyk' -ArgumentList '--vid 3141 --pid 25771 --desc \"Pupil Cam2 ID0\" --vendor \"Pupil Labs\" --inst'"
world - [ERROR] video_capture.uvc_backend: An error was encountered during the automatic driver installation. Please consider installing them manually.
world - [INFO] video_capture.uvc_backend: Done updating drivers!
world - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied.
world - [WARNING] camera_models: No camera intrinsics available for camera (disconnected) at resolution [1280, 720]!
world - [WARNING] camera_models: Loading dummy intrinsics, which might decrease accuracy!
world - [WARNING] camera_models: Consider selecting a different resolution, or running the Camera Instrinsics Estimation!

user-61e409 02 December, 2021, 21:26:17

I have followed the instructions here (https://docs.pupil-labs.com/core/software/pupil-capture/#video-source-selection) on troubleshooting drivers, with no resolution to the issue. Of note, however, I do not see the libUSBK Usb Devices or Imaging Devices categories in my Device Manager, even after showing hidden devices.

user-61e409 02 December, 2021, 21:27:03

Is there a process for manual installation that is documented somewhere?

papr 02 December, 2021, 21:40:30

Please follow steps 1-7 from these instructions https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md

user-61e409 02 December, 2021, 21:27:38

My system is detecting the camera, as they do show up in Device Manager.

user-61e409 02 December, 2021, 21:42:20

Thank you, papr. I will see if this can resolve the issue.

user-5ef6c0 03 December, 2021, 09:30:14

Hello. In my fixations.csv file I am getting multiple fixations with overlapping frame intervals. For example:

id:                     492    
start_timestamp:        20676.97    
start/end_frame_index:  7634-7639        
3dx/y:                  27.31673 / -22.378

id:                     585    
start_timestamp:        9382.697471    
start/end_frame_index:  7627-7642    
3dx/y:                  -1.753754619 / -4.730020008

I am not sure how to interpret this

papr 03 December, 2021, 09:32:39

The timestamps do not fit the frame indices. Could you please share this recording with data@pupil-labs.com such that we can inspect the cause?

user-5ef6c0 03 December, 2021, 09:38:29

@papr just send an email with a dropbox download link

papr 03 December, 2021, 09:39:57

We will come back to you via email early next week

user-5ef6c0 03 December, 2021, 09:40:20

thank you @papr

user-5ef6c0 03 December, 2021, 09:48:34

On a side note, does anyone have any opinions on the following parameters for the fixation detection algorithm: 50ms / 1500ms / 2.7deg? These values were used in Andersson et al.'s 2017 paper "One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms". Their methodology was to have 2 human coders manually classify fixations and then determine the algorithm values behind their classification. Here are a few excerpts:

For fixation durations, we visually identified a small subset of fixations with durations between 8 and 54 ms, and the nearest following fixation duration was 82 ms. We therefore selected a minimum fixation duration of 55 ms to exclude this subset of very short fixations with durations outside the frequently defined ranges (Manor & Gordon, 2003). This value is also supported by previous studies (Inhoff & Radach, 1998; see also Fig. 5.6 in Holmqvist et al., 2011).

The maximum fixation dispersion for the human coders was, after removing one extreme outlier at 27.9°, found to be 5.57°. This was clearly more than expected, especially for data of this level of accuracy and precision, and the distribution had no obvious point that represented a qualitative shift in the type of fixations identified. However, after 2.7°, 93.95 % of the fixations values have been covered and the tail of the distribution is visually parallel to the x-axis. Thus, any selected value between 2.7 and 5.57 would be equally arbitrary. So, as even a dispersion threshold of 2.7 would be considered a generous threshold, we decided to not go beyond this and simply set the maximum dispersion at 2.7°. Do note that we used the original Salvucci and Goldberg (2000) definition of dispersion (ymax − ymin + xmax − xmin) which is around twice as large as most other dispersion calculations (see p. 886, Blignaut, 2009). The dispersion calculation for the humans was identical to the one implemented in the evaluated IDT algorithm.

nmt 03 December, 2021, 13:36:11

Have a look at this message: https://discord.com/channels/285728493612957698/285728493612957698/836509594780041246

user-6e3d0f 03 December, 2021, 13:54:26

Very interesting! I always used: deviation = distance * tan(accuracy), and I'm super bad with physics :D. Just need to learn the difference, but I'll look into it.
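
A quick worked example of that formula for the numbers from the question above (1.1° at 1.40 m); this is plain trigonometry, nothing specific to Pupil Core:

import math

accuracy_deg = 1.1
distance_m = 1.40
deviation_cm = math.tan(math.radians(accuracy_deg)) * distance_m * 100
print(f"{deviation_cm:.2f} cm")  # ~2.69 cm on the plane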

user-c37dfd 03 December, 2021, 18:56:34

Hi. I am using pupil core and one of the cameras on one of my eye-trackers appears to no longer be detected by pupil capture. I am attaching images of the issue. Wondering if there are any suggestions for how to resolve the issue?

Chat image Chat image Chat image

papr 06 December, 2021, 08:36:46

Please contact info@pupil-labs.com in this regard

user-345d29 06 December, 2021, 11:12:29

Hi, how do I determine the gaze coordinates in an AOI region?

user-9429ba 06 December, 2021, 11:46:50

Hi @user-345d29 👋 Use the Surface Tracker plugin to define AOIs and obtain mapped gaze positions. Documentation here: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker

user-345d29 06 December, 2021, 11:49:31

thank you @user-9429ba

user-345d29 06 December, 2021, 12:15:55

And how do I display all of my Vis Polyline and fixation points on one map?

user-345d29 06 December, 2021, 12:16:18

Chat image

user-345d29 06 December, 2021, 12:19:32

this image is not what I want

Chat image

papr 06 December, 2021, 13:01:49

@user-345d29 Export the data and check out our tutorial here: https://github.com/pupil-labs/pupil-tutorials/blob/master/03_visualize_scan_path_on_surface.ipynb

user-3b5a61 07 December, 2021, 22:49:30

Hi all, I have a question regarding the position of the nose in relation to the screen. Is the exact middle of the x-axis of the video screen located where the nose of the person is? I'm finding a slight trend to "the left" and I'm not sure if it is an actual result or caused by the shift in the camera

user-3b5a61 10 December, 2021, 01:41:48

@papr , can you please help me?

user-ced35b 08 December, 2021, 18:17:00

Hello! I am looking to temporally synchronize EEG (recording using EGI NetAmps 300) and eye tracking (pupil core). What is the best method for this? I am currently looking at LSL - can you use LSL to sync EEG data that is recorded using EGI with the eye tracker or is the LSL used to actually record EEG data? Thank you!

papr 08 December, 2021, 20:01:20

Hi, LSL is a framework to send and receive time-synchronized data. Most manufacturers provide client software that publishes data from their devices in real time. Using the official recorder app, the data can be stored to disk.

For Pupil Core, there are two options: a relay plugin that publishes the Capture data and can be recorded with the official recorder, or a recorder plugin that receives LSL data from other sensors and stores it within the Pupil Labs recording.

https://github.com/labstreaminglayer/App-PupilLabs/tree/lsl_recorder/pupil_capture

user-ced35b 09 December, 2021, 01:20:25

Thank you, so once Capture records your EEG data, the EEG time series will be synchronized with the Pupil Capture time?

user-16c44f 08 December, 2021, 20:38:42

Hi All, I'm new to using the Core eye tracker and have been playing around with the Surface Tracker plugin. I see that it's important to specify the "real-world" width and height of surfaces, but I don't see a place to define units for these dimensions. Are there default units for surface height/width? Or is it more important that the relative dimensions are accurate (i.e., agnostic to units)? TIA for help on this matter!

papr 08 December, 2021, 20:40:21

It is actually not that important. It only has an influence on the heatmap resolution and the scaling of the mapped data.

papr 08 December, 2021, 20:40:46

The mapping happens into normalized coordinates (https://docs.pupil-labs.com/core/terminology/#surface-aoi-coordinate-system). From there it can be scaled with any units you like.

user-16c44f 08 December, 2021, 20:41:50

So if I export gaze/fixation data for those surfaces, changing the surface dimensions won't influence those coordinates? just the heatmap?

papr 08 December, 2021, 20:43:15

Correct. The gaze-on-surface data comes in two types of coordinates: normalized and scaled. The latter is just normalized * dimensions.
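
As a small illustration of "scaled = normalized * dimensions", assuming a surface registered as 21.0 x 29.7 (in whatever units you chose) and an example normalized gaze position from a gaze_positions_on_surface export:

surface_width, surface_height = 21.0, 29.7  # placeholder surface size in your chosen units
norm_x, norm_y = 0.25, 0.60                 # example normalized gaze-on-surface position
x_scaled = norm_x * surface_width           # same value the export reports as "scaled"
y_scaled = norm_y * surface_height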

user-16c44f 08 December, 2021, 20:43:51

Understood. Thanks!

papr 09 December, 2021, 07:10:08

Yes, Capture stores the incoming data in Pupil time.

user-ced35b 10 December, 2021, 01:01:02

It looks like our EGI netstation is too outdated to use the LSL. We are using NetAmps 300. We are considering different ways of synchronizing our system. Do you know if Pupil Core can receive digital markers (basic electrical signal) from an older system like our EGI NetAmps 300? Thank you again!

user-613971 09 December, 2021, 11:18:41

I have a question regarding surface tracking with Pupil Core. I used markers, so I managed to set a surface for the desired area, but the surface then doesn't move together with this area during the video. Does anyone have any idea what could be the problem?

papr 09 December, 2021, 13:32:54

In your shared recording, the issue is that the surface is defined in relation to the markers, not in relation to the screen content. The markers do not move, therefore the surface does not move either. If you wanted to track someone's face with that method, they would need to attach an apriltag marker to their head. I think the post-hoc face detection and gaze mapping approach is more suitable for your use case.

papr 09 December, 2021, 11:27:50

Could you make a recording with Capture and share it with [email removed]? We can have a look.

user-613971 09 December, 2021, 11:36:45

Sure, thank you so much! It should be in your mail. This is just a test recording, but we are trying to measure eye gaze toward the face as a whole and toward the eyes. So my other question would be whether we can also make surfaces more round? Or what would be the best way to track faces? I found the Face Mapper enrichment, but I think it is only for Pupil Invisible? Is anything similar possible for Pupil Core?

papr 09 December, 2021, 13:17:32

I think the easiest way would be to run a face detector on the recorded video and use the gaze_positions.csv export data to check if gaze falls onto any of the detected faces. See this tutorial on how to extract frames from an exported video https://github.com/pupil-labs/pupil-tutorials/blob/master/09_frame_identification.ipynb
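
A rough sketch of that approach, assuming an exported world video plus gaze_positions.csv and using an OpenCV Haar cascade as one possible face detector (the detector choice, paths, and the single-frame/single-sample pairing are placeholders; in practice you would match gaze to frames by timestamp as in the linked tutorial):

import cv2
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # placeholder path
cap = cv2.VideoCapture("exports/000/world.mp4")       # placeholder path
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

ok, frame = cap.read()  # first frame only, for illustration
height, width = frame.shape[:2]
faces = detector.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

sample = gaze.iloc[0]                     # would normally be matched to the frame by timestamp
gx = sample["norm_pos_x"] * width
gy = (1 - sample["norm_pos_y"]) * height  # norm_pos origin is bottom-left, pixels are top-left
on_face = any(x <= gx <= x + w and y <= gy <= y + h for (x, y, w, h) in faces)
print(on_face)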

user-8acfc8 09 December, 2021, 12:17:49

Hello, I would like to ask what interface your eye camera module uses, thanks.

papr 09 December, 2021, 13:15:13

Hi, we use the UVC protocol to access and control the cameras.

user-a594fc 09 December, 2021, 12:43:20

Hi! I'm currently calculating amplitudes and velocities for my gaze data. For this, I need the coordinates' size in degrees of visual angle. We recorded with the wide-angle lens and a resolution of 1280x720p. On the HP, the FOV is given as 99°x53°. This leads to non-square pixel sizes in degrees of visual angle (99/1280 = 0.077, while 53/720 = 0.0736). The same holds for the other resolutions on the HP. I assume that the sensor actually does have square pixels and that the FOV might be rounded or the like. Can somebody help me out here?

user-7b683e 09 December, 2021, 13:19:31

.

user-7b683e 09 December, 2021, 13:18:37

Hey Anna, calculating velocities is not related to the camera angles, since an eye model is already computed by Pupil Capture. In your case - I guess you are looking at VOR - you may only be interested in coordinates on a surface. By using the 3D gaze estimation, I think you can compute polar coordinates and then velocities.

user-a594fc 09 December, 2021, 13:24:04

Thank you for your answer. We are using the 2D model and I want to calculate the angular velocity.

papr 09 December, 2021, 13:25:39

@user-a594fc in this case, you can use the scene camera intrinsics to unproject 2d gaze into cyclopean 3d gaze directions (cyclopean, since the origin of the direction is the camera origin) and calculate angular differences between these 3d vectors

papr 09 December, 2021, 13:26:38

We use this method in our fixation detector https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L142-L150

papr 09 December, 2021, 13:27:28

This method is independent of where the stimulus is shown, i.e. you do not need to worry about your display's FOV
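
A minimal sketch of that unprojection step, assuming a radial (non-fisheye) scene camera model; the camera matrix, distortion coefficients, and pixel positions below are placeholders, and the real values should come from world.intrinsics:

import cv2
import numpy as np

K = np.array([[800.0, 0.0, 640.0],   # placeholder camera matrix
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.zeros(5)                      # placeholder distortion coefficients

gaze_px = np.array([[700.0, 300.0], [710.0, 305.0]])           # example gaze points in pixels
undist = cv2.undistortPoints(gaze_px.reshape(-1, 1, 2), K, D)  # normalized image coordinates
dirs = cv2.convertPointsToHomogeneous(undist).reshape(-1, 3)   # append z = 1 -> 3d directions
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)            # unit gaze direction vectors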

user-f9f006 10 December, 2021, 07:59:39

Hello, we would like to study the eye camera transmission method of this product. Could you please provide us with the product models of eye camera and IR LED. If possible, can you briefly describe the transmission method of the eye camera? Thanks a lotttttttttttttt!!!

papr 10 December, 2021, 08:31:09

Could you please elaborate on what you mean by transmission method?

papr 10 December, 2021, 08:30:47

Pupil Capture does not have built-in support for that. But you can build plugins to extend the software. https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin Also, it is likely that such a plugin has been developed before. Unfortunately, I am not aware of any concrete implementations right now.

user-ced35b 10 December, 2021, 19:47:52

Thank you!

papr 10 December, 2021, 08:32:09

Gaze is estimated in the scene camera's coordinate system. We do not shift it virtually to align it with the nose.

user-3b5a61 10 December, 2021, 08:32:52

Thank you!

user-b91cdf 10 December, 2021, 09:49:24

Hi folks,

As I mentioned in a previous post: I want to track gaze behavior while driving a car. I thought of a Single Marker Calibration at 1-2.5 m distance and a Natural Feature Validation at high distances, as suggested by Neil.

Unfortunately that results in a bad accuracy during validation (> ~3°), even though calibration shows high accuracy (~1.5°). (I'm using the 3D pipeline.)

In "Best practices" it says:

"If your stimuli are at a fixed distance to the participant, make sure to place calibration points only at this distance. If the distance of your stimuli varies, place calibration points at varying distances."

Can someone explain to me (mathematically) / provide links as to what errors can occur when calibration distance ≠ validation distance? Also, I noticed that the accuracy is often gaze-angle dependent.

I think I will do everything with the Natural Feature choreography.

Greetings, Cobe

nmt 10 December, 2021, 11:26:34

Hi @user-b91cdf. Firstly, did you cover all of the gaze angles of interest during the calibration? For example, one can expect validation accuracy to be gaze angle dependent if you only calibrated in the centre of the visual field. Secondly, what happens if you use the natural features for calibration and validation?

user-a594fc 10 December, 2021, 12:03:40

Thank you for your response. I am now doing as you said. Looking at https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py, I only find the 1080x720 res intrinsics for the radial lens. We are using the fisheye lens in the experiment, but with a reduced resolution of 1080x720 - but there are only intrinsics for the full 1920x1080. Should I use the values from the radial lens instead?

papr 10 December, 2021, 12:15:30

Are you using a Pupil Core camera or a custom camera?

user-8acfc8 10 December, 2021, 12:09:33

@papr We know that your product uses the OV6211 camera module. But I am trying to understand: 1. How are the IR LED and eye camera combined? They seem to be independent. 2. The OV6211 seems to use the MIPI protocol; how is it converted to a Type-C interface? 3. Could you please provide us with all the product models used from the eye camera to the USB?

papr 10 December, 2021, 12:16:38

I do not know how much of that information we disclose publicly. Please reach out to info@pupil-labs.com in this regard

user-a594fc 10 December, 2021, 12:16:15

Pupil Core. Original model from ~2017/8 😉 - might be even older.

papr 10 December, 2021, 12:17:29

In this case, use the radial model intrinsics for the 720p resolution

user-a594fc 10 December, 2021, 12:30:21

Thanks! That worked. Another question on the output of undistortPoint - these are the angles of the spherical coordinate system right? The values are (in my case) mostly between -0.5 and 0.5, not what I expected, I expected something in the range of 99°x53°. Ultimately I need the angular velocities in °, thus if I put subsequent samples in cosine-distance and multiply by the sample time difference (approx 240Hz), I expected to get typical velocities.

papr 10 December, 2021, 12:31:23

No, the output is in euclidean coordinates. You can use the cosine distance to calculate angular differences.

user-a594fc 10 December, 2021, 12:48:43

Unfortunately, I don't understand. In what unit is the output? If I want angular differences/velocities, I need to have them in visual angle, not in px or camera range. If it is in euclidean coordinates, I could just use the euclidean difference, but the values would be much too small (the max difference is 1 on the diagonal of the -0.5/0.5 box).

papr 10 December, 2021, 12:52:30

The resulting output is a 3d direction in mm. (0, 0, 1) is the direction straight forward out of the scene camera. See the definition of https://en.wikipedia.org/wiki/Cosine_similarity - A and B should be two 3d gaze vectors. The result is the angular difference between them. If you divide by the duration between the samples, you get the velocity in angles/s.
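
A small sketch of that last step, assuming two already-unprojected 3d gaze directions and an example sampling interval:

import numpy as np

def angular_diff_deg(a, b):
    # angular difference between two 3d gaze directions, via cosine similarity
    cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

v1 = np.array([0.05, -0.02, 1.0])  # example unprojected gaze directions
v2 = np.array([0.07, -0.01, 1.0])
dt = 1.0 / 240.0                   # example inter-sample duration (240 Hz)
velocity_deg_per_s = angular_diff_deg(v1, v2) / dt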

user-a594fc 10 December, 2021, 12:53:23

Ah I see, I only get a 2D vector back - I might be using an old Pupil Labs version, will investigate.

papr 10 December, 2021, 12:53:56

ah, then you are using the wrong function. use unprojectPoints instead

user-a594fc 10 December, 2021, 12:59:13

It's the right function, but a relatively old version (from 2 years ago or so); the cv2.convertPointsToHomogeneous call is not in it yet. Will try this.

papr 10 December, 2021, 12:59:40

Then you just need to update your opencv version

papr 10 December, 2021, 12:59:55

But this can easily be reimplemented

user-a594fc 10 December, 2021, 13:02:22

Sorry, misunderstanding: the line cv2.convertPointsToHomogeneous is not yet in the Pupil Labs function.

user-a594fc 10 December, 2021, 13:02:55

But it just adds the third dimension with 1's; the ranges of the resulting data points are still between -0.5 and 0.5.

user-a594fc 10 December, 2021, 13:05:43

ah I missed your comment about mm, this is exactly what I was looking for. Thus you implicitly assume a sphere with radius 1mm - gotcha

user-e637bd 10 December, 2021, 13:45:00

Hey all, I keep having one of my cameras disconnect, which I think is a loose wire but not sure. Is there a typical place that it would be loose or breaking from that I can try to repair?

papr 10 December, 2021, 14:16:06

Please contact info@pupil-labs.com in this regard.

user-b91cdf 12 December, 2021, 18:57:35

Sorry for the late reply. Yes, I think with the Single Marker calibration and the spiral pattern I covered most gaze angles. The natural feature calibration works quite well. During calibration I didn't allow head movement, to cover most gaze angles (used 9 points), and during validation I allowed head movement as the driver would normally look. Reported median angular accuracy: ~1.2°.

nmt 13 December, 2021, 10:07:03

I'm glad to hear that the natural features calibration is working well for you. This approach does require good communication between the operator and wearer, such that the wearer knows exactly which features to look at.

user-aa7c07 12 December, 2021, 22:45:52

With post-hoc pupil detection and manual calibration, it was mentioned previously that you can edit the 3D model to exclude eyelashes, etc. How is this possible in Pupil Player? I only see (under the 'post-hoc pupil detection' tab) "detect pupil positions from eye videos" and then details on eye0 and eye1, along with detection progress. Many thanks!

papr 13 December, 2021, 09:02:53

Hi, the eye lashes can be excluded from the 2d pupil detection by setting a region of interest for the detector. For that, start the post-hoc detection (the eye windows should open), pause the detection via the menu in the main Player window. Then, in each eye window, change from camera to ROI mode (general settings menu) and adjust the ROI accordingly. Afterward, go back to the main Player menu and restart the pupil detection using the corresponding button. This should apply the new ROI from the beginning of the recording.

nmt 13 December, 2021, 10:10:32

@user-aa7c07 for reference, this is what setting the ROI looks like: https://drive.google.com/file/d/1NRozA9i0SDMe_uQdjC2jIr000iPjqqVH/view?usp=sharing Be sure not to set the ROI so small that the pupil cannot be detected. This is easier to check in a real-time context, as you can ask the wearer to look around whilst you adjust the ROI.

user-aa7c07 14 December, 2021, 21:18:18

Thanks Neil! Excellent! Will give that a go 🙂 Matt

user-dee88a 13 December, 2021, 21:18:00

Hi all, I am using the offline surface exporter on Player v3.5.1 to export surface data from a ~15 minute recording made with Capture v3.4.0, and my surface_events.csv file does not make sense to me - while all of the other surface related data files have amounts of data and entries that make sense for our recording, the surface events file has only ~100 entries and is not consistent with the "on_surf" entries in the gaze position files even though the surfaces were visible for 41153/41277 frames. Is there a step I am forgetting in the export that could be causing this inconsistency? I have attached the events file. Thank you!

surface_events.csv

papr 13 December, 2021, 21:21:19

The file indicates when a surface becomes visible (enter) and when it is no longer visible (exit). It makes sense that you only have a few events if the surface was visible for the majority of the frames. The file does not indicate when gaze enters or exits the surface.

user-dee88a 13 December, 2021, 21:23:04

Thank you! I was not understanding the terminology used in the docs "image enter/image exit."

user-ced35b 13 December, 2021, 22:35:28

Do core glasses have TTL input and output?

papr 14 December, 2021, 07:22:52

No. You would need to write a custom plugin to add that functionality

user-027014 14 December, 2021, 14:45:39

I'm wondering what's going on in pupil_capture_lsl_relay.py. It seems LSL timestamps are overwritten with the Capture clock. But that would mean you would lose the ability to synchronize Pupil Labs with other LSL streams? Presently I'm recording data that requires synchronization across different streams, and when I look at the LSL timestamps, all streams start roughly in the same time range except for the Pupil Labs timestamps. They suggest that the eye data occurs millions of seconds before the other streams.

papr 14 December, 2021, 15:06:17

The plugin adjusts the Capture clock to the local LSL clock. This way we can use native Pupil Capture timestamps instead of generating new timestamps on sending the data out https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/pupil_capture_lsl_relay.py#L34-L36
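
Conceptually, the adjustment boils down to measuring a constant offset between the two clocks; the sketch below only illustrates that idea with a stand-in for Capture's clock and is not the plugin's actual code:

import time
import pylsl

def capture_clock():
    # stand-in for Pupil Capture's internal timebase ("Pupil time")
    return time.monotonic()

offset = pylsl.local_clock() - capture_clock()
# Once Capture's clock has been shifted by this offset, native Pupil timestamps
# are directly comparable to timestamps from other LSL streams.
pupil_timestamp = capture_clock()
timestamp_in_lsl_time = pupil_timestamp + offset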

user-027014 14 December, 2021, 14:47:02

I think the overwriting of these timestamps should simply be left out, right? The whole point of the LSL plug-in is to have data in LSL time rather than local Capture time.

papr 14 December, 2021, 15:06:59

If we did the latter, we would disregard the processing delay of the data and actively desync it.

user-a09f5d 14 December, 2021, 17:02:25

Hi, I have been working on some custom code to fit the pye3d model using this code provided by @papr last spring: https://nbviewer.jupyter.org/gist/papr/48e98f9b46f6a0794345bd4322bbb607

For this, could you please tell me how to find the focal_length and resolution that were used for the recording? At the moment my code uses the default values, but I'm not sure if these are correct for my recording. Many thanks.

papr 14 December, 2021, 17:03:32

The recording should include eye*.intrinsics files. The focal_length can be inferred from the camera matrix included in the files https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L252-L255

user-a09f5d 14 December, 2021, 17:08:07

I figured it would have something to do with the eye*.intrinsics files. So to get the focal length I would need to use

@property
def focal_length(self):
    fx = self.K[0, 0]
    fy = self.K[1, 1]
    return (fx + fy) / 2

to obtain it from the eye*.intrinsics file? Do you have any example code of how to do this (e.g. how to open the eye.intrinsics file)?

papr 14 December, 2021, 17:11:42

See this message https://discord.com/channels/285728493612957698/446977689690177536/913512054667747389

user-a09f5d 14 December, 2021, 17:14:30

Great! Thanks very much. I'll try that now.

user-a09f5d 14 December, 2021, 17:37:37

Hi @papr I have been able to open the eye intrinsics files (thanks for that) and extract the resolution, however I am not sure how to get the focal length using the function you provided. When using the function def focal_length(self) to get the value of the focal length, what value from the intrinsics file do I use for the input (i.e. where does 'self' come from)?

papr 14 December, 2021, 17:39:06

The linked function is a method of the CameraModel class. The self refers to the class instance to which it is bound. You do not need it. You just need the core logic: extracting the two numbers from the camera matrix K and averaging them.

user-a09f5d 14 December, 2021, 17:46:24

Hmm... okay! So to get the focal length for eye 1, what should I use for XXX? Would it be 'Eye1_intrinsics.K' (which is my file for eye 1)?

fx = XXX.K[0, 0]
fy = XXX.K[1, 1]
focal_length = (fx + fy) / 2

papr 14 December, 2021, 17:53:52

The loading code should return a Python dictionary. I suggest printing it to get an overview. Use this code to get K:

import numpy

loaded_dict = ...
resolution = str((192, 192)) # adjust accordingly
K = loaded_dict[resolution]["camera_matrix"]
fx = K[0][0]
fy = K[1][1]
focal_length = (fx + fy) / 2

papr 14 December, 2021, 18:45:05

@user-a09f5d see above as well

user-a09f5d 14 December, 2021, 18:01:08

Thanks @papr . So what I now have is:

with open(intrinsics_dir + "eye0.intrinsics", "rb") as fp:
    Eye0_intrinsics = msgpack.unpack(fp)

#Get resolution
Eye0_resolution = Eye0_intrinsics["resolution"]

#Get focal length
K = Eye0_intrinsics["resolution"]["camera_matrix"]
fx = K[0, 0]
fy = K[1, 1]
Eye0_FocalLength = (fx + fy) / 2

user-a09f5d 14 December, 2021, 18:02:39

I guess K = Eye0_intrinsics["resolution"]["camera_matrix"] could also be K = Eye0_intrinsics[Eye0_resolution]["camera_matrix"].

papr 14 December, 2021, 18:36:06

I do not think that would work. Please print Eye0_intrinsics and share it here. I might be wrong regarding the file format that I have in mind

user-a09f5d 14 December, 2021, 18:37:50

Yeah, I tried it and it indeed didn't work.

user-a09f5d 14 December, 2021, 18:37:56

{'version': 1, '(192, 192)': {'camera_matrix': [[282.976877, 0.0, 96.0], [0.0, 283.561467, 96.0], [0.0, 0.0, 1.0]], 'dist_coefs': [[0.0, 0.0, 0.0, 0.0, 0.0]], 'resolution': [192, 192], 'cam_type': 'radial'}}

user-a09f5d 14 December, 2021, 18:38:12

That is what I get when i print Eye0_intrinsics

papr 14 December, 2021, 18:38:32

K = Eye0_intrinsics["(192, 192)"]["camera_matrix"]

user-a09f5d 14 December, 2021, 18:43:13

Would '["(192, 192)"]' change if the resolution changed?

That works for

#Get resolution
Eye0_resolution = Eye0_intrinsics["(192, 192)"]["resolution"] 

But it threw an error for K = Eye0_intrinsics["(192, 192)"]["camera_matrix"]:

fx = K[0, 0]
TypeError: list indices must be integers or slices, not tuple

papr 14 December, 2021, 18:44:33

Please use K[0][0] instead of K[0, 0]. The latter assumes K to be a numpy array (special type) instead of a primitive Python list of lists.

user-a09f5d 14 December, 2021, 18:46:21

Ah yes sorry! I missed that bit. My bad.

user-a09f5d 14 December, 2021, 18:47:49

That seems to be working now. Wonderful! Thank you very much

user-a09f5d 14 December, 2021, 18:48:11

Regarding my other question, would '["(192, 192)"]' change if the resolution changed?

papr 14 December, 2021, 19:02:36

Yes

user-eeecc7 16 December, 2021, 03:02:54

Hi,

How does Pupil Player get the gaze position for a given frame from all the available positions? Is it a confidence-weighted average?

papr 16 December, 2021, 07:48:39

Gaze data is assigned to scene video frames based on their timestamps, i.e. multiple gaze positions are assigned to a single frame. If you have to choose a single gaze position per frame, I recommend choosing the datum that is temporally closest to the frame.
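
A small sketch of picking the temporally closest gaze datum per scene frame, assuming sorted arrays of gaze and world-frame timestamps (both in Pupil time; the example values are placeholders):

import numpy as np

def closest_gaze_indices(gaze_ts, frame_ts):
    # for each frame timestamp, return the index of the nearest gaze timestamp
    idx = np.searchsorted(gaze_ts, frame_ts)
    idx = np.clip(idx, 1, len(gaze_ts) - 1)
    left, right = gaze_ts[idx - 1], gaze_ts[idx]
    return np.where(frame_ts - left <= right - frame_ts, idx - 1, idx)

gaze_ts = np.array([0.00, 0.01, 0.02, 0.05])    # example gaze timestamps
frame_ts = np.array([0.004, 0.03])              # example world frame timestamps
print(closest_gaze_indices(gaze_ts, frame_ts))  # -> [0 2]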

user-eeecc7 16 December, 2021, 09:46:50

Is this how pupil player determines the gaze position for the display?

papr 16 December, 2021, 09:47:41

And yes.

papr 16 December, 2021, 09:47:33

Pupil Player displays all gaze points for the current frame (minus those that have insufficient confidence)

user-eeecc7 16 December, 2021, 09:48:26

Thank you. Will do this then. Also, what if the closest data point is low confidence? Do we drop that point or pick the next best one?

papr 16 December, 2021, 09:51:00

Pupil Player simply tries to render all points associated with the current frame. It will not render those with low confidence. If there is no gaze with sufficient confidence, no gaze will be rendered. There is no active selection process.

user-8a20ba 16 December, 2021, 13:28:29

@papr Hi all, I have the gaze points in "gaze_positions_on_surface_Surface 2.csv" and fixations in 'fixations_on_surface_Surface 2.csv', but I want to know which gaze points in "gaze_positions_on_surface_Surface 2.csv" belong to fixations. How can I do that? Or can I match the fixations to the gaze points in "gaze_positions_on_surface_Surface 2.csv"?

papr 16 December, 2021, 13:32:03

You can do that by looking up the fixations by their ids in the fixations.csv file. It contains the start time and the duration in ms. With the latter you can calculate the end timestamp. Then you can go through the gaze_positions_on_surface_Surface.csv and use the gaze_timestamp value to check if it falls into any of the fixation ranges.
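
A minimal pandas sketch of that lookup, assuming the exported fixations.csv and the surface gaze CSV mentioned above (column names follow the standard Pupil Player export and may differ by version; duration is in ms as described):

import pandas as pd

fixations = pd.read_csv("fixations.csv")
surface_gaze = pd.read_csv("gaze_positions_on_surface_Surface 2.csv")

fixations["end_timestamp"] = fixations["start_timestamp"] + fixations["duration"] / 1000.0

def matching_fixation(ts):
    # return the id of the fixation whose time range contains this gaze timestamp, else None
    hit = fixations[(fixations["start_timestamp"] <= ts) & (ts <= fixations["end_timestamp"])]
    return hit["id"].iloc[0] if not hit.empty else None

surface_gaze["fixation_id"] = surface_gaze["gaze_timestamp"].apply(matching_fixation)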

user-8a20ba 16 December, 2021, 13:35:49

I see. Thank you very much

user-074809 16 December, 2021, 14:55:33

Hi there, I am looking to clean up my gaze mapping and see that one eye's pupil tracked well while the other did not. Is it possible to only use one eye for the calibration and gaze mapping? If so, how?

papr 16 December, 2021, 15:23:19

Have you checked the confidence of the data? A binocular gaze datum will only be based on both eyes if they provide a pupil detection with a confidence of 0.6 or higher. Otherwise, the data will be mapped monocularly in which case you can easily filter out the low confidence data.

Recommendation: The first step to cleaning up gaze data should be removing the low confidence data.
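
For exported data, that first clean-up step could look like this, assuming the standard gaze_positions.csv export and the 0.6 threshold mentioned above (the path is a placeholder):

import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # placeholder path
gaze_clean = gaze[gaze["confidence"] >= 0.6]          # drop low-confidence samples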

user-074809 16 December, 2021, 15:24:56

OK, I can adjust the confidence. Some of them are really low (<0.5) but some are higher. I'll try calibrating with a higher confidence threshold, thanks.

papr 16 December, 2021, 15:25:22

Are you performing post-hoc calibrations?

user-074809 16 December, 2021, 15:25:30

yes

papr 16 December, 2021, 15:27:02

In this case there is a second approach: Close Player, rename the eye video files with the bad detection (everything that starts with eye0 or eye1), restart Player, run the post-hoc pupil detection (will only run for the untouched eye video file), and run the calibration on the monocular pupil data.

user-074809 16 December, 2021, 15:28:39

oh good to know. I'll try that if the higher calibration threshold does not work

user-96f342 16 December, 2021, 19:38:53

hey, why is the price for the "HTC Vive Binocular Add-on" so high?

wrp 17 December, 2021, 10:57:31

Hi @user-96f342 👋 VR/AR products are priced relative to the academic discount model we have for Pupil Core headsets (similar HW and components). Additionally, VR/AR add-ons are custom-made research tools made in lower quantities, where margins are already slim. I hope I have been able to answer your question.

user-8d25b9 17 December, 2021, 18:32:04

Hi, this is Chaipat. My research team and I have bought your Pupil Core and enjoyed using it. But today, as my team were changing the world camera, it stopped working. We inspected the camera and found that the image sensor mirror has a crack. Would it be possible to replace only this part? Would you please be able to guide us to where we can buy one? Thanks!

papr 18 December, 2021, 06:16:23

Please contact info@pupil-labs.com in this regard

user-8d25b9 18 December, 2021, 14:23:22

Thanks @papr, I emailed yesterday but haven't heard back from them. Thanks for the info though.

user-04dd6f 18 December, 2021, 16:02:35

@papr Would like to know if there is any algorithm that could analyse the data and obtain the "saccade amplitude" and "saccade velocity"?

user-beb12b 21 December, 2021, 11:00:50

Hi @papr I've seen the feature "Reference Image Mapper (beta feature)". Do you think that in the future it will be possible to draw regions of interest on the reference image and export data? Thanks

marc 21 December, 2021, 11:07:15

Hi @user-beb12b! This is a feature we have on the roadmap, but it will still take several months for it to become available and I can not yet tell you a more precise release date.

If you are comfortable with Python, you can however consider this notebook that implements such a feature: https://github.com/pupil-labs/analysis_drafts/blob/main/01%20-%20Products%20in%20a%20Shelf.ipynb

user-beb12b 21 December, 2021, 12:17:46

That's amazing. Personally, I believe this feature will really make the difference. That's why people usually prefer Tobii, but if you're able to bring this feature and increase the accuracy of the automatic mapping (because I understand that at this stage the accuracy is not really fine), you will get a lot of new clients, and I will be one of them! A suggestion I can give at this stage is to let the researcher manually correct the automatic mapping in case it is not correct. In addition, increasing the accuracy of the automatic mapping so it can also be used on dynamic and small stimuli (e.g. mobile devices while browsing a web site/app) would make the difference. What do you think about it? Thanks

marc 21 December, 2021, 13:20:25

Thanks a lot for your feedback @user-beb12b! I'd like to give you a thorough answer on this. Let's move this to the 🕶 invisible channel, as discussing Pupil Cloud features will be relevant to more people there!

user-04dd6f 21 December, 2021, 13:49:56

@papr (Pupil Labs) Would like to know if the Pupil Core could measure the distance between the eye and the fixation (fixated object) in the export data?

Many Thanks

nmt 22 December, 2021, 12:49:39

gaze_point_3d describes the point that the subject looks at. You can technically measure the distance between the eye and the fixation point using this and the eyeball centre (eye_center0/1_3d). Important note: gaze_point_3d is defined as the intersection of the visual axes of the left and right eyes, and the depth component might not be accurate in all viewing conditions as measuring depth via ocular vergence is inherently difficult. Another option would be to use the head pose tracker to determine distance between the wearer and a given stimulus https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
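
A short sketch of that distance calculation from the exported gaze_positions.csv, assuming the standard column names (3d values are in scene-camera coordinates, in mm); the caveat above about vergence-based depth still applies:

import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # placeholder path
point = gaze[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()
eye0 = gaze[["eye_center0_3d_x", "eye_center0_3d_y", "eye_center0_3d_z"]].to_numpy()
distance_mm = np.linalg.norm(point - eye0, axis=1)    # eye 0 to gaze point, per sample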

user-fcc3ea 22 December, 2021, 03:10:37

Hi, can I get some clarification on the data output (using Pupil Player/Capture v3.4.0)? I have a scene set up with 4 surfaces. The surface data does not correspond to what occurs in the world video. I.e., a given fixation which is clearly within a detected surface (as seen on the video) either does not show as true in the corresponding fixations_on_surface_X file or shows as True on an incorrect surface. Also, the "(Re)calculate gaze distributions" button is not displayed in the surface tracker within Pupil Player (offline surface tracker).

nmt 22 December, 2021, 13:44:14

Hi @user-fcc3ea 👋. How are you identifying fixations in the export that correspond to the fixations shown on a given surface in the world video? Re. the "(Re)calculate gaze distributions" button, this was removed when we updated the user interface; we need to also remove it from the docs 👍

user-16c44f 22 December, 2021, 18:36:38

Hi All, we were using our Core headset with a Mac laptop (Pupil Capture v3.5.7, booted from the terminal on macOS Monterey) and getting images from the world view camera and eye cameras just fine, but now we are unable to get an image from the world view camera at all. I've tried rebooting the computer and still no image. I've also tried moving the headset to another computer with Pupil Capture v3.4 and macOS Big Sur, but still no world view camera image. Is the camera bad? Please advise. Thanks!

Edit to add - based on the error message, it seems that the world camera is not being detected "world - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied."

papr 23 December, 2021, 07:42:23

Hi, please contact info@pupil-labs.com in this regard.

user-5ef6c0 22 December, 2021, 23:39:09

hello. When pupil player crashes, the new annotations are not saved, right?

papr 23 December, 2021, 07:49:55

Unfortunately not 😕 The crash is due to insufficient RAM on your computer. Even if this error had been caught, writing data to disk also requires memory, i.e. saving the annotations might have also failed. I have not found a good strategy to avoid this issue yet. 😕

user-5ef6c0 22 December, 2021, 23:39:50

player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "launchables\player.py", line 745, in player
  File "src\\pyglui\\custom.pxi", line 213, in pyglui.ui.Seek_Bar.handle_input
  File "src\\pyglui\\ui.pyx", line 156, in pyglui.ui.UI.handle_input
  File "src\\pyglui\\menus.pxi", line 821, in pyglui.ui.Container.handle_input
  File "src\\pyglui\\menus.pxi", line 825, in pyglui.ui.Container.handle_input
  File "src\\pyglui\\custom.pxi", line 213, in pyglui.ui.Seek_Bar.handle_input
  File "src\\pyglui\\custom.pxi", line 222, in pyglui.ui.Seek_Bar.handle_input
  File "seek_control.py", line 132, in set_playback_time
  File "seek_control.py", line 138, in set_playback_time_idx
  File "plugin.py", line 224, in notify_all
  File "zmq_tools.py", line 206, in notify
  File "zmq_tools.py", line 171, in send
  File "msgpack\__init__.py", line 35, in packb
  File "msgpack\_packer.pyx", line 120, in msgpack._cmsgpack.Packer.__cinit__
MemoryError: Unable to allocate internal buffer.

player - [INFO] launchables.player: Process shutting down.

user-fcc3ea 22 December, 2021, 23:43:59

Fixations are defined at export and appear to follow sensible operations. The recordings were taken of aircrew during a simulation experiment, and their identified fixations follow button presses and system interaction as expected. The surfaces were defined beforehand but turned off during the actual recording, as they dropped the frame counter down to 4 fps. In post analysis, when re-enabled, the surfaces appear to track fine; however, the fixations do not correspond to the correct surfaces. Is there an extra step I need to complete because the surface tracker was not running during the recording?

nmt 23 December, 2021, 11:15:14

Would you be able to share a recording with [email removed]?

user-6fc1d6 23 December, 2021, 03:01:07

Looking for help.

The eye tracking camera on the left fails after a few minutes of use, and the red light doesn't come on. The CPU usage of that camera also drops rapidly.

I'm not sure I can give you any more information. Is the hardware broken?

user-6fc1d6 23 December, 2021, 03:13:55

Oh, I found the answer! Thanks!

Chat image

user-d90133 23 December, 2021, 22:35:21

Hi, so in my research, I've run into an issue where for those with smaller nose bridges, I cannot maintain a consistent position where the eye cameras can track the pupil, no matter how I adjust the cameras (sliding them across the bar, etc.) What method(s), if any, would you recommend to combat this issue?

nmt 29 December, 2021, 10:08:52

In such cases, affixing some self-adhesive foam to the frame bridge can help to raise the headset on the nose. I would also recommend using a head strap to further secure the headset on the wearer.

user-c884ad 28 December, 2021, 18:45:50

Hello, I cannot find the Vibasept wipes in the UK, nor any product with the exact composition (22g Ethanol, 21g etc...). Here I only seem to find 70% or 80% alcohol wipes or alcohol-free wipes. Do you have any suggestions? What product can I use for cleaning that is not going to damage the Pupil Core hardware?

user-6df3b1 29 December, 2021, 13:33:16

Hi, I am using macOS with M1 and I want to run Pupil from source following https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-macos.md. However, when installing dependencies, it could not build wheels for pupil-detectors, av, and uvc. I also upgraded pip and wheel. You can find the error output in the txt file. Can you help me with that?

error.txt

nmt 29 December, 2021, 13:38:17

Hi @user-6df3b1 👋. Was there a specific reason that you are running from source? Nearly everything can be done using the plugin or Network API of the bundled application: https://github.com/pupil-labs/pupil/releases/download/v3.5/pupil_v3.5-7-g651931ba_macos_x64.dmg

user-6df3b1 29 December, 2021, 13:48:15

Hi @nmt Thanks! That would work. But I want to code a pipeline that uses photos taken with the glasses (extract them), to later import them into the software and mark the pupil. That is not possible with the application, so I would need to modify the source code, wouldn't I?

nmt 29 December, 2021, 15:13:19

I'm not 100% sure that I understand your aims. Can you elaborate a bit more on your intended use case?

user-6df3b1 04 January, 2022, 09:01:38

Hi Neil, thanks again for your help! We will try it with the bundled application for now 👍

user-872c60 31 December, 2021, 21:12:37

Happy New Year to everyone. I have a few questions about how to use Pupil Labs in VR, but first of all, can anyone tell me whether the Pupil Labs results are time-resolved, or whether they can only tell you collectively which points have caught more attention?

papr 03 January, 2022, 08:01:12

Yes, our software provides timestamped gaze estimates in scene camera coordinates, i.e. the subject's field of view. In Unity/hmd eyes, this would correspond to Unity's main camera. Gaze can then be transformed into Unity world coordinates and e.g. be aggregated on objects.

End of December archive