core


user-ed3a2a 01 March, 2022, 15:10:43

hi. I can't get my Pupil Core to work. Grey screens only. Working on macOS Monterey 12.2. Downloaded the software from docs.pupil-labs.com today. Please help

papr 01 March, 2022, 15:13:10

Hi, due to new technical limitations, Pupil Capture and Pupil Service need to be started with administrator privileges to get access to the video camera feeds. To do that, copy the applications into your /Applications folder and run the corresponding command from the terminal:

- Pupil Capture: sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture

Please see the release notes for details https://github.com/pupil-labs/pupil/releases/tag/v3.5

user-ed3a2a 01 March, 2022, 15:21:12

thanks, that worked. Do I have to do that every time I start the app, or can I set some startup options?

papr 01 March, 2022, 19:54:45

Unfortunately, yes. :-/

user-ed3a2a 01 March, 2022, 15:31:15

also: only one camera works. Error message: Eye0 can't set value

papr 01 March, 2022, 19:55:25

In the video source menu, switch to manual camera selection. Please check how many cameras are listed there.

user-8a4d8e 01 March, 2022, 17:45:01

Hi Pupil Labs Team πŸ™‚ I was wondering if there is an option to export the surface video as in the preview you can see in the analysis tool. When I export the data, it only seems to export csv-files, however, for further analysis it would be helpful for me to have direct access to the video file containing only the surface area. Thanks for your help in advance!

user-b28eb5 02 March, 2022, 16:03:31

Hi, thank you for the help yesterday, also, I have a problem with the calibration of Pupil Core. I have been following the steps on the web but I am not able to get it. What can I do?

papr 02 March, 2022, 16:04:35

Hi, what is the exact issue that you are facing?

user-b28eb5 02 March, 2022, 16:08:42

I am doing the calibration with "Screen Marker" and with "Single Marker", but when I finish the calibration the gaze point on the screen isn't where I am looking. The eye cameras are detecting my pupils the whole time. I don't know why it doesn't work.

papr 02 March, 2022, 16:12:09

Do you see the blue circle around the eye ball? Does it move much?

user-b28eb5 02 March, 2022, 16:14:09

The blue circle around the eye ball looks right; when I move my eye, it adapts to my eye ball

papr 02 March, 2022, 16:17:29

In this case, I would like you to share an example Pupil Capture recording of you performing the calibration with [email removed]. After the review, we will be able to give more specific feedback.

user-b28eb5 02 March, 2022, 16:18:33

Okay. I will send it as soon as possible. Thank you

user-b28eb5 02 March, 2022, 19:02:25

Hello, I was working with the Pupil Core and now the world camera is not working. I only see a grey screen when I open Pupil Capture, and the computer doesn't even detect that camera. The others are working fine. Any advice?

papr 03 March, 2022, 07:52:52

"and even the computer doesn't detect that camera" Are you referring to the device not being listed in Windows Device Manager? If that is the case, please write another email to info@pupil-labs.com in this regard. I have reviewed your shared recording and will follow up via email.

user-b28eb5 03 March, 2022, 13:00:32

Okay thank you

user-8a4d8e 03 March, 2022, 12:37:23

I only found an old answer in the Discord from 2020 where it was not a feature yet. Did this change by now?

papr 03 March, 2022, 14:43:22

Hi, this did not change.

user-bdd9cb 03 March, 2022, 14:32:57

Hi! I have a problem with Pupil Labs. I was making a recording and suddenly Pupil Capture didn't work anymore.

user-bdd9cb 03 March, 2022, 14:34:01

In particular the world camera. The eye cameras work, though

user-6a91a7 03 March, 2022, 21:48:09

Hi! I am having trouble opening my Pupil Core data in Pupil Player. When I drag the output file onto Pupil Player, the terminal error concerns allow_pickle=False in the npyio.py file. I've gone through and changed it to allow_pickle=True and reinstalled with the new file, but I'm still receiving the error. Does anyone know a solution to have Pupil Player read the file with allow_pickle=True, or have an idea of what other scripts may be getting pulled in to create this error?

papr 04 March, 2022, 07:10:23

If I had to guess, one of our dependencies (numpy, np for short) is having trouble loading a specific file. Therefore, I do not believe that the code is broken, but rather that (parts of) your recording files are.

Could you share the full error message, using the unmodified source code? This would allow me to confirm my suspicion.

user-74ef51 04 March, 2022, 08:31:22

Hi, Is it possible to add an external video stream (e.g. from a webcam) to a recording when using Pupil Capture?

papr 04 March, 2022, 08:34:24

Hi, the recommended workflow would be: 1. Record the webcam with an external recording software 2. Convert the recorded video file into a Player-compatible format using https://gist.github.com/papr/bae0910a162edfd99d8ababaf09c643a 3. And then use this custom video-overlay plugin with the option to temporally adjust the overlayed video https://gist.github.com/papr/ad8298ccd1c955dfafb21c3cbce130c8 (use the transformed video from step 2 here)

papr 04 March, 2022, 08:35:42

Otherwise, Pupil Capture does not have the built-in possibility to record a 4th video source.

user-74ef51 04 March, 2022, 08:39:43

Alright, since I'm already using the network API to control the recording I could use that to start and synchronize the 4th video source

user-bdd9cb 04 March, 2022, 10:30:45

@papr hi! I have a problem with pupil, can you help me?

papr 04 March, 2022, 10:31:14

Probably, yes. What is the issue that you are encountering?

user-bdd9cb 04 March, 2022, 10:32:28

@papr Hi! I have a problem with Pupil Labs. I was making a recording and suddenly Pupil Capture didn't work anymore. In particular, the world camera appears grey

papr 04 March, 2022, 10:40:10

So it stopped working mid-way? Do the eye cameras continue working?

user-bdd9cb 04 March, 2022, 10:43:51

Yes

user-bdd9cb 04 March, 2022, 10:43:56

Exactly

papr 04 March, 2022, 10:44:22

In this case, please contact info@pupil-labs.com with this information.

user-bdd9cb 04 March, 2022, 10:44:07

Eye cameras continue working

user-075851 04 March, 2022, 11:03:23

Hi, I have the same problem, eye cameras still working but the world camera view in the Pupil capture is gray. Did you solve the problem?

papr 04 March, 2022, 14:00:48

Please contact info@pupil-labs.com too

user-075851 04 March, 2022, 11:09:24

Please see the attached screen

Chat image

user-98789c 04 March, 2022, 14:06:32

One of the eye cameras on my pupil core device has stopped working properly. The image is so blurry. The reasons I can think of are either that the participant that I was recording took off the headset a little harshly, or that the experimenter has disinfected the headset and has put some gel on the camera. Is there anything I can do about this?

Chat image

papr 04 March, 2022, 14:40:32

Yes, it looks like the lens got dirty. You could try cleaning it carefully with a microfiber cloth.

In the older 120 Hz models (Pupil Cam1), it is possible that they got out of focus. If you have such a model, you could try to refocus the eye cameras https://docs.pupil-labs.com/core/hardware/#focus-120hz-eye-camera

user-0e5193 04 March, 2022, 15:17:06

Hello, I have a question about Pupil data. Is there any problem with 3d_diameter when participants wear contact lenses?

papr 04 March, 2022, 15:22:58

Hi πŸ‘‹ The distortion introduced by contact lenses should be negligible, so no problem. πŸ™‚

user-0e5193 04 March, 2022, 15:25:20

Thank you for quick reply.πŸ˜„

user-eeecc7 04 March, 2022, 21:12:24

Hey I have a question about the process after the ellipse detection. I have an Asian friend filling in as a subject for a study and when I check the pupil windows the pupils are detected almost perfectly for every frame. But somehow, despite that, the POG that is detected is terribly incorrect. Is there any way for me to correct this post-hoc? I believe there are some physiological averages that the software uses which might be messing the results up in this case. Is there any way to correct for this?

papr 07 March, 2022, 08:26:33

You can recalibrate using the post-hoc calibration feature in Pupil Player. You can also add a manual offset to the result.

user-027014 05 March, 2022, 19:08:09

Hey guys, I've recorded some goal-directed saccadic eye movements and tried to measure the kinematics (i.e. the main sequence) as a quality control of the eye data. I've included only eye traces of the highest quality, with low noise levels, no blinks, etc., as a benchmark for further recordings. I notice some curiosities in the data (see figure). 1) Max peak velocities saturate quite early (peak velocities do not surpass 300-350 degrees/second). 2) The gain of peak velocity * saccade duration seems to be 1, suggesting a peak velocity that is somehow also equal to the mean velocity (?). What I suspect is going on here is that a too-low estimate of the saccade peak velocity and a too-long estimate of its duration distort the gain, potentially due to over-filtering. However, these parameters have been extracted from unfiltered data (on my end, that is; so if anything the data should be noisy, with an overestimation of the peak velocity, right?). So I was wondering: is the data collected by the pupillabs/LSL plugin already being filtered with a cut-off frequency below 75 Hz? I feel the 200 Hz sampling frequency should have the potential to describe the full saccade kinematics. Note, the figure is produced from one participant; I'll collect data from others to make sure he isn't just 'slow'.

Chat image

user-027014 05 March, 2022, 20:17:20

So I think I may underestimate saccade duration here, as I select the onset and offset as the indices where the velocity passes a threshold value (20 deg/s). If I 'correct' this and allow the onset and offset to move a few samples to the closest local minimum next to them (where the velocity ramps up or down before/after passing the threshold value), I do get slightly longer saccade durations, and as such a gain of 1.2 - 1.7. Yet I'm sure this is not the cleanest way of doing this and it may lead to an overestimation of saccade duration. Still, my problem remains: why are the peak velocities of unfiltered saccadic eye movements recorded at 200 Hz via PL-LSL so low?

user-027014 05 March, 2022, 20:56:15

Here are some figures to show the quality of the recording. See the example saccade detection. Data was recorded at 200 Hz. Saccades are detected by passing the 20 deg/s velocity threshold, with corrected onset/offset (with a max shift of 5 samples either way), and then evaluated based on some parameters (such as a window of high confidence > 0.9, and minimum/maximum saccade duration and amplitude). Only the cleanest examples are kept by the algorithm (I tried to be rigorous here) and then visually inspected to make sure they are of a high standard.

Chat image Chat image

papr 07 March, 2022, 08:40:50

Hi Jesse, happy to see that your research is making progress! Could you let us know which of the LSL-recorded data fields you use to calculate the velocity?

mpk 06 March, 2022, 08:38:52

@user-0b16e3 can you have a look at the above? Could there be something in pye3d that acts as a filter?

user-48fec1 07 March, 2022, 07:18:51

Hi, in LSL-recorded data, does anyone know how the 3d gaze data (e.g. in the streams gaze_point_3d_x/y/z) is converted into the normalized streams norm_pos_x/y? I tried a simple normalization equation on the 3d gaze data but that did not really match the normalized streams.

papr 07 March, 2022, 08:28:09

Hi, we use camera intrinsics to project the 3d point from the 3d camera coordinate system into the normalized image plane. See also https://docs.pupil-labs.com/core/terminology/#coordinate-system
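
For illustration, a minimal sketch of such a pinhole projection (the intrinsics and resolution below are made-up placeholders; the real values are stored per camera, and lens distortion is ignored here):

# Hypothetical scene-camera intrinsics: focal lengths fx/fy, principal point cx/cy
fx, fy, cx, cy = 700.0, 700.0, 640.0, 360.0
width, height = 1280, 720

def project_to_norm_pos(x, y, z):
    """Project a 3d point (scene camera coordinates, z forward) to norm_pos."""
    u = fx * x / z + cx  # pixel coordinates on the image plane
    v = fy * y / z + cy
    return u / width, 1.0 - v / height  # norm_pos: origin bottom-left, y up

print(project_to_norm_pos(0.05, -0.02, 1.0))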

user-ace7a4 07 March, 2022, 09:38:48

Hello! Is it possible to run Pupil Core without running the program as an admin? Or is that necessary?

papr 07 March, 2022, 09:39:36

Hi, currently it is only necessary to do so on macOS. On Windows, once the drivers are installed, you can run the software without administrator rights.

user-619e31 07 March, 2022, 09:39:51

Hi, I am getting this error when running pupil-capture.exe. Does anyone know how to fix it?

user-619e31 07 March, 2022, 09:39:54

Chat image

papr 07 March, 2022, 09:41:07

Hi, could you please share the full log message context?

user-619e31 07 March, 2022, 09:41:43

Chat image

papr 07 March, 2022, 09:43:30

Thank you! Can you confirm that there was a pop-up asking for admin privileges?

user-619e31 07 March, 2022, 09:42:02

The Chinese text in the log means "permission denied"

user-619e31 07 March, 2022, 09:43:56

yes and I confirmed it

papr 07 March, 2022, 09:46:52

In this case, it is not clear to me why the permission was denied. Is this your personal PC or is it managed by your company/institution? One possibility I could think of is that there are additional security policies installed on the device, prohibiting the driver installation.

user-619e31 07 March, 2022, 09:47:30

Strange, it is my own PC.

user-619e31 07 March, 2022, 09:47:10

I thought it may be because of admin privileges. I tried to open it with admin but it showed this again. The software just doesn't detect cameras.

papr 07 March, 2022, 09:48:33

Yes, this is expected. The software checks for the existence of the drivers and will try to install them if they were not installed previously.

user-ace7a4 07 March, 2022, 09:48:19

Hi, currently it is only necessary to do so on macOS. On Windows, once the drivers are installed, you can run the software without administrator rights. @papr How do I check if the drivers are installed? I installed the program via the webpage, but it still asks me every time to run as an admin (Win 10)

papr 07 March, 2022, 09:49:06

Do you see the camera video feed being previewed in the application windows? Or are they grey?

user-619e31 07 March, 2022, 09:49:33

They are just grey.

user-619e31 07 March, 2022, 09:49:52

Chat image

papr 07 March, 2022, 09:50:11

@user-619e31 could you please try following steps 1-7 of these instructions https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md

user-619e31 07 March, 2022, 10:41:55

This solves the problem! Thank you!

user-619e31 07 March, 2022, 09:50:25

Okay thanks.

user-eeecc7 07 March, 2022, 09:57:02

Hi Papr, Can you share the re-calibration steps please? I tried post-hoc re-calibration, but nothing has changed, so why would this work? I am not sure. Do we need to change any physiological or other parameters? Also, there is no constant offset that I can remove using the manual offset step; it varies based on the gaze direction, I believe.

papr 07 March, 2022, 10:01:08

Sounds like you got the process right already. With the post-hoc calibration, you are able to add/remove reference targets and thereby adjust the calibration. If you want, you can share the recording with [email removed] and I can have a look and give more concrete feedback. There are only very few physiological parameters in the pipeline that are modelled explicitly. In my experience, they get corrected for in the calibration optimisation procedure, and inaccurate gaze is caused by a different issue.

user-ace7a4 07 March, 2022, 10:41:05

Yes, this is expected. The software checks for the existence of the drivers and will try to install them if they were not installed previously. @papr Thanks!!

user-ace7a4 07 March, 2022, 10:42:00

Do I understand it correctly that the id0 and id1 confidence parameters tell me how "good" the eye-tracking device is adjusted to the respective eye?

papr 07 March, 2022, 10:44:02

Yes, one could say that. Or more precisely: If the eye cameras are not adjusted correctly, confidence will be low. There are other reasons for low confidence though, e.g. during blinks.

user-ace7a4 07 March, 2022, 10:45:11

@papr thank you so much!!!!

user-98789c 07 March, 2022, 10:59:42

I have a go/no-go task with four fractal images, during which participants should associate each of the four fractals with a decision, and all along I record pupil size. I need to make sure that the changes in pupil size come only from the mental workload and the learning process, and not from the fractals' differences in luminosity, etc. What image characteristics do I need to check for the fractals to be presented equally? Things like red, green, and blue histograms? And luminosity histograms? Any references/papers about this?

Chat image Chat image Chat image Chat image

papr 07 March, 2022, 13:11:12

Re colors see also https://files.cie.co.at/x046_2019/x046-PO195.pdf

papr 07 March, 2022, 12:45:34

Please also see what these authors did to avoid luminosity-difference effects in their experiment https://www.researchgate.net/publication/335327223_Replicating_five_pupillometry_studies_of_Eckhard_Hess That might be difficult to transfer to your experiment, though, given the complexity of the images. See also Appendix B in particular.

user-027014 07 March, 2022, 12:11:48

Hi Papr, Thank you! Currently velocity is computed based on the norm_pos_x and norm_pos_y values (i.e. pl.Data(2,:) and pl.Data(3,:)). These traces are calibrated/mapped to known real-world coordinates (to a 0.1 degree precision) using a simple neural network. We asked participants to look at targets and press a button to indicate they were fixating these targets (15 different positions throughout the field of view). 200 ms of eye traces are then sampled, and we map the median of these values to the actual known world-fixed coordinates. The network has 2 inputs (x, y) and 2 outputs (norm(az), norm(el)), which we then scale to actual azimuth and elevation. The network has 1 hidden layer with a few units (3 or so). We then iteratively map each sample to azimuth/elevation, which perfectly predicts the location the participant is looking at. (As we do not use any memory in the network, it seems unlikely that any unwanted filtering occurs at this level.) We then compute a saccade amplitude, R, using Pythagoras (i.e. hypot(az, el)). From R we use the two-point central difference differentiation algorithm to compute the velocity.
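
(For reference, the two-point central difference described here reduces to a short sketch like this, in Python for brevity; r_deg and t_s are illustrative names for the amplitude trace in degrees and the matching timestamps in seconds:)

import numpy as np

def central_diff_velocity(r_deg, t_s):
    """v[i] = (r[i+1] - r[i-1]) / (t[i+1] - t[i-1]); endpoints undefined."""
    r_deg = np.asarray(r_deg, dtype=float)
    t_s = np.asarray(t_s, dtype=float)
    v = np.full_like(r_deg, np.nan)  # endpoints have no central neighbour
    v[1:-1] = (r_deg[2:] - r_deg[:-2]) / (t_s[2:] - t_s[:-2])
    return v  # deg/s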

papr 07 March, 2022, 12:15:12

Thank you for the details! Are you presenting the targets in a VR environment or on an external desktop screen?

user-027014 07 March, 2022, 12:18:37

Targets are being presented in the real world, each target is an LED on a sphere at exactly 1 meter away from the eyes. The participant's head is fixed to not allow for any translational movement.

papr 07 March, 2022, 12:21:26

Hi, depending on the display size you might have issues with within-image luminosity differences. These will be difficult to get rid of. Also, different light spectra cause different pupil dilation [1]. You might be able to work around that by converting the images into greyscale images. But I do not know if your experiment allows that. Could you tell us a bit more about the details?

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6634360/

user-98789c 08 March, 2022, 16:32:18

Thanks a lot 😊

user-ed7a94 07 March, 2022, 12:23:58

Hi Papr, we tried placing 4 AprilTag markers, one at each corner of the screen, to identify it as a surface as the means to get the gaze point on the screen, but the detection of some markers was not very stable. Is there any way to improve the marker detection in post-hoc analysis?

papr 07 March, 2022, 12:33:51

Unfortunately, there is not much one can do post-hoc. πŸ˜• Typical issues include insufficient white border around the pixel grid.

papr 07 March, 2022, 12:31:02

Could you please replicate the same graphs (same filtering process) but instead of using your custom norm-pos-to-az-el conversion, use the gaze_normal0/1_x/y/z values of the same samples that you had chosen previously? To calculate the velocity, you can either calculate the cosine distance between two vectors (yields saccade amplitude directly) or use

import numpy as np

r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
el = np.arccos(y / r)  # elevation angle, measured from the camera's Y-axis
az = np.arctan2(z, x)  # azimuth in the X-Z plane

Note that the functions above yield radians, not degrees.
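
The cosine-distance route could look like this minimal sketch (assuming v0 and v1 are gaze_normal vectors of the same eye at two time points):

import numpy as np

def angular_distance_deg(v0, v1):
    """Angle between two 3d gaze normals, in degrees."""
    v0 = np.asarray(v0, dtype=float) / np.linalg.norm(v0)
    v1 = np.asarray(v1, dtype=float) / np.linalg.norm(v1)
    cos_angle = np.clip(np.dot(v0, v1), -1.0, 1.0)  # guard against rounding
    return np.degrees(np.arccos(cos_angle))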

user-027014 07 March, 2022, 15:42:05

Hi Papr, I feel like I'm doing something wrong. Are el and az supposed to be the same value here?

user-027014 07 March, 2022, 12:33:27

Sure, do you know which streams those are, i.e. which indices?

papr 07 March, 2022, 12:51:36

Re extracting header names from XDF files, see this pyxdf implementation: https://gist.github.com/papr/219929d26777711ad204acfe4bc5e2e4#file-xdf_convert-py-L124-L125

papr 07 March, 2022, 12:35:32

They should be part of your LSL recording already. These two 3d gaze vectors correspond to the 3d eye model orientation in scene camera coordinates. I recommend analysing the two vectors (left and right eye) separately. They should correlate strongly though.

user-027014 07 March, 2022, 12:39:37

Ok, thank you. I'll post them once I've finalised the figures! πŸ™‚

papr 07 March, 2022, 12:40:47

See this CSV file re column indices. Note that the timestamp column is usually excluded when data is loaded from the xdf file. Btw, depending on your xdf file loader, you can also extract these column names from the file, avoiding the dependency on indices.

lsl_pupil_capture_ppr-m1.fritz.box_46954aa3-58ac-48ee-a911-baa54966a9d5.csv

papr 07 March, 2022, 15:42:43

No. What values are you getting?

user-027014 07 March, 2022, 15:50:37

Oh nvm, the second stream contains only NaNs. I thought the data was plotted over each other.. πŸ˜›

user-6a91a7 07 March, 2022, 18:21:00

Here is the full error message. I've tried multiple files and they all produce the same error. I realize that the system I am using is slightly modified, so I would like to allow pickles, even if that isn't conventional for the code.

Chat image

papr 07 March, 2022, 18:39:06

I am not sure why your timestamp file would contain pickled data but the reason why changing the source code to allow pickled data did not work is that you are executing the bundled application. It is independent of the modified source. You need to run from source to see the effect.

papr 07 March, 2022, 18:22:48

What do you mean by "you tried different files"? Do you refer to recordings?

user-6a91a7 07 March, 2022, 18:35:20

Yes, different recordings!

papr 07 March, 2022, 18:23:26

Could you please list the files of the recording folder that you are trying to open?

user-6a91a7 07 March, 2022, 18:36:01

Chat image

user-a09f5d 07 March, 2022, 18:54:48

Hi! It's been a while since I was last on here. I have a few more questions about using the Core for my experiment. Specifically, I wanted to double-check a few things about circle_3d_normal_[x,y,z].

1) Is circle_3d_normal on a linear scale? e.g. if the value changes from 0 to 0.1, is this the same amount of eye movement as if it changed from 0.9 to 1?

2) Are the values of circle_3d_normal_[x,y,z] extracted from each eye camera comparable to each other? For instance, if both eyes move by a circle_3d_normal value of 0.2, does this mean both eyes moved the same amount in a given direction (e.g. they both moved 2 deg)?

3) I guess this one depends on the answer to 1) and 2), but can the values of circle_3d_normal_[x,y,z] be used to calculate the angle between the two eyes (assuming eye gaze is not perfectly parallel)? For context, I currently use circle_3d_normal_[x,y,z] to calculate the angle between the position of the same eye at two different time points, but we would also like to calculate the angle between the position/direction of the two separate eyes at the same time point.

I also have a few questions about extracting direction information, but they can wait for now. Many thanks!

papr 08 March, 2022, 07:42:42

Hi, circle_3d_normal is part of the uncalibrated pupil data (in eye camera coordinates). Re 2 and 3: You want to use gaze_normal0/1_x/y/z and eye_center0/1_x/y/z from the gaze data. The former corresponds to circle_3d_normal but in scene camera coordinates. Re 1: Yes, these are in an undistorted Euclidean space.

user-a09f5d 09 March, 2022, 15:11:59

Hi @papr Thanks for that. So to confirm, it's okay for me to use circle_3d_normal_[x,y,z] (taken from from a frozen eye model) to calculate the angle between the position of the same eye at two different time points, but I can't use circle_3d_normal_[x,y,z] to calculate the angle between the position of the two different eyes? We have spoken about the former at length before as a way to calculate the angle of strabismus (eye turn) and you recommended using the uncalibrated circle_3d_normal_[x,y,z] eye camera coordinates (so long as the person's head does not move) since the strabismus prevents us from properly calibrating the system (and so we can't get the calibrated gaze_normal0/1_x/y/z). I have been using circle_3d_normal_[x,y,z] to calculate strabismus angle for some time now and it seems to be working well.

user-ed7a94 08 March, 2022, 10:52:14

Hello papr, I am running the post-hoc pupil detection in Pupil Player, but once I start this procedure the gaze point on the image disappears. Do you have any idea how to make the gaze point show again? Thanks

nmt 08 March, 2022, 11:36:18

Running post-hoc pupil detection shouldn't make the gaze point disappear. Is it possible you have selected Post-Hoc Gaze Calibration as the Gaze Data Source without running the post-hoc calibration?

user-205d47 08 March, 2022, 11:17:32

Hello @papr Is it possible to get a spectrogram of the raw signal (possibly eye movements) recorded from Pupil Core? Otherwise, do you have recommendations for obtaining the spectrogram of the raw signal?

user-ed7a94 08 March, 2022, 12:27:11

Thanks, but after I clicked the "Detect References, Calculate All Calibrations and Mappings" button, still no gaze point showed up. I am wondering whether there is anything I missed?

nmt 08 March, 2022, 12:53:38

The first thing to check is that the calibration marker is clearly visible during the recorded calibration choreography, and subsequently, that the marker has been detected. You can check the marker detection progress in Pupil Player's timeline: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection

nmt 08 March, 2022, 14:54:19

Hi @user-205d47. It would be helpful for us to learn more about your aims with regards to frequency analysis of the raw data. What exactly do you intend to do with a spectrogram?

user-205d47 08 March, 2022, 16:06:49

Hi @nmt Thanks for getting back. I was wondering if you ever looked at the spectrum of your eye movement data? I am interested in finding out whether Pupil Labs implements any pre-filter on the raw data that one gets from Pupil Capture, and what the spectrum looks like.

user-027014 08 March, 2022, 15:18:43

@papr Hi there, I've repeated the main sequence+ and "skewness" relationships for the same saccades I posted earlier, albeit computed from the gaze_normal0/1_x/y/z streams. Note that the left (blue) and right (pink) are pretty much the same. The correlation between the R (dimensionless movement) of both eyes is almost 1. Overall tendencies between both graphs from the different Pupil Labs LSL data streams are very similar, I would say. Peak velocities remain low (even a bit lower than before), while ideally these should be near 750 deg/s for longer saccades. I've also included the relationship between acceleration duration and saccade duration as a function of amplitude and duration overall (blue and pink completely overlap).

Chat image Chat image

user-98789c 08 March, 2022, 16:31:19

Thank you Pablo for the explanations. I tried cleaning the lens and it's a little better now, but the red circle around the pupil keeps jumping all around and I guess it needs to be focused. I got these messages when I open Pupil Capture, and it seems that I have the Pupil Cam1, whose focus is adjustable, right? If so, can you please tell me how to adjust the focus? I didn't find the adjusting screw thing shown in the videos.

message.txt

papr 09 March, 2022, 07:48:16

One recommendation was a lens pen, similar to this model https://www.foto-gregor.de/lenspen-micropro-nmcp-1-

user-aa7c07 09 March, 2022, 02:24:16

Is there any CPU restriction for Pupil Player 3.5.8? Is the eye detection multi-threaded - i.e. can it use any more than 2 cores? I am trying to speed up my data analysis using a virtual machine on Linux and despite having lots of CPU, it appears even slower than on my desktop computer

papr 09 March, 2022, 07:53:12

Hi, the algorithm is fairly sequential which is why it is difficult to parallelize. Are you trying to use VMs to start multiple Player instances? I think you can do so without them.

user-151a66 09 March, 2022, 03:21:22

Hi, we are running an image viewing task in Inquisit. The task will have components from which we don't need eye tracking data, but the AOIs, which appear about 1/3 of the time, do need to be analyzed for AOI gaze time and blink rate. The Pupil Core recording will take a while to process (60 images per participant) if I use Pupil Player to individually find/export each image's recorded AOI surface gaze data. Is there a way to batch-export a participant's AOI gaze data for all AOIs (60) at once?

papr 09 March, 2022, 07:49:53

Hi πŸ™‚ Are you using the surface tracking feature already?

papr 09 March, 2022, 07:27:11

Thanks for sharing the log. It is important to look at the IDx suffix. 0 and 1 refer to the eye cameras. ID2 refers to the scene camera.

eye1 - [INFO] video_capture.uvc_backend: Found device. Pupil Cam2 ID1.
eye0 - [INFO] video_capture.uvc_backend: Found device. Pupil Cam2 ID0.

You have the 200 Hz non-adjustable-focus eye cameras. I will ask my colleagues for further cleaning tips.

user-98789c 09 March, 2022, 08:44:04

Ah so the Cam1 I saw in the log is about the world camera, good to know πŸ‘

user-98789c 09 March, 2022, 08:45:31

Do I have to check for any size or material specifications so that the lens pen fits in the lens and properly cleans it?

papr 09 March, 2022, 08:50:45

The pen tip should be comparably small, given the size of the lens. Otherwise, it is important that it is dry-cleaning.

papr 09 March, 2022, 09:00:10

@user-98789c Otherwise, before purchasing such a lens pen, please contact info@pupil-labs.com so that we can set up a video call and a colleague from the hardware team can look at it.

user-98789c 09 March, 2022, 09:19:46

Sure, Thank you πŸ‘

user-13a459 09 March, 2022, 10:05:46

Hello! We are looking into purchasing the pupil core with the mounted scene camera and were wondering whether the calibration works with curved surfaces? We have a large parabolic screen (height x diameter: 3.5 x 3 m). Can the matching be done by the provided software or would we have to do it ourselves?

nmt 09 March, 2022, 13:28:50

Hi @user-13a459 πŸ‘‹. Pupil Core is a head-worn eye tracker, and provides gaze in its world camera's coordinate system. This means the wearer can move freely in their environment. You can present the calibration marker on screen, or even use a physical hand-held marker. In order to use Pupil Core for screen-based work, the screen will need to be robustly located within the world camera’s field of view, and Pupil Core’s gaze data subsequently transformed from world camera-based coordinates to screen-based coordinates. We achieve this by adding April Tag markers to the screen, and using the Surface Tracker plugin: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking Note that surfaces defined are assumed to be flat. A potential workaround for curved monitors would be to add multiple surfaces along the curvature.

user-ed7a94 09 March, 2022, 11:52:52

Hello! I am trying to use the remote command "T {time}" to reset the current pupil time. But after this command, I am not able to send another remote command like "t". The error report is "ZMQError: Operation cannot be accomplished in current state". Can anyone help me with this?

papr 09 March, 2022, 13:49:51

Hi, looks like you forgot to call recv() on the socket. The Pupil Remote socket follows a REQ-REP pattern where you need to recv() for every send(). Please be also aware that we no longer recommend using the T command. See https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py as a replacement
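
For illustration, a minimal Python sketch of the REQ-REP pattern (50020 is Pupil Remote's default port; adjust if you changed it):

import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

pupil_remote.send_string("t")      # request the current pupil time
print(pupil_remote.recv_string())  # every send() needs a matching recv()

pupil_remote.send_string("v")      # e.g. query the software version
print(pupil_remote.recv_string())  # skipping this recv() causes the ZMQError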

user-027014 09 March, 2022, 12:33:37

@papr @user-92dca7 @nmt Hi guys, I wasn't sure whether you caught my previous post with the figures you asked for. If not, see above πŸ˜‰. Anyway, I hope to hear from you regarding max saccadic velocities and whether there may be any process that acts as a (low-pass) filter of sorts. Kind regards, Jesse

papr 09 March, 2022, 13:57:05

Hey, pye3d has a mechanism to filter out camera pixel noise (see short-term model https://docs.pupil-labs.com/developer/core/pye3d/#pye3d-estimates-eyeball-position-on-different-time-scales) but from our understanding, this should not have a large enough impact on saccades to result in 2-3x-lower-than-expected measurements. We will try to reproduce this internally.

user-13a459 09 March, 2022, 13:54:01

@nmt Splitting the screen into flat regions might be sufficient for our needs and would save us some trouble. Either way, we will have to try it out in practice and see whether extrapolating rays from the eyes in the world view and projecting them onto the curved surface would be necessary. Thanks a lot for the suggestion, I appreciate your help!

nmt 09 March, 2022, 14:04:16

That's no problem at all. Note that once you have defined your surfaces using the Surface Tracker Plugin, the Plugin will automatically map gaze from scene camera coordinates to surface coordinates. You can download an example recording that contains surfaces data here: https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing. Just load it into Pupil Player, our free desktop software (available here: https://docs.pupil-labs.com/core/, click 'Download Pupil Core Software'). Then, just enable the Surface Tracker plugin to see the surfaces and mapping results. You can also generate heatmaps.

user-151a66 09 March, 2022, 15:00:12

Hello! πŸ™‚ Yes, I am surface tracking, using April Tags to define the computer screen and manually using the "add surface" feature post hoc to digitally define image AOIs. The image AOIs occur about every 12-19 s, then not again for 8-10 s, so there are parts of the recorded world video for which gaze time doesn't need to be analyzed. It's taking a long time to go through, define each surface, and then export data for each. (I have 6000+ images lol) Is there a way to annotate and batch-export the data for each participant's world video recording (100 images) without averaging out all the gaze time for that recording?

papr 09 March, 2022, 16:14:24

My recommendation would be: you can send annotations as part of your experiment to Capture. Set up only one AOI, export its gaze data, and use the stored annotations to trim the gaze data for each image.
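
As a sketch of that trimming step (the file names are the usual Player export names, but the "<image>.start"/"<image>.end" label scheme is an assumption about your experiment):

import pandas as pd

gaze = pd.read_csv("gaze_positions_on_surface_Screen.csv")
notes = pd.read_csv("annotations.csv")

starts = notes[notes.label.str.endswith(".start")].reset_index(drop=True)
ends = notes[notes.label.str.endswith(".end")].reset_index(drop=True)

for start, end in zip(starts.itertuples(), ends.itertuples()):
    image = start.label.removesuffix(".start")  # Python 3.9+
    window = gaze[gaze.gaze_timestamp.between(start.timestamp, end.timestamp)]
    print(image, window.on_surf.mean())  # e.g. fraction of gaze on the AOI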

papr 09 March, 2022, 15:15:10

Right, thanks for reminding me!

it's okay for me to use circle_3d_normal_[x,y,z] (taken from from a frozen eye model) to calculate the angle between the position of the same eye at two different time points, but I can't use circle_3d_normal_[x,y,z] to calculate the angle between the position of the two different eyes? Correct, if you want to look at absolute orientations! You can easily compare within-eye differences between models.

user-a09f5d 09 March, 2022, 15:20:30

Nice! Good to know that still stands. I will stick to only comparing values taken from the same eye camera and won't compare values, or amount of change, between the two eye cameras. Thanks a lot

user-027014 09 March, 2022, 16:34:46

Hi @papr, thanks for coming back on this. I've potentially found the error already. However, I'm not sure I fully understand how LSL extracts and writes data from Pupil Labs, particularly when recording with two eyes. When I look at the pupil-labs-lsl timestamps, they seem to be consistently separated 2.5 ms apart (as opposed to the expected 5 ms at 200 Hz). I assume this has to do with the two eyes recording simultaneously, with alternating samples representing the two eyes(?). But then how do I know which sample belongs to which eye? And are both eye cameras then really recording in opposite phase at exactly 2.5 ms intervals?

papr 10 March, 2022, 13:13:59

Mmh, LSL stores matched data and I think this is a result of how the matching works. See https://github.com/N-M-T/pupil-docs/commit/1dafe298565720a4bb7500a245abab7a6a2cd92f

user-205d47 09 March, 2022, 17:26:46

Hi @papr @nmt I am using the VR add-on from Pupil Labs for measuring pupil dilation in a VR environment. The subject fixates on a red dot against a grey background. But we are having trouble getting clean and stable recordings. Pupil detection flickers (see attached image and link for the video) and yields low confidence. We also see a few reflections in the eye; we're not sure of the source yet, but could this be the reason for bad tracking, and do you have any suggestions on how to get clean and stable recordings from the VR add-on? https://filesender.surf.nl/?s=download&token=4c883088-d356-4e1c-986e-7be2bcd4cfe3

Chat image

user-f8b492 10 March, 2022, 11:00:45

Hi @papr , what effect do the size and width parameters in the surface definition GUI have? What is the "real world size" mentioned in the documentation? Is it cm, m, inches, miles? Is it the real real-world measurement, like measured with a ruler? Thanks a lot

user-20283e 10 March, 2022, 13:12:31

Hi there! I have a question related to the setup of the software, which I think is just a problem due to the IT department and the restrictions we have... I install the software, and after a while the apps are blocked by the IT software called Carbon Black Cloud. The IT guys told me: "Please reach out to the software vendor and have them provide an anti-virus exclusion list for us to whitelist." Is this something that you can provide? Thanks for your help!

papr 10 March, 2022, 13:15:20

It is meant for scaling of the heatmap and gaze data. Gaze is mapped primarily to a normalized coordinate system. You can enter the most sensible unit, e.g. pixels for a screen. See also https://docs.pupil-labs.com/core/terminology/#surface-aoi-coordinate-system
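
To illustrate the scaling, a small sketch (assuming a surface size of 1920x1080, i.e. screen pixels, was entered):

def surface_norm_to_units(x_norm, y_norm, size=(1920, 1080)):
    """Scale surface-normalized gaze (origin bottom-left, 0..1) to the
    units entered as the surface's real-world size."""
    w, h = size
    return x_norm * w, (1.0 - y_norm) * h  # flipped to a top-left origin

print(surface_norm_to_units(0.5, 0.5))  # centre of the surface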

user-f8b492 10 March, 2022, 14:08:05

Thanks a lot. In our real-world setting we are currently tracking shelves, so centimeters would be an option? And if we are not at all interested in the heatmaps, would we have to set any values here or could we leave it at the default 1.0/1.0?

papr 10 March, 2022, 13:17:04

It would be helpful to know the format of that list. If you could point us to the corresponding documentation, that would help. I can look into it, too, but next week at the earliest.

user-20283e 10 March, 2022, 17:00:51

This is what IT wrote back to me: "Carbon Black allows for file path, application executable and/or SHA256 file hashes as exclusions. Preferably the first two."

user-20283e 10 March, 2022, 15:18:48

Thanks! Will check with IT and come back to you.

nmt 10 March, 2022, 13:18:32

Hi @user-205d47. Thanks for sharing the video capture. It looks like contrast between the pupil and iris is low, which is hindering the pupil detection. Please see this message for steps you can take to improve pupil detection (for VR eye tracking, steps 2–5 are relevant): https://discord.com/channels/285728493612957698/285728493612957698/943090995917103144

user-205d47 10 March, 2022, 16:41:25

That's great, Neil. Thanks a lot for sharing these pointers, especially the Contrast and ROI Trim. I will try these out and hope it works. Yes, I am aware of the Best Practices and refer to them for any doubts.

user-205d47 14 March, 2022, 08:06:17

Hi Neil. I tried a new batch of recordings based on the points that you suggested and also went through the Best Practices. However, please see the attached video of the poor tracking quality in VR. There are a few issues I see here and hope you can help clarify. First, there are a couple of reflection points in the eye, which I believe are from the infra-red camera itself. Could this hinder tracking? Second, the major limitation here is that the orientation of the camera is fixed, as compared to the Core device where you can adjust the eye camera to get the best possible eye/pupil tracking. Is there something you recommend to solve this issue? We also have a Pupil Core, and it tracked the pupil right away. Thanks a lot in advance.

nmt 10 March, 2022, 13:19:40

@user-205d47 If you haven't already, I'd also recommend reading our Pupillometry best practices: https://docs.pupil-labs.com/core/best-practices/#pupillometry

user-ced35b 10 March, 2022, 23:21:32

Hello I am currently running an experiment where participants will view visual stimuli through a mirror stereoscope (a different image presented to each eye separately through each mirror). I will simultaneously track their eyes using Pupil Core. Since they will be wearing the glasses (and therefore the mirrors won't block the infrared light) does it matter what type of mirrors we have (hot vs cold mirrors)? Thank you!

user-ed3a2a 11 March, 2022, 09:59:50

good morning. Has anyone managed to open the mp4 files from Matlab on Mac? I only get errors

user-027014 11 March, 2022, 10:15:56

As I work on a Mac, I cannot open many .mp4 files (as I think they are built for Windows or so). However, you can use free software such as HandBrake to convert them to a functioning .mp4 file. Then you can use VideoReader to import the data into Matlab.

user-ed3a2a 11 March, 2022, 10:01:06

or, alternatively, is there a way of reading the pupil dilation from Matlab? Or is the only way to run pupil_player first and then export the results to a CSV? I only need the dilations, ideally directly from pupil_capture

papr 11 March, 2022, 10:16:56

@user-ed3a2a @user-027014 By default, Capture records mjpeg data and Quicktime Player does not know how to play that back properly. They should work correctly using VLC player. See also https://docs.pupil-labs.com/developer/core/recording-format/#video-player-compatibility

user-ed3a2a 11 March, 2022, 10:33:20

yes, I have VLC, that works, but I would like to measure pupil dilation during the experiment, not afterwards.

user-ed3a2a 11 March, 2022, 10:33:59

is there a way to 'remote control' pupil_player? No offence, but the documentation is somewhat lacking and hasn't been updated since 2016

user-ed3a2a 11 March, 2022, 10:37:54

it looks as if this file only works from already-generated CSV files? I assume there is no Matlab solution?

papr 11 March, 2022, 10:41:14

The extract script generates a CSV file from the raw recorded data.

papr 11 March, 2022, 10:39:38

You can receive data in realtime using Matlab, check out https://github.com/pupil-labs/pupil-helpers/tree/master/matlab

Regarding extracting the pupil diameter from the recorded data: Technically that should be possible, but I do not have a turnkey solution for that. It might be easier to run the mentioned python script and reading the generated CSV file in Matlab.

user-ed3a2a 11 March, 2022, 10:40:44

thanks! A little more help please, as there is 0 documentation. I worked out how to start and stop a recording. How do I query the pupil size?

papr 11 March, 2022, 10:41:27

Could you please clarify which of the two solutions you are interested in?

user-ed3a2a 11 March, 2022, 10:41:50

Whatever works! Ideally online via send and receive notifications.

papr 11 March, 2022, 10:42:55

I worked out how to start and stop a recording. Are you using the linked matlab helpers for that already? Or are you referring to manually controlling the recording via the UI?

user-ed3a2a 11 March, 2022, 10:43:33

matlab via zmq via the given 'pupil_remote_control' example.

papr 11 March, 2022, 10:45:56

Check out the filter_messages.m example. With it, you should be receiving pupil data without any changes. The script prints the "norm_pos" field in line 70

        [topic, note('norm_pos')]  % print pupil norm_pos

You should be able to access the pupil diameter via note('diameter_3d')
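
For reference, the equivalent subscription in Python (a minimal sketch; the Matlab helper does the same via ZMQ + msgpack):

import msgpack
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote
req.send_string("SUB_PORT")           # ask where data is published
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("pupil.")               # same topic filter as the Matlab script

topic, payload = sub.recv_multipart()
datum = msgpack.unpackb(payload)
print(datum["diameter_3d"])           # pupil diameter in mm (3d detector)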

papr 11 March, 2022, 10:43:48

Perfect! Then you are close to what you want.

user-ed3a2a 11 March, 2022, 10:44:00

Ideally I just need to extend that to query some other info

user-ed3a2a 11 March, 2022, 10:47:18

thanks, I will do that and come back to report if it works! Will take a few hours probably.

papr 11 March, 2022, 10:47:58

The pupil datum is documented here btw https://docs.pupil-labs.com/developer/core/overview/#pupil-datum-format

user-ed3a2a 11 March, 2022, 11:09:04

it seems to work so far, but another question in the meantime: every day or two pupil_capture stops working. The terminal throws 'world - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied.' I then have to reinstall the software from scratch. Any idea?

papr 11 March, 2022, 11:13:50

Reinstalling the software usually does not make a change. The session settings are stored outside of the installation file location. Also, the driver installation is checked every time Capture launches, not at installation time. So I am not sure how a reinstall can fix the issue.

On Windows, Windows Updates reset the drivers which could be a reason for the device to not be recognized after an update. Otherwise, there is only the possibility of a physical disconnect.

papr 11 March, 2022, 11:10:13

Are you running on Windows?

user-ed3a2a 11 March, 2022, 11:13:51

No, Mac

papr 11 March, 2022, 11:14:57

Ah, right, you mentioned that above, my bad. The argument about the reinstallation stays the same, with the difference that there is no driver installation needed on macOS.

user-ed3a2a 11 March, 2022, 11:15:25

ok, point taken. Just having the situation again. Restart doesn't help, same error.

user-ed3a2a 11 March, 2022, 11:15:54

Do they need to cool down or something between uses? Is Matlab changing anything?

papr 11 March, 2022, 11:16:15

Should we quickly jump into a video call via the Discord voice channel? Then I can check it out.

user-ed3a2a 11 March, 2022, 11:16:49

yes please

user-ed3a2a 11 March, 2022, 11:17:12

how?

papr 11 March, 2022, 11:18:09

You should see a channel called pupil-voice on the left. Just click on it and you will join the voice/video chat.

user-e29c16 11 March, 2022, 14:36:56

Hello Everyone, I was trying to connect my Pupil eye tracker to LSL but it's not connecting. Does anyone know how to fix it? Thanks 😊

nmt 11 March, 2022, 14:51:08

Hi @user-e29c16. Could you please outline the steps you have already taken to set up your Pupil Core system with LSL?

user-e29c16 11 March, 2022, 14:55:38

Hi Neil, thanks for replying. Basically, I followed the steps in this blog post https://ws-dl.blogspot.com/2019/07/2019-07-15-lab-streaming-layer-lsl.html?m=1, and I ended up here without any errors, but once I get to the install step, there is no LabRecorder.exe file generated.

Chat image

nmt 11 March, 2022, 15:12:01

Thanks for clarifying. A large section of these instructions are related to building LSL (and its apps) from source. Depending on what you want to do with LSL, this might not be necessary. You can, for example, download LabRecorder.exe directly from here: https://github.com/labstreaminglayer/App-LabRecorder/releases/tag/v1.14.2

user-e29c16 11 March, 2022, 15:17:01

Yes, I was about to tell you that as well. I installed it, but the Pupil eye tracker is not linking to the LabRecorder. I was wondering how to link the eye tracker device so it shows under "Record from streams"? Thank you.

Chat image

nmt 11 March, 2022, 15:20:17

Great. The next two steps are: 1. Install pylsl and copy or symlink to the Pupil Capture plugin directory 2. Install our LSL Plugin to the Pupil Capture plugin directory Detailed instructions available here: https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture#pupil-capture-lsl-relay-plugin

user-e29c16 11 March, 2022, 18:13:38

Chat image Chat image

nmt 11 March, 2022, 15:21:29

I also noticed that the tutorial you were following linked to an old version of Pupil Capture. Be sure to use the most recent version: https://github.com/pupil-labs/pupil/releases/tag/v3.5

user-e29c16 11 March, 2022, 18:14:08

Thanks, Neil, I did those steps as well, but it's still the same issue: the Pupil eye tracker is not linking with the LabRecorder, and once I hit start, an error appears saying "Missing: Qset()"

papr 14 March, 2022, 07:26:03

Could you please share the capture.log file that can be found in the pupil_capture_settings folder? The most likely reason that the streams are not appearing is that the plugin is not working as intended.

user-e29c16 12 March, 2022, 14:46:25

Could anyone help me with this issue, please?

user-7daa32 13 March, 2022, 00:09:21

Hello

Please, what sampling rate do the eye cameras have? We calculated 140 Hz, although the laptop is a Core i5; the memory is really low. We used a resolution of 400 x 400

user-7daa32 13 March, 2022, 00:14:07

I can see that a sampling frequency of 200 Hz at 192 x 192 is in the spec. I chose 400 x 400

papr 13 March, 2022, 08:03:25

The programs are installed into C:\Program Files (x86)\Pupil-Labs, more specifically

C:\Program Files (x86)\Pupil-Labs\Pupil <version>\Pupil <app name> <version>

e.g.

C:\Program Files (x86)\Pupil-Labs\Pupil v3.5.1\Pupil Capture v3.5.1

The executables are:

pupil_capture.exe
pupil_player.exe
pupil_service.exe
PupilDrvInst.exe

user-20283e 14 March, 2022, 13:35:00

Thanks! Sending this to IT and will let you know their response.

user-8b8802 13 March, 2022, 15:55:56

Hello, I have downloaded the Core program from the website, and when I plugged in the glasses, it said that the device was found but there was no image. Could you please help me? I am working from a Mac...

papr 14 March, 2022, 07:29:50

Please be aware that you need to run Pupil Capture with administrative privileges on the new macOS Monterey. Please see the release notes for details.

user-8b8802 13 March, 2022, 16:02:09

Chat image

user-8b8802 13 March, 2022, 17:26:51

can somebody help me?

user-6e1219 14 March, 2022, 06:14:35

Hello, I am facing a couple of issues. (1) I am doing the conversion from Pupil timestamps to system timestamps. After the conversion, I can see that the system time is actually 3 hrs less than the exact system time of my computer. I am doing the whole conversion with the help of the tutorial provided on the Pupil Labs website.

(2) I am running an experiment where I need to send tags for every image presented on the screen (image start time, image end time, and image no.). Can someone help me out and let me know how I can do that using the Pupil Labs eye tracker? (3) Last but not least, can we connect the Pupil Labs eye tracker with PsychoPy?

It would be really helpful if anyone could help me figure out solutions. Thank you so much.

papr 14 March, 2022, 07:31:52

Hi, the conversion to system time will yield time in UTC+0. You are likely in a different time zone and need to adjust accordingly.
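
For illustration, a sketch of that conversion (the two start times are placeholders; in real recordings they are stored in the info file as "start_time_synced_s" and "start_time_system_s"):

from datetime import datetime, timezone

start_time_synced_s = 2500.0           # recording start, pupil time
start_time_system_s = 1_647_000_000.0  # recording start, Unix epoch (UTC)
offset = start_time_system_s - start_time_synced_s

def pupil_to_local(pupil_ts):
    """Convert a pupil timestamp to local wall-clock time."""
    utc = datetime.fromtimestamp(pupil_ts + offset, tz=timezone.utc)
    return utc.astimezone()  # adjust from UTC+0 to your local time zone

print(pupil_to_local(2510.0))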

You can send such tags via the annotation feature. https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py

But with our new Psychopy integration this might not be necessary at all. https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html#pupil-labs-core

user-8b8802 14 March, 2022, 07:18:39

Good morning, is there someone who can help me with my problem? I need guidance to know what I am doing wrong!

papr 14 March, 2022, 07:28:51

The maximum sampling rates are 200 Hz at 192x192 and 120 Hz at 400x400.

The real sampling rate might be lower due to insufficient CPU resources.

The gaze (combined data from both eyes) might have a higher sampling rate.

user-7daa32 14 March, 2022, 07:33:27

Thanks. So it is best to report the sampling rate we calculated from our data. We have ordered systems with more memory and a higher sampling rate.

The result from our pilot study showed a sampling rate of 140 Hz at a resolution of 400 x 400.

papr 14 March, 2022, 08:10:02

Hi, thank you for sharing the video. As you can see, the 2d pupil detection (yellow circle) is still being fitted to the iris instead of the pupil. Please reduce the pupil max parameter in the 2D Pupil Detector settings until this does no longer happen. You might need to reduce the Pupil Min value, too, but this is difficult to tell from the video.

user-205d47 14 March, 2022, 08:15:32

Thanks @papr for your quick reply. I will try your suggestion today and report back. In the meanwhile, if you have any thoughts on the other 2 points I mentioned in my comment, please let me know.

user-8b8802 14 March, 2022, 08:19:51

Thank you for your answer. Could you please tell me where those release notes are?

papr 14 March, 2022, 08:21:57

https://github.com/pupil-labs/pupil/releases/tag/v3.5 See the macOS Monterey support section.

papr 14 March, 2022, 08:21:32

I will have to come back to you in this regard. I will ask my colleagues for best practices.

user-205d47 14 March, 2022, 09:25:53

Thanks, that will be great. Looking forward to it.

papr 14 March, 2022, 09:40:06

The reflections only become a problem if they lie on top of the pupil border. In these cases, the confidence values will be lower than they usually would be, but the gaze estimation should not be affected much. Regarding the eye camera positioning: one option is to use the VR headset's built-in IPD-adjustment functionality. Based on the shared video, the positioning is ok, though. The eye ball is fairly centred. πŸ‘

user-205d47 14 March, 2022, 14:34:46

Hello @papr I tried it again with your suggestions, changing the pupil min/max, intensity, exposure, etc. Unfortunately, nothing has helped so far. The tracking is better for darker eyes in general. The attached video is of a blue-eyed subject. Do you have suggestions for how it can be improved? @nmt also pointed out a few issues, and we have also tried adjusting the ROI, exposure, etc.

user-e29c16 14 March, 2022, 12:50:03

Hello Papr, thanks for replying, Here is the capture.log file.

capture.rar

papr 14 March, 2022, 13:06:27

Looks like the plugin is not loading due to one of my recent changes. Sorry for that! Would you be able to test a possible fix in a few minutes?

user-e29c16 14 March, 2022, 13:07:03

Sure, tell me what to do.

papr 14 March, 2022, 13:15:20

outlet.py

papr 14 March, 2022, 13:14:51

@user-e29c16 could you please replace the outlet.py file with this one? Afterward, start Capture. The LSL Relay plugin should be listed in the plugin manager. Enabling it should cause the stream to appear in the LSL Recorder app.

user-e29c16 14 March, 2022, 13:18:21

Thank you so much papr, now it's working.

papr 14 March, 2022, 13:18:41

Great, thanks for testing! I will release the fix now

user-e29c16 14 March, 2022, 13:22:16

Sure, Thanks.

user-e29c16 14 March, 2022, 13:38:14

Hi papr, after running a few trials there's an issue with the camera. If I turn on both eye0 and eye1, the main camera gets disconnected; if I turn off eye0 and eye1, the main camera gets activated again. Could you help, please? Thank you.

papr 14 March, 2022, 13:40:40

And just to confirm, are you using a Pupil Core headset or custom cameras?

papr 14 March, 2022, 13:39:43

I do not follow. Which cameras are you turning off? And how do you do that (via software or by disconnecting them physically)?

user-e29c16 14 March, 2022, 13:43:30

So, first I start Capture; after that everything works smoothly. Then I tried enabling eye0 and eye1; once I enable them, the main cam disconnects. Yes, I'm using a Pupil Core headset.

Chat image

user-e29c16 14 March, 2022, 13:45:19

Now it keeps saying "WORLD: Camera disconnected. Reconnecting..."

papr 14 March, 2022, 13:45:50

Please try restarting with default settings. Both eye windows should appear and all three cameras should work as expected.

user-e29c16 14 March, 2022, 14:02:09

Yes, once I hit restart with default settings, my computer got restarted too, and now it's working. Thanks again for your help.

papr 14 March, 2022, 14:40:35

Would it be possible for you to share a raw 1-minute-long Pupil Capture recording with this exposure setting to [email removed]? Having access to the raw eye video data allows me to give more concrete recommendations regarding the parameters.

user-205d47 14 March, 2022, 14:49:56

Sure, I will send the raw recordings to the email.

user-e29c16 14 March, 2022, 15:32:17

Could anyone help with reading the data recorded by the Lab Recorder using MATLAB? Thanks.

papr 14 March, 2022, 15:33:12

Check out https://github.com/xdf-modules/xdf-Matlab/tree/master

user-e29c16 14 March, 2022, 15:35:15

Thanks for replying. I tried that one and gave it the correct path, but when trying to load, it gives me an error like this: "Unrecognized function or variable 'load_xdf'."

papr 14 March, 2022, 15:39:46

Unfortunately, I do not have access to a Matlab instance. From that error message, it looks like 1) addpath was not called or 2) it was called with an incorrect path (see the repo's README for reference). If you do not think that this was the issue, please reach out to the author.

user-e29c16 14 March, 2022, 15:37:23

This is the file I'm trying to load.

sub-P003_ses-S003_task-Default_run-001_eeg.xdf

papr 14 March, 2022, 15:39:59

The issue is not related to this file.

user-e29c16 14 March, 2022, 15:41:44

Thank you so much again, I'll check with that.

user-5fb1ee 14 March, 2022, 18:53:04

Hi @papr I currently have a Pupil headset and am trying to use it with a ZED stereo camera. I know that Pupil Capture can time-sync all the nodes in one Pupil Group automatically. I tried to open the ZED camera with Pupil Capture by creating a Pupil Capture instance, but it failed.

papr 15 March, 2022, 08:38:46

Please be aware that Pupil Capture only supports cameras out-of-the-box that fulfil the requirements listed in this message: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498

Before you start writing your own backend, could you let us know a bit more about what you want to achieve? Does your Pupil Core headset have a built-in scene camera or are you planning on replacing it with the ZED camera? In case of the former, do you only need time synchronization between the Pupil Core data and the ZED video stream?

Short note regarding Pupil Groups and time sync: Pupil Groups does not perform time sync. There is a dedicated plugin to enable time sync between Pupil Capture instances. See also this document for current best practices on synchronizing time with external data streams https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py
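
For reference, the gist of that approach in Python (a minimal sketch, assuming Pupil Remote is reachable on its default port 50020; the 't' request for the current Pupil time is part of the Pupil Remote API, while the bracketing/averaging strategy here is just one naive way to estimate the clock offset):

import time
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote default address

def estimate_offset():
    # Bracket the 't' request with the local clock to account for
    # the request's round-trip latency.
    local_before = time.monotonic()
    pupil_remote.send_string("t")
    pupil_time = float(pupil_remote.recv_string())
    local_after = time.monotonic()
    return pupil_time - (local_before + local_after) / 2

# Average several measurements to smooth out network jitter;
# afterwards: pupil_time ~= time.monotonic() + offset
offset = sum(estimate_offset() for _ in range(10)) / 10
print(f"clock offset: {offset:.6f} s")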

wrp 15 March, 2022, 06:24:00

Hi @user-5fb1ee Have you already created a video backend for the ZED camera? ZED cam has a python lib from what I can see already. IIRC you will need to build a plugin that inherits from the video capture base plugin: https://github.com/pupil-labs/pupil/blob/cd0662570a1a495c42b0185ed4b28630d1ba70ee/pupil_src/shared_modules/video_capture/base_backend.py ( @papr please feel free to elaborate when you are online )

user-20283e 15 March, 2022, 02:36:00

Hi again! I would like to add reflective markers for motion tracking to the Pupil Core frame. Do you have any experience with certain shapes? 3D prints? Suggestions? Thanks!

wrp 15 March, 2022, 06:20:11

Hi @user-20283e πŸ‘‹ no personal experience here with reflectors for mocap. For head pose have you considered: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking - for integration with other mocap systems you might want to check out @nmt's repo: https://github.com/N-M-T/Pupil-Labs-Mocap-Trigger

user-8a20ba 15 March, 2022, 06:16:11

@papr Hi! I want to make an app that shows gaze points in real time. I wonder if this can be done with MATLAB?

wrp 15 March, 2022, 06:17:42

Hey @user-8a20ba πŸ‘‹ - have you already looked at: https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/filter_messages.m#L58 ?
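
For readers not tied to MATLAB, the equivalent subscription flow in Python looks roughly like this (a sketch assuming Pupil Capture runs locally with Pupil Remote on its default port; requires pyzmq and msgpack):

import msgpack
import zmq

ctx = zmq.Context()
# Ask Pupil Remote for the SUB port of the IPC backbone
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to all gaze topics
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.unpackb(payload)
    # norm_pos is the gaze position in normalized scene coordinates
    print(topic.decode(), gaze["norm_pos"], gaze["confidence"])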

user-b91cdf 15 March, 2022, 09:26:22

Hello all,

I noticed that the eye camera image for the left eye is blurry. However, the focus wheel is glued in the case of the Pupil Labs Core.

Is it possible that the focus adjustment wheel moved somehow? No matter how I adjust the eye camera, the confidence of the left eye is always a bit worse.

Can anyone help me with this? Look at eye 1 - the lashes.

Kind regards,

Cobe

Chat image

papr 15 March, 2022, 11:22:30

Hi, would you be able to share a one-minute recording of you looking in different directions while keeping the head still? If so, please send it to [email removed] I would like to confirm that the blur is really the cause of the lower confidence. In our experience, it is more likely that there is a different reason. It is not possible to adjust the focus of the 200 Hz eye cameras.

While the eye model looks well fit already, I would generally recommend rotating the eye cameras slightly more outward s.t. the eye ball center lies further inside the eye cams' fields of view. This can be helpful to avoid clipped pupil images when looking to the far right or left.

user-ace7a4 15 March, 2022, 11:14:06

Hi, are there any requirements for downloading the Pupil Core software? Like how much RAM etc. is needed?

papr 15 March, 2022, 11:23:20

The required RAM is difficult to estimate as this heavily depends on the recording length and the detection features being used.

user-ace7a4 15 March, 2022, 11:15:57

And is there a recommended software for analyzing the Data?

user-3b5a61 15 March, 2022, 11:16:39

pupil player is pretty great...

user-3b5a61 15 March, 2022, 11:16:54

it gives you all the CSVs you will need

user-ace7a4 15 March, 2022, 11:28:16

The required RAM is difficult to estimate as this heavily depends on the recording length and the detection features being used. @papr Thanks! So there are no specific requirements that my device should have?

papr 15 March, 2022, 11:32:19

For Pupil Capture, a fast CPU is most important (>2.8 GHz Intel Core i5). For Player, RAM is most critical. We recommend 8GB or more.

user-ace7a4 15 March, 2022, 11:41:40

For Pupil Capture, a fast CPU is most important (>2.8 GHz Intel Core i5). For Player, RAM is most critical. We recommend 8GB or more. @papr πŸ‘

user-ace7a4 15 March, 2022, 12:25:16

On the website it is written that, in order to properly adjust the eye tracker, a green circle should appear in the eye camera. However, only a light or dark blue circle and a smaller red one with a red dot appear. Where did I go wrong?

papr 15 March, 2022, 13:10:16

Let me update that accordingly.

papr 15 March, 2022, 13:09:44

The newer software versions use blue instead of green colors. You should aim for a dark blue model, together with the other points listed in that docs section.

user-ace7a4 15 March, 2022, 14:25:31

Thank you so much, this is incredibly helpful! Is there any way to get the number of eye movements made (so saccades, essentially) and the pupil diameter? Or do I have to import a plugin for that?

papr 15 March, 2022, 14:30:20

Pupil diameter is calculated by default. Saccade detection is not available.

user-e29c16 15 March, 2022, 18:22:47

Hi Papr, I just did some recordings using the Lab Recorder, but it seems like none of them recorded any numerical data. After installing the EEG toolkit in MATLAB I tried to read the file, but when I checked the cells there was nothing. Could you help me with this? I'll attach the .xdf file - could you tell me whether it is ok or whether anything is missing from the file? Thank you.

sub-P004_ses-S004_task-Default_acq-100_run-001_eeg.xdf

papr 16 March, 2022, 13:16:16

I have confirmed that the file contains the two LSL streams (gaze and fixations) but did not receive any data. It is very likely that you just need to run the calibration to get the setup working.

papr 16 March, 2022, 13:03:51

Just to confirm, did you run a calibration prior to the recording? The outlet uses gaze data which requires a calibration in Pupil Capture.

user-e29c16 15 March, 2022, 18:29:14

I just want to make sure the data is recording properly from the Lab Recorder and saving as an .xdf file without anything missing.

user-e29c16 16 March, 2022, 12:13:38

Can anyone verify this for me, please? Thank you.

user-f75e38 16 March, 2022, 00:44:41

Has anyone encountered a "moov atom not found" error when trying to view their world.mp4? Does anyone know how to troubleshoot this other than trying to repair the file post-process?

user-f75e38 16 March, 2022, 00:46:46

I have tried to reboot my Android device but still face this issue.

user-e29c16 16 March, 2022, 13:11:26

Hi, thanks for replying. I just used the default settings and didn't change anything. If I have to change something before recording, what should I change?

Chat image

papr 16 March, 2022, 13:14:49

You do not need to change anything. You just need to follow the normal getting started procedure, connecting your headset, adjusting the eye cameras for good pupil detection and perform a calibration. See steps 3 and 4 of https://docs.pupil-labs.com/core/

user-ace7a4 16 March, 2022, 13:46:28

I'm having the issue that fixations are not detected with the respective plugin. Although pupil confidence is almost 1.0 for both eyes, and I am consciously fixating a spot for a few seconds, the plugin doesn't detect any fixation. What could be the mistake?

papr 16 March, 2022, 13:48:09

The detection can take a moment to complete. On the right, you should see a progress indicator when the detection is performed. Can you confirm that? To reinit the detection, change the gaze source temporarily from "Gaze from recording" to "post-hoc calibration" (or vice versa)

user-ace7a4 16 March, 2022, 13:49:51

The detection can take a moment to complete. On the right, you should see a progress indicator when the detection is performed. Can you confirm that? To reinit the detection, change the gaze source temporarily from "Gaze from recording" to "post-hoc calibration" (or vice versa) @papr Wow thanks! My mistake was using the post-hoc option and not the recorded one. Thank you so much!

papr 16 March, 2022, 13:53:21

Just to clarify: Using the post-hoc calibration can be a valid option! Just make sure to set it up and calculate some gaze first. Without gaze data, the fixation detector won't have any input.

user-ace7a4 16 March, 2022, 13:55:12

Just to clarify: Using the post-hoc calibration can be a valid option! Just make sure to set it up and calculate some gaze first. Without gaze data, the fixation detector won't have any input. @papr Makes sense! I will try some options to work things out!

user-e29c16 16 March, 2022, 17:01:47

Does anyone have any idea why my world camera is blurry?

Chat image

user-a07d9f 17 March, 2022, 13:04:38

check the lens - focus or dirt on surface/inside lens

papr 17 March, 2022, 06:14:37

Please see https://docs.pupil-labs.com/core/hardware/#focus-world-camera

user-e29c16 16 March, 2022, 17:11:31

I'm just getting this message.

Chat image

user-04dd6f 17 March, 2022, 10:47:32

@papr Hi, I would like to know the unit of the "gaze_point_3d_xyz" in "gaze_positions", is it in "mm" or "pixel"?

Many Thanks~

Chat image

papr 17 March, 2022, 10:55:26

It is mm. See "3D Camera Space" for reference https://docs.pupil-labs.com/core/terminology/#coordinate-system

user-a07d9f 17 March, 2022, 14:52:15

Hi all. I tested the calibration several times in 3D mode and 2D mode, and every time we had better results with 2D.

papr 17 March, 2022, 14:53:47

Yes, the 2D mode can be more accurate after calibration but is more prone to slippage. This makes 3D mapping the better choice in many cases.

user-a07d9f 17 March, 2022, 14:53:08

Is 2D preferable? Or could it be related to using an older (2.3.0) version of the software?

user-a07d9f 17 March, 2022, 14:53:23

(I have data uploaded if needed)

user-a07d9f 17 March, 2022, 14:55:06

We made many tests and 3D (on 2.3.0) is always worse.

papr 17 March, 2022, 15:02:13

You might also want to consider testing a newer release. We have updated the 3d model with the Pupil Core 3.x software release.

papr 17 March, 2022, 14:59:33

That might very well be πŸ™‚ Although, one might also need to consider gaze accuracy after wearing the headset for 5/10/15 minutes. There is a lot of natural slippage that the 2D pipeline cannot compensate for. If you only plan to record 20 seconds after a fresh calibration (without slippage), then the 2D pipeline is the better choice. As described in the Best Practices: the choice is use-case dependent.

user-a07d9f 17 March, 2022, 14:56:23

I also wanted to know if surface tracking data is sent via the Network API when the plugin is enabled.

user-a07d9f 17 March, 2022, 15:03:07

Are there any Network API changes which can cause backward compatibility issues?

papr 17 March, 2022, 15:11:54

yes, see the "Network API Changes" section in https://github.com/pupil-labs/pupil/releases/v3.0 You are most likely affected by "Changed 3D pupil datum format" and "Binocular 3D gaze data with string-only keys". But these changes can be adapted very quickly.

user-a07d9f 17 March, 2022, 15:18:18

Thank you. I hope we'll be able to adapt to the latest releases without many issues.

user-ecbbea 17 March, 2022, 21:30:48

I've been having a lot of trouble with my Pupil Core picking up the right eye's pupil. It seems to be consistently bad across participants. I'm not sure what else I can do besides adjusting the 2D settings and resetting the 3D model every now and then. Any help would be appreciated.

papr 18 March, 2022, 06:51:44

Have these issues appeared just recently? Could you share an example picture of the right eye? Does it look noticeably different from the left eye?

user-ced35b 17 March, 2022, 23:38:49

Hello, is it possible to calibrate one eye at a time? I'm using an older version of Pupil Capture. Do you simply just close one of the cameras?

papr 18 March, 2022, 06:58:59

Hi, new calibrations overwrite each other, i.e. with your approach only one eye at a time could be calibrated. Could you clarify the requirements a bit further? 1) Do you have one or two target coordinate systems (example: single scene camera vs dual-display HMD)? 2) Are your subjects to fixate the gaze target(s) with both eyes simultaneously?

There are some custom calibration choreographies + algorithms but they all require Pupil Capture 2.0 or newer.

user-ced35b 21 March, 2022, 18:26:03

Or, is it sufficient to just calibrate, separately on a single monitor?

user-ced35b 21 March, 2022, 18:05:49

We are using a stereoscopic display. The observer views two monitors through two mirrors: the right mirror feeds the display of the right monitor to the right eye and the left mirror feeds the display of the left monitor to the left eye. We are using both left and right eye cameras (and one world camera). The participant is positioned in front of the mirror setup (centered between the mirrors). I'm not sure which is the best way to calibrate given this setup. What would you suggest? I was also having trouble calibrating when viewing the images through the mirrors. The calibration wouldn't start unless I was looking directly at the monitor (as opposed to through the mirror). Thanks again for all your help. I've attached an example of our display setup (taken from Erkelens et al.).

Chat image

user-b74779 18 March, 2022, 07:18:30

Hi, I am currently using the Pupil Core device and I would like to prototype wireless use by sending the video of the three cameras to a computer. Is it possible to do the pupil tracking on a recorded video using Pupil Capture or other software?

papr 18 March, 2022, 07:23:33

Hi, yes, Pupil Player supports post-hoc pupil detection and calibration given a Pupil Player compatible format. See https://docs.pupil-labs.com/developer/core/recording-format/ and this example script that can convert an externally recorded video into a Player compatible format https://gist.github.com/papr/bae0910a162edfd99d8ababaf09c643a (please be aware that this does not yet solve your temporal alignment challenge for the three videos)

user-ecbbea 18 March, 2022, 16:53:38

These issues have been happening for about a year (maybe longer). The issue is that the algorithm just seems to have a ton of trouble locating the pupil, especially when the eye moves towards the nose bridge. I can get an image, but I'll have to ask one of my research associates to do it, as I don't have access to the hardware atm.

papr 21 March, 2022, 11:57:23

This sounds like the pupil rotates away from the eye cameras, making it difficult to detect. Have you tried the orange eye camera arm extenders? They might position the eye cameras further towards the nose, getting a better view of the pupil.

user-06390a 18 March, 2022, 17:22:48

Hey, is there any way I can get the specs on the camera module of the VR/AR add-on? I'm writing a research paper on it.

papr 21 March, 2022, 11:54:51

Please see https://pupil-labs.com/products/vr-ar/tech-specs/

user-b74779 18 March, 2022, 17:24:59

Hi again, has anyone ever used the Pupil Core device without the USB wire? As I am trying to mount it on a HoloLens 2, I need my user to be able to move anywhere without a cable. I thought I could also mount a Raspberry Pi on the headset, do the pupil tracking with Pupil Capture on the Raspberry Pi, and then send the results to my computer, but if someone has a better solution I would be glad to hear it. Thank you.

papr 21 March, 2022, 11:45:22

Please see this project https://github.com/Lifestohack/pupil-video-backend The RPi is not powerful enough to run Capture but it can stream the video to another computer running the app.

user-b74779 18 March, 2022, 17:26:30

Also, is it possible to order only the connection wire (no cameras, no frame)? I already have a Pupil Core device.

papr 21 March, 2022, 11:38:11

I think we can make this happen. Please send an email to info@pupil-labs.com

user-5fb1ee 18 March, 2022, 18:35:31

Yes, I only want to achieve time synchronization between the Pupil and the ZED video stream.

papr 21 March, 2022, 11:52:19

Great! In this case, you can use a small external script instead of investing the time to build a custom backend. See this example for reference https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py You might want to use the Camera.get_timestamp function as your corresponding client clock https://www.stereolabs.com/docs/api/python/classpyzed_1_1sl_1_1Camera.html#af18a2528093f7d4e5515b96e6be989d0 Together with TIME_REFERENCE.CURRENT https://www.stereolabs.com/docs/api/python/classpyzed_1_1sl_1_1TIME__REFERENCE.html
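
A sketch of the ZED side of that pairing, assuming the pyzed Python API from the linked docs (method and enum names should be double-checked against your SDK version):

import pyzed.sl as sl  # assumed ZED Python SDK

zed = sl.Camera()
if zed.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Could not open ZED camera")

def zed_now():
    # TIME_REFERENCE.CURRENT = the ZED clock at call time,
    # not the timestamp of the last grabbed image
    ts = zed.get_timestamp(sl.TIME_REFERENCE.CURRENT)
    return ts.get_nanoseconds() * 1e-9  # seconds

# zed_now() then plays the role of the client clock in the time sync
# helper linked above: bracket a Pupil Remote 't' request with two
# zed_now() calls to estimate the offset between the two clocks.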

user-ab038d 19 March, 2022, 12:25:51

Hello, I have the Pupil Core product with a world camera and an eye0 camera. I am trying to use Pupil Core with a Raspberry Pi. The Raspberry Pi is just for streaming the video over WiFi to another PC where Pupil Capture is running. Is there a way or an example project for this? I tried the Pupil Video Backend, but I could not run the program; somehow it crashes. Any help will be very beneficial for me. Thanks in advance.

papr 21 March, 2022, 11:54:08

The mentioned project is the way to go in this regard. I do not know a better solution than this project. Please contact the author regarding the issue.

user-6338f3 21 March, 2022, 00:46:30

Hello, I bought Pupil Core. Where can I check the serial number?

user-755e9e 21 March, 2022, 11:25:02

Hi, Pupil Core does not have a specific serial number, but the hardware gets assigned to the related order ID (e.g. pupil_w120_e200b_2022031609XXXX).

user-b74779 21 March, 2022, 15:14:29

Do you think a Raspberry Pi could compute the pupil tracking?

papr 21 March, 2022, 15:14:47

90% sure that it is too weak, at least for higher frame rates

user-b74779 21 March, 2022, 15:15:33

Okay, maybe sending the video to a computer through the Raspberry Pi might be a better idea?

papr 21 March, 2022, 15:16:48

Yes, that is what the linked project is for :)

user-b74779 21 March, 2022, 15:18:25

Thanks a lot !

papr 21 March, 2022, 19:38:22

then this is for you https://gist.github.com/papr/fcbafd5cf748c9b11e64a4dd37ec8e9a

user-ced35b 21 March, 2022, 20:07:26

Thank you! Do these plugins work with older versions of Pupil Capture? We are currently using a much older version (1.14.9).

papr 23 March, 2022, 17:01:42

@user-ced35b the idea is that the two screens replace the scene/world camera with two artificial scene coordinate systems. That is what these plugins are all about. Yes, you can calibrate using the world camera, but it won't be of any use to you.

papr 21 March, 2022, 20:10:01

No, 2.0 or newer is required πŸ™‚

user-ced35b 21 March, 2022, 20:14:16

haha, we are using software that records EEG and presents our stimulus; it is pretty old and doesn't allow us to update our OS. Do you have any other suggestions to calibrate through these mirrors?

papr 21 March, 2022, 20:12:11

what is your reason for using this old/ancient version? πŸ™‚

user-ced35b 21 March, 2022, 20:15:38

Would it not work to calibrate on a separate monitor that has the same viewing distance?

papr 21 March, 2022, 20:18:58

The default calibration method assumes binocular vision. I do not know if that would be ok for you. Can you tell us more about what you are trying to measure? And how about running Capture on a separate computer?

user-ced35b 21 March, 2022, 20:22:31

I was worried about temporally synchronizing three systems (the computer running Capture, the computer presenting the stimulus, and our EEG system), since our EEG is now too old to implement LSL or any updated synchronizing tools. But you're right, I think that's our best bet. We are just measuring gaze position and pupil dilation while the participant views two distinct images through the mirror stereoscope. They just fixate the center of the mirror setup; this leads to perceptual dominance of one of the two images.

user-371233 21 March, 2022, 20:22:22

Hello. I have access to a pair of the VR/AR goggles, but I just want to use them for eye tracking on a computer monitor, and the world cam is giving me some issues. I have gotten it to work by attaching the goggles to a pair of glasses and using an old webcam as the world cam, but this is a very awkward setup. Do you have any ideas as to how I could spoof my monitor output as the world cam, or how to circumvent the world cam altogether? Again, for my use I only need to track gaze on a screen with no head movement. Thanks in advance.

user-ced35b 21 March, 2022, 20:23:23

I think assuming binocular vision is ok. I'm just not sure what the best way is to calibrate through the mirrors on the two separate monitors.

papr 22 March, 2022, 11:01:22

Is it possible for you to display the calibration marker window on each of the displays at the same time?

user-e29c16 22 March, 2022, 03:38:42

Hi Papr, because of your help I finally got the data - thank you so much! In Gaze we have 22 channels per time point, and in Fixations we have 7 channels per time point. I was wondering if you could explain the difference between the gaze and fixation data streams, and also what each of these data points represents. Thank you so much again for helping me understand.

Chat image Chat image sub-P005_ses-S005_task-Default_run-001_eeg.xdf

papr 22 March, 2022, 11:10:30

The xdf file contains metadata with names for each of these channels. Depending on your xdf loading software, you should be able to extract these. For the meaning of the gaze channel names see https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter

For the fixation channel names see https://docs.pupil-labs.com/core/software/pupil-player/#fixation-export

Generally, gaze is an estimated viewing direction based on one or two eye images. Fixations are based on series of gaze estimates which do not deviate by more than a specific threshold during a specified time, i.e. when the subject is fixating an object.
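
If Python is an option for inspecting such a file, pyxdf exposes exactly this metadata (a minimal sketch; the file name is taken from the attachment above, and it assumes the streams declare channel labels, as the Pupil LSL relay does):

import pyxdf

streams, header = pyxdf.load_xdf("sub-P005_ses-S005_task-Default_run-001_eeg.xdf")
for stream in streams:
    info = stream["info"]
    print(info["name"][0], "-", len(stream["time_series"]), "samples")
    # Channel labels live in the stream description metadata
    channels = info["desc"][0]["channels"][0]["channel"]
    print([ch["label"][0] for ch in channels])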

user-f9f006 22 March, 2022, 08:19:59

hi~ I have a question about the code in pye3d/base.py shown in the image. Why does the right gaze_2d point towards the sphere center? I found that it's stated in the paper that we should disambiguate the right gaze vector to be the one pointing away from the eye ball center.

Chat image

nmt 22 March, 2022, 10:46:56

Hi @user-371233 πŸ‘‹. This should be possible, but there are certain constraints.

In order to understand where the wearer is looking, it's necessary to calibrate the eye tracker. During calibration, the wearer fixates or follows a known target (reference location) while Pupil Capture records and matches pupil data to these locations. Normally, these reference locations can be displayed on the monitor and picked up by the scene camera.

With no scene camera, you could write a custom calibration plugin for Pupil Capture that collects reference data from the system that also drives the monitor. Important to note: The monitor would need to have a fixed spatial relationship with the wearer in order for the calibration to be accurate – like a head-mounted VR system.

An alternative approach to make the system less cumbersome might be to de-case your webcam. Then you can use the default calibration, and subsequently AprilTag markers to map gaze onto your screen: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

user-371233 22 March, 2022, 19:18:38

Thank you so much for the answer. The AprilTags are definitely useful. I am trying to look at the calibration, but I cannot find a "calibration" file in the recording, or calibration values in any of the files. Where is the calibration for the user saved?

user-430fc1 22 March, 2022, 12:30:11

Hello, will the HTC Vive binocular add on work with these virtual reality glasses? https://www.shinecon.com/vr-glasses/vr-glasses-for-mobile-phone/vr-shinecon-hot-seller-vr-headsets-vr-glasses.html

papr 22 March, 2022, 12:56:59

Hi, thanks for your question. I discussed it with @user-92dca7 and you are right: the correct 2D gaze vector, v, has to point in the opposite direction of the line from its 2D starting point, p, to the estimated 2D sphere center, c. Mathematically, that means v*(c-p)<0. Note, there is a typo with respect to sign in the original Swirski paper. In our code, we are effectively testing for an equivalent condition, namely v*(p-c)>0. To this end, when dot_products<0, we use the 2D gaze line which was NOT used to calculate dot_products in the first place, but the other one (np.where(dot_products < 0, 1, 0)).

user-f9f006 23 March, 2022, 08:36:09

Thanks for the reply! Do you mean that you will correct the code to "np.where(dot_products < 0, 1, 0)"?

papr 22 March, 2022, 13:02:03

That depends on whether the VR headset has the same geometry as the HTC Vive. The add-on clips on https://pupil-labs.com/_nuxt/img/htcvive_e200b.adba5ce.png

You can buy the cameras + cable tree without the casing if you want to engineer your own casing.

user-430fc1 22 March, 2022, 13:27:31

Ah ok, cheers!

user-99bb6b 22 March, 2022, 15:29:09

Hello, I was just wondering if there is anything already created that allows the glasses to be used in real time? I've been using the Core for a senior capstone project and I have somewhat of a working program, but I thought I'd ask before diving into a rabbit hole.

user-e29c16 22 March, 2022, 15:34:25

Thank you so much for your response again. I used MATLAB to load my .xdf files; for that I installed the EEG libraries. After loading the .xdf file it does give me the data, but it's not showing what the values represent. Is there a way to get this correctly without using MATLAB, or how would you suggest I get this in MATLAB? Also, what kind of tools should I use for visualization? Is there a way to get live visualization? If yes, could you describe the process for me please? Thank you!

papr 22 March, 2022, 16:00:05

See the LSL documentation https://labstreaminglayer.readthedocs.io/info/viewers.html

nmt 22 March, 2022, 15:34:44

Hi @user-99bb6b πŸ‘‹. Would you be able to elaborate on your use case? When Core is connected to Pupil Capture, our desktop software, both gaze and pupil data can be recorded and streamed, all in real time.

user-99bb6b 22 March, 2022, 15:47:23

Sorry, still kind of new to all this haha. We are using the glasses to capture text when a user stares at an object for a few seconds. But to do this I need to export the video to get the data from the CSV file. Atm I am reading about the IPC Backbone and the Pupil Remote APIs.

user-ced35b 22 March, 2022, 17:22:58

I can't do it at the same time, but do you think it would be sufficient to just do a binocular calibration on one of the monitors (the monitors are identical in size and viewing distance)? Would this give me a meaningful calibration?

papr 22 March, 2022, 19:45:07

Wouldn't this be weird visually for the subject? Would the second eye be able to fixate the correct target if the second screen did not display a visual target?

user-ced35b 22 March, 2022, 19:46:12

I guess I was just imagining that they directly look at one display instead of viewing the display through the mirrors?

papr 23 March, 2022, 07:31:18

Ah, I understand. Can they look behind the mirror, to the target A in your previous diagram? Is this why you need the scene camera?

user-ace7a4 23 March, 2022, 09:45:11

Is there any advice on how to set up the glasses in a way that allows capturing the pupil at extreme angles? I have really been testing all kinds of stuff, but when looking down as far as I can, the confidence for both eyes always decreases below 0.5. I have been watching the YouTube videos but still feel like I am doing something wrong. Is it just that "hard", or what could be the issue?

papr 23 March, 2022, 09:47:36

It is likely that the eye lids are occluding the pupil in this configuration. You can use the orange arm extenders to place the eye cameras more frontal and further down to the eye. This should give you a better view on the pupil in the described situations.

user-ace7a4 23 March, 2022, 09:58:43

It is likely that the eye lids are occluding the pupil in this configuration. You can use the orange arm extenders to place the eye cameras more frontal and further down to the eye. This should give you a better view on the pupil in the described situations. @papr

Thanks! But is it possible that there are simply angles that cannot be captured because of the eye lid and eye lashes? Or is the camera made to capture every angle?

papr 23 March, 2022, 10:07:07

The former. Pupil detection quality will always depend on the actual eye camera position. While Pupil Core is designed to be as flexible as possible regarding eye camera placement, it is still limited by the length of the eye camera arm, the range of the ball joint, and lastly the length of the eye camera cable.

You can find the arm extender geometry here https://github.com/pupil-labs/pupil-geometry/blob/master/Pupil%20Headset%20triangle%20mount%20extender.stl You are free to modify it and 3D print your own arm extenders that are more suitable to your use case (remember the cable length constraint though!).

user-ace7a4 23 March, 2022, 10:10:49

The former. Pupil detection quality will always depend on the actual eye camera position. While Pupil Core is designed to be as flexible as possible regarding eye camera placement, it is still limited by the length of the eye camera arm, the range of the ball joint, and lastly the length of the eye camera cable.

You can find the arm extender geometry here https://github.com/pupil-labs/pupil-geometry/blob/master/Pupil%20Headset%20triangle%20mount%20extender.stl You are free to modify it and 3D print your own arm extenders that are more suitable to your use case (remember the cable length constraint though!). @papr

Thank you so much!

user-f9f006 23 March, 2022, 14:06:06

But in your code, you use dot_products<0 to find the gaze_2d direction, and you said that v*(p-c)>0 should be the right condition. 🀣 πŸ˜‚ πŸ™‚

Chat image

papr 24 March, 2022, 09:42:56

Please see this PR for reference https://github.com/pupil-labs/pye3d-detector/pull/51

papr 23 March, 2022, 14:51:45

I know, it is not intuitive, but bear with me for a moment:

Each observation has two possible directions, indexed with 0 and 1 in aux_3d. gaze_2d are the projections based on the vectors at index 0 (I will call these base and the vectors at index 1 other).

dot_products is therefore based on base directions.

Now, let's define this array mask correct_choice = dot_products > 0. It is true for every entry whose base direction points away. False otherwise. With the ~ operator, we can invert this mask incorrect_choice = ~correct_choice. Here, we will assume incorrect_choice == dot_products < 0.

There are two possible implementations:
1. Select base (index 0) for all true entries in correct_choice: np.where(correct_choice, 0, 1)
2. Select other (index 1) for all entries in correct_choice that are False. Or alternatively (!): select other (index 1) for all entries in incorrect_choice that are True: np.where(incorrect_choice, 1, 0)

Our implementation corresponds to 2) while the paper describes 1) but they are effectively the same.
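
A quick self-contained check of that equivalence (plain numpy, with made-up dot products):

import numpy as np

dot_products = np.array([0.3, -0.2, 0.7, -0.9])
correct_choice = dot_products > 0

impl_1 = np.where(correct_choice, 0, 1)   # formulation 1, as in the paper
impl_2 = np.where(~correct_choice, 1, 0)  # formulation 2, as in pye3d
assert np.array_equal(impl_1, impl_2)     # identical selected indices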

papr 23 March, 2022, 14:53:03

Maybe I should really just change those few lines of code to avoid any future confusion about this.

user-ced35b 23 March, 2022, 17:04:29

Ok, so that's what the dual display HMD calibration does? I'm working on setting up a separate computer so I can use these newer plugins. I'll definitely look into them and will probably come back with more questions πŸ™‚ Thank you so much for your help; I'm very new to Pupil Core!

papr 23 March, 2022, 17:05:13

Sure, no worries. πŸ™‚ Happy to help!

user-ced35b 24 March, 2022, 19:17:11

Hi, I just wanted to confirm that everything is working how it should. I installed the Dual-Display HMD calibration plugin. The participant will then look through the mirrors (viewing the fusion of the two displays) and calibrate following the calibration client instructions (look at the top left corner for 1 s, the top right corner for 1 s, etc.)? All the subject perceives during calibration is just one display.

user-cdb028 24 March, 2022, 08:07:43

Hello everyone, we have tried using the pupil_capture_lsl_relay plugin. The LSL stream is detected by our program, but no data is then received. We believe it could be because of the error that we get when activating the plugin in Pupil Capture, i.e., "Couldn't create multicast responder for 224.0.0.1 (set_option: An operation was attempted on a host impossible to reach)". Have you ever had this error? Do you know its cause? Best, Puzzled researchers

papr 24 March, 2022, 08:40:24

Hi, apologies for that. The README was missing a note that you need to calibrate first before you can receive data via the LSL relay. I have just added it to the readme to avoid this issue in the future. I do not believe that the displayed warning affects your setup.

user-cdb028 24 March, 2022, 08:08:11

Chat image

user-a07d9f 24 March, 2022, 08:27:02

Hello all! I wanted to know if it is possible to set up a config for Pupil Capture which will always apply the same settings every time Pupil Capture starts.

user-a07d9f 24 March, 2022, 08:28:16

Every time we start the app, it uses the same defaults: resolution 1280x720 instead of Full HD, manual exposure, and wrong settings for the Network API and calibration mode.

papr 24 March, 2022, 08:50:54

The app should be able to restore these settings (with the exception of the Pupil Remote port) from a previous session. Off the top of my head, there are 2 possible reasons for this not to happen: 1. The session settings are from a different version, in which case Pupil Capture starts with defaults to avoid incompatibility issues. 2. The previous Pupil Capture session was not terminated gracefully, not allowing Pupil Capture to store the session settings.

You can set the Pupil Remote port via the --port argument when launching Pupil Capture.
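
For example, assuming the Linux bundle, where the executable is called pupil_capture (on macOS the full application path is used instead):

pupil_capture --port 50021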

user-a07d9f 24 March, 2022, 08:28:56

so any start of the app requires the operator to set it up the same way again.

user-a07d9f 24 March, 2022, 08:29:35

If there is any settings or config file to make the settings permanent, it would be extremely welcome.

user-a07d9f 24 March, 2022, 12:31:01

Is there any way to remove the incompatibility issues and re-save the latest correct config?

papr 24 March, 2022, 12:35:30

The former can only happen if you use a different version than before. If you always use the same version there should not be any issues. Can you share the capture.log file with us?

user-b28eb5 24 March, 2022, 13:23:11

Hello, I would like to know if it is possible to calibrate the gaze from Pupil Mobile directly, without using Pupil Capture.

nmt 24 March, 2022, 13:27:56

Hi @user-b28eb5. It isn't possible to calibrate with the mobile app. What you can do is record a calibration choreography, and subsequently run the calibration post-hoc in Pupil Player

user-b28eb5 24 March, 2022, 13:32:20

So, for example, if I want to use Pupil Mobile to record a video and I want to know where I am looking, then first I have to perform the choreography in the video, and after that the gaze will be detected in the rest of the video, right?

nmt 24 March, 2022, 13:34:48

The workflow would be to record a calibration choreography and then perform your eye tracking experiment. Subsequently, you would load the recording into Pupil Player and perform the calibration in a post-hoc context. Read more about post-hoc calibration here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection

user-b28eb5 24 March, 2022, 13:45:58

Okay, thank you

user-b28eb5 24 March, 2022, 14:24:19

Another question: is Pupil Mobile available for iOS?

papr 24 March, 2022, 14:25:38

Hi, no, it is only available for Android. Please be also aware that we have stopped maintaining the app two years ago. It remains in the Play Store for legacy reasons.

user-b28eb5 24 March, 2022, 14:26:25

Then, is there another mobile app that I can use?

papr 24 March, 2022, 14:29:19

If you need to be mobile, I can recommend having a look at our Pupil Invisible product.

Alternatively, you might be able to build Pupil Capture from source on a Raspberry Pi. Note that the RPi does not have the computational resources to run the pupil detection in real time, but it should be sufficient to store the video stream to disk. Or you can use the RPi to stream the video via the local network to a running Pupil Capture instance.

user-b28eb5 24 March, 2022, 14:52:05

Okay, do you have any documentation about Raspberry Pi with Pupil Capture?

papr 24 March, 2022, 16:00:25

You can find the Ubuntu source install instructions here: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-ubuntu18.md Note that some of the pre-built Python packages are not built for ARM (the Raspberry Pi CPU type). You will need to build these from source as well (pip, the Python package manager, will automatically attempt that for you and will let you know if you are missing any dependencies). Our documentation does not cover this, but you are welcome to discuss any issues with building them here.

user-b28eb5 24 March, 2022, 16:15:08

Okay, thank you

user-3c88b8 25 March, 2022, 04:15:18

Hello, I want to know whether we can define AOIs and use heat maps?

papr 25 March, 2022, 08:32:13

Hi, yes, this is possible via https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

papr 25 March, 2022, 08:57:18

The existing client is just meant as a template/example implementation of how to interact with Pupil Capture and how to generate reference data. I highly recommend adapting the script to do two things: 1. Display a target stimulus on each display (ideally it looks like a single one to the subject) 2. Send the corresponding target locations for each display to Capture (see https://gist.github.com/papr/fcbafd5cf748c9b11e64a4dd37ec8e9a#file-dual_display_choreography_client-py-L137-L150)

As a result, you will get gaze estimates for each display separately.

The base implementation will be inaccurate as the target locations are not reproducible (e.g. "center" will mean something different for each subject).

user-ced35b 29 March, 2022, 19:08:24

Ok, thank you! Does it make sense to essentially combine a custom screen marker choreography plugin (e.g. a nine-point screen marker choreography) with dual_display_choreography_client.py?

user-027014 25 March, 2022, 15:11:41

Hi @papr quick question: how similar should the gaze traces {'gaze_normal0_x'} {'gaze_normal0_y'} {'gaze_normal0_z'} {'gaze_normal1_x'} {'gaze_normal1_y'} {'gaze_normal1_z'} for the left and right eye be? I'm analysing some data and I find fairly large differences for all participants. The overall shape between the left and right eye seems to match, yet the scale does not, and I also notice an offset between both eyes when I compute gaze with these traces.

papr 25 March, 2022, 15:14:47

Could you please specify what you mean by "shape"? y should be more similar than x

user-027014 25 March, 2022, 15:17:23

By shape I mean the general rough estimates of where the eyes looked. See image: right is orange, blue is left. Targets were at 1 m distance, so convergence shouldn't really play a role here. Gaze computation was as described by you in a previous post:

r = sqrt(x.^2 + y.^2 + z.^2); el = acosd(y./r); az = atan2d(z,x);

So what I mean here is: yes, they do look similar, but they also differ quite a lot in terms of their offset and scale. So which eye is correct?

Chat image
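
In Python/numpy the same conversion from a 3D gaze normal to azimuth/elevation reads as follows (a direct translation of the MATLAB lines above; x, y, z are the gaze_normal components):

import numpy as np

def gaze_normal_to_az_el(x, y, z):
    # Spherical decomposition of a gaze_normal vector, in degrees
    r = np.sqrt(x**2 + y**2 + z**2)
    el = np.degrees(np.arccos(y / r))
    az = np.degrees(np.arctan2(z, x))
    return az, el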

papr 25 March, 2022, 15:34:38

Also, out of interest, have you removed low confidence values already?

papr 25 March, 2022, 15:33:09

So which eye is correct? Probably neither. gaze_normal is an estimate that depends on (1) the fitness of the eye model and (2) the calibration quality. There are many places where errors may aggregate. The calibration is optimized s.t. the 2d projection of gaze_point_3d corresponds to the recorded 2d reference targets as well as possible. gaze_point_3d is composed of both gaze_normals and averages out some of their errors.

user-027014 25 March, 2022, 15:43:03

Thanks for getting back to me. Ok, so I take it that these differences are to be expected and reasonable when comparing the individual eye cameras? Regarding your question: the data is not filtered in any way. Only NaN values are removed in order to compute gaze.

papr 25 March, 2022, 15:45:38

Some differences are expected. The amount of difference will depend on the factors mentioned above. If you want, I can try to reproduce the graph with my own data the week after next.

user-027014 25 March, 2022, 15:53:34

That would be great! So, when it comes to position data from Pupil Labs, what is the most 'raw' data that you get? I assume it is gaze_normal_x/y/z, right? From there on, do you combine the information to get norm_pos_x/y and gaze_point_3d_x/y/z, or do they all follow different pathways?

papr 25 March, 2022, 15:56:35

The former is correct. If you don't need to rely on the calibration (don't need scene camera coordinates) or are planning on building your own calibration, have a look at the circle_3d_normal pupil data.

user-027014 25 March, 2022, 16:01:12

I see, but this stream is not provided via LSL, I think. Anyway, many thanks. I think I've got a plan now.

user-99bb6b 25 March, 2022, 20:43:07

I am not sure if I am just not looking in the right place or if it is just so simple I haven't figured it out, but is it possible to receive the data from the glasses while they are recording (basically real-time data capture)? Or is there a way to basically open Pupil Capture and press export for a previously recorded file?

nmt 28 March, 2022, 07:17:33

Hi @user-99bb6b. You seem to have stated two aims here, 1) to stream real-time data from a Core system, and 2) to access recorded data without Pupil Player. If you can elaborate more on your intended use case I will try to point you in the right direction.

user-4811f9 26 March, 2022, 04:21:24

Hi! Does the HTC Vive eye tracker use corneal reflections to track gaze? It seems like Pupil Labs is using "dark pupil" detection - is that correct? Then what is the use of the IR LEDs and IR cameras? In our eye image I see 5 dots around the pupil, which I assume are coming from the IR LEDs? Thanks for the help!

nmt 28 March, 2022, 07:22:20

Hi @user-4811f9 πŸ‘‹. The VR Add-on, like Core, employs dark pupil detection rather than corneal reflection tracking. The pupil detection algorithm is designed for black and white eye images captured by an IR camera, hence the purpose of the IR illuminators.

user-5da140 26 March, 2022, 11:26:58

Hi, I'm Hani, a new user of Pupil Core. May I know, if I want to get Area of Interest results, whether I need to set the markers before or after recording? And could you please advise me on how to set them up?

nmt 28 March, 2022, 07:26:30

Hi @user-5da140, welcome to the community! Yes, you'll need to set the AprilTag markers prior to recording. Check out the documentation for a detailed overview of how to set up and use surface tracking: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

user-d50398 28 March, 2022, 15:58:14

Hello, I am wondering how I can run 2 Pupil trackers (using Pupil Capture) on the same PC? Also, to save computational cost, is it possible to first use Pupil Capture to perform the calibration, and then run the 2 Pupil Capture instances using the Python APIs (linking to the corresponding calibration files)?

user-8cef73 28 March, 2022, 16:54:58

Hi, can you download gaze overlay footage? The description of it in the product overview says you can, but it doesn't seem to be an option

nmt 28 March, 2022, 17:56:43

Yes, just enable the World Video exporter in the Player plugin menu: https://docs.pupil-labs.com/core/software/pupil-player/#world-video-exporter

nmt 28 March, 2022, 17:55:56

It's recommended to have two PCs, if possible. That said, you can open two instances of Pupil Capture on the same PC. To reduce processing requirements, I'd recommend ensuring that everything is set up correctly (eye camera positioning etc.), then disable real-time pupil detection. Calibration and pupil detection can be performed post-hoc. Note that you'll need to record a calibration choreography, one for each eye tracker. Read more about post-hoc pupil detection and calibration here: : https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection

user-d50398 29 March, 2022, 08:20:57

Many thanks Neil. May I further ask: to save processing cost, would it be possible to perform the calibration first? Then I could turn off pupil detection (while keeping Detect eye 0 and Detect eye 1 turned on - for recording the pupil videos). After collection, I could drop the recorded folder into Pupil Player for post-hoc detection of eye gaze?

user-99bb6b 28 March, 2022, 21:27:32

I am using the glasses to capture a user reading text, whether that be a sign or a book. I then take that image or video and convert the words into text. I am trying to test the software that gets the words from the image or video in 3 different ways: first with images from the world.mp4, then with just the world video, and finally in real time. Once I get them working I can optimize the real-time accuracy of the other software to make it faster. The issue I would like to start with is exporting the data without opening Pupil Player and pressing export. I was maybe thinking of using CMD commands, or whether I could use Pupil Remote to just export it, similar to how you would stop and start a recording.

nmt 29 March, 2022, 07:52:44

Thanks for clarifying. You can receive world video frames in real-time via our Network API. This example Python script shows how https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py. You should be able to run your image processing on these frames
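
The core of that helper, roughly (a sketch assuming the Frame Publisher plugin is active in Capture and set to BGR format; requires pyzmq, msgpack, and numpy):

import msgpack
import numpy as np
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("frame.world")

while True:
    # Frame messages carry the raw image as an extra message part
    parts = subscriber.recv_multipart()
    meta = msgpack.unpackb(parts[1])
    if meta["format"] != "bgr":
        continue  # this decode assumes the BGR format setting
    img = np.frombuffer(parts[-1], dtype=np.uint8)
    img = img.reshape(meta["height"], meta["width"], 3)
    # img is an OpenCV-style BGR array; run the text detection on it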

user-b74779 28 March, 2022, 21:30:47

Hi, I am still working on the wireless connection with Pupil Core. It appears that the GitHub backend that you sent me is a bit slow. I am streaming the video from the Raspberry Pi to a port on my computer; the video has no latency when I watch it, yet when I use the backend to send it to Capture it becomes REALLY slow. I don't know if it is my CPU, but I don't think so; I think the GitHub backend is not very well optimized. I would like to add a direct link between Capture and my video stream on the port, but I don't know which file I should modify to do this in your GitHub files. Do you have any idea?

user-7e7501 29 March, 2022, 02:29:28

I have had no response from Pupil Labs staff! I am trying to buy the Core product and get tech data on the NIR LED wavelength (850 nm?) for a custom app. Does anyone have a phone or email contact for Pupil Labs tech/sales staff?

nmt 29 March, 2022, 07:54:06

Hi @user-7e7501. Welcome to the community πŸ‘‹. I note that you have reached out via email. A member of the team will respond to you there πŸ™‚

user-b74779 29 March, 2022, 06:00:05

Well, I made some changes and it is now better. I decreased the image resolution; the problem comes from the backend solution. I guess it comes from the notification system: sending a payload with a "high resolution" image (640x480) might be hard sometimes, I don't know why; it is working pretty well with (320x240)... I would still prefer to configure my own source over the network rather than use the notification system. I tried to use usbip to connect over IP; Capture detects the USB device, but then when I select it... Capture crashes... If there is no other solution I will keep working with the video backend plugin, I think.

papr 29 March, 2022, 07:45:11

Hey, yes, it is likely that the network bandwidth is the limiting factor. The backend currently only supports uncompressed image formats. To optimize for bandwidth, you would need to extend both components (client and Capture side) to support reading, transferring, and uncompressing compressed video.

nmt 29 March, 2022, 08:40:15

You can indeed perform a calibration first to check that it worked okay. Important note: Make sure that you record the calibration, as disabling real-time pupil detection will also disable real-time gaze estimation. You'll need to run pupil detection and calibration post-hoc.

user-d50398 31 March, 2022, 14:02:03

Hi @nmt, following up on the question of reducing the computational cost, please let me know if the following steps are correct. I would first perform the calibration with markers and record the calibration as a normal recording file? Then I could start recording as usual. Do I need to turn on pupil detection during the calibration? Also, I suppose Detect eye 0 and Detect eye 1 should be turned on all the time, both in the calibration recordings and in the normal recordings?

user-1936a5 29 March, 2022, 13:40:44

Hello, I would like to know whether Pupil Cloud charges anything?

nmt 29 March, 2022, 13:53:12

I've responded in the Invisible channel πŸ™‚: https://discord.com/channels/285728493612957698/633564003846717444/958363045699158037

user-19e084 29 March, 2022, 20:54:36

Hi all, I'm new here. Hoping someone can help me with a technical issue with my Pupil Core. I own two Pupil Core headsets, both with bilateral pupil cameras and world cameras. One headset works just fine, the other displays the pupil cameras but returns the attached error when I start Pupil Capture. My inclination is to think that the world cam is broken here, but I'm not sure how to check this. I'm seeing (2) Pupil Cam2 ID0 cameras in my device manager under "libusbK USB Devices" but can't seem to identify a world camera. Any advice here?

Chat image

nmt 30 March, 2022, 07:20:54

Hi @user-19e084 πŸ‘‹. In the first instance, please follow these instructions to troubleshoot the camera drivers installation: https://docs.pupil-labs.com/core/software/pupil-capture/#windows

user-64deb6 30 March, 2022, 07:20:18

Hi guys, I want to calculate the actual distance of eye gaze from one point to another point on the screen (which is calibrated), but the norm_pos information obtained from the gaze datum is based on the world coordinate system rather than on the coordinate system of my screen. How can I get the result I want? 1. Should norm_pos be multiplied by the actual size of the scene captured by the world camera? If so, how can this actual size be obtained? 2. Is it possible to get norm_pos based on the coordinate system of my screen, so that I can directly multiply norm_pos by the actual size of the screen to get the data I want?

nmt 30 March, 2022, 07:27:55

Hi @user-64deb6. You can use the Surface Tracker Plugin + AprilTag markers to map gaze from scene camera coordinates to screen-based coordinates. This will get you norm_pos relative to your screen. Read more about that here: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
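
Once a surface is defined, gaze mapped to it can also be read in real time over the Network API; a sketch (the surface name "screen" and the screen dimensions are placeholders, and the boilerplate mirrors the gaze subscription example further above):

import msgpack
import zmq

SCREEN_W_CM, SCREEN_H_CM = 52.7, 29.6  # measure your own screen

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("surfaces.screen")  # surface name as defined in Capture

while True:
    topic, payload = subscriber.recv_multipart()
    surface = msgpack.unpackb(payload)
    for gaze in surface["gaze_on_surfaces"]:
        if not gaze["on_surf"]:
            continue  # skip gaze that fell outside the surface
        x_norm, y_norm = gaze["norm_pos"]
        # surface-normalized coordinates scale directly to physical size
        print(x_norm * SCREEN_W_CM, y_norm * SCREEN_H_CM)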

user-64deb6 30 March, 2022, 11:30:35

Thank you very much, but I also want to ask whether I still need to calibrate after opening Pupil Capture before I use the Surface Tracker Plugin + AprilTag markers?

user-b0efbb 30 March, 2022, 10:14:15

Hello, I'm trying to use Pupil Core with an Intel MacBook (Monterey 12.3). However, while the device is detected by Pupil Capture, I only see a grey screen. What should I do to get it working?

nmt 30 March, 2022, 10:30:03

@user-b0efbb, you'll need to run Pupil Capture with administrative privileges on MacOS Monterey. Please see the release notes for detailed instructions: https://github.com/pupil-labs/pupil/releases/tag/v3.5

user-b0efbb 30 March, 2022, 10:34:36

Thanks for this. Much appreciated.

nmt 30 March, 2022, 13:04:56

No problem πŸ™‚. Yes, you'll need to calibrate. Calibration is a necessary step to obtain gaze data. Surface tracking is an additional step to map that gaze to surface-based coordinates.

user-64deb6 30 March, 2022, 13:41:49

Thanks a lot.

user-092531 30 March, 2022, 18:10:39

Not sure if this is the right channel to ask in, but I'm working on a research project with underwater eye tracking. We have a lens over the camera as part of the waterproof enclosure. Is there any way to get a readout of what effect this has on the IR? Whether it's filtering wavelengths or reducing the intensity?

user-b9005d 30 March, 2022, 18:31:04

In the gaze_positions.csv export, the website says that the norm_pos_x and y positions are normalized coordinates. What are these normalized coordinates? I.e., what range of coordinates is there across the whole image?

user-b49db3 31 March, 2022, 09:20:53

Hi all. I can start Pupil Capture (Mac Monterey) using sudo and the world camera shows, but for some reason the eye camera is gray. The problem is apparently the following: eye1 - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied. I wonder if anyone could suggest something. The glasses work fine on a Windows PC.

nmt 31 March, 2022, 10:04:20

Hi @user-b49db3. Please ensure that the wired connection to the eye camera is secure, then try restarting Pupil Capture with default settings (you'll find that option in the General Settings menu of Pupil Capture)

user-b7f1bf 31 March, 2022, 09:50:24

Hi all, recently my Pupil Core stopped working. When I launch Pupil Capture it says video_capture.uvc_backend: Could not connect to device! No images will be supplied.

user-b7f1bf 31 March, 2022, 09:50:49

I followed these instructions https://docs.pupil-labs.com/core/software/pupil-capture/#windows to no avail

user-b7f1bf 31 March, 2022, 09:51:17

I found the drivers, uninstalled them, rebooted, and ran Pupil Capture with admin privileges, but now the drivers don't even show up in Device Manager anymore.

user-b7f1bf 31 March, 2022, 09:51:30

Could it be that my device is broken?

nmt 31 March, 2022, 10:01:55

Hi @user-b7f1bf. We have responded in our email thread πŸ™‚

user-b7f1bf 31 March, 2022, 10:05:54

Thank you! I'll try the things you mentioned πŸ™‚

nmt 31 March, 2022, 10:32:41

Normalized space is ~~coordinates are~~ bound between 0 and 1, e.g. image centre: x=0.5, y=0.5 . You can read more about Pupil Core's coordinate systems here: https://docs.pupil-labs.com/core/terminology/#coordinate-system

user-b9005d 31 March, 2022, 10:54:48

That’s what I thought it was as well, but my output csv includes values above 1 at certain x and y timestamps. Would there be a reason for this?

nmt 31 March, 2022, 11:05:56

I made a slight edit to my previous response. The normalized scene camera frame is bound between 0 and 1. However, gaze can be estimated outside of the scene camera's field of view. So normalized gaze coordinates exceeding those bounds are not unexpected.

user-b9005d 31 March, 2022, 11:08:57

Okay, thank you for the quick response!
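
For reference, converting those normalized coordinates to pixel coordinates is a one-liner; a sketch assuming a 1280x720 scene frame (in Pupil's normalized space the origin is the bottom left, while image pixel origins are top left, hence the vertical flip):

def norm_to_pixels(x_norm, y_norm, width=1280, height=720):
    # Values outside [0, 1] simply land outside the frame
    return x_norm * width, (1.0 - y_norm) * height

print(norm_to_pixels(0.5, 0.5))  # image centre -> (640.0, 360.0)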

user-d1bcb6 31 March, 2022, 11:59:08

Hi all! I am using one Pupil Core with a MacBook. I have also used it in a dual eye-tracking setup with another MacBook, Pupil Groups, and Time Sync. Now I intend to buy a new computer for the second tracker. What do you recommend? Should I go for a MacBook to ensure that the two connect easily, or would Windows work fine? Do Windows editions have any benefits over Mac? (I am particularly interested in recording sound, which is no longer available for macOS versions.)

nmt 31 March, 2022, 14:41:42

Hi @user-d1bcb6 πŸ‘‹. Mac or Windows will work with your Pupil Groups set up. The key specs are CPU and RAM. Since you are already using MacBooks, I'd be inclined to go for a Mac as your new addition. Pupil Core software performs very well with M1 silicon (that's what I use). Just be aware that you'll have to run Pupil software with administrative privileges on macOS Monterey. In order to record sound (Windows, Mac or Linux), you'll need to use third-party software, such as Lab Streaming Layer's audio capture + our LSL integration.

user-d1bcb6 31 March, 2022, 14:46:10

Thanks, Neil, great. Good to know which system works stably. With the audio, I intend to follow the recommendations that you posted here: https://discordapp.com/channels/285728493612957698/285728493612957698/887783672114208808 Do you still recommend exactly this? Do you expect it will work on macOS Monterey? (With my current machine I have not been successful: AudioCapture wants me to use a later version than I have, but when I use the later version, the Pupil Labs software doesn't work anymore... πŸ™‚)

nmt 31 March, 2022, 14:47:45

Can you clarify which version of macOS your Pupil Core installation doesn't work with?

user-d1bcb6 31 March, 2022, 14:50:19

No, I am afraid I cannot, since I downgraded back. Also, I am using an old version of the Pupil Core software (1.12.17), because I have further processing adapted to it.

nmt 31 March, 2022, 14:53:47

Ah. You'll need to upgrade to the latest version of Pupil software if you intend to use it with macOS Monterey. What specifically is preventing you from upgrading?

user-d1bcb6 31 March, 2022, 14:57:01

I see. I post-process the data from two eye trackers to overlay gaze paths on the world camera video with an external utility, which might not work after the updates. (But I get that I'd better solve that πŸ™‚)

Still, regarding sound, what would your advice be? Go for a Mac, install Monterey, and try the steps from the previous messages?

nmt 31 March, 2022, 15:08:53

Audio recording is possible with the LSL integration, yes. Note, however, that the LSL plugin adjusts Capture's timebase to synchronize Capture's own clock with the LSL clock. This means that time synchronization will potentially break if the Pupil Time Sync plugin is active. Also note that while the audio is captured, there are no utilities to easily play it back in synchrony with the recorded scene video.

user-d1bcb6 31 March, 2022, 15:33:22

Oops. Thanks a lot for looking seriously into my situation. What if I use the LSL plugin on the Clock Master? Would LSL then adjust the Master's time, and Pupil Time Sync adjust the time on the second machine?

Do I understand it right that I can (1) approximately synchronise the outcome with the exported video, e.g. with ffmpeg, and (2) use exact timestamps to draw an audiogram on the frames of this video?

Last question: is there any combination of an old Pupil Capture version + macOS where the old built-in sound recording still works? (Actually, this is not only my problem; I know other researchers who keep their old versions precisely to keep sound recording working.)

papr 01 April, 2022, 10:23:53

> What if I use the LSL plugin on the Clock Master? Would LSL then adjust the Master's time, and Pupil Time Sync adjust the time on the second machine?

That should work. I recommend setting up the time sync config first, then starting the LSL Relay plugin on the Clock Master. Starting the LSL Relay on the clock followers will not work.

> Do I understand it right that I can (1) approximately synchronise the outcome with the exported video, e.g. with ffmpeg, and (2) use exact timestamps to draw an audiogram on the frames of this video?

I am not sure about (1). Once you have time-synced timestamps, (2) should be doable.

Have you tried the LSL Recorder plugin for Capture? Instead of changing the time base, it stores LSL data in Pupil time (https://github.com/labstreaminglayer/App-PupilLabs/blob/lsl_recorder/pupil_capture/pupil_capture_lsl_recorder.py#L162-L163) in a CSV file. Using an external script, one should be able to transform this CSV file into a Pupil Player-compatible audio file. Using this CSV, it should also be possible to implement (2).
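
As a rough, untested sketch (the CSV column names, the file names, and the sample rate below are assumptions; check them against your actual LSL Recorder export), such an external script could look like this:

```python
import numpy as np
import pandas as pd
from scipy.io import wavfile

# Assumptions (verify against your export): one row per audio sample,
# a "timestamp" column in Pupil time, and one amplitude column "channel_0"
SAMPLE_RATE = 44100  # nominal rate of the LSL audio stream (assumed)

df = pd.read_csv("lsl_recording.csv")  # hypothetical file name
samples = df["channel_0"].to_numpy(dtype=np.float32)

# Write a plain WAV file with the raw samples
wavfile.write("audio.wav", SAMPLE_RATE, samples)

# Pupil recordings pair media with a <name>_timestamps.npy file holding one
# Pupil timestamp per sample/frame; saving the CSV timestamps alongside the
# WAV keeps the data available for time-based alignment, e.g. step (2)
np.save("audio_timestamps.npy", df["timestamp"].to_numpy())
```

The exact file naming and container that Pupil Player accepts for audio is worth verifying against an old recording before relying on this.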

nmt 01 April, 2022, 10:13:10

It's our LSL Plugin that updates Capture's timebase to sync with the LSL clock. So AFAIK, you would need to avoid using Pupil Time Sync. I'm unsure about the best way to merge the audio waveform with the scene video. I think @papr will be able to provide more insight on those things next week. We made the decision to remove audio capture when we released Pupil Capture v2.0. Unfortunately, this feature was too difficult to maintain across our supported operating systems.

user-ced35b 31 March, 2022, 19:00:18

Hello, could someone point me to the original screen marker choreography code/plugin? Or any default screen marker or calibration choreography code? I found the custom nine-point calibration plugin but was hoping I could get the base code. I am trying to implement displaying a visual target in my dual_display_choreography_ client (not too sure how to do this, since it's not a plugin that I'm subclassing but rather a client that enables a plugin). Thank you!

End of March archive