core


user-695fcf 01 November, 2021, 12:56:19

Hi, I am trying to do some simple calculations of time in between fixations, but they do not seem to match up with the "Time since prev. fixation" and "Time to next fixation" from fixation detection in Pupil Player. Are the values in Player measured from start_timestamp to start_timestamp, or from end to start? For example, one fixation says .11 seconds to next fixation in Player, but the following fixation starts .48 seconds after it (when looking through the exported fixation data). Am I misunderstanding the values, or how am I supposed to interpret this?

user-7bd058 01 November, 2021, 14:56:03

hello, in one of our videos a participant wears glasses and pupil detection is bad. So I used post-hoc pupil detection and narrowed down the region of interest. This worked out perfectly; pupil detection was way better after that. The eye confidences went from 0.14 up to 0.83. Still, this did not change anything in the fixations. The fixations were as bad as before. I thought it had to do with the confidence, but obviously it did not. I tried this with a lot of eye-tracking data; the post-hoc pupil detection improved things a little bit, but it did not change anything for the fixations. Then I performed a post-hoc calibration afterwards through manual edit mode (although the video already included a functional calibration), but then it was all messed up. In general, when I activate post-hoc calibration I get the impression that the eye-tracking data gets much worse. It seems almost impossible to get a good post-hoc calibration.

user-7bd058 01 November, 2021, 14:56:29

here a picture before post hoc pupil detection

Chat image

user-7bd058 01 November, 2021, 14:56:47

and here after post hoc pupil detection

user-7bd058 01 November, 2021, 14:57:25

it's clearly visible that pupil detection now works way better but nothing else has changed

Chat image

user-7bd058 01 November, 2021, 14:57:56

this is the result of post hoc calibration. Often it results in absolutely chaotic fixations

Chat image

papr 01 November, 2021, 15:01:00

Hi, this is how we calculate time between fixations: https://github.com/pupil-labs/pupil/blob/121ab4778297f1ce9f2318927616eee4e7154f66/pupil_src/shared_modules/fixation_detector.py#L617

I think there is indeed an error in our calculation. The idea is to calculate (timestamp_start_next - timestamp_end_previous). But the code is missing a pair of parentheses for that. Alternatively, the code needs to subtract, not add, the duration. Thanks for catching that!
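For anyone computing this from the export themselves, here is a minimal sketch of the intended calculation (start of the next fixation minus end of the previous one), assuming the Player fixations.csv export with start_timestamp in seconds and duration in milliseconds:

import pandas as pd

fixations = pd.read_csv("exports/000/fixations.csv")
# end of each fixation = start + duration (duration assumed to be exported in ms)
end_prev = fixations["start_timestamp"] + fixations["duration"] / 1000.0
# gap = start of the next fixation minus end of the current one
fixations["time_to_next_fixation_s"] = fixations["start_timestamp"].shift(-1) - end_prev
print(fixations[["id", "start_timestamp", "time_to_next_fixation_s"]].head())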

user-695fcf 02 November, 2021, 08:28:05

Ahh I see, good to know. Thanks for the answer!

papr 01 November, 2021, 15:09:54

Hi, so, yes, you need to recalibrate in order to make use of the post-hoc detected pupil data for anything gaze related, specifically fixations in your case.

The post-hoc calibration plugin loads all calibrations that were made during the recording and makes them available in the "gaze mapper" sub-menu. When a recording is started, Pupil Capture stores the last known calibration to the recording. If you have not calibrated immediately before the recording start, odds are high that this last-known calibration is very inaccurate. It is possible that the post-hoc calibration selected this inaccurate calibration for the post-hoc gaze mapping. Please look at the gaze mapper sub-menu and check which calibration has been selected. If it says "Recorded Calibration", my theory of what is happening would be confirmed.

In order to make use of the post-hoc pupil data to its fullest, I recommend creating a new post-hoc calibration in the calibration sub-menu. If you recorded the calibration you can use the automatic reference location detection to find reference data which then can be used for a new calibration. Otherwise, you will need to manually annotate locations where you know the subject gazed at.

Please let me know if anything is unclear 🙂

user-7bd058 01 November, 2021, 16:07:59

First, thank you very much for your answer. I already did a new calibration via automatic reference location detection and also chose it in the drop-down menu. Then I got something like this. It's worse than the recorded calibration without post-hoc processing, but I don't get why

Chat image

user-4f89e9 01 November, 2021, 15:27:38

So would you be able to save/record calibrations for a bunch of people then re-open them depending on the person? We want to run 5 tests on about 20 people and need everyone to finish each test before moving on, we just need to know what surface they're looking at for the last 3 tests so it doesn't need to be perfect.

papr 01 November, 2021, 15:29:14

Technically yes, but I highly recommend making separate recordings for everyone.

papr 01 November, 2021, 15:44:50

@user-4f89e9 To clarify this statement: It is recommended to start a new recording for each subject. This way, when you open a recording in Player, it is clear which subject the recorded calibration belongs to.

user-4f89e9 01 November, 2021, 15:36:09

yeah sorry, that's what I meant. Ideally I want to save 20 calibrations instead of calibrating for all 100 runs

papr 01 November, 2021, 15:37:30

Yes, calibrating each subject is definitely a best practice.

user-4f89e9 01 November, 2021, 15:43:10

so to reopen the calibration for person 1 after person 20 has finished, what would I have to do? I'm currently using an old version of Pupil Capture, so maybe there are some buttons I'd need to update to find

papr 01 November, 2021, 15:45:31

The necessary buttons are the R button to start and stop recordings, and the C button to start and stop calibrations.

user-4f89e9 01 November, 2021, 16:01:44

I guess I misunderstood what you said. I was hoping to calibrate and record test 1 for each person (20 recordings), then for test 2: open each person's corresponding calibration and record test 2 (another 20), then repeat until there are 100 recordings but we only needed to calibrate the first 20 times.

papr 01 November, 2021, 16:06:21

Ah, this is not possible in Capture. If you want to test the calibration accuracy after taking off the glasses, I suggest the following protocol: For each subject: 1. Start recording 2. Calibrate 3. Test 4. Take off the glasses and put them back on 5. Test 6. Repeat steps 4+5 as often as necessary 7. Stop the recording

papr 01 November, 2021, 16:07:09

As soon as you calibrate, you are overwriting the previous calibration. Technically, it could be possible to load a prior calibration but this involves writing a custom script that interacts with the network api.

user-4f89e9 01 November, 2021, 16:10:03

ok yeah, that's what I wasn't sure about. When you mentioned that the gaze mapper could check which calibration was used, I got my hopes up

papr 01 November, 2021, 16:11:20

Well yes, but this is in Player. Technically, you can import calibrations from other recordings. From what I understood, you needed to load prior calibrations in realtime.

papr 01 November, 2021, 16:14:38

Mmh. I can't tell the reason from this description alone. Would it be possible for you to share this recording with data@pupil-labs.com such that we can have a concrete look?

user-7bd058 01 November, 2021, 16:15:24

sure, thank you

user-4f89e9 01 November, 2021, 16:15:02

I figured that if I was going to load a prior calibration I'd need to make sure it works correctly. Otherwise I'd risk saving a full recording, importing it, finding out it's bad, and making them do it again lol

papr 01 November, 2021, 16:16:28

You get real-time feedback after performing the calibration. Isn't this information sufficient for you?

user-4f89e9 01 November, 2021, 16:15:11

but ok ill just recalibrate each time

user-4f89e9 01 November, 2021, 16:44:04

Well, if person 3 (P3) has glasses, my first test/recording with them would be super accurate in real time and in Pupil Player. But in the 2nd test for P3, the real-time view is using the last calibration (so P20's), meaning it might look super inaccurate in real time but perfectly fine in Player after applying the P3 calibration from test 1

papr 01 November, 2021, 16:46:55

That would be correct. Could you elaborate on why you need to test all 20 people before coming back to the first person? How much time is there between each test? Why don't you simply recalibrate at the beginning of each test?

user-4f89e9 01 November, 2021, 17:05:09

We're comparing a company's training solution against alternate methods. So we need to test a variety of people, and to ensure that the subjects don't quickly memorize the instructions we need time between the tests, which will just end up being however long it takes to loop everyone through. We're expecting each test to take about 1-2 minutes, so we wanted to cut down time however possible

user-4f89e9 01 November, 2021, 17:11:08

So we can (and will) calibrate each time; I was just hoping to save some time since our subjects are generally busy

user-eeecc7 02 November, 2021, 02:15:12

Hi, is there a standard set of steps for running this on Ubuntu? Neither the .deb nor building with CMake has been working for me so far.

papr 02 November, 2021, 07:06:44

Installing the deb file is the standard way on Ubuntu. When double clicking the file, the software installer should open

user-7daa32 02 November, 2021, 14:36:25

I'm really interested in the reason for recalibrating offline and how it's done. Is it done only via Pupil Labs' Python API?

papr 02 November, 2021, 15:39:08

Hi, the post-hoc calibration was introduced with Pupil Mobile, which was not able to perform a calibration in real time. Nowadays, Pupil Mobile is no longer supported but the post-hoc calibration feature has stayed in Player. It can help you apply a manual offset correction or recompute the calibration with post-hoc detected pupil data. In short, it can help you manually optimise your data. Everything is possible through the user interface. Please see our documentation for details.

user-7daa32 02 November, 2021, 14:37:59

How can one sync the eye camera timestamps to the recording computer's clock?

papr 02 November, 2021, 15:40:29

This one cannot be done via the UI. Please see this notebook as a reference. https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb

user-1a6a43 02 November, 2021, 19:21:30

hello! I have installed the LSL plugin for Pupil Core, but I can't find it as an extension

user-1a6a43 02 November, 2021, 19:21:53

Chat image

user-1a6a43 02 November, 2021, 19:23:02

is it there and I just don't see it?

user-1a6a43 02 November, 2021, 19:23:07

I followed the instructions from here

user-1a6a43 02 November, 2021, 19:30:26

good news, fixed the problem

user-1a6a43 02 November, 2021, 19:30:30

but I have a new question

user-1a6a43 02 November, 2021, 19:30:56

I want to set up the pupil core to recognize settings, specifically pupil_capture_settings, from a public folder

user-1a6a43 02 November, 2021, 19:31:13

instead of the local folder

user-1a6a43 02 November, 2021, 19:31:34

that way multiple users on the same computer can all access the same plugin without having to reinstall for all accounts

user-b9005d 02 November, 2021, 20:16:53

The problem ended up being that the laptop we were using was still on the High Sierra OS, which the newest version of the Pupil programs wasn't compatible with. So after updating the laptop to Catalina and restarting the newest version of the Pupil programs with default settings, the problem was resolved! 😄

user-364e5c 02 November, 2021, 20:22:13

Thanks for the quick reply! I'm on Mojave (between High Sierra and Catalina).

user-d7d258 03 November, 2021, 18:42:54

Hi there, I'm at a research lab at Ohio State and we're having trouble with vis circles (and other markings) coming up in our raw videos as viewed in the player before toggling anything. When I add a vis circle in Player, it's superimposed. We want the videos to be totally clean in export. Is there anything I can do in 1) Pupil Player to clean up our previous recordings and 2) Pupil Capture to prevent the markings in the first place?

papr 03 November, 2021, 20:28:15

The visualizations are only visible because these plugins are enabled by default. Each visualization plugin has a remove button within its menu. With that you can turn off all visualizations. Pupil Capture records the raw video without any visualizations.

user-d7d258 03 November, 2021, 20:29:41

I don't have any of the plugins toggled other than World Video Exporter…

papr 03 November, 2021, 20:31:36

Non-unique plugins like vis circle can be enabled multiple times. Each instance creates a menu entry in the right icon bar. Clicking on these will open their menu from which each instance can be closed. The plugin manager can only close unique plugins.

user-d7d258 03 November, 2021, 20:32:19

Thanks, I'll try taking a look at that tomorrow

user-55fd9d 04 November, 2021, 12:40:59

Hi everybody! I am trying to map gaze onto my screen, so I calibrate the gaze position by scaling and adding an offset, and it works quite nicely when the head is stable (figure 000). However, when I turn my head a bit aside (the whole surface is still in the view of the world camera), the offset of the gaze shifts (figures 001 - 003; the red line represents the fit from 000). I tried 2D and 3D gaze calibration but both have the same issue. Does someone have an idea how I could solve the problem so that I don't have the offset when I rotate my head?

user-55fd9d 04 November, 2021, 15:23:23

I believe it may happen because of the camera distortions.

user-55fd9d 04 November, 2021, 12:41:11

On the left subplots you can see position of a moving fixation target in black, gaze position on a predefined surface from pupil-capture in blue and calibrated gaze position in red. On the right figures I plotted target position vs gaze position and then fitted a line to illustrate gain and offset for calibration.

Chat image

user-55fd9d 04 November, 2021, 12:41:12

Chat image

user-55fd9d 04 November, 2021, 12:41:14

Chat image

user-55fd9d 04 November, 2021, 12:41:15

Chat image

user-55fd9d 04 November, 2021, 12:43:30

The surface. I tried different numbers of tags. This is the last attempt xD

Chat image

user-9db68b 04 November, 2021, 18:58:12

Hi folks! Glad this community exists 🙂 I am looking into whether the Pupil Core (or Invisible) would be a good purchase for my use and thought I'd ask for your input. I am a researcher looking to use an eye-tracking device for a human subject experiment. I am mainly interested in tracking clinicians' (expert) gaze when evaluating patients performing rehabilitation exercises. Since the object of the gaze will not be a nicely defined rectangular shape that I can identify with AprilTags, I was wondering how I would go about defining areas of interest. Is it possible to define areas of interest post-hoc (i.e. based on world video recordings)? Or does the software only allow the user to define AOIs prior to data collection? If you know any resources on a similar set-up please share!

papr 05 November, 2021, 13:00:35

In this case, I would highly recommend using Pupil Invisible for its easier workflow. Since you are specifically interested in gaze on body parts, I would suggest writing a script that uses one of the existing body-part detection neural networks available on the internet, applying it to the scene video post-hoc, and then checking which body part the subject is gazing at in every frame. I would be happy to assist you with writing such a script.

papr 05 November, 2021, 12:56:02

Hi, I am not sure if I am 100% following your process. Are you scaling and offsetting the scene camera gaze or the surface mapped gaze? Are you collecting the data in real-time or do you use the Player exports?

user-55fd9d 05 November, 2021, 13:41:08

Hi, I am using Player exports and I take the data from "gaze_positions.csv", columns "norm_pos_x" and "norm_pos_y". I want to calibrate the gaze position against my screen so that e.g. a gaze position of [0.5, 0.5] corresponds to the screen's [0.5, 0.5], and so on.

From the info txt: "norm_pos_x - x position in the world image frame in normalized coordinates norm_pos_y - y position in the world image frame in normalized coordinates"

user-7daa32 05 November, 2021, 13:20:13

Hello everyone

I'm here again!

Can anyone help me with how to remove bad data, in such a way that we can still get all fixations? We interpolated to remove blinks, off-screen samples, and other bad data but still have fixations within the blinks. Is there source code we could modify so the bad data gets smoothed out and fixations are recalculated?

Can we get fixation code and run it in Excel?

papr 08 November, 2021, 08:57:11

You can find the post-hoc fixation detector algorithm implementation here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L162 In line 169-171, we remove gaze data with low confidence. There, you could apply additional filtering+interpolation if you wanted.

But to be honest, that is likely more difficult than performing the cleanup+fixation detection outside of Player.

The algorithm is fairly simple as to how it classifies gaze data as a fixation: 1. Collect a sequence of gaze positions that has a minimum duration of X milliseconds. 2. Check if the gaze data has a dispersion/spread of less than Y degrees. 3. Extend the sequence with subsequent gaze positions until the constraint from step 2 is no longer fulfilled. 4. Classify the sequence as a fixation.

It is called a minimum-duration-maximum-dispersion-type fixation detection algorithm.
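For illustration only, here is a rough sketch of that minimum-duration/maximum-dispersion idea. This is not the Player implementation linked above; samples are assumed to be (timestamp in seconds, unit 3d gaze direction) pairs:

import numpy as np

def dispersion_deg(directions):
    # largest pairwise angle (degrees) within a set of unit vectors
    d = np.asarray(directions)
    cos = np.clip(d @ d.T, -1.0, 1.0)
    return np.degrees(np.arccos(cos)).max()

def detect_fixations(samples, min_duration_s=0.1, max_dispersion_deg=1.5):
    fixations, start, n = [], 0, len(samples)
    while start < n:
        # 1. collect a window spanning at least the minimum duration
        end = start + 1
        while end < n and samples[end - 1][0] - samples[start][0] < min_duration_s:
            end += 1
        if samples[end - 1][0] - samples[start][0] < min_duration_s:
            break  # not enough data left for another fixation
        # 2. check the dispersion constraint
        if dispersion_deg([v for _, v in samples[start:end]]) <= max_dispersion_deg:
            # 3. extend while the constraint still holds
            while end < n and dispersion_deg([v for _, v in samples[start:end + 1]]) <= max_dispersion_deg:
                end += 1
            # 4. classify the window as a fixation (start time, end time)
            fixations.append((samples[start][0], samples[end - 1][0]))
            start = end
        else:
            start += 1
    return fixations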

user-7daa32 06 November, 2021, 13:16:22

I really thought I could find some answers to this. Please help, thanks

papr 05 November, 2021, 13:42:32

And did you have a look at the surface mapped data already?

papr 05 November, 2021, 13:45:40

That is the gaze_positions_on_surface_<name>.csv file

user-55fd9d 05 November, 2021, 13:47:00

I was just about to ask if that one would be the right. I didn't try this one. I'll do then. Thanks!

papr 05 November, 2021, 13:47:29

If everything is setup correctly, there would be no need for manual correction

user-9db68b 05 November, 2021, 13:50:42

Hi papr! Thanks for your input. I am curious, what is the main benefit of using Invisible over Core in this case? I saw somewhere that Invisible doesn't support fixations, right? So if I chose to go with Invisible I would only get gaze positions(?)

papr 05 November, 2021, 13:53:53

Pupil Cloud will be able to provide fixation data very soon! The main benefit is that Invisible is much easier to use compared to Core. @marc might be able to elaborate on this, too.

marc 05 November, 2021, 14:04:16

Yes, fixation detection will be available in Pupil Cloud very soon (hopefully still this month) and we believe it will be the most robust fixation detection available for mobile eye trackers. I agree with @papr that Pupil Invisible is probably the better choice for you. The biggest advantages of Pupil Invisible are that 1) it does not require any setup or calibration, 2) it works in every environment and 3) it works well even if the subject moves a lot. Therefore the overall user-flow for data acquisition is very convenient and there is not a lot one can do wrong. The setup and requirements for Pupil Core and other traditional eye trackers are more complex. The advantage of Pupil Core is that the accuracy of the signal, when calibrating well and following all the requirements, is a bit better. To distinguish e.g. which limb a clinician is focusing on, this level of accuracy is however not needed.

We are planning on building tools for automatically tracking humans and individual body parts in Pupil Cloud, but I do not expect a release before Q1 next year.

user-9db68b 05 November, 2021, 14:06:37

Awesome, thank you so much! I was wondering as well about limitations related with having subjects wear face masks when collecting the data (due to COVID prevention measures) -- won't that cause the glasses on the Invisible to fog up and affect detection?

marc 05 November, 2021, 14:10:42

You're welcome! The lenses might fog up a bit as with regular glasses, but this does not affect the functionality. The cameras directed at the eyes are located between the lenses and the eyes, so they are not "looking through" the foggy lenses and they do not fog up themselves.

user-9db68b 05 November, 2021, 14:11:39

Aha cool -- thanks for clarifying! This is super helpful

user-7daa32 06 November, 2021, 14:10:22
  1. Can we modify the raw gaze data (remove bad data points and blinks), import it back into Pupil Player, and recalculate fixations?

  2. Would the 2D data calculated in the 3D pipeline be approximately the same as the 2D data calculated in the 2D pipeline?

Question 1 is the same as what I asked before. @papr @marc Thanks in advance

papr 08 November, 2021, 09:03:34
  1. As described above, it is technically possible but you need knowledge of the internal workings of Pupil Player to get it working.
  2. If you have a perfectly fit 3d model, then yes. If the 3d model is not fit well, the generated 2d data will be incorrect as well. This is visualized by the dark blue (2d pipeline) and red (3d pipeline) pupil outlines in the eye window. They will only overlap if the eye model is fit well.
user-7daa32 06 November, 2021, 17:35:35

I don't know if setting the blink plugin parameters appropriately will avoid having fixations within the blink ranges of the raw data

papr 08 November, 2021, 09:05:39

The blink and fixation detectors work independently of each other. They do not influence each others' results. If you wanted to combine their results you would need to do so externally after exporting the detected fixations and blinks.

user-f84304 06 November, 2021, 18:50:59

Hello everyone! I am doing a study about eye movements using Pupil Labs hardware and I have some questions. 1. Can we receive and visualize the surface scene in real time from Pupil Capture? I found an example that can only visualize the frames from the world and eye cameras. It does not seem possible to receive and visualize the predefined surface's frames. 2. Can we subscribe to multiple topics using the IPC Backbone at the same time? I need to get surface, gaze, and blink data for each moment in real time to visualize or analyze them, but I didn't find any example of using it this way. Can you give me any suggestions please? Thank you.

papr 08 November, 2021, 09:12:45

Hi 👋 I will start with your second question as it will help to answer the first one. 2. Yes, you can subscribe to multiple topics at once. Make sure to be able to process the data in real time as the network API would eventually start dropping data. You can simply call subscribe() multiple times on the SUB socket. https://pyzmq.readthedocs.io/en/latest/api/zmq.html#zmq.Socket.subscribe 1. You can subscribe to and receive the scene video and surface locations in real time. You can then use the surface locations to visualize the surface within the scene or apply the homography to crop the scene video to just the surface content.
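For reference, a minimal sketch of subscribing to several topics at once, assuming Pupil Capture runs locally with Pupil Remote on port 50020 (topic names such as "surfaces." and "blinks" require the corresponding plugins to be enabled):

import zmq, msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")  # ask Pupil Remote for the SUB port
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
for topic in ("gaze.", "surfaces.", "blinks"):  # subscribe() can be called repeatedly
    sub.subscribe(topic)

while True:
    frames = sub.recv_multipart()
    topic, payload = frames[0], frames[1]
    datum = msgpack.unpackb(payload, raw=False)
    print(topic.decode(), datum.get("timestamp"))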

user-7daa32 08 November, 2021, 09:10:11

Thanks so much. It is late here and I need to go over all you said again. I mean we highlighted the blink data, but still have fixations as an aggregate of data under the blinks. We thought that there shouldn't be fixations when there are blinks.

Sorry for interruption. Please say all you need to say. Thanks so much

papr 08 November, 2021, 09:15:02

As I said, the fixation and blink detection plugins work independently. The fixation detector will drop low confidence data, e.g. during blinks. If you looked at the same position before and after the blink, this position will likely be classified as a fixation. If you do not want to count these as valid you will have to discard them in a step after exporting the data from Pupil Player.
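As one possible post-export step, here is a small sketch that drops fixations overlapping a detected blink. Column names are assumed from the Player fixations.csv/blinks.csv exports, with fixation duration in milliseconds:

import pandas as pd

fixations = pd.read_csv("exports/000/fixations.csv")
blinks = pd.read_csv("exports/000/blinks.csv")

fix_start = fixations["start_timestamp"]
fix_end = fix_start + fixations["duration"] / 1000.0  # duration assumed to be in ms

def overlaps_blink(start, end):
    # True if any blink interval overlaps the fixation interval
    return ((blinks["start_timestamp"] < end) & (blinks["end_timestamp"] > start)).any()

keep = [not overlaps_blink(s, e) for s, e in zip(fix_start, fix_end)]
clean_fixations = fixations[keep]
print(f"kept {len(clean_fixations)} of {len(fixations)} fixations")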

user-b7f1bf 08 November, 2021, 15:50:13

Hey guys, I'm writing a script in MATLAB where I pull monocular gaze data from the gaze.2d.1. topic. The norm_pos seems to contain the coordinates in world space. However, I want the coordinates within the calibrated area (i.e. my screen). How can I go about this? Many thanks

papr 08 November, 2021, 16:01:46

You will need to set up surface tracking (https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking) and subscribe to surface-mapped gaze (Python example: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py)

user-4d0392 08 November, 2021, 20:48:40

Hi everyone, I am using a Pupil Core device for a research study. The cameras were working properly, but for some reason one of the cameras stopped working. The error shows: eye1 - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied. I tried relaunching the program, but that did not help. How can I solve this? Thanks in advance!

papr 09 November, 2021, 07:59:02

Please contact info@pupil-labs.com in this regard :)

user-4f89e9 08 November, 2021, 20:52:23

not to be that guy but did you unplug and replug it, that usually fixes the issue for me

user-4d0392 08 November, 2021, 20:53:18

Yeah multiple times, did not help

user-4f89e9 08 November, 2021, 20:52:40

at both connections

user-e0402b 09 November, 2021, 09:53:09

do you have any office in India?

papr 09 November, 2021, 09:53:31

No 🙂 Feel free to post any questions here or to info@pupil-labs.com

user-e0402b 09 November, 2021, 09:54:48

is there any chance to pay after delivery of the Pupil Core?

papr 09 November, 2021, 09:55:34

I am fairly sure that this would not be possible, but please reach out to the email address above to get a definitive answer 🙂

user-4f89e9 09 November, 2021, 17:40:48

any idea what could have caused these errors?

Chat image

papr 09 November, 2021, 17:42:22

Unfortunately, not. It is a known error which will be handled gracefully by our next release. It might be possible that it is related to (a) no network interface being available, or (b) the network interface not providing an IPv4 address. But I am not sure.

user-4f89e9 09 November, 2021, 17:41:27

worked fine yesterday

user-4f89e9 09 November, 2021, 17:46:19

maybe just try redownloading then?

papr 09 November, 2021, 17:47:21

Could you please share the capture.log file? Maybe I can tell you which network interface is causing the error.

papr 09 November, 2021, 17:46:33

This won't help the issue. 😕

user-4f89e9 09 November, 2021, 17:46:59

is there a work around then

user-4f89e9 09 November, 2021, 17:47:51

yeah what folder is it in

papr 09 November, 2021, 17:48:16

User directory -> pupil_capture_settings

user-4f89e9 09 November, 2021, 17:51:49

DMed

papr 09 November, 2021, 18:00:50

For reference: Disabling all VPN connections in the network settings fixed the issue.

user-4f89e9 09 November, 2021, 18:01:04

as shown here

Chat image

user-7daa32 09 November, 2021, 18:24:37

Thanks for all the responses, they helped a lot. Here another question please

How similar are the 2D data (the normalized X and Y position records) recorded using the 3D method and those recorded using the 2D method?

user-e22b51 09 November, 2021, 21:26:07

The error in degrees reported by the Accuracy Visualizer, what exactly is it? If the viewing angle error is reported, then what is the viewing distance used to calculate it? Is there a way to get the reference locations from the world camera image translated into 3d world camera coordinates? Thanks

papr 10 November, 2021, 08:07:57

It is viewing angle error. But it is not calculated at a specific viewing distance but by unprojecting the 2d coordinates into 3d viewing directions. Capture then calculates the angular difference between these 3d vectors. We use the camera intrinsics to transform coordinates between the two coordinate systems. https://docs.pupil-labs.com/core/terminology/#camera-intrinsics
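The underlying geometry can be sketched like this (not the Capture implementation; camera_matrix/dist_coefs are placeholders for the scene camera intrinsics):

import cv2
import numpy as np

def unproject(point_2d, camera_matrix, dist_coefs):
    # undistort + unproject a pixel location into a 3d viewing direction
    p = np.asarray(point_2d, dtype=np.float64).reshape(1, 1, 2)
    x, y = cv2.undistortPoints(p, camera_matrix, dist_coefs).ravel()
    v = np.array([x, y, 1.0])
    return v / np.linalg.norm(v)

def angular_error_deg(gaze_px, ref_px, camera_matrix, dist_coefs):
    a = unproject(gaze_px, camera_matrix, dist_coefs)
    b = unproject(ref_px, camera_matrix, dist_coefs)
    return np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0)))

The reported accuracy is then the mean of this angular error over all reference points.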

papr 10 November, 2021, 08:05:34

If the 3d eye model is fit perfectly the values will be the same.

user-7daa32 10 November, 2021, 14:33:10

Thank you. Can we note, then, that the 2d method has better accuracy than the 3d method? Which might imply that 2d data collected with the 2d method are more accurate than 2d data collected with the 3d method?

user-e242bc 10 November, 2021, 14:10:05

hi, I have a question regarding the recorded file "gaze_positions.csv". I am confused: are "norm_pos_x" and "norm_pos_y" recorded for only one eye? If so, which eye is recorded? And are "gaze_point_3d_x/y/z" recorded for one eye as well? But I see "gaze_normal_x/y/z" with subscript 0 or 1 are recorded for the two eyes. What's the difference between them? Thank you!

papr 10 November, 2021, 14:38:27

Capture tries to combine data from both eyes if possible (binocular mapping). In some cases, gaze is only mapped monocularly. Check out the base_data column. It contains information about which pupil data (eye id + timestamp) was used. Also, see our documentation of the file here https://docs.pupil-labs.com/core/software/pupil-player/#gaze-positions-csv
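A small sketch for checking this in the export, assuming base_data contains space-separated "timestamp-eyeid" pairs as documented:

import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")

def eyes_used(base_data):
    # e.g. "0+1" = binocular, "0" or "1" = monocular
    ids = sorted({pair.rsplit("-", 1)[-1] for pair in str(base_data).split()})
    return "+".join(ids)

gaze["eyes_used"] = gaze["base_data"].apply(eyes_used)
print(gaze["eyes_used"].value_counts())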

papr 10 November, 2021, 14:42:16

My prior statement assumed perfect 2d data. This is actually not always the case, e.g. if the pupil is occluded by the eyelid/lashes. In cases of low 2d confidence, the 3d eye model performs a 3d search which can yield a better result than the raw 2d input. But yes, in most cases, the raw 2d input fits the pupil better than 3d-eye-model-generated 2d data. The 3d eye model data has a series of advantages though, e.g. being able to correct pupil radii for foreshortening and corneal refraction errors.

user-7daa32 11 November, 2021, 14:53:38

Thanks so much. This helps. The 3D pipeline seems to have the most advantages. I want to let you know that we are going to use a chin rest for our experiment, in a room lit with fluorescent light

user-e22b51 10 November, 2021, 19:50:19

thank you for your prompt answer. I would like to clarify whether I understood the algorithm. The gaze point and the reference positions from the scene camera image are converted into 3d lines that go from the center of the world camera to each estimated point, and the angle between these lines in the world camera's 3d space is the reported error. Then the mean is taken over all the reference points during the validation/accuracy test. Did I summarize correctly? Thank you

papr 10 November, 2021, 19:52:17

Yes!

user-869b6e 11 November, 2021, 06:44:20

Hello everyone, I am Japanese. Can you tell me how to install background_helper in Python? When I ran pip install background_helper, I got the following errors:
ERROR: Could not find a version that satisfies the requirement background_helper (from versions: none)
ERROR: No matching distribution found for background_helper

papr 11 November, 2021, 10:36:16

Hi, the background_helper is a module that ships with Pupil Capture. It is not installable like other modules. Could you elaborate on what you need it for?

user-80316b 11 November, 2021, 08:11:09

Hi, I'm currently confronted with the question of how to analyze my data. I have two setups which I had to build up on different days, so it's not as accurate as the Pupil Labs heat map example with the shelf. And I will take further measurements at other locations with the same setups. So my question is whether it's possible to separate the world view into areas to calculate the viewing time in those areas, and if so, whether there is any existing code for this. As I'm not a programmer, I'm not able to write my own code. Thanks a lot

papr 11 November, 2021, 10:34:46

Hey, I think you are referring to defining areas of interest. Please see our surface tracking feature for more information https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

user-55ae67 11 November, 2021, 10:29:44

Hello, I am quite new to Pupil Labs eye trackers. I can collect audio data when I connect Core to a Mac, but it seems audio recording does not work on Windows. Am I missing something? I selected "Sound only" under Audio Mode under General Settings if it is relevant information. What else do I need to do? Thank you!

papr 11 November, 2021, 10:31:48

Hi, audio recording is no longer supported on any of the three platforms. I am assuming that you are using an older version on your Mac. The Audio Mode in the general settings refers to the short audio signals that are played back during calibration.

user-55ae67 11 November, 2021, 10:32:39

Oh, I see. Thank you for the information. Then I guess we will need to use another external device to collect simultaneous speech data?

papr 11 November, 2021, 10:33:46

Correct. See also this message https://discord.com/channels/285728493612957698/285728493612957698/887783672114208808

user-e22b51 11 November, 2021, 13:47:36

one more question: what is the precision? Is it the encompassing cone that contains all the 3d lines when the gaze dwells on a reference at a specific location, or the mean of the angles between 3d lines corresponding to successive gaze points? Thank you

papr 11 November, 2021, 13:48:13

The latter

user-364e5c 11 November, 2021, 13:54:47

I will be assessing eye movements at a viewing distance of 3-6m. Can I use the single marker for calibration?

papr 11 November, 2021, 14:10:11

Yes, you can, but make sure that your subject performs the accompanying choreography during the calibration. (either a) subject keeps head steady and follows the moving marker, or b) the subject fixates the steady marker while rotating their head)

user-364e5c 11 November, 2021, 14:13:42

Does the calibration target have to remain in the world camera's view for the length of the calibration choreography, or could the subject rotate their head too wildly?

papr 11 November, 2021, 14:16:44

It is ok if the marker leaves the camera's field of view. The biggest issue with moving the head too quickly is motion blur. In these cases, the marker detection might fail. One head rotation should take 1-2 seconds. Ideally, the subject performs multiple of these with different radii of the rotation or a spiral-like movement.

user-364e5c 11 November, 2021, 14:19:29

If I calibrate appropriately, are there any other settings/plug-ins that I have to be mindful to activate before conducting an experiment, or can everything be adjusted post-hoc? For example, but not limited to, fixation detection and blink detection, which are currently on default settings?

papr 11 November, 2021, 14:22:43

The most important thing to consider before the experiment is the adjustment of the cameras. Make sure they are oriented and exposed correctly. If the eye images are under/overexposed you won't be able to correct for that. All other analyses can be done post-hoc. It is recommended to run pupil detection and calibration in real-time though to get feedback about the success of the configuration. I saw that you posted the same question via email. I will mark this email as "answered" internally.

user-e242bc 11 November, 2021, 15:06:10

Hi, I want to use the data for both eyes separately, so I look into this 'gaze.pldata' file. I see that you provide information for both eyes separately in the 'base_data', and also some combined information for both eyes. I can see that e.g. 'confidence' and 'timestamp' are averaged over two eyes, but I am confused about 'norm_pos' and ''gaze_point_3d'', how do you combine the data for them? and how could I find/calculate the 'gaze_point_3d' for every single eye? Thank you!

user-d50398 11 November, 2021, 16:17:44

Hello everyone, can someone let me know how I should perform the calibration in an appropriate manner? After finishing the detection of eye 0 and eye 1, I tried many times to calibrate the eye tracker using the screen marker of Pupil Capture (see my video: https://youtu.be/52f1sOnYaRs) but was never successful. There were errors: "world calibration fail" or "not enough ref point". Hope to receive your support!

nmt 11 November, 2021, 17:11:03

Hi @user-d50398 👋. It looks like you need to toggle 'Detect eye 0' and 'Detect eye 1' in the main settings of Pupil Capture. Once you have done that, be sure to position the eye cameras such that the pupils are clearly visible, like in this video: https://docs.pupil-labs.com/core/#_3-check-pupil-detection

nmt 11 November, 2021, 17:02:27

Hi @user-e242bc. Pupil's 3D gaze estimation pipeline defines gaze positions from the 3D intersection (gaze_point_3d) of the normal vectors of the left and right eyes (gaze_normal0/1). We then project this point onto the world camera plane, which gives us norm_pos. Thus, for individual eye direction, look at the gaze_normals. Check out the online documentation for details of data made available: https://docs.pupil-labs.com/core/software/pupil-player/#gaze-positions-csv

user-e242bc 11 November, 2021, 20:16:25

Hi Neil. Thank you for your answer. I want to know how I can transform (gaze_normal0/1) to (gaze_point_3d0/1), since I want to calculate the gaze velocity for each eye as you showed in the tutorials. I am also interested in how you project this point to (norm_pos). Thank you again!

user-d50398 11 November, 2021, 19:21:32

Hi @nmt, thank you very much, it is solved now. One more question: is it possible to launch 2 Pupil Capture apps at the same time to control 2 eye trackers? If so, how could I set up the sockets (with 2 different ports) to start/stop recording on both eye trackers? Also, to reduce the computation cost, how could I disable the visualization (of Pupil Capture) during recording?

nmt 12 November, 2021, 10:11:09

It's recommended to use a different computer for each eye tracker. If both computers are connected to the same network, you can enable the Pupil Groups Plugin (see screenshot). When you start a recording in one instance of Capture, the other will start/stop synchronously. For integrating this functionality in code, check out the developer docs: https://docs.pupil-labs.com/developer/core/network-api/#pupil-groups In order to reduce computation cost, I'd recommend a) disabling all unused plugins like online fixation filter, surface tracker etc. b) disable real-time pupil detection (be sure that everything is set up properly first, i.e. good pupil detection, calibration choreography was recorded). You can then run pupil detection and calibration post-hoc

Chat image

user-d50398 12 November, 2021, 15:43:08

Thank you very much for your instructions! I have tried to connect 2 eye trackers to 2 PCs on the same network and turn on the Pupil Groups plugin in both Pupil Capture instances (https://docs.pupil-labs.com/developer/core/network-api/#pupil-groups). However, it turns out that they could not detect each other, as in the image below. I have turned off the firewall on both PCs. So, are there any steps in the instructions that I missed?

Chat image

nmt 12 November, 2021, 14:20:20

@user-e242bc gaze_point_3d is the intersection of both eyes' direction vectors (gaze_normal0/1). The 3d point is projected onto the camera image plane using the camera intrinsics. View the code for binocular case here: https://github.com/pupil-labs/pupil/blob/eb8c2324f3fd558858ce33f3816972d93e02fcc6/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L301 Are you referring to this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb ? You can do everything in the tutorial with the existing raw data export
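For intuition, the projection step can be sketched roughly like this (the intrinsics below are placeholders; the real values come from the recording's camera model, and Pupil's norm_pos convention has its origin at the bottom left of the image):

import cv2
import numpy as np

camera_matrix = np.array([[794.0, 0.0, 640.0],
                          [0.0, 794.0, 360.0],
                          [0.0, 0.0, 1.0]])  # placeholder scene camera intrinsics
dist_coefs = np.zeros(5)
width, height = 1280, 720

def project_to_norm_pos(gaze_point_3d):
    pts = np.asarray(gaze_point_3d, dtype=np.float64).reshape(1, 1, 3)
    img_pts, _ = cv2.projectPoints(pts, np.zeros(3), np.zeros(3), camera_matrix, dist_coefs)
    x_px, y_px = img_pts.ravel()
    # normalized coordinates, y flipped relative to pixel coordinates
    return x_px / width, 1.0 - y_px / height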

user-e242bc 12 November, 2021, 14:33:43

@nmt Thank you very much! So is there any way that I can calculate the velocity of each eye separately? Because in the tutorial, the calculation with "gaze_point_3d" gives only one value?

papr 12 November, 2021, 14:34:18

You can use gaze_normal0/1 instead of gaze_point_3d (or even use circle_3d_normal from the pupil data (equivalent to gaze_normal0/1 but in eye instead of scene camera coordinates))

user-e242bc 12 November, 2021, 14:43:18

yep. (gaze_point_3d) is in "mm", right? It has a different unit than (gaze_normal0/1); should I convert the normals to "mm" before calculating the velocity as in the tutorial? If so, how can I convert this?

papr 12 November, 2021, 14:45:59

The tutorial converts the 3d vectors into spherical coordinates (psi, theta, r) and calculates the velocity for psi and theta only. In other words, the length of the input vector does not matter. The tutorial generalizes to all Euclidean 3d vectors.
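For instance, a per-eye variant of the tutorial could look roughly like this (column names assumed from the raw gaze_positions.csv export; gaze_normal0 is treated as a unit vector):

import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv").dropna(
    subset=["gaze_normal0_x", "gaze_normal0_y", "gaze_normal0_z"])

x = gaze["gaze_normal0_x"].to_numpy()
y = gaze["gaze_normal0_y"].to_numpy()
z = gaze["gaze_normal0_z"].to_numpy()
t = gaze["gaze_timestamp"].to_numpy()

theta = np.degrees(np.arccos(np.clip(z, -1.0, 1.0)))  # polar angle
psi = np.degrees(np.arctan2(y, x))                    # azimuth

# angular speed in deg/s (repeat with gaze_normal1_* for the other eye)
velocity = np.hypot(np.diff(theta), np.diff(psi)) / np.diff(t)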

user-e242bc 12 November, 2021, 14:47:12

I understand now. Thank you very much! Have a nice weekend!

papr 12 November, 2021, 14:47:19

You too!

papr 12 November, 2021, 15:56:28

@user-d50398 Could you please try again and share the capture.log files? Unfortunately, the log files only contain some basic information about the pupil groups plugin. In addition, it broadcasts information via notifications. These can be monitored with this script: https://gist.github.com/papr/1f32d08202d5c130e15aee9509963b70

It requires zmq and msgpack to be installed [1]. By default, it monitors the notifications on the same machine as the script is running. With the -ip flag, you can monitor other instances, too.

The script logs the received notifications to the terminal and into a log file monitor_notifications_<ip>_<port>.log. Please run the script and then start all Pupil Capture instances and the Groups plugins. Feel free to turn the plugins on and off to generate more log messages. Afterward, please share the log files with us.

[1] pip install pyzmq msgpack

user-d50398 12 November, 2021, 17:21:31

Thank you @papr. When I run monitor_notification.py on the local PC (127.0.0.1), only the information from the local Pupil Capture instance is captured. Vice versa, when I changed the IP address to the second PC, only the information from that PC's Pupil Capture could be captured. Please see the log files attached. The same thing happened when I ran pupil_remote_control.py (https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_remote_control.py): only the single Pupil Capture instance at the specified IP could be triggered.

papr 12 November, 2021, 17:49:56

You can run the script twice, once for each PC. Please also provide the capture.log files, which can be found in the pupil_capture_settings folders. They complement the monitor logs.

user-d50398 12 November, 2021, 17:22:07

monitor_notifications_127.0.0.1_50020.log

user-46e145 12 November, 2021, 19:06:09

Hi everyone, I'm having trouble with one of the eye cameras, is this the right place to ask for help?

papr 12 November, 2021, 20:13:35

Please contact info@pupil-labs.com in this regard

user-8a4f8b 13 November, 2021, 00:49:46

hi, has anybody had success using Pupil Core outdoors where the lighting conditions are not stable (natural light)?

user-9a05f5 15 November, 2021, 02:13:04

Hi, I've used the Pupil Core in outdoor daylight shaded conditions (1000-3000 lux) and it works, but with lower accuracy than comparable indoor conditions (same calibration procedure, target viewing tasks, etc.). The eye cameras tend to pick up more glare/shadows and there is also more variation in contrast for the detected pupils. The lighting also varies more between the two eye cameras.

user-e56ae9 14 November, 2021, 17:01:16

With an IMU

Chat image

nmt 15 November, 2021, 10:18:51

Hi @user-e56ae9 👋. This looks like an interesting addition to the headset. Would you mind sharing some details of your use case?

user-bf5c9a 15 November, 2021, 09:38:07

Hey, sorry for this, I guess there is a fix / instruction for this (which I simply can't find), but I have trouble getting Pupil Capture running on an M1 Mac with macOS Monterey. It seems like the app does not ask for permission to use the camera and I can't grant it manually. Is there any good workaround or will I have to compile from source? Thanks in advance 🙂

papr 15 November, 2021, 09:42:35

Hi, currently the only way to access the cameras on macOS Monterey is to run from source, installing libusb via brew install libusb --HEAD, and then running Capture with root privileges.

I am currently working on a release that ships this newer version of libusb. It is likely that one will have to run the bundled application with root privileges, too. See https://github.com/libusb/libusb/issues/1014 for reference

user-bf5c9a 15 November, 2021, 09:43:13

Thanks for the quick answer! I will try this then 🙂

user-364e5c 15 November, 2021, 12:39:31

Calibration questions: 1) Where are the calibration and validation data stored? 2) If a single marker or natural features have been used for calibration, what is the validation choreography? 3) Is the calibration or the validation data referred to before deciding to continue with the recording? 4) What accuracy limit is typically used experimentally (I've read 1.5 degrees)?

nmt 15 November, 2021, 14:48:40

1) Calibrations can be stored in three places: a) The most recent calibration is stored in the Pupil Capture settings folder as prerecorded_calibration_result. Capture will use this by default until a new calibration is performed b) When a new recording is made, prerecorded_calibration_result is also stored in notify.pldata found in the recording folder; validations are stored in notify.pldata when performed during a recording. Note that these files require some basic coding to access, and as such it might be better to document the results manually for easier reference c) Post-hoc calibrations are stored as *.plcal files in the recording's calibration subdirectory – this is useful if you want to transfer calibrations to other recordings (https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection)

2) You can use the same choreography as before. For the natural features, you would ideally ask the subject to fixate other target locations than during the calibration

3) Validation is preferable as it verifies the accuracy of a previous calibration. See this message for further details: https://discord.com/channels/285728493612957698/285728493612957698/838705386382032957

4) See this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/806138691734208552, and this message for calculating required accuracy depending on stimulus size: https://discord.com/channels/285728493612957698/285728493612957698/836509594780041246

user-d50398 15 November, 2021, 14:12:30

Hello, a quick question related to the USB cable. Do you think there would be any potential problems (e.g. data distortion) if we extend the USB cable connecting the eye tracker to the PC by, say, 3 more meters?

nmt 15 November, 2021, 14:50:07

You may run into data transfer issues with those lengths. USB repeaters can help in such circumstances.

user-4f89e9 16 November, 2021, 15:10:25

Does recording in Pupil capture utilize the GPU or is it all just CPU encoding

papr 16 November, 2021, 15:11:53

It uses the CPU. The video itself is not re-encoded by default. Pupil Capture will store the raw mjpeg buffers from the camera.

user-4f89e9 16 November, 2021, 15:11:51

or does it also depend on the version

user-4f89e9 16 November, 2021, 15:12:56

dang ok

user-e56ae9 17 November, 2021, 09:02:16

Sure, some electrical engineering students got the job of integrating the IMU with the glasses. I'm using the glasses for a deep learning course project where the goal is to use eye tracking for vertigo symptom detection.

nmt 17 November, 2021, 13:44:35

Thanks for sharing. That's very interesting! Are you using the modified headset to collect NN training data?

user-d50398 17 November, 2021, 13:14:51

Hi, I am trying to use Pupil Groups on two Pupil Capture instances (on 2 PCs in the same network). When I edit the NAME in the Pupil Groups plugin of one Pupil Capture, the following messages show up on the other Pupil Capture:

2021-11-17 13:09:10,326 - eye0 - [WARNING] pyre.pyre_node: Peer None isn't ready
2021-11-17 13:09:10,326 - eye1 - [WARNING] pyre.pyre_node: Peer None isn't ready
2021-11-17 13:09:10,326 - eye1 - [WARNING] pyre.pyre_node: Peer None isn't ready
2021-11-17 13:09:10,328 - world - [WARNING] pyre.pyre_node: Peer None isn't ready
2021-11-17 13:09:10,328 - world - [WARNING] pyre.pyre_node: Peer None isn't ready

Could you please let me know what was the problem?

papr 22 November, 2021, 11:11:04

While we investigate the issue, you can use this script to start and stop recordings of multiple Pupil Capture instances https://gist.github.com/papr/33426d01d8f817a74d3e7247ab4dc29f

You can call the script like this

python start_stop_recording.py addr1 addr2 ...

addr* needs to follow this format: <ip>:<port>, example 127.0.0.1:50020
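For illustration, the core of such a script is just a few Pupil Remote commands sent to each address ("R" starts a recording, "r" stops it); the sketch below is not the gist itself:

import sys
import zmq

ctx = zmq.Context()
sockets = []
for addr in sys.argv[1:]:          # each addr given as <ip>:<port>
    s = ctx.socket(zmq.REQ)
    s.connect(f"tcp://{addr}")
    sockets.append(s)

def send_to_all(command):
    for s in sockets:
        s.send_string(command)
        print(s.recv_string())     # Pupil Remote replies to every command

send_to_all("R")                   # start recording on every instance
input("Recording... press Enter to stop.")
send_to_all("r")                   # stop recording everywhere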

user-d90133 17 November, 2021, 15:54:37

Hello! So with some subjects I've run into a problem where eyelashes, particularly long and dark eyelashes, are being misinterpreted as the pupil by the eye cameras (even with eyes wide open), thus hindering any chances of collecting accurate fixation data. Is there any way to change the lighting of the cameras so that they don't pick up the eyelashes, or another way to circumvent this issue? I feel that this is a substantial roadblock in collecting data from many potential female subjects, so any solution would be really helpful!

papr 17 November, 2021, 15:56:21

Hey, check out the ROI mode in the eye window's general settings. It allows you to set an area of interest for the pupil detection. You should adjust it such that the eyelashes are excluded

user-d90133 17 November, 2021, 16:06:10

Awesome, thanks! I'll try that.

user-f14633 18 November, 2021, 12:54:40

Dear Pupils! I found many entries in my capture.log saying that clipboard can not be accessed.

papr 18 November, 2021, 12:56:14

Hi, yes, unfortunately this is the case on some Windows machines. 😕 We were not able to find the underlying reason for it, yet.

user-f14633 18 November, 2021, 13:00:42

Okay, thanks for the quick response. So I try not to worry about this any more...

user-e22b51 19 November, 2021, 00:04:52

Hi, how is the gaze point calculated in 3d space? The right eye's gaze line does not intersect the left eye's gaze line; it comes close, but there is no intersection. So I am wondering how this is done if you do not know the distance to the screen. Thank you

wrp 19 November, 2021, 04:08:11

@user-e22b51 the gaze vectors geometrically do not need to intersect - 3d gaze estimation depends on bundle adjustment. https://github.com/pupil-labs/pupil/blob/eb8c2324f3fd558858ce33f3816972d93e02fcc6/pupil_src/shared_modules/gaze_mapping/gazer_3d/bundle_adjustment.py

See how it is used here: https://github.com/pupil-labs/pupil/blob/eb8c2324f3fd558858ce33f3816972d93e02fcc6/pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py#L89

Hope this helps 😸

user-da621d 19 November, 2021, 08:32:41

Hi, I want to inquire about the angle output from the eye tracker. It shows my rotation angle is 163 degrees, but actually it is not. Even when I stare at one point, the angle output is not the same.

Chat image

papr 19 November, 2021, 11:43:16

Hi, This angle refers to the rotation of the 2d ellipse, not the angle of the viewing direction 🙂

Chat image

user-da621d 19 November, 2021, 08:33:29

Why does the eye tracker output this angle?

user-1aa696 19 November, 2021, 08:55:53

Hi, I have a question regarding the pupil camera configuration. I want to run a Pupil Core at 200Hz, and I wonder what the maximum manual exposure time is that I should configure. I assume that the pupil camera exposure time setting is in milliseconds; at 200Hz there is a 5ms inter-frame period, so the exposure time can at best be 5ms (probably shorter), but at 5ms the picture is pretty dark, and if I set the exposure time to 10 or even higher I get better intensity images and the FPS counter still implies the camera is operating at ~200Hz. Questions:

user-1aa696 19 November, 2021, 08:56:10

1) Is the exposure control in milliseconds? Found the answer: it is in units of 0.1 ms, so for 200Hz up to probably 30 should be fine (though with a rolling shutter, 5 ms might actually be achievable?)
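As a rough sanity check on those units (assuming the exposure cannot exceed the frame interval): at 200 Hz the inter-frame period is 1/200 s = 5 ms, i.e. 50 units of 0.1 ms, so values up to roughly 50 would still fit within one frame period.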

user-1aa696 19 November, 2021, 08:56:30

2) What is the maximum value than can/should be used for 200Hz operation?

user-1aa696 19 November, 2021, 10:00:50

Feature request: in the Pupil Capture windows it would be helpful if the labels contained information about the units. Like e.g. in 2D "Pupil min [pixel]" instead of the current "Pupil min", and in camera "Absolute Exposure Time [0.1 ms]" instead of just "Absolute Exposure Time", and more or less the same for all parameters that have relevant units. Alternatively pop-up tool tips could be used if that additional text is deemed too much, but these can be annoying.

papr 19 November, 2021, 11:46:41

2) I recommend selecting a value that gives you a well lit image. If I remember correctly, the target frame rate takes priority over the selected exposure time. In other words, the camera will not allow selecting an exposure time that would require a lower frame rate.

user-1aa696 19 November, 2021, 14:56:00

Excellent! User-friendly UI (compared to e.g. Intel's viewer for RealSense devices...)

user-d50398 19 November, 2021, 13:44:27

Hi, may I ask: if I ran the pupil detection plugin, how could I get the world.mp4 file (RGB images) after collection without the gaze position overlaid on it?

papr 22 November, 2021, 11:09:06

Pupil Player only exports active visualizations. If you open the visualization plugins' menus, there is a "close" button at the top to disable them.

Alternatively, you can use ffmpeg to convert the intermediate video recording: ffmpeg -i world.mp4 -vf format=yuv420p world_compatible.mp4

user-05e570 19 November, 2021, 16:12:52

Hello, I have the same problem, but this solution doesn't work

user-04dd6f 19 November, 2021, 16:19:34

Just got the new MacBook Pro with macOS Monterey and have the same question. The solution above doesn't seem to work for me because of some conflicts with Anaconda... Is there any other solution? Or when is the next version that is compatible with Monterey coming?

papr 19 November, 2021, 16:15:53

Did you have an existing pupil source installation or did you just set it up from scratch?

papr 19 November, 2021, 16:24:30

@user-05e570 @user-04dd6f We are in the final testing phase of our release. It should be published next week. It will support accessing cameras on macOS Monterey but will require you to run the application with root privileges

user-04dd6f 23 November, 2021, 01:53:00

Good evening, I would like to know if there is an exact release date within this week for the version which is compatible with macOS Monterey, and where it will be available for download (GitHub or the official website)? Many thanks again!! 🙂

user-04dd6f 19 November, 2021, 16:26:27

Thanks a lot, looking forward to the new version..

user-86d8ec 19 November, 2021, 19:55:20

Hi, All of a sudden the world camera no longer works. Both eye cameras are working fine. I do not see the world camera in the Device Manager (windows 10) either. Is there any way to troubleshoot this? How could I test for a bad connection?

papr 22 November, 2021, 09:34:36

Hi, please contact info@pupil-labs.com in this regard.

user-a4ad21 20 November, 2021, 19:13:33

Hi, can I use it with any USB camera? For the DIY version?

papr 22 November, 2021, 09:36:41

Please see the first paragraph of this message https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498

user-48fec1 21 November, 2021, 21:10:48

Hi. Completely new to eye tracking but looking forward to integrating it with an EEG experiment. I wanted to ask how the world coordinates are defined (e.g. what is the origin)? Also, if someone has ever used LSL with Pupil Core: there is a channel called confidence, but what exact data is this confidence referring to?

papr 22 November, 2021, 09:43:06

Hi, welcome to the channel! Please see my notes below:
- coordinate systems - Gaze is mapped in the scene camera's coordinate system, see https://docs.pupil-labs.com/core/terminology/#coordinate-system
- confidence - Please see https://docs.pupil-labs.com/core/terminology/#confidence for reference
- reconstructing fixations - yes, this is technically possible
- saccades - You can either use the cyclopean gaze (gaze_point_3d_xyz) or the circle_3d_normal from the pupil data. For the difference between pupil and gaze data see https://docs.pupil-labs.com/core/terminology/#pupil-positions
- LSL recording - We have developed a new plugin that allows you to record LSL streams, e.g. your EEG data, in Pupil Capture. You no longer need the LSL Recorder application. Instead, Capture records the EEG data into a CSV file within its native recording. EEG time is synced to native Capture time. https://github.com/labstreaminglayer/App-PupilLabs/blob/71fd79dd0e503272ae97befe9fbc5760b3740152/pupil_capture/pupil_capture_lsl_recorder.py

user-48fec1 21 November, 2021, 21:15:57

1. If I want to reconstruct fixation intervals, is it possible to use the gaze_point_3d_xyz recorded using LSL alongside the timestamps and then apply the dispersion-duration based algorithm?

user-48fec1 21 November, 2021, 22:37:54

1. If I want to later implement a saccade detection algorithm, are saccades usually calculated from gaze point movements (gaze_point_3d_xyz) or from eye movements (e.g. eye_center)?
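
For reference, a rough sketch of what a velocity-based saccade detector over the cyclopean gaze could look like, using the gaze_point_3d_x/y/z columns of a Pupil Player gaze_positions.csv export. The file path, column names, and the 300 deg/s threshold are assumptions to verify against your own export and data; this is not an official Pupil Labs implementation.

```python
import numpy as np
import pandas as pd

# Load a Pupil Player gaze export (path and column names assumed).
gaze = pd.read_csv("exports/000/gaze_positions.csv")

vecs = gaze[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()
vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit gaze directions
t = gaze["gaze_timestamp"].to_numpy()

# Angle between consecutive gaze directions, converted to degrees per second.
cos_angle = np.clip(np.sum(vecs[1:] * vecs[:-1], axis=1), -1.0, 1.0)
angle_deg = np.degrees(np.arccos(cos_angle))
velocity = angle_deg / np.diff(t)

SACCADE_THRESHOLD = 300.0  # deg/s; ballpark value, tune for your own data
is_saccade_candidate = velocity > SACCADE_THRESHOLD
print(f"{is_saccade_candidate.sum()} samples above threshold")
```
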
user-da621d 22 November, 2021, 04:49:00

Hello. What's the accuracy of the parameter phi?

user-da621d 22 November, 2021, 04:57:13

And what is a world image frame?

papr 22 November, 2021, 09:44:10

This refers to one image taken from the headset's scene camera. (world is another term for scene in our case)

user-1aa696 22 November, 2021, 11:01:29

Quick question regarding camera intrinsics, https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation differentiates between default intrinsic estimates and Newly estimated camera intrinsics. Am I correct in assuming that only the latter class will create the <camera name>.intrinsics files in the capture_settings folder, while the former will just call https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L26-L152 and apply these default corrections unless there are <camera name>.intrinsics in capture_settings?

papr 22 November, 2021, 11:06:12

Yes, this is the expected behavior.
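
For reference, a minimal sketch for inspecting such a <camera name>.intrinsics file, assuming the msgpack layout written by camera_models.py (a top-level "version" key plus one entry per resolution). The file name and the exact keys are assumptions; verify them against the Pupil version in use.

```python
import msgpack

# Path and msgpack layout assumed; adjust to your capture_settings folder.
with open("capture_settings/Pupil Cam1 ID2.intrinsics", "rb") as f:
    data = msgpack.unpack(f)

for key, value in data.items():
    if key == "version":
        print("file version:", value)
        continue
    # 'key' should be a resolution string, e.g. "(1280, 720)"
    print(key, value.get("cam_type"))
    print("camera matrix:", value.get("camera_matrix"))
    print("distortion coefficients:", value.get("dist_coefs"))
```
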

user-da621d 22 November, 2021, 13:35:15

Looking left increases phi. So I want to ask: what's the accuracy of the parameter phi? Can phi indicate the pupil's horizontal rotation angle?

papr 22 November, 2021, 14:35:07

Yes, it does, but within the eye camera coordinate system which can be rotated in relation to the head, e.g. looking to the real left might change phi and theta, and not only phi. The accuracy depends on the quality of the 2d input pupil detection and how well the 3d eye model is fit.

user-da621d 22 November, 2021, 13:35:29

@papr

user-da621d 22 November, 2021, 15:37:03

If my head keeps still and I only rotate my eyes, can the parameter phi indicate the rotation angle, and what is the accuracy?

papr 22 November, 2021, 16:52:11

phi and theta are polar coordinates of the gaze angle in eye coordinates. If you take the difference between two phi/theta value pairs, you can calculate the horizontal and vertical rotation. Regarding the accuracy, I cannot be more specific than above. The accuracy is directly related to the eye model fit: the better the eye model fit, the better the phi/theta accuracy.
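
A small illustrative sketch of the phi/theta difference calculation described above. The sample values are made up, and the resulting angles are relative to the eye camera coordinate system (which is rotated with respect to the head), not to the head itself.

```python
import math

def rotation_between(sample_a, sample_b):
    """sample_* are dicts with 'phi' and 'theta' from pupil data (radians)."""
    d_phi = math.degrees(sample_b["phi"] - sample_a["phi"])        # ~horizontal
    d_theta = math.degrees(sample_b["theta"] - sample_a["theta"])  # ~vertical
    return d_phi, d_theta

# Example with made-up values:
a = {"phi": -1.62, "theta": 1.45}
b = {"phi": -1.48, "theta": 1.52}
print(rotation_between(a, b))  # degrees of rotation between the two samples
```
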

user-da621d 22 November, 2021, 15:37:33

@papr thank you for being so kind

user-4f89e9 22 November, 2021, 16:47:41

When recording in Capture, does the performance for a single camera (specifically the world view) benefit from multi-threading, or is core speed more important?

papr 22 November, 2021, 16:52:33

Generally, core speed is the most important factor to run Capture smoothly

user-4f89e9 22 November, 2021, 16:54:22

What about RAM speed? Any knowledge about performance on DDR3 systems?

papr 22 November, 2021, 16:55:12

All currently available RAM is quick enough. The amount of RAM is more relevant, especially if you want to open longer recordings in Pupil Player

user-4f89e9 22 November, 2021, 16:56:29

Perfect, I'll reach back out if I run into anything interesting.

user-e22b51 22 November, 2021, 17:48:18

Hi, I noticed a lot of references saying that you can evaluate the kappa angle between the optical axis and the visual axis using a one-point calibration. Can anybody please describe how to do this with the Pupil Labs Core system? I am not sure what the reference standard would be in this case. Thank you

user-4f89e9 22 November, 2021, 19:58:27

sooo what about windows 8

papr 22 November, 2021, 19:58:58

Not supported :)

user-4f89e9 22 November, 2021, 19:59:01

knew it

user-4f89e9 22 November, 2021, 19:59:12

🙃

user-4f89e9 22 November, 2021, 19:59:28

Can't blame you, I forgot how bad this was until just now

papr 22 November, 2021, 20:00:25

🔫 🔫 🔫 always has been

user-6448ad 22 November, 2021, 20:53:17

Hi, is it possible to download earlier versions of Pupil?

user-6448ad 22 November, 2021, 20:54:08

found it

user-75e3bb 22 November, 2021, 20:58:39

Hello, how can I fix a lagging video in Pupil Player? Most of the video plays fine, but it often skips a second or so.

user-4f89e9 22 November, 2021, 21:29:59

Watch your task manager while recording. I'm willing to bet it's at 100% usage

papr 22 November, 2021, 21:00:48

Otherwise, a question of computational resources. If you have many visualization plugins enabled it will require more resources.

papr 22 November, 2021, 20:59:58

When you play back the recording, there should be a number below the play button, saying something like 1x/2x/etc. Does it say 1x or something else?

user-75e3bb 22 November, 2021, 21:02:25

The play speed is normal, and even if I just leave it as it is, it skips. The Core glasses do heat up while recording; could this be a problem?

papr 22 November, 2021, 21:03:09

It is normal that the cameras get warm.

papr 22 November, 2021, 21:03:42

But it could also be a recording issue. If the recording was recorded at low fps the playback will have low fps, too.

papr 22 November, 2021, 21:04:21

If you want you can share an example recording with data@pupil-labs.com and I can check. Also, could you please share the CPU frequency of the recording computer?

user-75e3bb 22 November, 2021, 21:05:56

Thank you, I can send the recording later tonight. I'll send the CPU info of both the recording desktop and the playback desktop; we use different desktops for each.

user-4f89e9 22 November, 2021, 21:30:37

That's the problem I ran into. Tried it on a stronger PC and frame rate went from ~20 to ~55 on the world camera

papr 23 November, 2021, 07:02:52

No exact release time yet. You will find it on Github. I will share a release candidate as soon as possible.

user-04dd6f 23 November, 2021, 12:01:05

Thanks~

user-05e570 23 November, 2021, 10:35:28

Hello, I've faced another problem. The camera for the left eye doesn't work. There is only a grey screen instead of video. This problem appears on different computers.

papr 23 November, 2021, 10:45:21

Hi 👋 Please contact info@pupil-labs.com in this regard 🙂

user-205d47 23 November, 2021, 12:19:02

@papr just logged in to check for Monterey updates and happy to see you guys are on it. Saves the effort of downgrading the OS.

user-ae5240 25 November, 2021, 01:05:07

Hello, dropping by to ask some questions: I recently came across a webcam designed for barcode readers (that already has the IR filter removed and an IR LED built in) from a company called HBVCAM, but Pupil Labs software keeps saying "Unknown Device" and won't recognize it. It's using UVC, so I have no idea why it's not working.

papr 25 November, 2021, 07:21:24

Hi :) Devices are usually listed as unknown if they do not have their libusbk driver installed (I am assuming that you are using Windows, correct?). Follow steps 1-7 of these instructions https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md

Please be aware that this will make the camera inaccessible for other programs that do not use the libusbk driver. You can reverse the steps above at any time by uninstalling the libusbk drivers again.

user-ae5240 25 November, 2021, 01:05:58

What could be the reason?

user-ae5240 25 November, 2021, 01:07:23

The software even managed to lock up my laptop.

user-ae5240 25 November, 2021, 01:09:18

(The webcam does work with any other software, just not Pupil Capture)

user-ae5240 25 November, 2021, 01:18:18

To make this matter clear, I'm testing the camera out before ordering the DIY kit

user-ae5240 25 November, 2021, 07:24:23

Thanks, I will try that and see if it works.

user-55fd9d 26 November, 2021, 09:49:58

Hi, can I buy spare parts for Pupil Core, e.g. the part where the clothes-peg-style clip is? Or are there 3D printer templates available?

papr 26 November, 2021, 09:55:24

Please contact info@pupil-labs.com in this regard. 🙂

user-75e3bb 28 November, 2021, 18:43:46

Hello, how can I change the timestamp values into regular minutes:seconds? Or can you let me know what formula I need to use to convert the values?

Chat image

papr 29 November, 2021, 16:51:50

Please see our post-hoc time sync tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
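
The tutorial covers this in detail. As a minimal sketch, timestamps can be converted using the start_time_synced_s and start_time_system_s fields from info.player.json; this assumes a Player-format recording and that those keys are present in your version.

```python
import json
import datetime

# Path and key names assumed; adjust to your recording folder.
with open("recording/info.player.json") as f:
    info = json.load(f)

start_synced = info["start_time_synced_s"]   # recording start in Pupil time
start_system = info["start_time_system_s"]   # recording start in Unix time

def to_min_sec(pupil_timestamp):
    """Time since recording start, formatted as MM:SS.mmm."""
    rel = pupil_timestamp - start_synced
    minutes, seconds = divmod(rel, 60)
    return f"{int(minutes):02d}:{seconds:06.3f}"

def to_wall_clock(pupil_timestamp):
    """Absolute date/time of a sample."""
    unix_time = pupil_timestamp - start_synced + start_system
    return datetime.datetime.fromtimestamp(unix_time)

print(to_min_sec(3051.42))      # e.g. "00:12.300", depending on the recording's start time
print(to_wall_clock(3051.42))
```
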

user-aa7c07 29 November, 2021, 03:56:38

Just a quick one following ColdNoodles' query: is post-hoc pupil detection generally more accurate, and worth the extra time, compared to pupil detection during recording? Is there any difference here, or any research implications?

papr 29 November, 2021, 16:53:40

It can be used to improve pupil detection, yes. This is especially helpful if you want to refit the 3d eye model or exclude, e.g., the eyelash region from the pupil detection.

user-7e1bf7 29 November, 2021, 15:37:29

As a recommendation for the DIY version: add to the BOM that the exposed film has to be colour film and that B/W film doesn't work.

user-ae76c9 30 November, 2021, 17:12:24

Hi Pupil team! We are using the Core with the Pupil Mobile app and Pupil Capture. The phone we are using for Pupil Mobile and the computer we are using for Pupil Capture are on the same wifi network; however, the camera views are not showing in Capture. I saw on your website that Pupil Mobile isn't being maintained - is this error a result of that? (We are using Capture v2.2.0, by the way.)

nmt 30 November, 2021, 17:21:40

The Pupil Mobile app is no longer maintained. We made the decision to phase out its development after the release of Pupil Invisible. That said, it should still work. Are you connected to an institutional wifi network? These usually have firewalls that will block the connection. In such cases, you can also set up a wireless hotspot from the computer, or use a dedicated wireless router independent of the institutional network. It's also worth checking that the connection is not being blocked by your computer's firewall. Additionally, please consider upgrading to the latest version of Capture, v.3.5: https://github.com/pupil-labs/pupil/releases/tag/v3.5

user-ae76c9 30 November, 2021, 18:52:35

We are using a wireless router, and I've tried turning off all firewalls on the computer, as well as connecting to the computer's wireless hotspot. I also updated Capture, and tried this all over again on a separate computer. Apologies if this is a little outside of y'alls domain, but is there anything else I might try?

papr 01 December, 2021, 11:03:24

There is one possible issue that can be resolved quite quickly. To verify if this is the case, please try connecting again and share the capture.log file with us afterward.

End of November archive