πŸ‘ core


user-6a367e 01 July, 2021, 07:09:45

The accuracy of gaze is bad in dark environments. Are there any tips for improving it? Note: I have calibrated it properly though.

papr 01 July, 2021, 07:29:38

This sounds like you are getting bad pupil detection in the dark? Have you looked into that?

papr 01 July, 2021, 07:32:05

The scene camera does not record at perfectly even intervals. Also, the head pose can only be estimated if the 3d model can be located with a scene frame, i.e. the markers need to be visible. During quick head movements, it is possible that marker detection fails due to motion blur. For these frames, no head pose can be estimated and the frame will be skipped.

user-4bad8e 01 July, 2021, 12:08:05

@papr Thank you for the very clear and easy-to-understand explanation! It helps me a lot.

user-6a367e 01 July, 2021, 08:14:25

Yeah sometimes it flickers. Any fixes?

papr 01 July, 2021, 08:15:47

I would need to see an example recording to give concrete feedback. One typical issue in dark environments is either (1) insufficient exposure (image too dark) or (2) the pupil max parameter being set too low for the actual pupil dilation.

papr 01 July, 2021, 14:52:24

Generally, yes, but in the case of the Pupil Core headset, incrementing phi for one eye camera does not result in a decrement for the other, because the right eye camera is mounted upside down.

user-765368 05 July, 2021, 09:38:07

I'm a bit confused, sorry for bugging, but I don't understand: is there a parameter that both pupil positions have in common that is not based on a separate coordinate system? (Since I understood that phi and theta also depend on the position/orientation of the eye camera.) I must quantify the horizontal and vertical movements of the eyes such that the ratios of eye movements are 1:1 (i.e. if both eyes move in the same direction by the same amount, I will be able to see from the data that they made the same movement and that the distance/difference in their position is the same)

user-d41611 01 July, 2021, 20:11:20

@papr Hi, I am integrating the Pupil into a dive mask and was looking to swap the recommended world view camera for an underwater fishing camera (either the Spydro or the GoFish cam). I wasn't sure who I could talk to to see if these would be compatible, but you look like the man to reach out to. Any help?

papr 01 July, 2021, 20:26:08

How are you planning to record the video? To which device do you want to connect the eye cameras?

user-d41611 01 July, 2021, 20:39:03

@papr the eye cameras will wire into a laptop above water and the world camera will feed to the same laptop.

papr 01 July, 2021, 20:41:36

For the camera to be recognized by the software it needs to fulfil specific requirements. See this message https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498

user-d41611 01 July, 2021, 20:45:03

thanks i will check this out!

user-d44f4f 02 July, 2021, 07:00:11

@papr Hi, I tried to install pyglui as described, on Ubuntu 18.04, but I encountered a problem. The error information is in the attached txt. Can you help me?

pygui.txt

papr 02 July, 2021, 07:19:08

Hi, there is a series of known build issues in pyglui. I have been working on fixing them this week, but unfortunately I am not done yet. I will let you know as soon as it works again. Meanwhile, have you tried running the bundled application already?

user-d44f4f 02 July, 2021, 07:22:32

@papr Yeah, I installed Pupil Capture and tried to run the application directly. But I got another error.

user-d44f4f 02 July, 2021, 07:22:46

Chat image

papr 02 July, 2021, 07:25:35

Which version of capture is that?

user-d44f4f 02 July, 2021, 07:26:16

I installed Pupil Capture 1.0 before, but I have uninstalled that application. I searched for solutions; the error seems to be related to the old packages. The version I tried to run is 3.3.

papr 02 July, 2021, 07:27:35

Just to be clear, is the screenshot from running 1.0 or 3.3?

user-d44f4f 02 July, 2021, 07:29:48

Sorry, the screenshot is from running 3.3.

papr 02 July, 2021, 07:31:11

I somehow get the feeling the uninstallation might not have worked as intended. Could you please run the uninstaller again, delete the /opt/pupil folders manually in case they still exist afterward, and then use dpkg to install the deb files? sudo dpkg -i *.deb

user-d44f4f 02 July, 2021, 07:48:15

Chat image

user-d44f4f 02 July, 2021, 07:49:08

@papr I have tried uninstalling all the pupil* applications and used dpkg to install the deb files. But I get the same error.

papr 02 July, 2021, 07:54:22

OK, thank you. I am very sorry that the installation is still not working. Unfortunately, I cannot tell what the exact issue is, yet. Thank you for providing the traceback though, this is very helpful.

user-d44f4f 02 July, 2021, 07:59:19

I installed this application on Windows 10, and it works. But I found the 3D eye model seems to be abnormal. The 'No. of supporting pupil observations' always equals zero. The mesh model doesn't show the 3D pupil circle.

Chat image

papr 02 July, 2021, 08:02:13

Good to hear that you are able to fall back to a different OS for now. 🙏 I fear the debug window has not been cleaned up since the early development of the pye3d integration. This value is likely no longer updated due to changes in the software architecture. I created an internal bug report to remove this inconsistency.

user-d44f4f 02 July, 2021, 08:08:06

Thanks for your reply. I first found the debug window problem on Windows 10, so I tried to run the application on Linux. You know, I found the software performs differently on different OSes, which I consulted you about several months ago. But on Linux I get the errors mentioned before.

papr 02 July, 2021, 08:09:11

This debug window bug will be present on Linux, too.

user-d44f4f 02 July, 2021, 08:12:36

Does this debug window bug also include the abnormal 3D eye model?

papr 02 July, 2021, 08:14:00

What are you referring to by abnormal 3d eye model?

user-d44f4f 02 July, 2021, 08:17:49

Chat image

papr 02 July, 2021, 08:24:55

Also, short question, which hardware are you using?

user-d44f4f 02 July, 2021, 08:18:32

I mean the corneal mesh model doesn't overlap with the real region.

papr 02 July, 2021, 08:22:05

Ah, I understand. Please be aware that you can rotate and zoom in this view to align the model. You can hit the r key in the window to reset the view.

Chat image

papr 02 July, 2021, 08:18:32

The negative pupil diameter is indeed unexpected given how well fit the model looks. Are you referring to that?

papr 02 July, 2021, 08:22:26

Reset view

Chat image

papr 02 July, 2021, 08:23:35

Please be aware that the displayed pupil is not visually corrected for refraction, which is why it looks different than in the recorded eye image.

user-d44f4f 02 July, 2021, 08:26:56

I use the Pupil Labs device, but I don't know the hardware version.

papr 02 July, 2021, 08:28:28

Could you share a picture of the hardware? Depending on how old the hardware is, it is possible that the software does not provide default camera intrinsics for the eye cameras. This could explain the incorrect pupil diameter estimation.

user-d44f4f 02 July, 2021, 08:31:21

Chat image

papr 02 July, 2021, 08:33:11

Yes, thank you. It indeed looks like this is an older model without a pre-configured focal length.

user-d44f4f 02 July, 2021, 08:31:55

Can you get it?

user-d44f4f 02 July, 2021, 08:48:53

Can I calculate the IR camera intrinsics and set it into the software?

papr 02 July, 2021, 08:50:32

Yes, you need to select the eye camera as the scene camera in the world window (Video source menu -> enable manual camera selection -> Select camera) and run the camera intrinsics estimation plugin https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation

user-d44f4f 02 July, 2021, 09:05:18

Thank you. You know, it is difficult for the IR camera to capture the image on a computer screen, so I need to print the chessboard. How can I load the camera intrinsic parameters into the application? I didn't get the idea from the webpage.

papr 02 July, 2021, 09:12:15

Please be aware that the plugin only works with the circle grid, not with the chessboard which is also a common pattern.

When the intrinsics have been successfully estimated, they are stored as files in the user directory. The pre-recorded intrinsics are loaded from code. Both are explained in the Camera Intrinsics Persistency section below the section linked above.

Please let me know if this does not answer your question sufficiently.

papr 02 July, 2021, 09:46:44

I have just tried to reproduce the issue with a freshly set up Ubuntu 16.04 VM. I am not able to reproduce the issue. Looking at the responses here https://answers.opencv.org/question/18461/importerror-lib64libavcodecso54-symbol-avpriv_update_lls-version-libavutil_52-not-defined-in-file-libavutilso52-with-link-time-reference/ it looks like this issue appears due to conflicts with other ffmpeg installations on your system. 😕

user-d44f4f 02 July, 2021, 11:26:20

Thank you very much. I have also found these responses before. I will try it.

user-d6701f 02 July, 2021, 09:58:10

Hi, I have a question: How can triggers be sent to Pupil Capture? Would it be possible to record triggers (as annotations) coming from e.g. Neurobs Presentation or Matlab Psychtoolbox?

papr 02 July, 2021, 09:59:32

Yes, this is possible. See https://docs.pupil-labs.com/developer/core/network-api/#remote-annotations
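For reference, the documented workflow boils down to a short script like the sketch below (the label "my_trigger" and the localhost address are illustrative; the Annotation plugin needs to be enabled in Capture for the annotations to be recorded):

    import time
    import msgpack
    import zmq

    ctx = zmq.Context()

    # Pupil Remote listens on port 50020 by default
    pupil_remote = ctx.socket(zmq.REQ)
    pupil_remote.connect("tcp://127.0.0.1:50020")

    # ask Pupil Remote for the PUB port and the current Pupil time
    pupil_remote.send_string("PUB_PORT")
    pub_port = pupil_remote.recv_string()
    pupil_remote.send_string("t")
    pupil_time = float(pupil_remote.recv_string())

    pub_socket = ctx.socket(zmq.PUB)
    pub_socket.connect(f"tcp://127.0.0.1:{pub_port}")
    time.sleep(0.5)  # give the PUB socket a moment to connect

    annotation = {
        "topic": "annotation.my_trigger",  # topic must start with "annotation"
        "label": "my_trigger",
        "timestamp": pupil_time,  # time of the trigger, in Pupil time
        "duration": 0.0,
    }
    pub_socket.send_string(annotation["topic"], flags=zmq.SNDMORE)
    pub_socket.send(msgpack.packb(annotation, use_bin_type=True))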

user-9b5cc8 04 July, 2021, 02:46:34

Just built an open source tracker per the website. Installed the latest Pupil Labs desktop software on Windows 10. Getting these errors when trying to run Pupil Capture. For some reason the eye and world cameras are not being detected; Windows Camera can see/play them. Any thoughts/hints to resolve this would be most appreciated. Thanks. Mike Brown, Boston MA

Chat image

user-9b5cc8 04 July, 2021, 16:02:35

One would think that by release 3.x the Windows 10 install would be nailed down. No wonder Tobii is #1 in market share.

papr 05 July, 2021, 09:04:01

If you are referring to the DIY eye tracker, please follow steps 1-7 from this link https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md

user-9b5cc8 07 July, 2021, 18:27:48

Thanks for the pointer! Will give it a try. Mike

user-5fe343 05 July, 2021, 10:35:05

Hi everyone, sorry for the triviality of the question but this is my first time using Pupil Core. Eye 0 (right eye) connects without problems; the Eye 1 (left eye) camera is disconnected. Do you know how I can solve this? If I choose manual camera selection I find only the Eye 0 camera.

papr 05 July, 2021, 10:37:58

In this case, you need gaze data because the pupil data's phi/theta values are relative to their corresponding eye cameras. And since the eye cameras have different orientations in space, their horizontal/vertical axes are not comparable. Gaze uses the scene camera coordinate system.

user-765368 05 July, 2021, 10:46:15

So in this case, the references describe the Z axis as the optical axis of the camera. If that is so, what does the eye center position describe? The location of the center of the pupil in our space? And if that is true, does the position of the Z axis change during eye movements? Thanks!

Chat image

papr 05 July, 2021, 10:39:49

If you do not see an "unknown" entry in the list, it means that a second eye camera is not connected. Please be aware that this is an expected behavior should you be using a monocular headset (one camera arm only on the right; vs one on each side).

user-5fe343 05 July, 2021, 10:47:25

I have the binocular glasses, with two eye cameras. I see "unknown" in the list but if I select it, it still won't connect.

papr 05 July, 2021, 10:47:19

Eye centers are the locations of the fitted eye models relative to the origin of the scene camera, in mm.

user-765368 05 July, 2021, 10:48:20

so in order to calculate movement I must use the gaze data, am I correct?

papr 05 July, 2021, 10:48:33

ok, then it means that the drivers were not correctly installed for this specific camera. Could you let me know the name of the successfully connected eye camera?

user-5fe343 05 July, 2021, 10:50:11

Pupil Cam1 ID0 @user-b772cccal USB is connected

papr 05 July, 2021, 10:49:17

In order to calculate between-both-eyes-comparable horizontal/vertical eye movement, yes.

user-765368 05 July, 2021, 11:44:40

when I receive the gaze normals, what are the values normalized relative to? Also, do the X and Y values represent the horizontal and vertical movements relative to the optical axis?

user-765368 05 July, 2021, 10:52:31

and if I use the surface tracker, can the system still differentiate between the eyes?

papr 05 July, 2021, 10:53:48

The surface tracker will only map the combined gaze point to surface coordinates. But you are able to backtrack which scene-camera gaze datum was used, which in turn contains the gaze_normals needed to calculate eye rotation.

papr 05 July, 2021, 10:55:12

Pupil Capture runs the driver installation on launch. Have you tried restarting the application yet to give the installation another try?

user-5fe343 05 July, 2021, 10:57:50

I have already tried closing and reopening the software several times, but at start-up it always tells me "Could not connect to device".

papr 05 July, 2021, 10:58:34

ok, please try following these steps https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting

user-5fe343 05 July, 2021, 11:21:43

Done! Thank you very much!!!!

papr 05 July, 2021, 11:45:46

They are normalized to have length 1. See this for the description of the coordinate system https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
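To illustrate (a hedged sketch assuming the OpenCV convention linked above: x right, y down, z forward; the sign conventions are my choice), a unit-length gaze normal can be converted into horizontal/vertical angles:

    import numpy as np

    def gaze_angles(gaze_normal_3d):
        # horizontal/vertical gaze angles in degrees from a unit gaze normal
        x, y, z = gaze_normal_3d
        horizontal = np.degrees(np.arctan2(x, z))  # positive = looking right
        vertical = np.degrees(np.arctan2(-y, z))   # positive = looking up (y points down)
        return horizontal, vertical

    print(gaze_angles((0.0, 0.0, 1.0)))  # straight ahead -> (0.0, 0.0)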

user-765368 06 July, 2021, 12:37:21

so is the gaze_normal for each eye the direction vector of a ray going through that eye? thanks

user-7d4a32 05 July, 2021, 14:29:29

hi, may I ask what the recording rate of the eye tracker is?

papr 05 July, 2021, 14:30:18

You can find the Pupil Core tech specs here https://pupil-labs.com/products/core/tech-specs/

user-7d4a32 05 July, 2021, 14:30:41

thank you!

user-7d4a32 05 July, 2021, 14:31:30

Follow-up question: is this correct for all eye-pupil models?

papr 05 July, 2021, 14:32:11

What do you mean by eye-pupil models? Are you referring to previous hardware iterations of the Pupil Core headset?

user-7d4a32 05 July, 2021, 14:32:37

I'm just not sure if these specs match my model. How do I know which model I'm looking at?

papr 05 July, 2021, 14:33:27

The easiest way would be to check the order confirmation email.

user-7d4a32 05 July, 2021, 14:32:56

there are multiple Pupil Core models, correct?

papr 05 July, 2021, 14:33:52

There have been other hardware iterations with lower eye camera frame rates, yes.

user-7d4a32 05 July, 2021, 14:34:06

i understand thank you

papr 05 July, 2021, 14:35:14

Should the order confirmation email not give you enough information, let me know. There are other ways to find out, but they require access to the hardware.

user-82bbcd 05 July, 2021, 14:50:19

Hi, does anyone have a CE certificate for the pupil core? Would you mind sending me a copy?

papr 05 July, 2021, 14:50:39

Please contact info@pupil-labs.com in this regard.

user-82bbcd 05 July, 2021, 14:50:56

I see. Thanks!

user-4bc389 06 July, 2021, 04:39:38

Hi, this happens when I use surface tracking. What is the reason?

Chat image

papr 06 July, 2021, 06:04:03

Hi, this issue will be resolved in our next release. :)

user-4bc389 06 July, 2021, 06:22:22

OK thanks

user-aadbbd 06 July, 2021, 12:14:21

I have an error "KeyError: 'gaze on surfaces'" when I try to run the filter_gaze_on_surface.py code in a PsychoPy experiment. Does this mean that the surfaces are not being properly sent to PsychoPy, and thus not recognized?

user-aadbbd 06 July, 2021, 12:42:50

They are coordinates of where your gaze from that eye is, normalized within your surface bounds. I'm assuming this is for surface tracking?

user-aadbbd 06 July, 2021, 12:44:40

Oh the 3d gaze is relative to the world camera

user-765368 06 July, 2021, 12:48:30

no, I'm trying to visualize the gaze from each eye. I know that their values are based on the world camera system, but I want to make sure that the vectors they create go through the eye centers (basically that the gaze vector originates from the eye, and that this plot is correct)

Chat image

nmt 06 July, 2021, 12:51:09

The gaze normals do indeed describe the axis that goes through the eyeball centre and the object that's looked at

user-aadbbd 06 July, 2021, 12:49:15

Ohhhh that's definitely more of a papr question lol

user-765368 06 July, 2021, 12:51:36

thanks!

user-aadbbd 06 July, 2021, 13:32:49

It works when I run the pupil script alone; the problem is when I integrate it into my PsychoPy experiment.

papr 06 July, 2021, 20:28:11

You should use gaze_on_surfaces, not gaze on surfaces. Also, have you tried debugging the object throwing the KeyError and inspecting it for its actual content?

Edit: Also see nmt's follow-up here https://discord.com/channels/285728493612957698/446977689690177536/861966175432736829
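For orientation, a minimal sketch of reading surface-mapped gaze over the Network API, in the spirit of the filter_gaze_on_surface.py helper mentioned above (the surface name "my_surface" is a placeholder):

    import msgpack
    import zmq

    ctx = zmq.Context()
    pupil_remote = ctx.socket(zmq.REQ)
    pupil_remote.connect("tcp://127.0.0.1:50020")
    pupil_remote.send_string("SUB_PORT")
    sub_port = pupil_remote.recv_string()

    subscriber = ctx.socket(zmq.SUB)
    subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
    subscriber.subscribe("surfaces.my_surface")  # topic is "surfaces.<surface name>"

    while True:
        topic, payload = subscriber.recv_multipart()
        message = msgpack.unpackb(payload, raw=False)
        for gaze in message["gaze_on_surfaces"]:  # note the underscores
            print(gaze["norm_pos"], gaze["on_surf"])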

user-d90133 06 July, 2021, 19:40:20

If I was to have the Pupil Core connected to a desktop computer, what would be the longest length of a USB extension where I would still have a reliable connection? I'm conducting a study measuring visual fixations during gait (walking), and need as much flexibility/slack from a USB extension as possible. For instance, would a 25' extension still yield consistent data collection?

papr 06 July, 2021, 20:25:08

We highly recommend using an extension with an active power supply and to test it thoroughly before conducting the actual experiment. Unfortunately, I cannot recommend a specific one.

user-5651f6 06 July, 2021, 22:03:14

Hi, my lab just got a Pupil eye tracker so I'm very new. I was wondering if there are guidelines for setting the different thresholds, for example blink onset/offset?

nmt 07 July, 2021, 07:57:26

Hi @user-5651f6 👋. Have a look at this message for reference of how the blink detector works: https://discord.com/channels/285728493612957698/285728493612957698/842046323430916166 You can then examine a given recording and set the thresholds such that they accurately classify your blinks. Use the Eye overlay plugin in Pupil Player to view the eye videos and confirm the thresholds are correct

user-670bd6 07 July, 2021, 06:28:04

Hey, I'm looking to upgrade the Pupil Core DIY to a wide angle camera that is compatible with the software. https://www.amazon.com/Arducam-Computer-Fisheye-Microphone-Windows/dp/B07ZS75KZR Would this be okay? (it is UVC compatible)

user-8e5c72 07 July, 2021, 12:33:24

Hi, can I somehow set it up so that I see the gaze on a recording as only one red dot? Since yesterday it's just nowhere

user-8e5c72 07 July, 2021, 12:33:44

now I have only two green circles..

nmt 07 July, 2021, 12:38:25

Hi @user-8e5c72 👋 . Did you change any of the visualization settings (https://docs.pupil-labs.com/core/software/pupil-player/#vis-circle) since you last looked at your recording in Pupil Player?

user-8e5c72 07 July, 2021, 12:42:43

where do I see if the world camera is in focus

user-8e5c72 07 July, 2021, 12:45:24

ah, I'm a bit lost

user-8e5c72 07 July, 2021, 12:45:30

is it possible to send you some pictures?

nmt 07 July, 2021, 12:46:06

If you want to share them on here feel free, or else you can send them via DM

user-8e5c72 07 July, 2021, 13:00:48

@nmt so I thought I'd send you some pics so you can better understand what I mean

user-8e5c72 07 July, 2021, 13:03:01

Chat image

papr 07 July, 2021, 13:44:23

You might also want to disable Vis Polyline plugin which draws a red line between the gaze points.

user-8e5c72 07 July, 2021, 13:03:34

instead of a green circle or two green circles.. I need

user-8e5c72 07 July, 2021, 13:04:13

such a red circle

Chat image

papr 07 July, 2021, 13:51:25

If you look closely, there are two gaze points in your picture, too.

Generally, it is expected for Pupil Core recordings to display multiple gaze points per frame, because the eye cameras run at a higher frequency than the scene camera.

nmt 07 July, 2021, 13:07:25

You can easily change the gaze circle visual properties, such as size and colour. Please follow these instructions: https://docs.pupil-labs.com/core/software/pupil-player/#vis-circle

user-8e5c72 07 July, 2021, 13:49:23

Hmmm I gave it a try

user-8e5c72 07 July, 2021, 13:49:53

let's say now I have two red dots without that line

user-8e5c72 07 July, 2021, 13:50:04

Chat image Chat image

papr 07 July, 2021, 13:52:40

These pictures look like the calibration did not work correctly though. Specifically, it looks like output from the "dummy gaze mapper" that was included in Pupil Capture prior to v2.0. Can you check the Pupil Player general settings and look up the recording software version?

user-8e5c72 07 July, 2021, 13:50:59

but how can I combine them into only one dot so I would get

user-8e5c72 07 July, 2021, 13:51:04

Chat image

user-8e5c72 07 July, 2021, 13:51:10

only this one circle

user-8e5c72 07 July, 2021, 13:51:17

what else do I have to set up

user-8e5c72 07 July, 2021, 14:34:24

hmm so.. I just had one participant now and measured a bit in our study.. Now suddenly it looks ok

user-8e5c72 07 July, 2021, 14:34:31

Chat image Chat image

user-fde905 07 July, 2021, 15:04:07

Hi, we are trying to use HoloLens 2 with the pupil labs hmd add on kit. Has anyone here designed 3d prints for mounting it on HoloLens 2?

user-4bc389 08 July, 2021, 00:42:48

Hi, I encountered some problems when viewing the surface tracking data; I have some questions about the data in the red area in the figure. What time does world_timestamp refer to? Why do fixation_id and duration have multiple identical values? How do I choose among them? Thank you.

Chat image

nmt 08 July, 2021, 09:11:01

Hi @user-4bc389. The world timestamps are the world camera time. The duration column shows fixation durations in milliseconds. There are identical ID and duration values because, by definition, fixations spread over multiple world camera samples. E.g. fixation ID: 18, has a duration of 80.104 milliseconds, and is spread over 3 consecutive world camera samples.

user-7c46e8 08 July, 2021, 07:27:41

Does online or offline pupil detection tend to be more accurate?

nmt 08 July, 2021, 09:43:24

Hi @user-7c46e8 👋 . The accuracy of pupil detection per se tends not to be influenced by running in an online or offline context. However, certain settings, such as eye camera exposure, sampling rate and resolution, can only be adjusted in real-time. Excessively low exposure, for example, can cause poor pupil detection, and you cannot change this post-hoc. So always check these are set correctly before you record data. There are other pupil detector settings that you can adjust (e.g. https://docs.pupil-labs.com/core/software/pupil-capture/#pupil-detector-2d-settings), and you can do this in real-time or post-hoc. From a research perspective, while it can be less stressful to adjust these in a post-hoc context (more time to fine-tune the settings), it is recommended to ensure that pupil detection is robust before you make recordings. Also ensure that pupil detection is good at all angles of eye rotation you will record.

user-4bc389 09 July, 2021, 00:58:56

@nmt So how do I handle these identical values: are all of them valid, or should I choose just one? For example, for the surface I defined there are multiple identical fixation durations; how should I use these data? Thank you

papr 09 July, 2021, 07:34:51

You need to group the data by fixation id. Per fixation id, start time and duration will be the same. Only the surface-mapped location of the fixation might be different across multiple world video frames. See this message for reference https://discord.com/channels/285728493612957698/285728493612957698/859379631282454548
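In pandas, that grouping could look roughly like this (a sketch; the path follows Player's surface export layout and "Book" stands in for your surface name):

    import pandas as pd

    df = pd.read_csv("exports/000/surfaces/fixations_on_surface_Book.csv")

    # one row per fixation: start time and duration are constant per id,
    # while the surface-mapped position may vary across world frames
    per_fixation = df.groupby("fixation_id").agg(
        start_timestamp=("start_timestamp", "first"),
        duration_ms=("duration", "first"),
        norm_pos_x=("norm_pos_x", "mean"),
        norm_pos_y=("norm_pos_y", "mean"),
    )
    print(per_fixation.head())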

user-f5b6c9 09 July, 2021, 07:06:09

Good morning! Is there any possibility to turn off 2D detection and only use 3D detection during recording? Glad for any help! 🙂

papr 09 July, 2021, 07:35:27

Unfortunately not, because the 3d detector uses the 2d result as input. If you want to save resources, I would suggest the other way around: Turn off 3d detection and leave 2d detection on. The quality of the 2d detection is most important to get a good 3d model later. (You can fit the model post-hoc in Pupil Player)

user-f5b6c9 09 July, 2021, 07:42:20

So would you still recommend "rolling the eyes" even if in this case only 2D detection would be possible? I'd like to prevent failures in post-hoc processing since I want to use 3D data.

papr 09 July, 2021, 07:44:31

Yes, basically perform all the steps as if 3d detection was enabled. You could also leave pye3d enabled at the start, make sure that the subject is able to perform the procedure yielding a good model, and turn pye3d off afterward

user-f5b6c9 09 July, 2021, 07:48:12

How can I 'fit the [3D?] model' in Pupil Player? And yes, that sounds good. But how can I turn off pye3d while running Pupil Capture, after having checked that the model is fine?

papr 09 July, 2021, 07:52:45

In Player, you just use the post-hoc pupil detection. This will rerun both detection algorithms on the recorded eye videos.

> how can I turn off pye3d while running Pupil Capture
You either need a custom plugin or you send the command via the network api.

> and having checked that the model is fine?
To clarify, you would check first and then disable it, assuming the post-hoc detection will reproduce a similar eye model

user-f5b6c9 09 July, 2021, 08:00:08

Thank you very much, @papr! Have a nice weekend!

user-28c35b 09 July, 2021, 08:11:15

Hi, I want to get pupil data for both eyes, at the moment I am using:

subscriber.subscribe('pupil.0.3d')
subscriber.subscribe('pupil.1.3d')

and a while True block to capture the data, but in some iterations I am not getting both, is there a better way?

papr 09 July, 2021, 08:12:42

Let's move this to 💻 software-dev

user-28c35b 09 July, 2021, 08:11:19

Chat image

user-28c35b 09 July, 2021, 08:13:49

sure

user-4bc389 09 July, 2021, 08:39:16

@papr I'm still a little confused. For example, I defined a Book surface, and now I need the fixation duration data for this surface. How can I extract these data when rows share the same ID: keep all of them or only one? Thank you

user-9eeaf8 09 July, 2021, 15:51:02

Hello, if you have multiple different video recordings with the same apriltag defined surfaces, how can the surface definitions be shared between the recordings? So far I have tried to copy the surface_definitions-v01 file between recordings, but it doesn't work that way. Thanks in advance!

user-bd82ac 09 July, 2021, 23:04:20

Hi! I'm working with Pupil Capture video, feeding it into a YOLOv5 model to detect objects within the frame as a subject is walking through space (example frame output in the file named 'yolo_pupil_ouput.png'). To feed the post-processed images in, I'll be using ffmpeg & mp4fpsmod to generate a new .mp4 file from extracted jpegs with preserved relative timestamps [since the Pupil Capture video has a variable frame rate]. I was able to convert the raw Pupil Capture video to jpegs and back again into a video with the same relative timestamps, but when I try to launch Pupil Player with the remade mp4, I receive a 'Found no maching pts! Something is wrong!' error (frameiterator_error.png). I've also attached the ffprobe output for each file (the original & remade files). Can anyone comment on why I might be receiving this error?

ffprobe_output_original_file.rtf

user-bd82ac 09 July, 2021, 23:04:22

ffprobe_output_remade_file.rtf

user-bd82ac 09 July, 2021, 23:04:25

Chat image

papr 13 July, 2021, 15:36:29

The issue is that Player expects that:
1. Every video stream packet includes exactly one video frame. If the packet is empty, it assumes that it has reached the end of the video stream.
2. The packet and frame pts (presentation timestamps) are the same.

user-bd82ac 09 July, 2021, 23:04:27

Chat image

user-bd82ac 09 July, 2021, 23:23:38

update - I realized I didn't copy the metadata from the original file; attached is the ffprobe output for the remade file after adding in the metadata from the original. I still get the same error when trying to use this new file

ffprobe_output_remade_file_with_metadata.rtf

papr 13 July, 2021, 15:34:21

Sharing the surface_definitions* files should work. 🤔 Please copy them while the recording is not opened in Player and only open the recordings afterward.

user-9eeaf8 14 July, 2021, 10:02:52

Ah, yes, it works like that. Thank you!

user-b2de72 13 July, 2021, 17:04:30

Does pupil core record pupil size?

papr 13 July, 2021, 17:21:11

It does :)

user-f36813 13 July, 2021, 17:28:42

Hello, I had a problem with the right camera of the Pupil Labs Core, so I moved the left camera to the right side, since most people have the right eye as the dominant one. When testing the equipment, the model gets quite lost, and the large red circle in the photo appears. What should I set to make it work?

Chat image

papr 13 July, 2021, 17:30:32

The large red circle is part of the algorithm view. If you change the mode in the general settings it will go away

user-f36813 13 July, 2021, 18:22:26

Thanks! When working monocularly, does the model lose precision? How should I do the calibration to ensure that it does not, given that I am calibrating with "Single Marker" at a distance of 5 meters, which is where I expect the fixations.

papr 13 July, 2021, 18:26:32

The eye model is independent of the calibration. Try looking into different directions to fit the model, like on the left. (Note: This screenshot is taken with our latest version. The colors might differ from your version.)

But first, you need to make sure that the 2d pupil detection is stable. From your screenshot, it looks like your eyelashes might be detected as false positives. Please use the ROI mode to adjust the region of interest to exclude your eyelashes.

Chat image

user-f36813 13 July, 2021, 18:33:28

Thank you very much, I'll do that!

user-b60e38 14 July, 2021, 08:16:16

Hi - I'm having trouble installing the Pupil Core software on Windows 10. I'm trying to run it from the command prompt but get an error saying

papr 14 July, 2021, 08:23:04

Hi, could you please let us know which software you used to uncompress the downloaded rar file? ~~@nmt Could you please try to reproduce the issue?~~ Resolved

user-b60e38 14 July, 2021, 08:17:01

'The installation package could not be opened'. Contact the application vendor to verify that this is a valid windows installer package'. Sorry I sent too early but any help would be great. Thanks!

user-b60e38 14 July, 2021, 08:33:39

When I download it, it has an .msi extension. Does this still need to be decompressed?

papr 14 July, 2021, 08:35:11

Which browser are you using?

user-b60e38 14 July, 2021, 08:35:42

Chrome

user-b60e38 14 July, 2021, 08:50:21

Okay, I now seem to be able to run the installer. Is it correct that the publisher should be unknown?
Application: pupil_v3.4-0-g7019446_windows_x64.msi
Publisher: Unknown publisher

user-b60e38 14 July, 2021, 08:58:08

All sorted and installed! Thanks for your help.

papr 14 July, 2021, 08:58:21

Nice! What did you change?

user-b60e38 14 July, 2021, 09:00:53

I didn't think the .msi extension needed to be uncompressed, so it was just a case of unraring it

user-75e3bb 14 July, 2021, 18:07:14

Hello, I'm trying to figure out the time interval between fixations (by using the fixation plugin). I downloaded the data from the fixation plugin; could someone walk me through what "start_timestamp" and "start_frame_index" mean, and how they relate to the timeline in Pupil Player?

Chat image

papr 15 July, 2021, 13:57:13

Hi. The frame indices are comparable between both. They refer to the scene video frame index. The timestamps are equivalent but not the same. The Player ui displays time relative to the first scene video frame; the export uses the original pupil time. https://docs.pupil-labs.com/core/terminology/#timing
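As a sketch of that relationship (paths assume the standard recording/export layout), the Player-style relative time can be recovered by subtracting the first scene-frame timestamp:

    import numpy as np
    import pandas as pd

    world_ts = np.load("recording/world_timestamps.npy")  # pupil time, in seconds
    fixations = pd.read_csv("recording/exports/000/fixations.csv")

    # Player UI time = pupil time minus the first scene-frame timestamp
    fixations["start_rel_s"] = fixations["start_timestamp"] - world_ts[0]
    print(fixations[["id", "start_timestamp", "start_rel_s"]].head())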

user-10fa94 15 July, 2021, 04:34:12

Hi, after installing the latest release, I am getting a "refraction corrected 3d pupil detector not available" error. What could be the cause of this?

papr 15 July, 2021, 06:05:59

Are you running from source or from bundle? If it is the latter, please let us know which operating system you are using.

user-10fa94 15 July, 2021, 13:32:50

From bundle, macOS mojave

papr 15 July, 2021, 13:52:13

If the error does not go away, please share the capture.log file. You can find it in Home directory -> pupil_capture_settings.

papr 15 July, 2021, 13:47:43

I am on a similar setup (Catalina instead of Mojave) and I cannot reproduce the issue. Please try restarting with default settings from the general settings.

user-85ba36 15 July, 2021, 13:47:58

Hi, can I display a heatmap for several respondents? How can I do this?

papr 15 July, 2021, 13:50:35

Hi, Pupil Core software does not have built-in support for multi-participant analysis. You will have to export the surface-mapped gaze data from every subject's recording and aggregate it by yourself.

user-23aebf 15 July, 2021, 17:09:54

Hello, we are using Pupil Labs software on a computer that multiple researchers will log into. I installed it under my user ID (I am an admin) but the software does not show up under other user IDs; I have had to reinstall it under each user ID. Is there a way to install it once and have it usable under each user ID?

papr 20 July, 2021, 09:42:21

The Pupil Labs software is installed to C:\Program Files (x86)\Pupil-Labs by default. This directory should be accessible by all users. Nonetheless, you are right that the start menu entry and desktop shortcut are only installed for the current user. The next release will include a small change that will install start menu entries and desktop shortcuts for all users.

See https://github.com/pupil-labs/pupil/pull/2166 for reference

user-bd82ac 16 July, 2021, 03:08:12

hi! Can I get additional guidance on the gaze_timestamps value from exports? I grabbed the pts values for each frame using ffmpeg & mp4fpsmod independently (both methods as a sanity check) and I'm not getting equivalent values when looking at the gaze_timestamp for the beginning frame:
- using 0-based indexing, the pts from world.mp4 via ffmpeg/mp4fpsmod at frame #3 is 916.966666666667
- using 1-based indexing, the pts from world.mp4 via ffmpeg/mp4fpsmod at frame #3 is 957.977777777778
- whereas the gaze_timestamp at the first instance where world_index = 3 is 1.747019805

Additionally, is there any advice you could give for reconstructing the world.mp4 video from ffmpeg-extracted jpegs so that it meets the expectations Pupil Player has about packet contents/pts for that video? I extracted those frames using ffmpeg with vsync=0, set the pts with mp4fpsmod using the original pts values, and set the video metadata from the original file.

In all, I need to check whether gaze (with diameter=0) collided with any bounding box (grabbed from extracted frames run through yolo) AND get the timestamps for the gaze data. Can anyone advise?

papr 16 July, 2021, 10:14:59

Using -vsync 0 is important to get the correct number of frames, yes.

The video file comes with an additional *_timestamps.npy (intermediate recording format) or _*timestamps.csv (video exporter format) file. Each timestamp entry corresponds to a frame from the video file. See https://docs.pupil-labs.com/developer/core/recording-format/#timestamp-files

These timestamps can be used to correlate to the gaze_positions.csv. Alternatively, the gaze_positions.csv also includes a world_index column that can be used to find the world rame index to which the gaze datum is closest to.
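A sketch of that timestamp-based correlation (paths as in the recording layout above; nearest-neighbor matching is my choice of strategy):

    import numpy as np
    import pandas as pd

    world_ts = np.load("recording/world_timestamps.npy")
    gaze = pd.read_csv("recording/exports/000/gaze_positions.csv")
    gaze_ts = gaze["gaze_timestamp"].to_numpy()

    # index of the world frame whose timestamp is closest to each gaze datum
    idx = np.searchsorted(world_ts, gaze_ts)
    idx = np.clip(idx, 1, len(world_ts) - 1)
    left_is_closer = (gaze_ts - world_ts[idx - 1]) < (world_ts[idx] - gaze_ts)
    idx[left_is_closer] -= 1
    gaze["matched_world_index"] = idx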

papr 16 July, 2021, 08:48:24

Hi, unfortunately, I do not know if this is possible. I will have to look into that next week. Can you confirm that you are using Windows?

user-23aebf 05 August, 2021, 10:44:58

I am not sure I ever got an answer to this question: We are using Pupil Labs software on a computer that multiple researchers will log into. I installed it under my user ID (I am an admin) but the software does not show up under other user IDs; I have had to reinstall it under each user ID. Is there a way to install it once and have it usable under each user ID? We are on Windows 10 Enterprise 1909.

user-23aebf 16 July, 2021, 10:16:41

We are using Windows 10 Enterprise Version 1909

user-47f5be 19 July, 2021, 08:22:18

Hello support team. I have a Pupil Core and one of the eye cameras' connection cables is broken. How can I fix it?

user-47f5be 19 July, 2021, 08:23:19

If the connection cable for the eye camera is broken, do I need to mail it back to the company to repair it?

papr 19 July, 2021, 08:23:56

Please contact info@pupil-labs.com in this regard

user-0a8b4d 19 July, 2021, 16:59:38

hi, any chance of using a RealSense camera as the environment cam? Or is the Pupil hardware required for the software to run?

user-0a8b4d 19 July, 2021, 16:59:41

thx

user-552211 20 July, 2021, 05:10:02

Chat image

papr 20 July, 2021, 09:33:14

I see that you also wrote an email to [email removed]. I responded to your question there.

user-552211 20 July, 2021, 05:11:13

I can't use it with Capture 3.4; does anyone know the proper release version?

user-85ba36 20 July, 2021, 10:56:45

Hi, I have a question about the Pupil Mobile app. I want to use the mobile app for research. In Pupil Capture I have the "pupil detection color scheme", so I can see if the pupil is correctly detected, but in the mobile app there is no "pupil detection color scheme". If I want real results, what should I do to get my pupils correctly detected?

Thanks for your help

papr 20 July, 2021, 10:59:16

Hi. Pupil Mobile does not perform real-time pupil detection. Instead, you will have to transfer the recording from the phone to the computer, open it in Pupil Player, and perform the post-hoc pupil detection. You can also stream the eye video to Pupil Capture to get a real-time preview of the pupil detection but the results will not be stored as part of the Pupil Mobile recording.

user-85ba36 20 July, 2021, 11:57:58

Okay so, do I understand correctly that the "pupil intensity range" exists only in Pupil Capture, and in the Pupil Mobile app this parameter isn't taken into account? There is only post-hoc pupil detection, yes?

papr 20 July, 2021, 11:59:11

This parameter is part of the pupil detection algorithm which cannot be run on the mobile phone. It can only be run on a desktop computer running Pupil Capture (realtime on video stream) or Pupil Player (post-hoc on recording).

user-85ba36 20 July, 2021, 12:03:56

Okay, thank you very much

user-8effe4 21 July, 2021, 10:24:38

hello everyone, I hope you are doing great. I have a simple Pupil Labs question: I want to extract the pupil size from my eye recordings. Does anyone know how, or have a link to a walkthrough? Thank you in advance

papr 21 July, 2021, 10:25:41

The raw data exporter in Pupil Player creates a pupil_positions.csv file on export that includes this information.

papr 21 July, 2021, 10:26:41

Please be aware that the quality of this data is highly dependent on the 2d and 3d pupil detection quality.
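For example (a sketch; the path and confidence threshold are illustrative), the export can be filtered down to usable 3d diameter values like this:

    import pandas as pd

    df = pd.read_csv("exports/000/pupil_positions.csv")

    # keep only high-confidence pye3d datums; the method column separates
    # the 2d and 3d detector results
    d3 = df[df["method"].str.contains("3d") & (df["confidence"] >= 0.8)]
    print(d3["diameter_3d"].describe())  # physical diameter estimate in mm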

user-8effe4 21 July, 2021, 12:23:47

Thanks a lot

user-7bd058 21 July, 2021, 12:04:38

Hi, I had to reinstall the Pupil Labs software and the annotation list is gone. I have already annotated a few videos. Can I find the annotation list in the files?

papr 21 July, 2021, 12:06:07

Also, have you installed a different version than what you have been using before?

papr 21 July, 2021, 12:05:25

Have you reenabled the annotation plugin after the reinstall?

user-7bd058 21 July, 2021, 12:10:31

yes, I have reenabled the annotation plugin. The annotations are visible when you click through the video (they pop up) but the annotation list is gone.

papr 21 July, 2021, 12:12:51

In this case, I am not sure what you are referring to by "annotation list". Are you referring to the keybindings that one can setup to create new annotations?

user-7daa32 21 July, 2021, 12:12:32

https://media.discordapp.net/attachments/849077710671708182/865758903827693608/20210716_205611.jpg

I tried out the new release. How is it, please?

There were times I saw very, very faint and disappearing orange lines around the blue circles... I don't know if it is my eyes fooling me.

user-7bd058 21 July, 2021, 12:13:24

yes the corresponding hot keys

papr 21 July, 2021, 12:15:34

Unfortunately, these are part of the session settings, which are reset when a new version is detected. I fear you will have to set them up again manually. I will note this issue down though. It would make sense for the hotkeys to be saved as part of the recording rather than the session settings.

papr 21 July, 2021, 12:17:09

@user-7daa32 Please see the release notes ("Dynamic pye3d model confidence") in this regard https://github.com/pupil-labs/pupil/releases/tag/v3.4

user-7daa32 21 July, 2021, 12:17:42

If a chinrest is being used, do you think the 2d pipeline is the best to use? I have issues creating surfaces when the precision is low. I encountered a situation where the participant was looking at A but the gaze showed in B; creating surface A would have to extend to B

papr 21 July, 2021, 12:19:20

If you only need the 2d gaze point, you should be fine with using the 2d pipeline.

user-7daa32 21 July, 2021, 12:24:24

We are manually taking data by watching the video play because we need the transition time between AOIs. However, we can't get the transition time between the last gaze of the last fixation in one AOI and the first gaze of the first fixation in the other AOI. We are currently thinking of using the AOI fixation counts

user-7daa32 21 July, 2021, 12:36:41

Since we are manually taking data from the video, what do you think of the data quality?

"System Time uses the Unix epoch: January 1, 1970, 00:00:00 (UTC). Its timestamps count forward from this point."

I guess we are using the system time?

user-7daa32 21 July, 2021, 12:30:09

"Freezing the eye model is only recommended in controlled environments as this prevents the model from adapting to slippage"

What's the purpose of freezing? I thought freezing was for the purpose of creating a surface. I am actually concerned about the gaze being precise enough to hit the AOI. We might have to train the participants before carrying out the experiment.

papr 21 July, 2021, 12:31:59

Freezing the eye model is independent of freezing the scene video stream for the purpose of setting up a surface definition. It is something completely different.

user-7daa32 21 July, 2021, 12:35:10

Okay.. I will check it out.

papr 21 July, 2021, 12:35:41

You don't have to worry about it if you use the 2d pipeline. It only applies to the 3d eye model data (red and blue circles in your screenshots)

user-7daa32 21 July, 2021, 12:38:39

I have been using the 3d eye model and have yet to try out the 2d pipeline. But since the participants don't need to move their heads, the 2d pipeline should give the most precise data

user-b62d99 21 July, 2021, 15:15:48

hi, would anyone know what exactly is in a .pldata file? I tried looking over the documentation but it's still unclear to me

user-b62d99 21 July, 2021, 15:17:16

I also tried opening a .pldata file using the code linked at the bottom of the page (https://github.com/pupil-labs/pupil/blob/315188dcfba9bef02a5b1d9a3770929d7510ae2f/pupil_src/shared_modules/file_methods.py#L57-L87) but I kept receiving this error

papr 21 July, 2021, 15:18:39

Use this code instead https://gist.github.com/papr/81163ada21e29469133bd5202de6893e

papr 21 July, 2021, 15:17:50

This code is not meant to load pldata files 🙂

user-b62d99 21 July, 2021, 15:57:39

thanks!

user-b62d99 21 July, 2021, 16:27:34

How would one go about rewrapping edited .pldata files?

user-53365f 21 July, 2021, 18:27:12

Hi there. Do I understand correctly that the DIY unit only has one camera? (I.e. pupil detection for one eye)

user-670bd6 22 July, 2021, 02:49:24

Hi, I purchased the upgrade for the Pupil Core DIY - the fish-eye lens (DSL235D-650-F3.2) - but after replacing the lens, the camera appears to be completely out of focus. I have tried adjusting the absolute focus; it does not seem to help.

papr 22 July, 2021, 07:15:00

Correct, the DIY frame is monocular.

user-53365f 22 July, 2021, 08:47:19

Thanks for that. At 2740 EUR this might become quite an expensive hobby! 😄

user-562b5c 22 July, 2021, 14:20:47

Hi, I am running post-hoc pupil detection for the first time and was wondering if I have to re-run the 'Raw Data Exporter' to get the updated raw data files after post-hoc detection

papr 22 July, 2021, 14:26:27

Once the pupil detection is done, the internal pupil data (e.g. the data used to display the diameter 3d timeline) is updated. Existing exports are not changed. You will have to start a new export.

The tokens are used internally to determine if data needs to be updated. E.g. the blink detector is based on pupil data. If it updates, it needs to recalculate blinks. To avoid unnecessary recalculations, we use tokens identifying the data versions. They do not actually contain data.

For reference, you can find the implementation here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/data_changed.py

user-562b5c 22 July, 2021, 14:21:55

Also, what are those token files in the "offline_data" folder? What is their purpose, and how do I open them? Thanks

user-562b5c 22 July, 2021, 14:29:47

@papr Thanks

user-60f500 22 July, 2021, 16:28:34

Hi guys, do you know if it is possible to record pupil data without recording the world camera? I'm just interested in pupil size variations, and the world movies are huge and not useful for me. Thanks!

papr 22 July, 2021, 17:15:25

Technically, this is possible. I just do not know if there is a user interface option for it. I made note for myself to check that. If it is not yet possible, I will add an option to our next release. Until then, you can delete the world* files afterward. Player will adapt automatically.

user-60f500 22 July, 2021, 17:18:35

OK, thanks for the answer. Sure, it would be nice to have an interface option to enable/disable world camera recording! :)

user-b62d99 22 July, 2021, 17:17:06

Hi, thanks for the help so far. We are not sure where we're going wrong with rewrapping the .pldata file, but when we try to reopen the folder containing the edited .pldata file, it doesn't run in Pupil Player. We're not sure if we're using the suggested class correctly; we tried following the example provided, and while it didn't produce any errors, it doesn't run

papr 22 July, 2021, 17:17:39

Could you elaborate on what you are trying to achieve?

user-b62d99 22 July, 2021, 17:21:55

We're undergrad interns and have been asked to write a program that will allow users to crop the video shown on pupil player. Our approach was to crop each type of file individually and edit out timestamp data that was unimportant. So, we were attempting to open, edit, and resave the .pldata file without the timestamp data that was outside of the cropped video range. I think we're going wrong when it comes to rewriting the .pldata file after editing

papr 22 July, 2021, 17:24:11

Each pldata file also has a timestamp file, that needs to be adjusted. The PLData_Writer class linked above takes care of that.

papr 22 July, 2021, 17:23:00

Ah, so temporal crop, instead of spatial crop? Are you aware that you can temporally crop the export?

user-b62d99 22 July, 2021, 17:26:23

We were told you can manually crop videos, but we were asked to automate it, that's all we really know. And yes, we were able to edit the timestamps using that class, but when we reopened the .pldata file using the load function you mentioned earlier it threw an error

papr 22 July, 2021, 17:27:36

Could you clarify if you need to crop/trim the actual recording or if cropping (we call it trimming) the export in an automated fashion would be sufficient for your usecase?

papr 22 July, 2021, 17:28:18

The former is much more difficult and error prone than the latter. The latter is actually very easy.

user-b62d99 22 July, 2021, 17:32:51

So sorry, we're not sure what you mean by the export

papr 22 July, 2021, 17:34:42

If you hit e in Player, an export will be started. By default, this includes the pupil and gaze data as CSV files as well as the scene video with gaze overlay. One can specify the (trim) range for which the export should be performed. https://docs.pupil-labs.com/core/software/pupil-player/#export

user-b62d99 22 July, 2021, 17:40:11

Our supervisor asked us to not use the exports; what is the best approach to trim the actual recording?

papr 22 July, 2021, 17:44:36

ok, in this case you are on the right path. Even though we do not give any guarantees regarding the intermediate format (in other words, it might change in the future; currently there are no such plans though).

Reading the pldata files and excluding samples outside of the trim range should work, too. You might want to adjust the info.player.json file as well in regards to the start_time and duration fields.

I suggest using https://github.com/pupil-labs/pyav for trimming the videos. Here you need to rewrite the timestamp files, too. Delete any _lookup.npy files that might exist already.
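A rough sketch of that pldata pass, using Pupil's own file_methods helpers (API names as they appear in pupil_src/shared_modules/file_methods.py; verify them against your checkout):

    from file_methods import load_pldata_file, PLData_Writer

    rec_dir = "path/to/recording"
    trim_start, trim_end = 10.0, 20.0  # trim range in pupil time, seconds

    # load_pldata_file returns (data, timestamps, topics); the data entries
    # keep their raw msgpack payload in their .serialized property
    gaze = load_pldata_file(rec_dir, "gaze")

    writer = PLData_Writer(rec_dir, "gaze_trimmed")
    for datum, ts, topic in zip(gaze.data, gaze.timestamps, gaze.topics):
        if trim_start <= ts <= trim_end:
            writer.append_serialized(ts, topic, datum.serialized)
    writer.close()  # also writes the matching gaze_trimmed_timestamps.npy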

user-b62d99 22 July, 2021, 17:46:42

Okay thanks! We'll look into adjusting the info.player.json file as we didn't before and that may be the issue

papr 22 July, 2021, 17:48:21

Feel free to share the error message in 💻 software-dev. I might be able to give further hints.

Btw, if this works well, this tool might be useful to others as well! Should you consider open-sourcing your solution, please add a link to the project over at https://github.com/pupil-labs/pupil-community

user-b62d99 22 July, 2021, 17:59:30

Thanks for all your help! You'll probably be hearing from us in 💻 software-dev soon and we'll definitely consider open-sourcing our code if it runs well

user-312660 22 July, 2021, 19:20:39

Hi! I am new to using pupil labs and was wondering how to relate the timestamps given in the csv file to clock time. This is what my time stamp data looks like, and I am not sure how to interpret it. Would really appreciate some help!

Chat image

papr 22 July, 2021, 19:26:11

Hi! The clock starting point is arbitrary. On Windows, the clock start can even be negative sometimes. (Unfortunately, I have not been able to find out the reason for this.) The timestamp unit is seconds.

user-312660 22 July, 2021, 19:30:41

Thank you! So the timestamp is just giving us the time in between each data point (for example the time between point 4 and 5 is 0.0079 seconds). But is there no way to correspond it to a standard clock time? I am trying to align this data with behavioral data and need to match the time points. I am not sure how to convert these time stamps to actual time

papr 22 July, 2021, 19:31:17

There is. Check out https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
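The core of that tutorial is a single offset (a sketch assuming the post-v2.0 recording format, whose info.player.json stores Unix time and pupil time for the same instant):

    import json
    from datetime import datetime, timezone

    with open("recording/info.player.json") as f:
        info = json.load(f)

    # both fields describe the recording start: one in Unix time, one in pupil time
    offset = info["start_time_system_s"] - info["start_time_synced_s"]

    def pupil_to_utc(pupil_ts):
        return datetime.fromtimestamp(pupil_ts + offset, tz=timezone.utc)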

user-312660 22 July, 2021, 19:33:05

Thank you!!

user-562b5c 22 July, 2021, 22:10:02

Hi,

I have a question regarding the pupil_postions file generated after post-hoc processing.

1) I can see that pupil time stamps have changed. For example in the file generated from the recording the first pupil time stamp was '711705.765601999' but after post-hoc the first pupil time stamp changed to '711705.831237'. Shouldn't both be the same?

2) The 3d diameter generated during the recording was something like '1.9...' or '2.4...', which looked normal and in the expected range. However, after post-hoc processing I have certain rows, even with confidence above 0.6, with a 3d eye diameter as large as '174.75' or '55.7879', which looks absurd. I am not sure if I did something wrong during the post-hoc processing, forgot to do something, or the data got corrupted.

Thanks for your help.

papr 23 July, 2021, 08:08:42

1) The post-processing generates two pupil datums per eye video frame (1x 2d, 1x 3d). In Capture, the eye video recording might start slightly delayed to the world process which is responsible for saving the pupil data to disk. This is why the recorded pupil data contains earlier timestamps. 2) The confidence column does not say anything about the quality of the 3d eye model which is responsible for inferring the 3d diameter. For fitting the eye model, it is best to look into different directions such that it can be triangulated well. Ideally, you have such a sequence recorded in your eye videos. The latest v3.4 release gives rough visual feedback regarding the fit and also adjusts the model_confidence value accordingly. See https://github.com/pupil-labs/pupil/releases/tag/v3.4 for details.

Pro tip: You can also restart the detection from the UI and it will keep your current model. This way you can reapply a well-fit model to the beginning of the video.

user-d44f4f 23 July, 2021, 06:52:13

@papr Hi, I have a question about the solution for device slippage. In Pupil Core 3.4, the short-term eyeball model is built to adapt to slippage. However, as far as I know, in order to compute the correct 3D gaze in the scene image, the rotation and translation matrices obtained in the calibration should also be updated in real time according to the newest eyeball model. But I cannot find the corresponding code segment in the newest version of Pupil Core. Can you help me?

papr 23 July, 2021, 08:11:54

Hi 🙂

> the rotation and translation matrix obtained in the calibration should also be updated in real time
This is actually not the case. These transformation matrices describe the relation between eye camera <-> scene camera, not eye model <-> scene camera. As long as you do not change the relative eye or scene camera positions, you do not need to update these matrices.

user-d44f4f 23 July, 2021, 08:38:09

@papr Thanks for your reply. Which parameters are estimated during the calibration? I studied the code of Pupil 1.0 before; I thought the matrices describe the relation between the sphere center <-> scene camera.

papr 23 July, 2021, 08:39:18

No, they never have. The sphere center is estimated in eye camera coordinates, which is why there is no need to adjust the eye camera <-> scene camera transform

papr 23 July, 2021, 08:41:34

These are the only two parameters estimated for the binocular gaze mapper: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L226-L227

user-d44f4f 23 July, 2021, 08:50:06

@papr I think the person-specific kappa angle has been computed in the calibration. Do the two parameters include the information about the kappa angle?

papr 23 July, 2021, 08:51:13

> the person-specific kappa angle has been computed in the calibration
That is correct.
> Do the two parameters include the information about the kappa angle?
They do not explicitly, which is why the slippage compensation is not perfect and requires recalibration after some amount of time.

user-d44f4f 13 August, 2021, 13:15:37

Hi, @nmt. Several days ago, I discussed the binocular 3D gaze calibration process with papr. Recently, I studied the 3D gaze calibration and ran into a question. Firstly, we think the eye_camera_to_world_matrix0 and eye_camera_to_world_matrix1 implicitly include the kappa angle as well as the relation between eye camera and scene camera https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L226-L227.
However, we found that the sphere centers are transformed from eye camera coords to world coords using both eye_camera_to_world_matrix0 and eye_camera_to_world_matrix1 https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L257-L258. We think this is unreasonable, because the transformation of the sphere centers does not need the kappa angle information. Can you help me with an answer?

papr 23 July, 2021, 08:51:35

Kappa is encoded implicitly within the transformation matrices.

user-d44f4f 23 July, 2021, 08:53:09

Did you mean that the transformation matrices are the eye_camera_to_world_matrix0 and eye_camera_to_world_matrix1?

papr 23 July, 2021, 08:54:56

Yes, I was referring to these two matrices for the binocular gaze mapper. Capture also fits two additional monocular gaze mappers which both own a separate transformation matrix.

user-d44f4f 23 July, 2021, 09:07:24

So the eye_camera_to_world_matrix0 not only describes the relation between eye camera <-> scene camera, but also includes the kappa angle. Why, then, does get_eye_cam_pos_in_world state that eye_cam_pose_in_world is the eye camera pose in world coordinates?

Chat image

user-d44f4f 23 July, 2021, 09:16:40

@papr Hi, what I mainly don't understand is how the slippage problem can be solved only by updating the eyeball model in real time.

papr 23 July, 2021, 10:56:29

Thank you for the clarification. I will try to think of a way to answer your questions more clearly than I have above.

user-d44f4f 23 July, 2021, 11:03:13

Thank you very much.

user-312660 23 July, 2021, 15:39:35

Hi! I just converted the timestamps to date/time but was wondering what the timezone is.

papr 23 July, 2021, 15:50:01

It converts to Unix epoch, so UTC+0.
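
For reference, a minimal sketch of that conversion in Python, assuming the timestamps have already been converted to Unix epoch seconds (the sample value is hypothetical):

from datetime import datetime, timezone

ts = 1627052401.123  # hypothetical Unix-epoch timestamp in seconds

# fromtimestamp with tz=timezone.utc yields an explicit UTC+0 datetime
dt_utc = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt_utc.isoformat())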

user-7daa32 26 July, 2021, 14:29:13

I'm really finding it difficult to learn PsychoPy. It seems most of what's done there involves programming and coding.

What's the best way to approach it?

papr 26 July, 2021, 15:08:22

We are currently working on extending PsychoPy's newest "Builder" components for eye tracking. We are planning on adding support for Pupil Core. Unfortunately, this work will take some time. Until then, you will have to integrate the Network API manually into the Python code generated by PsychoPy.
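
Until that lands, a minimal sketch of such a manual integration, using the documented Pupil Remote commands over ZMQ from inside a PsychoPy-generated script (this assumes Capture runs on the same machine with the default port 50020):

import zmq

# Connect to Pupil Remote
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Start a recording when the experiment begins ('R' = start recording)
pupil_remote.send_string("R")
print(pupil_remote.recv_string())

# ... run PsychoPy trials here ...

# Stop the recording at the end ('r' = stop recording)
pupil_remote.send_string("r")
print(pupil_remote.recv_string())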

user-7b683e 26 July, 2021, 17:50:06

Hi, I'm using Pupil Core for a project, and I need access to the angular velocity of the headset as well as the angular velocity of the participant's eye. I think I can get the eye rotation data. However, I am not sure how I can collect the angular velocity of the headset via the Network API. What can I do?

papr 27 July, 2021, 07:32:58

Hi, please see my notes below:
- Eye rotation speed: subscribe to pupil, discard 2d data (the method field indicates whether a datum is 2d/3d), and use the timestamp and circle_3d->normal fields to calculate rotation speed. You might want to discard low-confidence data points and make sure that your 3d model is fit well.
- Headset rotation: in Core, this is only possible using the head pose tracker, which requires further setup (https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking). Subscribe to head_pose to get real-time head pose estimations.
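
A minimal sketch of the eye-rotation-speed calculation described above, assuming two consecutive high-confidence 3d pupil datums have already been received (the sample values below are hypothetical):

import numpy as np

def angular_speed_deg_per_s(datum_a, datum_b):
    # Rotation speed between two 3d pupil datums via their optical-axis normals
    n_a = np.asarray(datum_a["circle_3d"]["normal"])
    n_b = np.asarray(datum_b["circle_3d"]["normal"])
    cos_angle = np.clip(np.dot(n_a, n_b), -1.0, 1.0)  # guard against rounding
    angle_deg = np.degrees(np.arccos(cos_angle))
    return angle_deg / (datum_b["timestamp"] - datum_a["timestamp"])

# Hypothetical consecutive datums from the pupil stream
a = {"timestamp": 10.000, "circle_3d": {"normal": [0.0, 0.0, -1.0]}}
b = {"timestamp": 10.005, "circle_3d": {"normal": [0.017, 0.0, -0.99986]}}
print(angular_speed_deg_per_s(a, b))  # roughly 190 deg/s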

user-7b683e 26 July, 2021, 17:54:40

In addition, I have a copy of Pupil Capture v1.7.

user-5d0ddd 26 July, 2021, 18:15:14

Hello! I am using Pupil Core and I am getting negative values in the diameter_3d readings for pupil size. Is this an issue that we can correct on our end, or is this an issue with the tracker?

papr 27 July, 2021, 07:44:49

Negative 3d diameter values are a strong indicator that the 3d model is not fit well. First, make sure that the 2d detection works well (high confidence values in the top-left graph), and then rotate your eyes to different locations so that the eye can be triangulated. (The pupil visualization colors might differ if you are using an older version of Capture.)

Chat image

user-7b683e 27 July, 2021, 06:24:06

Another thing I have not understood is the data structure that comes in from the Network API. In it there is a field named "base_data". Sometimes I receive a base_data field nested inside another base_data field, and each one has a norm_pos pair that differs from the other identically named keys. What is the difference between the norm_pos data in the gaze_on_srf and base_data fields?

papr 27 July, 2021, 07:38:25

Pupil Capture processes data by applying a series of transformations. These can include changes of coordinate system (while keeping the unit).

Each transformation generates a new datum, which references its "ancestor" in base_data.

This is the rough pipeline:
1. Capture eye image (eye camera coord system)
2. Pupil detection (eye camera coord system)
3. Gaze estimation (scene camera coord system)
4. Surface mapping (surface coord system)

Read more about the coordinate systems here: https://docs.pupil-labs.com/core/terminology/#coordinate-system

papr 27 July, 2021, 07:38:38
papr 27 July, 2021, 07:39:53

So in your case, norm_pos in gaze_on_srf is in surface coordinates, while the norm_pos of its base_data is in scene camera coordinates

user-7b683e 27 July, 2021, 08:10:03

Thanks for your reply. However, I receive more than two norm_pos entries in nested base_data fields. To show the structure simply: {"gaze_on_srf":{"norm_pos":"bla", "base_data":{"norm_pos":"bla", "base_data":{"norm_pos":"bla"}}}}

papr 27 July, 2021, 08:16:13

@user-7b683e

{
  "gaze_on_srf": {
    "norm_pos":"bla",  # gaze in surface coordinates
    "base_data":{
      "norm_pos":"bla",  # gaze in scene coordinates
      "base_data":{
        "norm_pos":"bla"  # pupil data in eye camera coordinates
      }
    }
  }
}

user-60f500 27 July, 2021, 12:04:56

Hi guys, I'm having trouble with the sampling rate of the pupil camera when I record data using Pupil Capture. I fixed the sampling rate of the pupil camera (directly in Pupil Capture) to 120Hz, and after data collection I noticed that the mean sampling rate for my recording was around 247Hz. If I fix the sampling rate to 200Hz it does not change anything. Moreover, the differences (in seconds) between the time points (t(i+1) - t(i)) are absolutely not constant; they oscillate between ~0.006 and ~0.014. Is this normal?

papr 27 July, 2021, 12:17:57

Hey, you are looking at the gaze data, correct? Please note that the gaze data is actually a combination of three streams (monocular left, monocular right, and binocular). Pupil Capture tries to match two pupil datums for the binocular gaze estimation. Since this algorithm needs to run in real time and there is only limited knowledge about future data, the algorithm upsamples the pupil data. See https://nbviewer.jupyter.org/github/pupil-labs/pupil-matching-evaluation/blob/master/gaze_mapper_evaluation.ipynb for details

user-60f500 27 July, 2021, 12:20:32

Hi papr, no, I am looking at the diameter data in the pupil_positions.csv file.

papr 27 July, 2021, 12:21:25

In this case, make sure to differentiate between left and right, and between 2d and 3d eye data. See the method and eye_id columns (0: right, 1: left).
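
For example, a minimal pandas sketch of this separation, assuming the column names of recent Pupil Player exports (the path is hypothetical):

import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")

# Keep only 3d-detected data for the right eye (eye_id 0);
# "2d c++" rows are dropped, pye3d rows are kept
right_3d = df[(df["eye_id"] == 0) & (df["method"].str.contains("3d"))]

# Inter-sample intervals within this single stream
print(right_3d["pupil_timestamp"].diff().describe())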

user-60f500 27 July, 2021, 12:31:29

Yes, I already separate left and right eyes and 2d/3d data, but it does not solve the problem. I still have varying differences between time points, while these differences should be constant if the sampling rate were correct (right?)

papr 27 July, 2021, 12:34:09

No, the eye cameras have a varying sampling rate. 120/200 Hz is the target frame rate, but in practice it varies. In your case, 1/120 = ~0.008 s. The 0.012 entry indicates that a frame was dropped.

user-60f500 27 July, 2021, 12:32:33

0.007961 0.008041 0.007913 0.007999 0.008021 0.007962 0.008381 0.012261 0.003437 0.007916 - for example, these are the first 10 differences between my time points

user-60f500 27 July, 2021, 12:32:54

So t(i+1) - t(i)

papr 27 July, 2021, 12:35:45

If you need evenly spaced samples, I suggest linear interpolation. In fact, I recommend looking at Kret, M. E., & Sjak-Shie, E. E. (2019). Preprocessing pupil size data: Guidelines and code. Behavior Research Methods, 51(3), 1336–1342. https://doi.org/10.3758/s13428-018-1075-y
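
A minimal sketch of such a linear interpolation onto an even grid (the input values are hypothetical):

import numpy as np

# Unevenly sampled data: timestamps (s) and 3d diameters (mm)
t = np.array([0.000, 0.008, 0.016, 0.028, 0.036])  # note the dropped frame
d = np.array([3.10, 3.12, 3.11, 3.15, 3.14])

# Resample onto an evenly spaced 120 Hz grid
t_even = np.arange(t[0], t[-1], 1 / 120)
d_even = np.interp(t_even, t, d)
print(np.column_stack([t_even, d_even]))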

user-60f500 27 July, 2021, 12:44:10

Ok! Thanks for the answer and the reference, I will have a look! πŸ™‚ I have another issue, this time using LSL. I am recording synchronised EEG and pupillometry data using a platform that uses LSL to synchronise the devices. When I look at the data I have even bigger differences between time points (up to 12 seconds sometimes!). And it is not rare that I have negative differences, which suggest that the t(i+1) point is before the t(i) point. Do you know what this means?

papr 27 July, 2021, 12:45:25

Which version of the LSL relay did you use?

user-60f500 27 July, 2021, 12:45:21

In this case the sampling rate is closer to 250Hz than 200Hz. Even if the sampling rate is varying, is it normal to have such a big difference?

user-60f500 27 July, 2021, 12:46:18

1.1

user-60f500 27 July, 2021, 12:49:25

It is 2.1, sorry

papr 27 July, 2021, 12:50:38

2.1 forwards gaze, i.e. this is subject to the mixed streams mentioned above.

papr 27 July, 2021, 12:52:41

Unfortunately, in this version, it is not possible to reproduce the original pupil datum timestamp. Therefore, I suggest sorting the data by time and then applying the preprocessing linked above. And yes, this data is subject to higher timestamp-difference variance (due to using the gaze stream instead of the pupil stream).

user-60f500 27 July, 2021, 12:50:50

And I'm using Pupil Capture v3.1.16

papr 27 July, 2021, 12:53:16

Larger time gaps indicate a transmission issue.

user-60f500 27 July, 2021, 12:55:08

Ok, I understand. Thanks for your help papr πŸ™‚

user-85ba36 27 July, 2021, 13:14:24

Hi, I have a question about the raw data - can I read the time to first fixation and the time spent looking at a surface?

papr 27 July, 2021, 13:21:56

You will have to calculate these by yourself, as Pupil Player does not make any assumptions regarding the starting point.
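
A minimal sketch of both metrics, assuming the file and column names of recent Pupil Player surface exports ("Screen" is a hypothetical surface name, and trial_start is a hypothetical experiment onset in Pupil time):

import pandas as pd

fix = pd.read_csv("exports/000/surfaces/fixations_on_surface_Screen.csv")

# Keep fixations that landed on the surface; the export repeats a fixation
# once per world frame, so deduplicate by fixation_id first
on_surf = (
    fix[fix["on_surf"]]
    .drop_duplicates("fixation_id")
    .sort_values("start_timestamp")
)

trial_start = 12345.0  # hypothetical starting point (seconds, Pupil time)
time_to_first_fixation = on_surf["start_timestamp"].iloc[0] - trial_start
total_look_time = on_surf["duration"].sum()  # durations as exported (ms)
print(time_to_first_fixation, total_look_time)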

user-b25f65 28 July, 2021, 13:44:22

hello, is the model number of the eye camera used in Pupil Core available for disclosure? I wonder where you source cameras with such a small form factor and such a high refresh rate

papr 28 July, 2021, 13:45:23

I am not sure if this information can be disclosed publicly (if at all). Try sending an email to [email removed]

user-5651f6 28 July, 2021, 13:55:18

Hi, I am trying to use the surface tracker. I have 3 screens that are in the same plane, but I have an additional screen that the participant will hold. It says in the docs that all surfaces have to be in the same plane. Is there any way to define surfaces in more than one 2d plane? If this guideline is not followed, will the results be worthless?

papr 28 July, 2021, 13:56:55

I think the documentation says/should say that the markers are in the same plane as the surface that you want to map to. In your case, the additional screen just needs its own markers and surface definition. Then you should be fine.

user-5651f6 28 July, 2021, 14:02:14

oh yeah sorry my bad, I misread that and panicked for a second. Thanks for the prompt reply.

user-7b683e 29 July, 2021, 17:31:52

I have downloaded Capture v3.6 and received data via the Network API. However, I couldn't see the circle_3d->normal field you mentioned. In gaze_on_surfaces, there is only one base_data field. For example, the field contained "base_data":["gaze.3d.0", "12647.33"].

So, how can I receive circle_3d data in Capture v3.6 as in v1.7?

papr 29 July, 2021, 17:36:57

I was already wondering why your data had so many levels. We simplified the data structure at some point because it blew up the recording files. In newer versions, surface-mapped gaze does not include the full pupil data.

papr 29 July, 2021, 17:35:26

Do you mean 2.6 or 3.4? 3.6 does not exist πŸ™‚

user-7b683e 29 July, 2021, 17:36:11

Oops, yes 3.4.0. I'm sorry, this day was so long

user-7b683e 29 July, 2021, 17:39:15

What can I do to be able to receive both the eye angle and the head angle? Is there a version in which both of these data streams can be received?

papr 29 July, 2021, 17:43:10

Two questions:
1. Are you using the surface-mapped gaze to estimate head rotation?
2. Do you need these two data streams in real time, or is it ok for you if you match them post-hoc?

user-7b683e 29 July, 2021, 17:47:29
  1. You told me that I can use the head_pose subscription key to receive headset rotation info. For this reason, I would like to get head_pose data.

  2. I don't need real-time degree data, but the two streams must be in sync.

papr 29 July, 2021, 17:51:28
  1. Please be aware of the difference between head pose data and surface tracking data. πŸ™‚ Both use the same markers, but they make different assumptions and fulfil different roles.
  2. Understandable. In this case, the easiest way would be to make a Pupil Capture recording, export the head pose tracker data and raw data to csv, read the csv files with the post-processing software of your choice, and match the streams this way (see the sketch below).
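
A minimal sketch of that matching step, assuming the csv file and column names of recent Pupil Player exports (the paths are hypothetical):

import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")
head = pd.read_csv("exports/000/head_pose_tracker_poses.csv")

# Match each pupil datum to the nearest head pose in time
merged = pd.merge_asof(
    pupil.sort_values("pupil_timestamp"),
    head.sort_values("timestamp"),
    left_on="pupil_timestamp",
    right_on="timestamp",
    direction="nearest",
)
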
user-7b683e 29 July, 2021, 18:04:07

Okay, I learned that I can use a recording file for this case. However, I want to ask my question more clearly again.

I want to receive two data streams - the eye angle and the head angle along the x-axis. Using these data, I will calculate velocity.

My questions are: 1) Can I get degree data (for eye and head) via the Network API? 2) If so, with which version of Pupil Capture?

papr 29 July, 2021, 18:05:46

1) Yes, you can. 2) You can use the latest version for this.

But gaze_on_surfaces is not what you are looking for. Instead of using the surface tracker plugin, you need to use the head pose tracker plugin.

papr 29 July, 2021, 18:06:26

You need to subscribe to pupil and head_pose and process these two streams.
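
A minimal sketch of that double subscription, using the documented Network API handshake (this assumes Capture runs locally with the default Pupil Remote port):

import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the subscription port
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

# Subscribe to both streams
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("pupil.")
sub.subscribe("head_pose")

for _ in range(100):  # bounded loop for the sketch
    topic, payload = sub.recv_multipart()
    datum = msgpack.loads(payload)
    print(topic.decode(), datum["timestamp"])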

user-7b683e 29 July, 2021, 18:12:43

I would like to thank you. The data structure changed drastically between versions. I have got the data I want now.

user-a0389d 29 July, 2021, 18:57:12

Hello everyone! As I am analyzing the Pupil Core data for my dissertation, I have a question related to fixations. Is there a term for the time elapsed between one fixation and the next? I am tentatively calling it "fixation-changing time". Please let me know if anyone knows. I will continue to look it up in the Pupil Labs documentation and published papers. If I find anything, I will follow up here. Thanks!

user-adfe33 30 July, 2021, 01:34:52

Kia ora from New Zealand. I'm currently tasked with extracting saccade data from Pupil Labs tracking data. I've attempted to follow the preprocessing work by @user-2be752 (https://github.com/teresa-canasbajo/bdd-driveratt). I'm a bit out of my depth and was hoping I could get a few pointers. I managed to get the 'make' file to complete; however, when trying to run the python script I'm getting lots of errors. I was hoping someone might have some fairly detailed instructions that I could follow. Any help would be hugely appreciated. - Mat

user-4bad8e 30 July, 2021, 11:44:23

Hello. Thank you for your kind support. I am trying to use the data in fixations.csv to estimate subjects' visual angles. However, sometimes the variance of the gaze_point_3d_x,y,z values is extremely high. What could be the cause of this error?

Chat image

nmt 02 August, 2021, 09:18:06

Hi @user-4bad8e. Variance like this usually coincides with low pupil detection confidence, e.g. when the pupils become obscured by eyelids/lashes. You will probably need to filter these low-confidence events from your data (e.g. <0.6). Confidence values are under the column header β€˜confidence’
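
For example, a minimal pandas sketch of that filter, assuming a fixations.csv export with a confidence column (the path is hypothetical):

import pandas as pd

fix = pd.read_csv("exports/000/fixations.csv")

# Discard low-confidence fixations before analysing gaze_point_3d_*
fix_clean = fix[fix["confidence"] >= 0.6]
print(f"kept {len(fix_clean)} of {len(fix)} fixations")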

user-ee72ae 30 July, 2021, 14:02:16

hello, what is the best cell phone to use with the Pupil Core?

user-ee72ae 30 July, 2021, 14:03:32

I'm using one, but with a lot of problems when we download the file to generate the results

nmt 02 August, 2021, 09:24:50

Hi @user-ee72ae, please be aware that Pupil Mobile is no longer maintained. With that said, it should still work on, e.g. OnePlus devices. Which device are you currently using?

user-7b683e 30 July, 2021, 18:13:15

Hey papr,

After what you said about getting info on pupil and head pose, I looked at the documentation of these fields. However, I haven't found some details that I would like to see in the docs.

Firstly, in the pupil datum structure, I want the eye rotation amount in degrees. But its unit is declared as mm - I hope that means millimeters.

Secondly, I haven't seen a guide on the head_pose subscription key. Like the eye rotation, I want to get the rotation amount of the headset in degrees.

I was hoping that you could address the points below:
1) Can I get degree values to calculate angular velocities? If yes, from which field can I read them? Apparently circle_3d->normal isn't a suitable value due to its unit - I need degrees.
2) If that is not possible, how can I calculate degree values from the existing fields in the current datum structure - i.e. Capture 3.4.0?
3) Or is there any other method you can suggest for calculating the velocity of the eye and headset in deg/sec?

Sorry, I don't think the documentation on this subject is enough for my purposes. For this reason, I have asked you many questions.

Thanks.

End of July archive