πŸ‘ core


user-da7dca 01 January, 2022, 19:10:54

Happy New Year everyone! I'm currently trying to build Pupil from source on an arm64 Linux device. After a bit of stumbling I managed to install all Python libraries as well as the requirements. When starting Pupil Capture, however, I get the following error message:

File "/pupil/pupil_src/launchables/world.py", line 537, in world main_window = glfw.create_window(
glfw.GLFWError: (65542) b'GLX: No GLXFBConfigs returned'

I already tried to build glfw as well as the related Python bindings from source. The weird thing is that with the pyglfw sample application I have no problems initializing a glfw window. The pyglui sample also runs without problems (so I don't think it's graphics-driver related). Does anyone know what else I could do to solve my problem? Thanks in advance!

user-d1ed67 03 January, 2022, 05:10:08

Hi @papr. Recently, I studied the calibration routine of the software, and I have some questions about it.
  1. There are two "Gazer" classes. Since we are using HMD streaming to provide images of all 3 cameras to Pupil Capture, is our gaze calibration done by the GazerHMD3D class?
  2. Also, for post-hoc gaze calibration through Pupil Player, is it done through GazerHMD3D or Gazer3D?
  3. In GazerHMD3D, the eye translation with respect to the world camera is supplied externally. Could you please point out which class/method supplies these parameters? Also, can we manually set the eye translation parameter?
  4. For both gaze calibration routines, the reference data are essential. Could you please point out which class/method provides the reference data?

papr 03 January, 2022, 10:37:27
  1. The video backends (UVC backend, HMD Streaming), the calibration choreographies (screen marker calib., HMD calibration, etc.), and gazers (Gazer3D, GazerHMD3D, etc.) are components that fulfill different tasks:
  2. video backends: supply images
  3. calibration choreographies: collect pupil and reference data. The latter is implementation-specific. The screen marker calibration displays concentric circles that can be recognized in the scene camera's video feed. Their pixel location is used as reference data. The HMD calibration requires an external program, e.g. hmd-eyes, to display reference targets and send their locations to Capture.
  4. gazers: take the collected pupil and reference data, calculate the eye-to-scene-camera mapping, and use it to transform pupil data into gaze data. Gazer3D (for the Pupil Core headset) and GazerHMD3D (HMD add-on) differ slightly in their assumptions, e.g. the location of the scene camera.

Just because you use the HMD streaming video source does not mean you need the HMD calibration or gazer. It depends on the hardware that you are using. Could you remind me what hardware you use?

  1. Player uses Gazer3D by default. If you run this PR from source, you can use the HMD 3d gazer, too. But it is experimental. https://github.com/pupil-labs/pupil/pull/2176

  2. These are typically supplied by the software displaying the reference targets in the virtual world camera, e.g. our unity plugin https://github.com/pupil-labs/hmd-eyes/

  3. In case of the screen marker calibration choreo., the plugin detects the concentric circles within the scene camera video feed. In case of the HMD Calibration, the external program supplies the locations, see 3.

papr 03 January, 2022, 08:08:34

By default, we raise all glfw warnings as errors. You can add your error code (65542) to this whitelist of ignored error codes https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gl_utils/utils.py#L385-L389

user-da7dca 03 January, 2022, 17:30:47

hey @papr thanks for your quick reply! I have now whitelisted the error message, which led to the subsequent error message:

glfw.GLFWError: (65545) b'GLX: Failed to find a suitable GLXFBConfig'

If I now also whitelist this one, no window can be created and the startup is aborted at the next assertion (glfwSetWindowPos: Assertion window != ((void *)0)).

user-d1ed67 03 January, 2022, 17:34:22

We are using the Pupil Labs eye cameras and world camera, but we modified the setup a little bit such that the original eye translation hardcoded in calibrate_3d.py is inaccurate. Given that, I think I should just run from source and modify the hardcoded eye translation in calibrate_3d.py according to our setup, right?

papr 04 January, 2022, 05:49:48

That is a possibility, yes.

user-4f103b 04 January, 2022, 00:41:35

Hi, can we use the Reference Image Marker with Pupil Core? If yes, then how can we generate a reference image marker? Thank you

papr 04 January, 2022, 05:50:53

This feature is only available in Pupil Cloud and therefore only for Pupil Invisible recordings.

user-624202 05 January, 2022, 16:50:11

Hi, is there any way to trim or split a surface post-hoc so that multiple heat maps may be made for it? Or a way to merge identical-sized surfaces?

papr 05 January, 2022, 16:54:22

Hi, could you provide an example use case for the latter? You can define multiple surfaces based on the same markers and adjust them, such that they do not overlap.

user-624202 05 January, 2022, 17:01:16

Thanks! I made a surface representing an area of a face (like "right eye") and would like to be able to compare fixation distributions between the eyes of different faces. As of now, however, all eyes are treated as the same surface. Each eye appears for only 2 seconds at a time, so would there be a way to trim by time?

papr 05 January, 2022, 17:05:30

You can trim by time, but you would need to do that manually after exporting the data. You could create annotations as reference points for trimming the data.

Are these faces displayed on a screen? Or are we talking real-life faces?

user-624202 05 January, 2022, 17:06:15

All are on a screen

papr 05 January, 2022, 17:07:03

And are you displaying the apriltag markers on screen, too, or on physically printed papers attached to the screen?

user-624202 05 January, 2022, 17:07:24

Yes--both faces and the apriltags

papr 05 January, 2022, 17:08:20

Then you could display different apriltags with every face. This way you can define a dedicated right eye surface for every face.

user-624202 05 January, 2022, 17:08:51

Thanks! I'll see what I can do

user-5ef6c0 08 January, 2022, 15:07:48

hello. Is there an official channel to make requests for Pupil Player?

papr 10 January, 2022, 09:44:02

Hey, feel free to write about your feature ideas/requests here. Many features can easily be added via third-party plugins. Maybe the requested functionality is available already.

user-b91cdf 10 January, 2022, 08:47:29

Hi everyone, is it possible to set an ROI for the auto-exposure mode?

papr 10 January, 2022, 09:44:32

No, unfortunately not. You can either select full auto exposure or one fixed exposure.

user-7904e8 11 January, 2022, 12:08:35

hi, when I pip install -r requirements.txt, the following problem occurs:

user-7904e8 11 January, 2022, 12:09:05

Chat image

user-7904e8 11 January, 2022, 12:10:40

my computer runs Win10, and my Python version is 3.9

user-7904e8 11 January, 2022, 12:11:19

how do I solve it?

papr 11 January, 2022, 12:16:54

@user-7904e8 hi. To run from source on Windows you need Python 3.6. I recommend running the bundled application. You can get it here: https://github.com/pupil-labs/pupil/releases/latest/#user-content-downloads Nearly every functionality can be added via plugins to the bundled application. So there are only very few use cases where running from source is necessary.

user-8add12 11 January, 2022, 12:31:30

hello everyone, I want to know, is the Pupil labs core suitable for people who wear glasses?

user-9429ba 11 January, 2022, 13:40:28

Hi @user-8add12 In principle Pupil Core can be used with glasses, providing that the eyes are visible to the eye cameras behind the lens/frame. This is not optimal in many cases, and depends on the size/shape of the glasses, and head physiology. Pupil Core will work fine with contact lenses though. I hope this helps!

user-69e8af 12 January, 2022, 18:01:49

Hello! Does anyone have a solution for our eye tracking data not uploading to the cloud? The data on the app is sitting and trying to upload but will not move past 0%. We have tried changing our wifi connection, restarting the phone and app, neither have worked.

user-9429ba 13 January, 2022, 09:48:07

I have responded over in the 🕶 invisible channel 🙂

user-5135bb 14 January, 2022, 04:42:45

Hi, I have a quick question. I have built the DIY Pupil Core but can't seem to get it to select the cameras. I have verified that they both show up in the Windows camera app. I tried running Pupil Capture as admin and it showed several cameras, but when I try to select any of them it says the camera is already in use. How should I proceed?

user-5135bb 14 January, 2022, 04:42:47

message.txt

user-5135bb 14 January, 2022, 04:42:56

there is the log from the core

user-5135bb 14 January, 2022, 04:43:13

Chat image

user-5135bb 14 January, 2022, 04:47:53

I also have tried uninstalling the device in the device manager

user-5135bb 14 January, 2022, 05:09:56

I tried to use a VM running Linux as well, in the VM the pupil capture software picked up the cameras (but had some issues with capturing the output).

user-5135bb 14 January, 2022, 05:39:19

is there a way to manually load the drivers?

user-5135bb 14 January, 2022, 05:52:01

I have attempted this on both a Windows Surface Pro 7 and a custom desktop, both running Windows 11

papr 14 January, 2022, 08:20:15

@user-5135bb Hey! Pupil Capture only installs drivers for Pupil Labs cameras by default. To manually install drivers for third-party cameras, please follow steps 1-7 of these instructions https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md

user-5135bb 14 January, 2022, 08:20:39

Thanks!!!

user-5135bb 14 January, 2022, 08:20:33

ahhhh okay perfect i will give that a try

user-5135bb 14 January, 2022, 08:38:04

Fantastic, it is working great!! Thanks for the help

papr 14 January, 2022, 08:39:00

Great to hear! Let us know how your project goes 🙂

user-5135bb 14 January, 2022, 08:39:10

Will do!

user-6cbd8b 14 January, 2022, 09:22:40

Hello, I have errors when using the eye tracker. I just received it and I have a computer with Windows 10. I have installed "pupil_v3.5-1-g1cdbe38_windows_x64.msi", and when I connect the Core to the computer via USB and launch Pupil Capture (v3.5.1), it only launches a black window, and nothing happens. I don't see the interface of the software appearing... even after an hour. I have uninstalled, reinstalled, restarted, and still the same thing happens.... Can you help me?

papr 14 January, 2022, 09:25:04

Hey, I am sorry to hear that! There should be two windows, a command line prompt window and the actual application window which seems to stay black in your case. Can you confirm that?

user-6cbd8b 14 January, 2022, 10:12:52

Hi, I have only one window, the command line prompt.

papr 14 January, 2022, 10:13:52

Ok, thank you for clarifying that. So it seems like there is a general problem with running the application. What kind of CPU do you have?

papr 14 January, 2022, 10:14:52

For context: The bundled application is compiled for a specific CPU architecture and using a different one causes the software not to run properly.

user-6cbd8b 14 January, 2022, 10:16:10

I have an Alienware laptop: Intel Core i7-7700HQ CPU 2.80 GHz, 16 GB RAM, x64

papr 14 January, 2022, 10:20:17

ok, that should be compatible. Could you please check which Windows 10 version you have? Please also try opening a new command line prompt and running the pupil_capture.exe explicitly from there instead of double-clicking the app icon. I am hoping for some kind of text/log indicating the issue.

user-6cbd8b 14 January, 2022, 10:26:20

My PC is running Windows 10 Professional. I tried to launch "pupil_capture.exe" from the command line prompt. It launches the same black window, like the double-click. No error messages on the command line prompt. Do I need a specific Python version on my computer?

papr 14 January, 2022, 10:34:06

Could you please check if your user directory (C:\Users\<user name>) has a folder pupil_capture_settings containing a capture.log file? If yes, could you please share the log file?

papr 14 January, 2022, 10:28:35

No, the bundle comes with its own python.

user-6cbd8b 14 January, 2022, 10:36:30

the capture.log:

2022-01-13 14:58:48,976 - MainProcess - [DEBUG] os_utils: Disabling idle sleep not supported on this OS version.
2022-01-13 15:54:59,801 - world - [DEBUG] launchables.world: Application Version: 3.5.1
2022-01-13 15:54:59,811 - world - [DEBUG] launchables.world: System Info: User: p.labedan, Platform: Windows, Machine: port-labedan, Release: 10, Version: 10.0.18362
2022-01-13 15:54:59,811 - world - [DEBUG] launchables.world: Debug flag: False

papr 14 January, 2022, 10:37:49

2022-01-13 15:54 Is this your current local time or are these from yesterday?

papr 14 January, 2022, 10:40:16

Every time the application launches, these logs are overwritten. So there seems to be something different between yesterday, when the application ran at least partially, and now where it is not even overwriting the logs anymore.

user-6cbd8b 14 January, 2022, 10:42:02

Oh, it's very strange, I just looked at the capture.log again, and the time has changed! There is only one line in the file now: 2022-01-14 11:39:37,657 - MainProcess - [DEBUG] os_utils: Disabling idle sleep not supported on this OS version.

papr 14 January, 2022, 10:55:19

So the application launches, but seems to get stuck before the window is created. Could you please restart the computer (to ensure that there are no dangling Pupil Capture processes running somewhere), delete the user_settings_* and capture.log files in the pupil_capture_settings folder, and then try launching the app. Please wait for a minute and share the newly generated capture.log file with us in case the issue was not resolved.

user-6cbd8b 14 January, 2022, 15:48:48

message.txt

user-6cbd8b 14 January, 2022, 15:48:34

After a long time, the window appeared! But no camera detected, I think.... and the .log was full of text. I put it in the next message:

user-6cbd8b 14 January, 2022, 13:13:40

Ok, here's the result... but not after 1 min (15 min). Only one line in the .log: 2022-01-14 14:01:35,987 - MainProcess - [DEBUG] os_utils: Disabling idle sleep not supported on this OS version.

papr 14 January, 2022, 15:51:20

Indeed. Something was blocking the application for over an hour. Is it possible that you are running an anti-virus or similar administrative software?

user-6cbd8b 14 January, 2022, 15:59:14

There is the Kaspersky antivirus yes. But there is no message from it...

papr 14 January, 2022, 16:00:54

@user-6cbd8b if no camera is detected, then it is likely that the drivers were not installed correctly. Check out https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting

user-7daa32 14 January, 2022, 20:26:05

Mac and Windows, which is better for eye tracking?

papr 17 January, 2022, 10:06:45

Pupil Capture for macOS/Linux and Windows only differ in a timestamp implementation detail. Otherwise, the biggest difference is that one needs to install drivers for the cameras on Windows. Warning: On the latest macOS Monterey, one needs to start Pupil Capture with administrator privileges to access the cameras.

user-7daa32 14 January, 2022, 20:26:11

Hello

Please I have a question.

In the video, how do you know the 0-to-1 X and Y coordinates of the screen so we can convert to the actual screen size in centimeters? The 0 and 1 are off the screen

user-7daa32 15 January, 2022, 13:16:02

Hello everyone

I want to start using a Mac; would all my eye tracking files from Windows open on a Mac?

papr 17 January, 2022, 10:07:09

The files are compatible between operating systems

user-b91cdf 17 January, 2022, 08:29:18

Hello people, In my setup I want to do a Natural Feature Calibration from inside a car. I have placed different markers at different distances and angles. My original plan was to not allow any head movement during the calibration, as I am then calibrating to the largest FoV possible. However, during validation I wanted to allow the head movements to cover the normal glances and head movements like during driving.

So I get an accuracy of 2.58 +/- 0.09° after calibration and 1.726666667° after validation.

However, I noticed when I do not instruct the subject how to hold his head, I get a much higher accuracy. (Normal Gaze-behavior)

After calibration: 1.03 +/- 0.08°. After validation: 1.3144444°.

What do you recommend ?

Allow head movements in calibration and validation, do not allow head movements in both cases, or a mix?

Problem: If I allow head movements (no instructions to subject) , I calibrate to a smaller FoV and can't make any statement about the accuracy at large eye angles, but accuracy is much better and calibrated to normal gaze patterns.

Cheers, Cobe

papr 17 January, 2022, 10:12:02

Hi Cobe, what you could do is to allow head movement during the calibration and disallow it during validation. This way you can measure the effect of calibrating a small area. @nmt what best practices would you otherwise recommend here?

papr 17 January, 2022, 10:08:38

0 and 1 of the surface coordinates are based on the surface definition. Wherever the top right surface corner is, it corresponds to (1,1) and the bottom left to (0,0)

user-7daa32 17 January, 2022, 12:01:57

Thanks... They can't be known if the surface is not defined? There are background images in the videos together with the computer screen. The edges of the stimulus read from 0.22 to 0.78. How would you convert that to the actual stimulus size in centimeters? The coordinate at the edge of the computer screen is not known. Do (0,0) and (1,1) begin from the marker centre or the edge of the marker?

nmt 17 January, 2022, 11:21:04

Best practice would be to cover all gaze angles during calibration that you might expect, and want to capture, in your experiment. For example, if stimuli appear in the periphery of vision (e.g. in the wing mirror), the driver may glance at the mirror without moving their head. In such a case, it would be worth having the system calibrated for that.

You could subsequently validate with or without head movement:
With - provides accuracy during more typical gaze (majority of eye movements)
Without - provides accuracy at larger angles (minority of eye movements, but possible, e.g. glancing at the mirror)

Note that even if accuracy is reduced at larger viewing angles when compared to more typical gaze, ~2.5 degrees is likely sufficient to identify the object being gazed at. This of course depends on the size of the object, but it certainly sounds reasonable in a driving situation.

Alternatively, if you aren't interested in those glances to more extreme angles, and most of the experiment contains gaze accompanied by head movements, then maybe calibrating within the smaller FoV is sufficient.

user-b91cdf 17 January, 2022, 12:09:43

Thanks! I'll think about it 🙂

user-7daa32 17 January, 2022, 12:06:24

Like, I need to define a big surface for the whole stimulus, right? The size of the stimulus is from 0.22 to 0.78, and converting to actual distance in centimeters is difficult because we don't know where (0,0) and (1,1) are

papr 17 January, 2022, 16:04:13

You can adjust the surface relative to the markers. This will move their coordinate systems accordingly. See surface B in the screenshot. The origin is not necessarily the corner of the marker. Therefore, I suggest adjusting the surface to match the size of your stimulus. Then you can calculate the real world coordinates in cm by multiplying the estimated surface gaze location by the size of the stimulus.

E.g. for the center point (0.5, 0.5). Assuming the stimulus has a width of 30 cm and a height of 20 cm. Then the center point would be at 0.5 * 30 cm = 15 cm from the left and 0.5 * 20 cm = 10 cm from the bottom.

Chat image
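To make the conversion papr describes concrete, here is a minimal pandas sketch. It assumes a surface named "Stimulus" was sized to match a 30 cm x 20 cm stimulus and exported from Player; the file path, surface name, and dimensions are example values, while the column names (gaze_timestamp, x_norm, y_norm, on_surf) come from the standard surface export.

```python
# Minimal sketch: convert normalized surface gaze to centimeters.
import pandas as pd

STIMULUS_WIDTH_CM = 30.0   # example values, replace with your stimulus size
STIMULUS_HEIGHT_CM = 20.0

gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Stimulus.csv")
gaze = gaze[gaze["on_surf"] == True]  # keep samples that actually fall on the surface

gaze["x_cm"] = gaze["x_norm"] * STIMULUS_WIDTH_CM    # 0..1 -> cm from the left edge
gaze["y_cm"] = gaze["y_norm"] * STIMULUS_HEIGHT_CM   # 0..1 -> cm from the bottom edge
print(gaze[["gaze_timestamp", "x_cm", "y_cm"]].head())
```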

user-6cbd8b 17 January, 2022, 14:23:07

Hello papr, I tried what you suggested, but it's the same result (https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting). The cam is detected by the system, but not used by pupil_capture.exe...

papr 17 January, 2022, 16:05:07

Can you confirm that the device manager lists the Pupil Cam entries? If yes, in which category are they displayed?

user-7daa32 17 January, 2022, 16:25:31

This helped... Thanks so much

papr 17 January, 2022, 16:27:04

Credit to @nmt who helped me to come up with that answer. 👍

user-6cbd8b 17 January, 2022, 17:16:21

After the test you recommended, here's the device manager when I connect the eye tracker to the USB. I have now been waiting approximately one hour for pupil_capture.exe to respond... because I remember that the cams were detected differently than on the jpg.

Chat image

papr 17 January, 2022, 18:09:22

They were likely detected as "unknown". Please contact info@pupil-labs.com in this regard. This is the second case that I have seen where drivers seem to be installed correctly but Capture is not able to access them.

user-711256 17 January, 2022, 18:59:47

Hi! I have a question about a problem that I started having yesterday and haven't found a solution for. When I open the files of my experiment in Pupil Player, I can no longer see the video of the world camera. I have been working on this file without problems for a week and suddenly yesterday it stopped working. I can find the video in the folder, but even after re-importing it into Pupil Player I still see all gray.

user-b074b6 18 January, 2022, 02:46:39

Hi, I am having issues connecting the pupil camera to the pupil capture software. Every time I try to connect it, grey screens appear. I tried using the device manager and uninstalling the drivers and restarting the computer with no luck. I also logged in as an administrator and ran into the same issue. Please let me know if you have any recommendations in order for my device to recognize my pupil camera.

papr 18 January, 2022, 07:59:43

Could you please check in which category the Pupil Cam devices are listed in the device manager? If you have a binocular headset, there should be 3 devices in total.

papr 18 January, 2022, 08:01:41

Hi, I am sorry to hear that. It sounds like your video file got corrupted and Player has difficulties opening it. If the video file is playable in every other media player, please try shutting down Player, deleting the world_lookup.npy file, and restarting Player.

user-711256 18 January, 2022, 16:59:01

it worked! thanks!

user-b811bd 18 January, 2022, 13:54:55

Hi, are there any apps that need to be installed on my PC so I can transfer my videos from the Companion device?!

user-b074b6 18 January, 2022, 15:35:27

The 3 cameras are listed under the camera category and libusbK USB devices category.

papr 18 January, 2022, 15:55:07

Do the ones under the libusbK category look slightly more transparent?

papr 18 January, 2022, 15:53:55

Under both categories at the same time?

user-7daa32 18 January, 2022, 18:28:58

As long as we created a surface before exporting the data ?

papr 18 January, 2022, 18:30:22

Correct. The export only exports the current state. You need to make a new export if you change anything.

user-d2873b 19 January, 2022, 08:15:26

Hello, how should I read the values of the timestamp column? How do I get from the values to seconds? The same question applies to the diameter_3d column: which unit is used? The values differ very much from each other. See example:

nmt 19 January, 2022, 10:02:12

Hi @user-d2873b 👋. The unit of the timestamps is seconds in relation to a fixed starting point. You can read more about that in our online documentation: https://docs.pupil-labs.com/core/terminology/#timing. The unit for diameter_3d is mm.

That said, the formatting doesn't look quite right in your screenshot. Sometimes, spreadsheet software auto-formats the timestamps incorrectly when importing the pupil_positions.csv file. For example, the German meanings of ',' and '.' in floating-point numbers are opposite to their English meanings. Your spreadsheet software may have parsed the values as integers instead of decimal numbers, and subsequently filled in the 3-digit separators. There should only be one decimal point within each value!

Please try adjusting the format for those columns in your respective software in the first instance. If that doesn't work, try re-exporting and re-opening the data! Let us know how you get on.
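If it helps, here is a minimal pandas sketch of reading the export directly so no spreadsheet auto-formatting gets in the way; the file path is an example, and the column names (pupil_timestamp, diameter_3d) are from the standard pupil_positions.csv export.

```python
# Minimal sketch: read pupil_positions.csv without spreadsheet auto-formatting.
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")

# pupil_timestamp is in seconds relative to an arbitrary clock epoch;
# subtracting the first value gives seconds since the start of the export.
pupil["t_sec"] = pupil["pupil_timestamp"] - pupil["pupil_timestamp"].iloc[0]

# diameter_3d is in millimetres (present only for rows produced by the 3d detector).
print(pupil[["t_sec", "diameter_3d"]].describe())
```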

user-d2873b 19 January, 2022, 08:15:42

Chat image

user-205d47 19 January, 2022, 10:08:58

Hello. The latest Pupil Capture 3.5 doesn't detect the cameras with the latest Pupil Core on Monterey despite running with root privilege: "Could not connect to the device". Has anyone been able to run it successfully? Thanks a lot!

papr 19 January, 2022, 10:25:08

I am able to run it. What exact version of macOS Monterey are you using? To confirm, you are running the application from the terminal to give it the admin. privileges?

user-a07d9f 19 January, 2022, 10:44:32

hi all. I searched through the docs and videos, but can't find info on "best practices" for matching the worldview camera to the real "worldview" of the user

papr 19 January, 2022, 10:46:28

Could you give an example of what you try to achieve?

user-a07d9f 19 January, 2022, 10:49:15

eye tracking of a user observing physical objects like a handheld product.

user-a07d9f 19 January, 2022, 10:48:24

Using the monitor as a reference leads the user to shift their head to match the worldview from the monitor. Lifting the worldview up towards the ceiling and asking the user to comfortably watch the center of the monitor, and, without shifting their head, realigning the worldview camera to match the center of the worldview image kinda works. But then we get into the situation that the user can move the object and place it too low in the worldview image during the experiment.

papr 19 January, 2022, 10:51:20

Pupil Capture assumes that the relationship between scene and eye cameras is fixed during and after calibration. Adjusting the scene camera breaks this relationship and requires recalibration.

user-a07d9f 19 January, 2022, 10:52:48

1 - worldview camera alignment 2 - calibration.

user-a07d9f 19 January, 2022, 10:52:18

so let's break this in two parts.

user-a07d9f 19 January, 2022, 10:51:55

it leads to the problem that the user calibrates at ~50 cm distance, but can then move the object closer, to 30-40 cm, or move it lower than the worldview was set.

user-a07d9f 19 January, 2022, 10:54:18

with calibration we have no problem - the best practice we found is to calibrate using a monitor set to the same distance as the objects the user interacts with

user-a07d9f 19 January, 2022, 10:54:43

problems are with aligning worldview - before calibration

papr 19 January, 2022, 10:55:34

You can always use the 1920x1080 resolution which has a huge field of view.

user-a07d9f 19 January, 2022, 10:55:46

hm.

user-a07d9f 19 January, 2022, 10:57:18

so using a lower resolution is not scaling but more like a digital zoom - cropping a smaller part of the sensor and making the picture "zoom in"?

papr 19 January, 2022, 10:57:55

yes, 1280x720 is a crop of the full sensor's field of view

user-a07d9f 19 January, 2022, 11:00:41

but with a huge FOV we "lose" the ability to do a good calibration with a small monitor - as you can't put it "in your face" for the user. And if you set it to 50+ cm, it occupies a very small part of the picture and the calibration results are not very good.

papr 19 January, 2022, 11:02:41

That is correct. This is exactly the trade-off one needs to make: the bigger the FOV, the more objects one can see in the video, but the fewer pixels are available for these objects.

user-a07d9f 19 January, 2022, 11:02:38

I see that we need to experiment more and find a better method for our usage scenario.

user-205d47 19 January, 2022, 11:04:12

Hi @papr thanks a lot for getting back quickly. Apparently, the problem is in the USB adapter as the new Mac has only USB C. I tried a bunch of adapters. Some work with the new Pupil Core with single camera while the others work with new Pupil core with two cameras. Thanks a lot once again !

papr 19 January, 2022, 11:05:33

In this case I recommend getting a USB-C to USB-C cable that is suitable for data transmission. Then you can connect the headset to the laptop directly. Then it should work properly.

user-a07d9f 19 January, 2022, 11:05:15

I have 2 questions on the worldview - 1) why is it not set directly in the middle of the forehead? Why is it offset? It causes parallax if calibration is done at a longer distance than the objects in the experiment. 2) Where can I read more on the USB-C option for external cameras/sensors?

papr 19 January, 2022, 11:08:06

Feel free to write an email to info@pupil-labs.com if the infos on the website are not sufficient regarding your second question.

papr 19 January, 2022, 11:06:34

@mpk might be able to comment on your first question.

user-a07d9f 19 January, 2022, 11:27:49

thanks. I'll wait for the reply

wrp 19 January, 2022, 11:09:53

@user-a07d9f the USB-C world cam option is used for prototyping. You will need to "bring your own" sensor and will need to write your own capture backend. Some researchers have used this for adding a depth sensor (previously using Realsense sensors).

user-a07d9f 19 January, 2022, 11:28:24

I understood the idea. but I thought you (as pupil labs) had some ready solutions for this.

user-a07d9f 19 January, 2022, 11:27:36

ok - I'll go with fullhd for now.

user-a07d9f 19 January, 2022, 11:30:30

By the way - is it possible to have a reduced resolution but with the same FOV as 1920x1080? Like 960x540 to have a direct 2x downscale?

papr 19 January, 2022, 11:31:07

That is not possible with our cameras. You would need to scale down the image yourself.

user-a07d9f 19 January, 2022, 11:41:00

hm, and how do zoom with pan and all this work then? Is it SW-processed on the PC?

papr 19 January, 2022, 11:44:25

Even though some cameras support these UVC controls, they are not implemented in ours. Our cameras provide a fixed set of resolutions that correspond to a specific crop/zoom/scaling of the sensor. This is done on the camera itself to my knowledge. Pupil Capture just requests the target resolution and processes whatever is returned by the camera.

papr 19 January, 2022, 11:41:00

We tried that with the Intel RealSense camera. But when Intel dropped support for the R200 from one day to the next, we were forced to do so, too. And this is not the user experience we want for our customers. The USB-C mount is a trade-off where we can provide a flexible hardware and software platform without depending on camera manufacturers.

user-a07d9f 19 January, 2022, 11:41:22

I see

user-a07d9f 19 January, 2022, 12:00:38

Ok, I'll go test all this now. We worked with an old version of Pupil Capture all the time before (2.3), and I wonder if switching to 1920x1080 will make the CPU load much greater. Are there any HW encoding features present in newer releases?

papr 19 January, 2022, 12:09:00

CPU load will increase for detection features like surface tracking or the calibration marker detection. There are no hardware acceleration features.

user-a07d9f 19 January, 2022, 12:31:49

That makes me sad. "so much HW and so little acceleration"

papr 19 January, 2022, 12:34:39

Modern CPUs should be able to handle it without any issues.

user-a07d9f 19 January, 2022, 12:37:19

ok - I'll go check the system recommendations for CPU ram storage and also for Mobile setup.

papr 19 January, 2022, 12:42:17

The most important metric is cpu frequency for Intel CPUs. The Apple m1 chips handle it easily.

user-dee88a 19 January, 2022, 18:46:31

Hello all, is it correct to say that surfaces created with the surface tracker are defined in the undistorted view of the world camera, rather than the distorted view? I am attempting to use surfaces to track gaze on a curved computer monitor (unless this is not recommended) and am trying to address similar issues as this user was https://github.com/pupil-labs/pupil/issues/1930

papr 21 January, 2022, 08:48:26

Hi, I wanted to quickly follow up with a clarification. The "detection error" described in the linked GitHub issue might look like the issue that you are having with the curved monitor, but in reality it is an additional error source. 1. Scene camera lens distortion - this is being corrected for by defining the surface in undistorted camera space and using it for gaze mapping. 2. Assumption of a flat surface - this is where you need to correct for the curvature of the monitor yourself. One workaround could be to set up multiple markers around the monitor and define multiple flat surfaces to approximate the curvature. One would then need to manually combine the mapped gaze results from all sub-surfaces post-hoc.
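As a rough illustration of that post-hoc combination step, here is a hedged pandas sketch; it assumes three sub-surfaces named "left", "center", and "right" were defined in the Surface Tracker and exported from Player, and the names and paths are examples only.

```python
# Minimal sketch: merge gaze mapped onto several flat sub-surfaces that
# together approximate a curved monitor.
import pandas as pd

SUB_SURFACES = ["left", "center", "right"]
frames = []
for name in SUB_SURFACES:
    df = pd.read_csv(f"exports/000/surfaces/gaze_positions_on_surface_{name}.csv")
    df = df[df["on_surf"] == True]   # keep only gaze that falls on this sub-surface
    df["surface"] = name
    frames.append(df)

combined = pd.concat(frames).sort_values("gaze_timestamp")
# From here, map each sub-surface's normalized coordinates into a common
# monitor coordinate system according to your physical layout.
print(combined.groupby("surface")["gaze_timestamp"].count())
```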

papr 19 January, 2022, 20:51:26

hi, surfaces are actually defined in both coordinates. The previewed outline connects the distorted corners. Gaze is mapped within the undistorted space. Nonetheless, the algorithm assumes that the surface is flat. So mapping onto a curved monitor will introduce some mapping error.

user-dee88a 19 January, 2022, 21:29:00

Thanks for the reply - so in the gaze_positions_on_surface file generated by the surface tracker export, on_surf events refer to times when the gaze mapped in the undistorted space is within the surface as defined in the undistorted coordinates?

papr 20 January, 2022, 08:11:05

correct

user-b91cdf 20 January, 2022, 09:58:54

Hi guys, I want to visualize the gaze angles in horizontal and vertical degrees (not with the angles theta and phi defined by the eye coordinate system), to make it more intuitive. So looking straight ahead would result in 0°/0°. My current approach is to use norm_pos_x and norm_pos_y and scale them with respect to the FOV of the world camera and a defined mid point. Is it possible to do it this way? I am wondering whether the linear scaling is OK.

I am doing it this way because I want to compare typical gaze angles with my target angles during the natural feature calibration. If I subscribe to the calibration data via zmq, I get the ref_list of the targets in norm_pos_x and norm_pos_y as well.

thanks, Cobe

Chat image

papr 20 January, 2022, 10:11:33

norm_pos are distorted coordinates, i.e. you need to correct for the lens distortion. You can use gaze_point_3d instead, which uses undistorted 3d coordinates. (0, 0, 1) corresponds to looking straight ahead (from the scene camera's point of view). Subtracting this vector from your actual gaze_point_3d gives you a difference that can be divided into horizontal and vertical components and transformed to degrees.
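A minimal numpy sketch of one way to compute such angles from the exported gaze data; the file path is an example, the column names come from the standard gaze_positions.csv export, and the sign conventions depend on the scene camera's coordinate system.

```python
# Minimal sketch: horizontal/vertical gaze angles in degrees relative to the
# scene camera's optical axis (0, 0, 1), computed from gaze_point_3d.
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
x = gaze["gaze_point_3d_x"].to_numpy()
y = gaze["gaze_point_3d_y"].to_numpy()
z = gaze["gaze_point_3d_z"].to_numpy()

# Looking straight ahead (along the optical axis) corresponds to 0°/0°.
horizontal_deg = np.rad2deg(np.arctan2(x, z))
vertical_deg = np.rad2deg(np.arctan2(y, z))   # sign follows the camera's y-axis convention

print(horizontal_deg[:5], vertical_deg[:5])
```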

user-b91cdf 20 January, 2022, 11:11:48

OK, I clearly misunderstood the description given here: "looking directly into the eye camera: (x=0, y=0, z=-1) (cartesian), phi=-π/2, theta=π/2 (spherical); looking up: decreasing y vector component, increasing theta; looking left: increasing x vector component, increasing phi"

What do you mean by "actual gaze_point_3d"? Should I, e.g., place a marker straight at 0°/0° H/V, measure the vector, and subtract it from the other gaze_point_3d data? Then I would plot everything as follows:

For better understanding, I just recorded a video looking only horizontally: -> theta = vertical angle.

Chat image

papr 20 January, 2022, 11:15:30

As you said before, you do not want to use theta/phi since they are in eye camera coordinates. Everything within the pupil data is in its respective eye camera coordinate system. What you want to use instead is gaze data which is in scene camera coordinates. Gaze data includes the gaze_point_3d field that I was talking about. Please be aware that you need a successful calibration to receive gaze data.

user-7daa32 20 January, 2022, 12:56:03

Good morning from here

I know I have asked this question a very long time ago. I'm so sorry if it's kind of boring.

I want to know the importance of the validation steps. How do you know the accuracy has diminished due to slippage? And why the validation after calibration? Do we take the accuracy values for each participant or trial?

nmt 20 January, 2022, 15:00:49

Hi @user-7daa32. Please see this response for reference: https://discord.com/channels/285728493612957698/285728493612957698/838705386382032957 Don't forget that you can use the search function in the top right to find answers to your previous posts. You can also filter messages from specific users and in specific channels. E.g. from: <username> in: core <search term> 🙂

user-b91cdf 20 January, 2022, 12:56:32

Yes, OK. The generated plot is based on gaze data (gaze_point_3d_x, gaze_point_3d_y, gaze_point_3d_z), which I exported via Pupil Player -> gaze_positions.csv. The plot is generated by pupil-tutorial 05 - Analyzing Gaze Velocity. So the next step would be as I described here: "Should I e.g. place a marker straight at 0°/0° H/V, measure the vector, and subtract it from the other gaze_point_3d data?" The previous norm_pos_x/y are also based on gaze data. I had not analysed pupil data before, my fault, sorry.

papr 20 January, 2022, 14:34:58

Ah, my bad, then you are doing everything correctly already. The only thing left is to subtract 90 degrees from both theta and psi signals s.t. looking-front corresponds to 0/0deg. The tutorial's implementation has its center at 90/90 deg.

user-09cebb 20 January, 2022, 21:45:50

Hi - would it be possible to use the functions to process the eye tracking data in real time on a local processor? I am looking into a project where the data would need to be collected and processed on a moving vehicle, and then the world camera video with the gaze position superimposed is sent wirelessly back to a remote location.

papr 21 January, 2022, 07:53:46

Hi, yes, this can be made possible but requires fairly high processing power from the local unit.

user-09cebb 21 January, 2022, 09:05:17

Do you have some specifications for the level of processing required?

user-09cebb 21 January, 2022, 09:04:14

Thank you

papr 21 January, 2022, 09:14:49

Something around an Intel Core i5 with >2.8 GHz? Giving an exact number is difficult. Alternatively, you could also use something low-powered, e.g. a Raspberry Pi, and stream scene and eye videos to a computer with more processing power running Pupil Capture.

user-09cebb 21 January, 2022, 09:16:22

Great - thank you. Sounds like the more reasonable approach is to combine the video off the car. Thanks for the info

user-679f5a 21 January, 2022, 15:03:46

Hi, does Pupil Labs have multi glint eye tracking systems?

papr 21 January, 2022, 16:10:39

No, we don't. We rely on detecting the pupil outline and fitting an ellipse to it

papr 21 January, 2022, 22:04:14

I will have a look on Monday 👍

user-dee88a 25 January, 2022, 16:21:58

Hi Pablo, did you have a chance to look into this error since the weekend?

user-97fc9e 22 January, 2022, 17:48:43

Hi there! I am setting up my lab and would like to incorporate eye tracking. I have a question regarding my setup and compatibility with pupil labs

user-97fc9e 22 January, 2022, 17:50:20

My existing PC has custom-made software (using the LabView compiler) where the subject must track a moving dot. Would the Pupil glasses allow interfacing with third-party stimulus presentation software?

user-97fc9e 22 January, 2022, 17:53:09

It looks like Pupil Invisible would do this. I am wondering how I would sync up the eye tracking data with the stimulus and other hand movement data I am collecting.

nmt 24 January, 2022, 09:53:37

Hi @user-97fc9e 👋. Both Pupil Core and Pupil Invisible have Network APIs that offer flexible approaches to send/receive trigger events and synchronise with other devices (e.g. https://docs.pupil-labs.com/developer/core/network-api/#network-api). We also maintain Lab Streaming Layer (LSL) relays for publishing data that enables unified/time-synchronised collection of measurements with other LSL supported devices (e.g. https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture). So that I can make a concrete product recommendation, it would be helpful to learn more about your moving dot paradigm (and other gaze tasks) – what sort of eye tracking metrics are you looking to extract?
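For a feel of the Network API side, here is a minimal sketch of talking to Pupil Capture's Pupil Remote interface with zmq; it assumes Capture is running on the same machine with the default port 50020, and bracketing a trial with the recording commands is just one way to use it.

```python
# Minimal sketch: control Pupil Capture over its Network API (Pupil Remote).
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Query Capture's clock so stimulus software can be synchronised to it.
pupil_remote.send_string("t")
print("Capture time:", pupil_remote.recv_string())

pupil_remote.send_string("R")   # start recording
print(pupil_remote.recv_string())
# ... run the moving-dot trial ...
pupil_remote.send_string("r")   # stop recording
print(pupil_remote.recv_string())
```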

user-a07d9f 24 January, 2022, 12:10:08

Hello all.

user-a07d9f 24 January, 2022, 12:10:53

Are there any ways to troubleshoot the player 2.3.0 startup failure on win10?

papr 24 January, 2022, 12:14:41

Please try deleting the user_settings_* files in the pupil_player_settings folder and start Player again. If I remember correctly, older versions had an issue where the app windows would not appear if they were used on a no-longer-connected computer screen.

user-a07d9f 24 January, 2022, 12:16:00

Sorry to say that - but there is no pupil_player_settings folder there

papr 24 January, 2022, 12:16:56

If the application did start up correctly before, you should be able to find it at C:\Users\<user name>\pupil_player_settings

user-a07d9f 24 January, 2022, 12:18:37

Thank you - this resolved the issue. I wanted to ask if there are FAQs and troubleshooting guides anywhere in the docs section.

papr 24 January, 2022, 12:20:13

For camera connection issues with Core, we have https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting. We do not keep a troubleshooting list with issues that have been fixed in the latest release. For open issues, see https://github.com/pupil-labs/pupil/issues

user-a07d9f 24 January, 2022, 12:24:49

By the way - we have several Cores with different revisions. One has eye cam connectors without the plastic thingy which looks like a cap on it, and we had to repair the wires several times. I wonder if it is possible to order those caps?

papr 24 January, 2022, 12:25:29

That might indeed be possible. Please contact info@pupil-labs.com in this regard.

user-a07d9f 24 January, 2022, 12:26:35

Thank you

user-a68b92 24 January, 2022, 19:56:58

Hi Neil, I'd like to measure pupil dilation, duration of gaze fixation, and saccadic movements (trajectory, distance, speed) as the subjects move their eyes along the line. The dot starts at the center of the screen and moves along a straight line in one of 28 directions.

nmt 25 January, 2022, 10:20:17

Hi @user-a68b92 👋. Pupil Core lends itself better to the measurements you describe. It records pupil diameter both in pixels (observed in the eye videos) and millimetres (provided by the 3d eye model). We also implement a dispersion-based fixation filter: https://docs.pupil-labs.com/core/terminology/#fixations. While we don't quantify saccades explicitly, we do expose the user to raw data that can be used for this purpose. Check out this page for details of the data made available: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv. With these data it should certainly be possible to examine eye movements in relation to the moving dot. You might also be interested in Pupil Core's surface tracker, which can map gaze to surfaces such as computer screens: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

user-d1ed67 24 January, 2022, 22:38:24

Hi @papr . I want to manually save and load 3D eye models, so I tried to dump self.detector into a file in the pye3d_plugin, but I got a RuntimeError: SynchronizedArray objects should only be shared between processes through inheritance. I also tried creating a method in detector_3d.py (the detector class in pye3d package) to access the 3d eye model, but I still get the same RuntimeError. It seems like an issue related to multiprocessing, but I cannot find anything related to multiprocessing in pye3d class (in file detector_3d.py) or pye3d_plugin.py. Could you please point out how I should solve this issue? Here is my modified pye3d_plugin.py:

pye3d_plugin.py

user-04dd6f 25 January, 2022, 05:05:30

Hi @papr, got a quick question regarding the unit interpretation of the saccade length. Assuming I'm obtaining the "saccade length" by using the equation above (from the columns norm_pos_x & norm_pos_y), then I would get saccade length results like the figure above.

I would like to know how to convert the result into an actual length (e.g. mm or cm)? Many thanks~

Chat image Chat image

nmt 25 January, 2022, 10:52:22

Hi @user-04dd6f. If I understand correctly, your approach here is to classify saccades as the inter-fixation change in gaze position. Note that during the time between fixations, other ocular events, e.g. smooth pursuits, PSOs etc., may occur. In addition, if there are multiple rapid saccades in between two classified fixations, e.g. the fixation filter settings weren't optimised for very short fixations, then the inter-fixation change in gaze would not accurately reflect saccadic distance, and it would be difficult to spot these just from the raw data. A more robust way to classify saccades, depending on your experiment, might be to use a velocity-based approach
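For orientation only, here is a rough sketch of a velocity-threshold pass over the exported gaze data; the 200 deg/s threshold, the confidence cut-off, and the file path are illustrative choices, not a validated filter.

```python
# Illustrative sketch of a velocity-threshold saccade pass on gaze_positions.csv.
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze = gaze[gaze["confidence"] > 0.8]

# Unit gaze directions in scene camera coordinates.
v = gaze[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()
v = v / np.linalg.norm(v, axis=1, keepdims=True)

# Angular change between successive samples, converted to deg/s.
dt = np.diff(gaze["gaze_timestamp"].to_numpy())
cosines = np.clip(np.sum(v[1:] * v[:-1], axis=1), -1.0, 1.0)
valid = dt > 0  # skip duplicate timestamps (e.g. binocular samples)
velocity_deg_s = np.rad2deg(np.arccos(cosines[valid])) / dt[valid]

is_saccade = velocity_deg_s > 200.0  # example threshold, tune for your setup
print(f"{is_saccade.mean():.1%} of samples exceed the velocity threshold")
```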

user-80316b 25 January, 2022, 07:55:21

How can the distortion be calculated if there is no undistorted video? Using gaze_point_3d seemed okay, but comparing the plotted data points it felt like there is an inversion of the data, and when using both eyes, the data points are quite separated.

Chat image

nmt 25 January, 2022, 10:57:27

Hi @user-80316b. I'm not sure that I understand the question fully. Can you elaborate a bit more on exactly what you are trying to do?

user-80316b 25 January, 2022, 11:33:46

I'm not sure if it helps to elaborate my question further. It seemed to me that the graph I plotted from the norm gaze data (blue = eye0, orange = eye1) is inverted compared to the real video. I also got elliptical boundaries. So I did some research and realised that working with the "norm" data doesn't work and we have to use the "3D" data.

In short, I want to define a small area which works as an AOI (which didn't work in Pupil Player, due to missing markers) and only want to look at time in the AOI and outside of the AOI during the measuring time. It seemed to work okay with the "norm" data, but it might not be accurate enough due to distortion. However, working with the 3D data doesn't work either.

user-d50398 25 January, 2022, 12:01:01

Hello, I would like to ask something related to the calibration process:
  1. It turns out that we have 2 ways of calibrating, and I just want to confirm that I did them correctly: (1) Screen Marker Calibration Choreography: keep the head still while moving the eyes to track the markers, and (2) Single Marker Calibration Choreography: keep the eyes still on the marker while making spiral movements of the head.
  2. In terms of accuracy, which calibration approach do you think is better?
  3. When I finished the Single Marker Calibration Choreography, I removed the eye tracker and turned off Pupil Capture, but when I open them again, it is still able to detect my gaze. Is it true that the calibration parameters from the previous process have been saved?

Hope to receive your support!

nmt 25 January, 2022, 14:31:51

Hi @user-d50398. Responses below:
  1. These are correct. However, you can also move the single marker around whilst keeping the head still (if using the physical marker).
  2. The key difference with a single marker + head movement (or physically moving the marker around) is that you can calibrate a larger area of the visual field, as opposed to the screen-based 5-point calibration, which will only cover the area that the screen occupies. Both approaches should yield consistent accuracy for their respective area.
  3. Yes, the previous calibration is saved. Important note: if you have removed the headset from the wearer, we advise you to re-calibrate, as the previous calibration might not be valid.

user-785b0f 25 January, 2022, 13:58:40

Hi! I'm using Pupil Core to study the glare experienced by athletes during sports competitions. I am trying to find the angle formed by the eye, the gaze point, and the lighting. Regarding the position of the illumination, the position coordinates (pixels) on the image were obtained by annotating the distortion-free image acquired by iMotions. The gaze position uses Gaze2dX and Gaze2dY acquired by iMotions. The angle is calculated assuming the eye position coordinates are at the center of the image (640,360). Is this calculation method correct? Thank you in advance!

nmt 25 January, 2022, 14:54:08

Hi @user-785b0f 👋. The assumption that the eye position coordinates are at the centre of the image is incorrect. I'm not totally familiar with the iMotions export. But eye_center*_3d_x,y,z (* = eye0 or eye1) defines the eyeball centre estimates in 3d camera coordinates in our raw data export.

user-c8e5c4 25 January, 2022, 14:15:08

Hey, I'm trying to extract all frames from the world video to jpg format using ffmpeg as described in this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb. However, trying to account for variable frame rate using the -vsync 0 option gives me the following error:

Encoder did not produce proper pts, making some up.
[mjpeg @ 0x7f982370c180] Invalid pts (41) <= last (41)
Video encoding failed
Conversion failed!

Using png as the output format or using -vsync 2 (variable frame rate) works, but I'm not sure whether the frames are extracted correctly. Could you tell me why this error is produced, or maybe whether -vsync 2 is also a valid option for extracting frames? Thanks a lot!

user-c8e5c4 26 January, 2022, 15:27:33

Hey guys, have you maybe had time to look into my problem? Maybe it's better to ask this question on an ffmpeg forum, I know it's quite specific...

papr 25 January, 2022, 14:22:47

Hi, I feel like distortion is less of an issue in this case. The reason one usually needs markers is that gaze is predicted in scene camera space, but the latter can move relative to your AOI. You won't be able to map gaze to your AOI unless 1) you assume that there is no head movement relative to the AOI or 2) you have some kind of external reference to calculate the relationship between scene camera and AOI over time.

I would also like to emphasize the difference between pupil and gaze data. The former is estimated in eye camera space and does not require a calibration. The latter requires a calibration and is estimated in scene camera space. You seem to have plotted pupil data if I am not mistaken.

user-80316b 25 January, 2022, 14:59:54

Thanks for your fast reply. I think I could approximately calculate the values by defining an area for the lower third of the video frame and one for the two upper thirds of the video frame. It's a handheld device which is held like a mobile phone. So I can assume that if the values are in the lower third the person is looking at the device, and for values above the person is looking at the "world". Which data points should I use for this scenario? I would guess it's gaze_point_3d_x and gaze_point_3d_y in gaze_positions.csv, using the (0, 0, 1) vector calculation as you recommend on your homepage?

Thanks for your help, first time using the core

nmt 25 January, 2022, 15:07:47

The only way to be sure of this would be via surface tracking markers. Personally, I wouldn't be comfortable making the assumption of lower vs upper thirds of the video frame being associated with phone and world gaze, respectively, and would instead adopt manual annotation to discern whether or not participants gazed at the mobile device. Of course, this depends on how critical these insights are! 🙂

user-80316b 25 January, 2022, 15:10:50

okay, had that in mind as well. With the annotations there is a column called duration, how can I use the annotations to get a value? Couldn't figure it out

papr 25 January, 2022, 15:16:38

The manual annotations in Pupil Player do not have a duration. They are timestamped with the timestamp of the displayed scene video frame. I would rather recommend using two separate events, enter and exit.
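As a hedged sketch of that enter/exit approach: the label names, file path, and the strict enter-then-exit alternation below are assumptions for illustration, while the timestamp/label columns come from the standard annotations export.

```python
# Minimal sketch: pair "AOI_enter" / "AOI_exit" annotations into dwell durations.
import pandas as pd

ann = pd.read_csv("exports/000/annotations.csv").sort_values("timestamp")
enters = ann.loc[ann["label"] == "AOI_enter", "timestamp"].to_numpy()
exits = ann.loc[ann["label"] == "AOI_exit", "timestamp"].to_numpy()

# Assumes the events strictly alternate enter -> exit.
n = min(len(enters), len(exits))
durations = exits[:n] - enters[:n]
print("Time on AOI per visit (s):", durations)
print("Total time on AOI (s):", durations.sum())
```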

user-80316b 25 January, 2022, 15:17:37

okay, thanks a lot! Then I'll do it like that.

user-d50398 25 January, 2022, 15:37:54

Thank you @nmt for your information, it is clear to me now. But when doing the calibration with the physical marker, I am not sure when (after how long) we should stop the calibration process? Secondly, after calibration, there is a spiral displayed on the visualization, as in the image below. What does it mean? Does it indicate the region of confidence for gaze detection?

Chat image

papr 25 January, 2022, 15:43:39

To add to @nmt's explanation: The small orange lines are the residual training errors for the collected pupil data. Your calibration accuracy looks very decent from the shared picture alone!

nmt 25 January, 2022, 15:40:22

There is no set limit for when you should stop. There is usually a trade-off between how much of the visual field you want to cover and how long the wearer can keep their concentration, i.e. staying fixated on the target! The spiral is showing the area that was covered during the calibration

user-785b0f 25 January, 2022, 15:42:28

@nmt Thank you for your fast reply. I also think that the effectiveness of the angle of composition needs to be discussed. Therefore, I am currently studying the relationship between the installation position of the lighting equipment, the pupil contraction rate, and the subjective evaluation. Is it possible to convert eye_center*_3d_x,y,z (* = eye0 or eye1) to 2D? I processed the distortion-free 2d image plane output by the iMotions Exporter in Pupil Player.

nmt 25 January, 2022, 16:06:58

I think the goal would be to unproject the 2d coordinates of the lighting from pixel to camera space. @papr can you offer more insight into this? I do wonder whether calculating an angle between the viewing direction and the lighting would be the best approach. Such an angle would be relative to the scene camera coordinate system, and therefore dependent on the wearer's head position in space. Have you considered this?
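For a rough idea of that unprojection, here is a hedged OpenCV sketch; the camera matrix, distortion coefficients, pixel location, and gaze vector are placeholder values - real intrinsics have to come from the scene camera calibration, and, as noted above, the resulting angle is relative to the scene camera rather than the eye itself.

```python
# Minimal sketch: unproject an annotated pixel location (e.g. the lighting)
# into a 3d direction in scene camera space and compute its angle to gaze.
import cv2
import numpy as np

camera_matrix = np.array([[700.0, 0.0, 640.0],
                          [0.0, 700.0, 360.0],
                          [0.0, 0.0, 1.0]])   # placeholder intrinsics
dist_coefs = np.zeros(5)                      # placeholder distortion

def unproject(pixel_xy):
    pt = np.array([[pixel_xy]], dtype=np.float64)
    undist = cv2.undistortPoints(pt, camera_matrix, dist_coefs)  # normalized image coords
    direction = np.array([undist[0, 0, 0], undist[0, 0, 1], 1.0])
    return direction / np.linalg.norm(direction)

light_dir = unproject((820.0, 240.0))          # example annotated lighting pixel
gaze_dir = np.array([0.05, -0.02, 1.0])        # example gaze_point_3d for the same frame
gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)

angle_deg = np.rad2deg(np.arccos(np.clip(light_dir @ gaze_dir, -1.0, 1.0)))
print(f"Angle between gaze and lighting: {angle_deg:.1f} deg")
```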

user-d50398 25 January, 2022, 15:43:18

Thank you @nmt , then is it true that it is better to let this spiral cover the whole field of view (of the world camera)?

nmt 25 January, 2022, 15:59:43

It is more true that the calibration should incorporate areas you will record in your experiment, which isn't necessarily the whole field of view.

papr 25 January, 2022, 16:44:54

Unfortunately, it fell off my radar. I made a note and will look at it first thing tomorrow morning

user-dee88a 25 January, 2022, 16:50:08

No problem! Thanks for your attention.

user-d1ed67 25 January, 2022, 20:41:04

Hi @papr (Pupil Labs) . I want to manually save and load 3D eye models, so I tried to dump self.detector into a file in the pye3d_plugin, but I got a RuntimeError: SynchronizedArray objects should only be shared between processes through inheritance. I also tried creating a method in detector_3d.py (the detector class in pye3d package) to access the 3d eye model, but I still get the same RuntimeError. It seems like an issue related to multiprocessing, but I cannot find anything related to multiprocessing in pye3d class (in file detector_3d.py) or pye3d_plugin.py. Could you please point out how I should solve this issue?

papr 25 January, 2022, 20:44:36

Hey, sorry, I saw your message yesterday but it fell off my radar. I would recommend writing out the model parameters (x,y,z) explicitly instead of dumping the python object. There should be an attribute to access these values. The object should be a synchronized array, yes, but you should be able to convert it to a tuple and serialize the tuple

user-b074b6 25 January, 2022, 20:44:14

Hi. Sorry for the delay. Multiple people have been using the system. Here is the current state of things: - in Device Manager, the Pupil devices appear as a single device: "Unknown USB Devices (Device Descriptor Request Failed)", under "Universal Serial Bus controllers". - We have another, monocular Pupil headset, which when plugged in appears solely as two libusbK devices, and works fine with Pupil Capture. - The binocular headset works perfectly with Pupil Capture on another machine. This issue started after someone uninstalled the drivers for the binocular headset in Device Manager, and attempted to reinstall them by running Pupil Capture as administrator as recommended on the website. Prior to reinstalling the drivers, the binocular headset worked perfectly. Uninstallation/reinstallation as described on the site does not help.

I guess something about the device configuration isn't getting cleared out upon right-click->uninstall. Since the device is unknown, there's no checkbox to "delete the driver software". Is there a specific directory I should look in to find and remove such files? Thanks again.

papr 25 January, 2022, 20:45:59

I am not sure about this. I will have to discuss this with my colleagues.

user-d1ed67 25 January, 2022, 20:50:38

I see. Thanks! I will study a little more about the modeling code and try to dump things out in other forms.

papr 25 January, 2022, 21:06:58

This is the model attribute I was talking about. https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/eye_model/asynchronous.py#L77

I am actually not sure if you could set the values when recovering the model from disk. There is a bit more to the model's state than just its center. It might make sense to write explicit de/serialization functions for the models, but that requires deeper knowledge of the inner workings of pye3d. I can work on that, but unfortunately not until mid/end of February.
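In the meantime, a rough sketch of the "convert to a tuple and serialize" idea could look like the following. The attribute name sphere_center is only an assumption based on the file linked above (check your pye3d version), and, as noted, restoring the full model state is the harder part:

```python
import json

def save_eye_model_center(model, path):
    # Copy the shared/synchronized values into a plain tuple before serializing;
    # `model.sphere_center` is an assumed attribute name, see the linked pye3d source.
    center = tuple(float(v) for v in model.sphere_center)
    with open(path, "w") as f:
        json.dump({"sphere_center": center}, f)

def load_eye_model_center(path):
    with open(path) as f:
        return tuple(json.load(f)["sphere_center"])
```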

user-d50398 25 January, 2022, 21:59:21

May I ask whether it is possible for users who wear contact lenses to wear the eye trackers? And what about users who wear eyeglasses?

wrp 26 January, 2022, 06:44:13

Hi @user-d50398 πŸ‘‹ Responses to your questions:
- Contact lenses + Pupil Core: Contact lenses should not negatively impact pupil detection with Pupil Core. If you have users who wear spectacles/prescription lenses and also have contact lenses, we would recommend asking them to wear the contact lenses, if possible.
- Spectacles/prescription eyeglasses + Pupil Core: It is possible to use prescription eyeglasses with Pupil Core, but it is not recommended because it is difficult to set up and does not work for all users due to facial geometry and glasses frame shape. If you do go this route, you will need to capture the eye directly from below the glasses frame and not through the lens of the eyeglasses, as the lens will introduce reflections that negatively impact pupil detection and therefore gaze estimation.

user-04dd6f 26 January, 2022, 10:36:07

Hi, another quick question regarding the distance between the eye and the fixation: is it correct to use "eye_center0_3d_z" from "gaze_positions.csv" and the average of "gaze_point_3d_z" from "fixations.csv" to obtain the linear distance?

Many Thanks

nmt 26 January, 2022, 10:42:28

The depth component of gaze_point_3d is prone to inaccuracies, particularly at longer viewing distances. gaze_point_3d is defined as the intersection (or nearest intersection) of the left and right eyes visual axes. Predicting depth from these is inherently difficult. For a more accurate measurement of eye-to-fixation distance, I would recommend using the head pose tracker plugin: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking

user-04dd6f 26 January, 2022, 10:48:48

Got it, but head pose tracking is only applicable if you still have the raw recording, right? (I only have the exported data with me.)

papr 26 January, 2022, 10:44:10

I fixed the typo in the docs but was not able to reproduce the issue. Hence, I created this debug plugin https://gist.github.com/papr/98636456fd4b4d0621835ac0d3cffc77 Use it instead of the official plugin. It will log an error message when it runs into the issue above instead of crashing. Please share the message with us should you run into the error again!

Here is how you can install the plugin https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin

user-dee88a 26 January, 2022, 20:21:49

I opened up Pupil Capture and ran your debug plugin - the intrinsics estimation worked with no errors. So, frustrated that I could now not reproduce the error I was having, I ran the regular camera intrinsics estimation and it crashed as it had before.

I then closed Pupil Capture, opened it again, and ran your debug plugin for intrinsics estimation. The system recorded error messages (the same ones as before) and spiked CPU usage like it was going to crash, but it was able to finish intrinsics estimation. Emailed the logs.

nmt 26 January, 2022, 10:53:18

If your exported data has no head pose tracker results, then that option is off the cards. How accurate does your eye-to-fixation point distance need to be? I wouldn't rely on it for anything important, but rather use it as a rough estimate
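If a rough estimate from the export is all you need, a simple sketch like the one below would compute the straight-line eye-to-gaze-point distance per sample (column names as in the gaze_positions.csv export; values are in mm, scene camera coordinates; keep the depth caveat above in mind):

```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # example export path
# drop monocular samples that lack eye0 / 3d data
gaze = gaze.dropna(subset=["eye_center0_3d_x", "gaze_point_3d_x"])

eye0 = gaze[["eye_center0_3d_x", "eye_center0_3d_y", "eye_center0_3d_z"]].to_numpy()
point = gaze[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()

# straight-line distance from the eye0 centre to the 3d gaze point, per sample
distance_mm = np.linalg.norm(point - eye0, axis=1)
print(distance_mm.mean())
```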

user-04dd6f 26 January, 2022, 10:58:14

I need the distance between the eye and the fixation for calculating the saccade amplitude (visual angle), so I assume high accuracy is not required for this purpose; a rough estimate of the distance should do~

nmt 26 January, 2022, 10:54:24

Note that there is no way to check the quality of the distance measure without access to head pose data

nmt 26 January, 2022, 11:08:52

You can work directly with the gaze_normals to calculate saccade amplitude in degrees
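For example (a sketch, not an official recipe), the amplitude of a saccade can be approximated as the angle between the gaze_normal vectors sampled before and after the saccade:

```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # example export path
normals = gaze[["gaze_normal0_x", "gaze_normal0_y", "gaze_normal0_z"]].dropna().to_numpy()

def angle_deg(a, b):
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

# amplitude between the samples at saccade onset (i) and offset (j); indices are illustrative
i, j = 100, 110
print(angle_deg(normals[i], normals[j]))
```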

user-04dd6f 26 January, 2022, 12:12:28

@nmt I just found the explanation of the "gaze_normals", so I assume that the way to obtain the saccade amplitude is to get the distance directly from "gaze_normal_z"?

Please correct me if I'm wrong

user-04dd6f 26 January, 2022, 11:58:09

Could you please elaborate on how to use "gaze_normal_z" to get the saccade amplitude? (i.e. does the following equation correspond to your answer: saccade amplitude = 2 * atan(Size / (2 * Distance)))

So is it correct to simply plug "gaze_normal_z" & "eye_center0_3d_z" into the equation above? (I am also wondering about the units of "gaze_normal_z" & "eye_center0_3d_z".)

Thanks~

Chat image

user-8ed000 26 January, 2022, 11:12:58

Hello. I want to export some data again because I changed something. During the export I get an error message: "surface_tracker.surface_tracker_offline: Surface Gaze mapping not finished. No data will be exported". Any idea how to fix this? I tried recalculating the Gaze Mapper, but without success.

nmt 26 January, 2022, 13:15:59

Hi @user-8ed000. How long is your recording? This error indicates that the Surface Tracker is still mapping gaze to the surface when you hit export

user-9e0d53 26 January, 2022, 11:22:13

Hi, I would like some advice on entering areas of interest into a video within Pupil Player. Unfortunately, using surface tracking is not possible in our case (we are not able to place markers in the scene). The issue is that we are doing interviews and need to compare the level of eye contact within the interview. Is there any way to manually insert areas of interest into the video? Unfortunately, I have not found a manual anywhere that addresses this issue. Thanks

nmt 26 January, 2022, 13:27:14

Hi @user-9e0d53 πŸ‘‹. It isn't possible to add AOIs or define surfaces in Pupil Player unless markers were placed in the scene. There are a few things you could try instead:
1) Manually annotate important events in the recording, e.g. fixations on eyes: https://docs.pupil-labs.com/core/software/pupil-player/#annotation-player (easy to perform but time consuming)
2) Process the world video + gaze overlay with a face detection utility and automatically relate gaze with the facial features (more technically challenging, but could save time in the long run!) – a rough sketch of this approach follows below.
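A rough, untested sketch of the second approach, assuming OpenCV's bundled Haar cascade for face detection and the exported world video plus gaze_positions.csv (file names are examples):

```python
import cv2
import pandas as pd

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
gaze = pd.read_csv("exports/000/gaze_positions.csv")
cap = cv2.VideoCapture("exports/000/world.mp4")

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    faces = face_cascade.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    samples = gaze[gaze["world_index"] == frame_idx]  # gaze samples for this world frame
    for _, s in samples.iterrows():
        gx, gy = s["norm_pos_x"] * w, (1 - s["norm_pos_y"]) * h  # norm_pos origin is bottom-left
        on_face = any(fx <= gx <= fx + fw and fy <= gy <= fy + fh
                      for (fx, fy, fw, fh) in faces)
        # ...accumulate `on_face` per frame/sample to quantify eye contact
    frame_idx += 1
```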

nmt 26 January, 2022, 12:36:13

Thanks for sharing the figure – this approach is more useful for defining the size of stimuli, e.g. stimuli presented on a screen, in terms of visual angle. If you are interested in saccades, Pupil Core already provides theta and phi for each eye in the pupil_positions.csv export! Their unit is radians. Note that theta and phi are relative to the eye camera coordinate system.

user-04dd6f 26 January, 2022, 14:38:13

Thanks for the kind response.

Referring to your answer, I found a description in the docs, shown in the figure below. Does the marked sentence mean "looking down: decreasing y vector component; increasing theta" and "looking right: increasing x vector component; increasing phi" if I only have the eye0 data?

Chat image

user-8ed000 26 January, 2022, 13:21:29

The recording is 28 minutes long. Is there an indicator for the progress of the surface tracker mapping the gaze?

nmt 26 January, 2022, 13:32:43

A 28 minute recording could take some time to complete, depending on computer specs. You can check the progress of the Marker Cache and mapping for each surface defined within the Player Window (see screenshot: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker)

user-4eef49 26 January, 2022, 13:26:58

Hi, how do I calculate common gaze from gaze 0 2d and gaze 1?

nmt 26 January, 2022, 13:36:12

Hi @user-4eef49. Please clarify what you mean by "gaze 0 2d" πŸ™‚

user-4eef49 26 January, 2022, 13:38:01

gaze from individual eyes. So I have gaze information from gaze.2d.0. and gaze.2d.1. How do I calculate common gaze from gazes for individual eyes?

papr 26 January, 2022, 13:39:45

Hi, binocular gaze (gaze combined from both eyes) is published under the gaze.2d.01. topic. Please let us know more about your setup/calibration process if you don't receive such data.
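For reference, a minimal subscriber sketch for that topic, assuming Pupil Remote is running on its default port (50020):

```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

pupil_remote.send_string("SUB_PORT")   # ask Pupil Remote for the subscription port
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.2d.01.")    # combined binocular gaze topic

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload)
    print(topic, datum["norm_pos"], datum["confidence"])
```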

user-4eef49 26 January, 2022, 13:47:55

It's working brilliantly. What is the meaning of the two numbers in norm_pos?

nmt 26 January, 2022, 14:14:26

I'm glad to hear it's working! The gaze datum format is described in detail here: https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format
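In short, the two numbers are the normalized x and y position of the gaze point in the world image (0–1 range, origin at the bottom left). A quick sketch of the conversion to pixel coordinates:

```python
def norm_pos_to_pixels(norm_pos, frame_width, frame_height):
    x, y = norm_pos
    return x * frame_width, (1 - y) * frame_height  # flip y: image origin is top-left

print(norm_pos_to_pixels((0.5, 0.5), 1280, 720))  # -> (640.0, 360.0), the image centre
```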

user-04dd6f 26 January, 2022, 15:25:12

Also, I would like to know how to measure the saccade amplitude between "fixations" rather than "gaze"; my understanding is that the parameters "theta" & "phi" are based on the gaze.

papr 26 January, 2022, 15:32:06

Hi, what video are you trying to extract frames from exactly? The intermediate scene video or the exported one?

A simple way to check if -vsync 2 works as expected would be to use this Pupil Player plugin https://gist.github.com/papr/c123d1ef1009126248713f302cd9fac3 It renders the frame index into the exported world video. After transcoding you can check if all frames are there as expected.

user-c8e5c4 26 January, 2022, 15:34:12

I'm using the exported one, thanks for the suggestion, I'll check it out!

papr 26 January, 2022, 20:31:51

Thank you. I will have a look at it tomorrow

user-dee88a 26 January, 2022, 20:37:37

Thank you, will talk to you tomorrow. I have sent the logs to data@pupil-labs.com in the interest of privacy and removed them from the previous message. Let me know if you cannot access them that way.

nmt 27 January, 2022, 12:27:16

I can recommend reading "Identifying fixations and saccades in eye-tracking protocols": https://doi.org/10.1145/355017.355028 The most basic filter suitable for detecting saccades is probably the I-VT (section 3.1). Section 4 of this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb shows how to calculate angular velocity from theta and phi. Prior to that operation, you might want to convert theta and phi from radians to degrees.
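As a compressed illustration of that pipeline (a sketch only; the 100 deg/s velocity threshold is a placeholder, not a recommendation):

```python
import numpy as np
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")  # example export path
pupil = pupil[(pupil["eye_id"] == 0) & (pupil["method"].str.contains("3d"))]

theta = np.degrees(pupil["theta"].to_numpy())
phi = np.degrees(pupil["phi"].to_numpy())
t = pupil["pupil_timestamp"].to_numpy()

# angular displacement between consecutive samples (small-angle approximation)
displacement = np.hypot(np.diff(theta), np.diff(phi))
velocity = displacement / np.diff(t)   # deg/s

is_saccade = velocity > 100.0          # I-VT style threshold (placeholder value)
print(f"{is_saccade.mean():.1%} of samples classified as saccadic")
```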

user-04dd6f 27 January, 2022, 12:30:12

Thanks for the kind reference, I will take a look at it!

user-b074b6 27 January, 2022, 18:48:11

just wanted to follow up to see if your colleagues had any advice for how to go about fixing this issue, thanks!

papr 27 January, 2022, 18:51:04

Hey, thanks for following up. Unfortunately, I do not have anything new on that. Could you please reach out to [email removed] regarding this problem?

user-8ed000 28 January, 2022, 12:28:18

Hello. Is it possible to create surfaces with shapes other than a rectangle in the Surface Tracker? For example, a U shape? There are only four points available for changing the shape; is it possible to add more?

papr 28 January, 2022, 13:46:28

Hey, unfortunately it is not possible to use other shapes. But you can always define multiple surfaces based on the same markers.

user-a07d9f 28 January, 2022, 16:08:20

Hello all. Is it possible to set the surface mapper to include a bit more surface beyond the marker boundary? Or to use surface markers and adjust the shape (keeping it rectangular) of the area we check?

wrp 31 January, 2022, 02:41:38

You can edit surfaces and drag their boundaries beyond those of the markers, as long as the new boundary area stays co-planar with the markers.

user-1bda7f 30 January, 2022, 18:31:41

@&288503824266690561 Hello, I'm currently trying to build from source, but I am getting an ImportError for uvc. Is there a way to fix this issue?

papr 01 February, 2022, 08:51:24

Hi, apologies for the delayed response. You are missing one of our dependencies: https://github.com/pupil-labs/pyuvc

We offer Python 3.6 wheels on Windows [1] to simplify the install. On other platforms, please follow the corresponding source-install instructions [2].

Please be aware that we also offer easy-to-install application bundles for the Pupil Core applications here [3]. Their functionality can be extended via our powerful plugin API [4]. I can give you more pointers and recommendations on how to implement specific functionality if you let me know your use case.

[1] https://github.com/pupil-labs/pyuvc/releases [2] https://github.com/pupil-labs/pupil/tree/master/docs [3] https://github.com/pupil-labs/pupil/releases [4] https://docs.pupil-labs.com/developer/core/plugin-api/

End of January archive