core


user-467cb9 03 January, 2021, 18:46:18

@nmt so should I compare timestamps of video frame and gaze?

nmt 05 January, 2021, 08:11:47

To get the gaze point moving like it should, you need to plot all of the available gaze positions for each world frame. Have a look at this code in the vis circle Plugin https://github.com/pupil-labs/pupil/blob/c68d50df6d47aa88f8fbc00950edf617aceb4f8b/pupil_src/shared_modules/vis_circle.py#L41

user-83ea3f 05 January, 2021, 05:30:40

Hey guys, I want to connect an Intel RealSense D435 to my Pupil Core. The instructions for the RealSense2 Backend for Pupil Capture v1.22+ say: "Now you need to get pyrealsense2 into the plugin folder as well, such that the realsense backend can import pyrealsense2. For this you can either try installing it into the plugins folder, or install it in a local python installation/environment and symlink the library into the plugins folder." Following the instructions, I want to install the D435 plugin locally and put pyrealsense2 under my pupil_capture_settings. What I've done is: installed pyrealsense2 via pip ($pip install pyrealsense2), downloaded realsense2_backend (from the gist at https://gist.github.com/romanroibu/c10634d150996b3c96be4cf90dd6fe29 ) into C:\Users.........\pupil_capture_settings\plugins, and installed the source files under the plugins directory ($pip install -t .\plugins\pyrealsense2\ pyrealsense2).

When I turn on Pupil Capture 2.6.19, I can see the Realsense2 Source toggle under the Plugin Manager. However, when I click the toggle, the program freezes and stops responding.

The error I get is "AttributeError: module 'pyrealsense2' has no attribute 'context'".

The directory structure I've got is:

└─plugins
   └─pyrealsense2
      ├─bin
      │  └─__pycache__
      ├─pyrealsense2
      │  └─__pycache__
      └─pyrealsense2-2.41.0.2666.dist-info

Please let me know what is causing the error I've got.

Thanks.

papr 05 January, 2021, 09:54:19

I think the pyrealsense2 module is nested one level too deep. Please extract the contents of plugins/pyrealsense2 into plugins.

user-da621d 05 January, 2021, 09:42:11

Hello, if I don't have a world camera and only have an eye camera, does that mean I can't calibrate?

papr 05 January, 2021, 09:53:03

@user-da621d correct. The point of calibration is to find a mapping from the eye cameras to the scene camera. If there is no scene camera, the mapping is not needed.

user-da621d 05 January, 2021, 09:55:26

@papr I have one eye camera; can I use only this one to calculate the eye rotation angle?

papr 05 January, 2021, 09:56:23

for relative movements, yes, since the absolute position of the eye camera in relation to the subject's field of view is unknown

user-da621d 05 January, 2021, 10:02:50

Does that mean I can't know the absolute eye position, only the current relative eye position?

user-da621d 05 January, 2021, 10:01:13

If I don't calibrate, what would happen when I run the eye tracking system?

papr 05 January, 2021, 10:07:47

@user-da621d You know the eye position and visual axis in absolute eye camera coordinates, but you do not know the relation between the eye camera coordinate system and the subject's field of view. In other words, it is unknown which eye direction corresponds to "looking front".

user-da621d 05 January, 2021, 10:12:15

Thanks dear papr, I got it. I want to get the eye rotation angle using only one eye camera; can I do that in theory?

papr 05 January, 2021, 10:14:43

Yes, for relative eye rotations, i.e. the rotation between eye directions at timepoints t0 and t1

user-6e3d0f 05 January, 2021, 11:23:50

Happy new year everyone! I'm currently working on an experiment where I want to use the picture of a surface to do post hoc calculations. The problem I'm encountering is that the aspect ratio of the surface (from the "open Surface in Window" button in Player) is not the same as in the world camera view. Does anyone here have an idea how to accurately change the aspect ratio so the dimensions are the same? Problem illustrated in: https://i.imgur.com/cdgJafL.jpg

user-da621d 05 January, 2021, 12:54:05

Hello, I found a weird thing with my Pupil Labs hardware. When my pupil stares at the left position, I draw the red circle. When it stares at the middle and right, I draw the green and blue circles respectively. But the circles are not at the correct positions. Why does this happen?

Chat image

user-da621d 05 January, 2021, 12:55:17

Does it mean the hardware testing isn't correct?

papr 05 January, 2021, 13:05:30

@user-da621d Am I assuming correctly that you are visualizing norm_pos data? Please be aware that your scatter plot axes have limited ranges. It might be easier to fix them to a (0, 1) range to make x and y comparable.
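
For illustration, a minimal matplotlib sketch that fixes both axes to the (0, 1) normalized range as suggested above (the sample norm_pos values below are hypothetical):

import matplotlib.pyplot as plt

# hypothetical norm_pos samples (x, y) in 2D normalized space
samples = {"left": (0.25, 0.48), "center": (0.51, 0.50), "right": (0.76, 0.49)}
colors = {"left": "red", "center": "green", "right": "blue"}

fig, ax = plt.subplots()
for label, (x, y) in samples.items():
    ax.scatter(x, y, c=colors[label], label=label)

# fix both axes to the full normalized range so x and y are directly comparable
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_aspect("equal")
ax.legend()
plt.show()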

user-da621d 05 January, 2021, 13:19:56

All my pupil coordinates are within (0, 1), so my coordinate system is the 2D normalized space. Is that right?

user-da621d 05 January, 2021, 13:10:55

Because the green point should be located at the center

user-da621d 05 January, 2021, 13:10:06

Hello papr, I mean the detected coordinates seem incorrect.

papr 05 January, 2021, 13:12:01

I think your assumption about the coordinate system is not correct.

papr 05 January, 2021, 13:12:32

Read more about it here https://docs.pupil-labs.com/core/terminology/#coordinate-system

user-da621d 05 January, 2021, 13:28:35

How do I change the coordinate system to make the detection correct?

user-200ca9 05 January, 2021, 13:41:37

Hey! I have a small question regarding the camera. It seems that the world camera with the 100° FOV has a positive radial distortion. Does the data that we get from the software correct for this distortion? If not, is there a way to compensate for it (code on GitHub or something of this sort)?

papr 05 January, 2021, 14:11:45

@user-200ca9 data in image coordinates is not corrected for distortion. Data in 3d camera coordinates is compensated.

user-200ca9 05 January, 2021, 18:27:37

@papr thanks

user-83ea3f 06 January, 2021, 05:49:28

Thanks @papr. However, when I moved the contents of plugins/pyrealsense2 into plugins, I got the error shown in the picture. How can I solve this problem?

Chat image

papr 07 January, 2021, 08:35:24

This looks like your version of realsense2_backend.py is trying to import pyrealsense.pyrealsense. The original does not attempt that. Did you modify the code?

user-da621d 06 January, 2021, 06:27:51

Hello, where in the source code is the ellipse angle calculated? And I want to know whether this angle can represent the pupil rotation angle.

papr 06 January, 2021, 08:55:07

The ellipse angle is the rotation of the 2d ellipse as explained here: https://docs.opencv.org/2.4/modules/core/doc/drawing_functions.html#ellipse

user-da621d 06 January, 2021, 06:39:11

Why is my data for ellipse XY and angle all empty?

Chat image

papr 06 January, 2021, 08:58:58

In your case, you want to subscribe to pupil and remove low confidence and 2d data points. What remains is high confidence 3d pupil data for both eyes. Separate the data for each eye, as data comparison is only valid within each eye. Using the circle_3d -> normal field, you get a 3d direction for a given timestamp. Use 1 - <cosine distance> to calculate the angle between consecutive vectors.
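
As an illustration of the last step, a minimal numpy sketch for the angle between two consecutive circle_3d normals of the same eye (the sample normals are hypothetical):

import numpy as np

def angle_between_deg(n0, n1):
    """Angle in degrees between two circle_3d normals."""
    n0 = np.asarray(n0, dtype=float)
    n1 = np.asarray(n1, dtype=float)
    n0 /= np.linalg.norm(n0)
    n1 /= np.linalg.norm(n1)
    cos_sim = np.clip(np.dot(n0, n1), -1.0, 1.0)  # cosine similarity, i.e. 1 - cosine distance
    return np.degrees(np.arccos(cos_sim))

# hypothetical high-confidence 3d pupil normals at timepoints t0 and t1
print(angle_between_deg([0.05, -0.10, 0.99], [0.12, -0.08, 0.99]))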

papr 06 January, 2021, 08:54:10

Are you subscribing to 2d or 3d pupil data? If you are subscribing to both, are you differentiating between them? (You can do this by looking at the topic or the method field) Are you removing low confidence data points (e.g. <0.6 confidence)?

user-da621d 06 January, 2021, 06:39:55

need help, thank you

user-da621d 06 January, 2021, 10:04:04

ok, I see, thanks

user-da621d 06 January, 2021, 10:18:31

Actually, I have tried the eye positions many times. The coordinates are in the 2D normalized coordinate system. When I stare at the left position, I draw a red circle; when I stare at the center, a green circle; and at the right position, a blue one. But the positions in the figure seem incorrect.

Chat image

user-da621d 06 January, 2021, 10:22:08

I want to calculate the eye rotation angle from the 2D normalized coordinates. That's my idea. But the collected coordinates seem wrong, so I can't move on to the next stage.

papr 06 January, 2021, 10:29:40

Do you press a button to trigger the drawing? Do you draw a single datum? What language do you use to process the data?

user-da621d 06 January, 2021, 10:50:27

Yes, I press three buttons to record the right, center, and left coordinates. I draw the three points on one figure using Python.

papr 06 January, 2021, 10:52:02

would you mind sharing your script? I feel like a short review of the code might be more fruitful than trying to guess what is going wrong

user-da621d 06 January, 2021, 10:54:29

thank you, I am about to share it

user-624fa4 06 January, 2021, 19:08:30

Hello everyone! We are developing research that involves several tracked surfaces as AOIs on a website. The markers will define different areas of the site so that we can identify the fixations in each AOI.

Is there a minimum size for markers? We did a test and it didn't work. I believe it is because of the size, as we also included four larger markers glued to the edges of the notebook screen, and with those it worked.

We will generate the data only after the experiment, in Pupil Player; that is, we will not generate heat maps online.

papr 06 January, 2021, 22:24:21

@user-98789c PTS stands for presentation timestamps and refers to the media file's internal timestamps. They are used to identify and extract specific frames from the video.

user-98789c 06 January, 2021, 22:38:48

About question 12: what is the difference between diameter and diameter-3d?

papr 06 January, 2021, 22:42:17

Diameter is in pixels (as measured from the pupil detection ellipse; major axis). diameter_3d is in mm and corrected for perspective using the 3d eye model

user-73a5fe 07 January, 2021, 03:07:46

Hi, I'm new to using the Pupil Labs eye tracker. When I load a video without any audio captured into Pupil Player, it works well without any problems. But when I capture audio with the video, this happens. Any idea why?

Chat image

papr 07 January, 2021, 08:37:19

Actually, could you please clarify if this is a Pupil Capture or Invisible Companion recording?

papr 07 January, 2021, 08:31:52

Could you share the raw recording folder including the audio with [email removed] such that we can try to reproduce this issue?

user-83ea3f 07 January, 2021, 06:44:06

Can I have a recommendation for a world view camera? Unfortunately, the D435 is still not working.

user-83ea3f 07 January, 2021, 08:36:42

Hey @papr, thanks for the reply. I haven't modified the code at all.

user-73a5fe 07 January, 2021, 08:37:40

Invisible Companion

papr 07 January, 2021, 08:39:33

Great, thank you for this information. No need to share the recording in this case. Can you reliably reproduce the issue with recordings including audio?

papr 07 January, 2021, 08:38:32

Could you please try again and share the ~/pupil_capture_settings/capture.log file? It should contain further debug messages.

user-83ea3f 07 January, 2021, 08:38:55

one sec please

user-73a5fe 07 January, 2021, 08:40:26

The 2nd time I tried, the issue didn't appear. Not sure why.

papr 07 January, 2021, 08:42:24

Ok, in this case it is not related to the audio recording but to another known issue. Please keep a copy of the original Invisible recording. We will release a workaround for this issue within the next few weeks. The recording content is not lost, it just cannot be accessed by the current Pupil Player version.

user-83ea3f 07 January, 2021, 08:41:48

@papr Here is the log of the capture!

capture.log

user-73a5fe 07 January, 2021, 08:43:10

👍 got it. Thank you

papr 07 January, 2021, 08:54:14

So good news, the log shows

Imported: <module 'pyrealsense2' (namespace)>
[...]
Imported: <module 'realsense2_backend' from 'C:\\Users\\goqba\\pupil_capture_settings\\plugins\\realsense2_backend.py'>
Added: <class 'realsense2_backend.Old_Base_Source'>
Added: <class 'realsense2_backend.Realsense2_Source'>

which indicates that the plugin and pyrealsense were installed successfully!

Unfortunately, later this happens:

Process Capture crashed with trace:
Traceback (most recent call last):
  File "launchables\world.py", line 733, in world
  File "launchables\world.py", line 484, in handle_notifications
  File "shared_modules\plugin.py", line 398, in add
  File "C:\Users\goqba\pupil_capture_settings\plugins\realsense2_backend.py", line 231, in __init__
    self.context = rs.context()
AttributeError: module 'pyrealsense2' has no attribute 'context'

It is possible that a newer pyrealsense version removed the context() api call. I cannot confirm this yet though

user-83ea3f 07 January, 2021, 09:00:24

Let me delete the capture.log and restart the whole process!

papr 07 January, 2021, 08:59:47

Actually, looking at the pyrealsense2 structure, it looks like your initial file structure was indeed correct. Could you please restore it, try running it again, and share a fresh capture.log?

user-83ea3f 07 January, 2021, 09:07:37

This is the directory structure of the current setup:

tree.txt

user-83ea3f 07 January, 2021, 09:09:14

And this is the log I get when I click the Realsense2 Source toggle in the "Plugin Manager" tab. When I click the toggle, the Pupil Capture - World program freezes and stops responding.

capture.log

papr 07 January, 2021, 09:49:22

Ah, I see the issue. The bundle uses Python 3.6 while you installed pyrealsense with 3.7. This causes the failure to load the *.pyd files. Please install Python 3.6.1 or higher and rerun the pyrealsense installation.

user-b4120d 07 January, 2021, 10:57:35

Gotta love Python versioning causing issues 🙂

user-1529a4 07 January, 2021, 12:36:14

@papr is there a data sheet that the software exports that gives us "clean" pixel values and not relative ones? (Meaning when I take a video of a laptop and later want to transform from pixels to real values such as mm or cm.)

papr 07 January, 2021, 13:18:58

@user-1529a4 the 3d data in pupil_positions.csv and gaze_positions.csv is "cleaned" and in mm.

user-cb599e 07 January, 2021, 14:09:10

Hi, a technical question regarding the pye3d detector: could the position of the camera relative to the eye influence the accuracy/stability of the 3d model? We are using a camera through a semi-transparent mirror, and for now the camera is almost aligned with the eye, contrary to the Pupil headset. From what I understood from the papers describing the method, it looks like having a wide range of elliptical pupils in the frames allows properly estimating the eyeball size and the gaze. With the current camera position most pupils are close to a circle and the model estimate of the eyeball (green circle) shows large variations (about 2x). With that setup, is it better to stick to the 2d model (but more slippage sensitive)? Or would you suggest changing the camera position to not be aligned with the eye? I think it would be possible in our constrained setup (MRI headcoil) to angulate it a bit. Thanks!

papr 07 January, 2021, 14:15:52

"it looks like having a wide range of elliptical pupils in the frames allows estimating properly the eyeball size and the gaze"

Correct, but you get these from eye rotations/movements relative to the camera. The absolute camera position does not play much of a role (other than your camera position should yield generally better pupil visibility).

We suggest including an eye model fitting phase in your experiment block/trial where you ask the subjects to roll their eyes for a few seconds (2-3 seconds should be sufficient). The more often you do it the more robust the model will be against slippage, as it gets high-quality data to adjust if necessary.

user-cb599e 07 January, 2021, 14:25:32

Thanks for your fast answer. I will try to include such blocks in the experiment. Due to the screen position the default eye movement amplitude might not be that large so that would explain the poor accuracy of the model in our pilot data.

papr 07 January, 2021, 14:27:20

Yes, people generally tend to look straight ahead and rather make head movements than large eye movements. So this is a general problem. If you do not have much head movement, I can also recommend freezing the eye model between fitting phases to generate more stable data. This assumes there is no slippage though.

user-cb599e 07 January, 2021, 14:34:42

Yes, we are trying to control head movement as much as possible (custom head-cases fit to the MRI coil, but these are not 100% perfect). The MRI sequence even creates vibrations of the mirror on which the camera is mounted, which are visible in the video. We are looking at ways to improve this, but if I cannot control motion at the source, I was thinking of maybe implementing a plugin to do eye video stabilization, maybe even sticking markers on the participant's face or tracking the corner of the eye.

user-1529a4 07 January, 2021, 17:50:42

@papr We are trying to map reference locations onto a surface, to get values like in the file "gaze_positions_on_surface_<surface_name>". Is there a specific transformation we can use? We have the surface corner positions in the world camera image. Also, regarding the previous question, what is the meaning of the values (e.g. gaze_point_3d_x)? And what does the negative sign mean?

papr 08 January, 2021, 14:20:34

"Also, regarding the previous question, what is the meaning of values (eg gaze_point_3d_x)? And what does the negative sign mean?"

It is the cyclopean gaze ray originating from the scene camera's origin. It is calculated by intersecting two gaze rays starting from eye_center0/1 in direction gaze_normal0/1. Sometimes, due to inaccuracies, especially when looking far away, the intersection can lie behind the scene camera. It is valid to flip the gaze direction in this case. Pupil does this automatically starting with version v2.6, see https://github.com/pupil-labs/pupil/pull/2043
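
For reference, a minimal numpy sketch of the ray-intersection idea described above (the midpoint of the shortest segment between the two gaze rays); this is an illustrative reimplementation, not Pupil's exact code, and the input values are hypothetical:

import numpy as np

def nearest_intersection_point(p0, d0, p1, d1):
    """Midpoint of the shortest segment connecting two rays p0 + s*d0 and p1 + t*d1."""
    d0 = d0 / np.linalg.norm(d0)
    d1 = d1 / np.linalg.norm(d1)
    w = p0 - p1
    a, b, c = d0 @ d0, d0 @ d1, d1 @ d1
    d, e = d0 @ w, d1 @ w
    denom = a * c - b * b  # close to 0 if the rays are (nearly) parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return ((p0 + s * d0) + (p1 + t * d1)) / 2.0

# hypothetical eye_center0/1_3d and gaze_normal0/1 in scene camera coordinates (mm)
eye_center0 = np.array([20.0, 10.0, -20.0])
eye_center1 = np.array([-40.0, 10.0, -20.0])
gaze_normal0 = np.array([-0.05, 0.02, 1.0])
gaze_normal1 = np.array([0.06, 0.02, 1.0])
print(nearest_intersection_point(eye_center0, gaze_normal0, eye_center1, gaze_normal1))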

papr 08 January, 2021, 14:17:30

"We are trying to map reference locations onto a surface, to get values like in the file "gaze_positions_on_surface_<surface_name>""

You can use the surf_positions_<surface_name>.csv. It includes img_to_surf_trans transformation matrices. See https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker and this example for the inverse problem (surface to scene image transformation) https://github.com/pupil-labs/pupil-tutorials/blob/master/04_visualize_surface_positions_in_world_camera_pixel_space.ipynb
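
As a rough sketch of applying such a matrix (assuming img_to_surf_trans has already been parsed from the CSV into a 3x3 array and that it maps normalized scene image coordinates to normalized surface coordinates, as in the linked tutorial; the matrix values below are placeholders):

import cv2
import numpy as np

# placeholder 3x3 homography; parse the real one from surf_positions_<surface_name>.csv
img_to_surf = np.array([
    [2.1, 0.1, -0.5],
    [0.0, 2.3, -0.6],
    [0.0, 0.0,  1.0],
])

# a reference location in normalized scene image coordinates (x, y)
ref = np.array([[[0.5, 0.5]]], dtype=np.float64)
ref_on_surf = cv2.perspectiveTransform(ref, img_to_surf)
print(ref_on_surf)  # normalized surface coordinates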

user-3cff0d 07 January, 2021, 19:23:55

Hello! So after developing custom 2d pupil detection plugins, how would I "tell" the 3d detector/gaze detector to use that plugin? If I had multiple pupil detection plugins installed, could I have it run the 3d and gaze detectors for each?

papr 08 January, 2021, 14:26:23

Checkout this part of the pye3d detector plugin https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detector_plugins/pye3d_plugin.py#L105-L140

You need to publish a 2d datum similar to the one published by the built-in detector. Specifically, its method field needs to have the value 2d c++. This datum is then passed to pye3d itself where the pupil ellipse data (in OpenCV ellipse definition) is extracted: https://github.com/pupil-labs/pye3d-detector/blob/829720a69c42571585e9886ea11152d506909c7b/pye3d/detector_3d.py#L324

Also checkout our new Pupil Detectors Plugins community section if you have not already. It includes two examples that also show how to disable other detectors. https://github.com/pupil-labs/pupil-community#pupil-detector-plugins

user-83ea3f 08 January, 2021, 05:32:45

Hey papr, I have changed and set up my environment as follows:

Python: 3.7.9
Pupil Capture: v3.0.7
Intel RealSense version: 2.41.0.2657

and this is the relevant part of the error:

2021-01-08 14:27:12,997 - world - [DEBUG] pupil_apriltags: Found working clib at C:\Program Files (x86)\Pupil-Labs\Pupil v3.0.7\Pupil Capture v3.0.7\pupil_apriltags\lib\apriltag.dll
2021-01-08 14:27:13,044 - world - [DEBUG] plugin: Scanning: pyrealsense2
2021-01-08 14:27:13,045 - world - [DEBUG] plugin: Imported: <module 'pyrealsense2' (namespace)>
2021-01-08 14:27:13,046 - world - [DEBUG] plugin: Scanning: realsense2_backend.py
2021-01-08 14:27:13,054 - world - [WARNING] plugin: Failed to load 'realsense2_backend'. Reason: 'cannot import name 'load_intrinsics'' 
2021-01-08 14:27:13,055 - world - [DEBUG] plugin: Traceback (most recent call last):
  File "shared_modules\plugin.py", line 494, in import_runtime_plugins
  File "importlib\__init__.py", line 126, in import_module
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "C:\Users\goqba\pupil_capture_settings\plugins\realsense2_backend.py", line 18, in <module>
    from camera_models import load_intrinsics
ImportError: cannot import name 'load_intrinsics'

I am assuming it is related to the camera module?

papr 08 January, 2021, 14:11:26

You are using the plugin version suitable for v1.22 - v2.2. Please use this version which is suitable for v2.3+ https://gist.github.com/romanroibu/c10634d150996b3c96be4cf90dd6fe29

papr 08 January, 2021, 14:10:36

Do I assume correctly that the vibrations are of relatively high frequency? pye3d assumes slippage occurs over longer periods of time. Not sure it could compensate for high frequency movements. Video stabilisation sounds like a good solution. Make sure to keep the exact number of frames and pts in the video, else the externally recorded timestamps will not match the video anymore.

user-cb599e 08 January, 2021, 16:38:59

Yes, it's high frequency (~10Hz) compared to the 3d model update timescale. But slippage can also occur if the head is not properly fixed. I will first try to improve that and maybe afterwards code a plugin for video stabilization.

user-3cff0d 08 January, 2021, 16:44:17

It seems like that part of the pye3d detector looks through each of the datums returned and selects the first one with a 2d c++ method field, and performs the 3d detection based on that one exclusively (the break on line 112). Will custom detectors always be "first in line" over the default 2d detector?

papr 08 January, 2021, 16:54:23

As an alternative to turning it off, you can set your custom plugin's order to something < 0.1. The base plugin appends results to "pupil_detection_results" depending on the order of the plugins https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detector_plugins/detector_base_plugin.py#L131

papr 08 January, 2021, 16:45:54

The assumption is that there is only one. I suggest turning the built-in detector off instead.

user-3cff0d 08 January, 2021, 17:26:03

Is there a graceful way to turn off the built-in detector, without altering its code?

papr 08 January, 2021, 17:47:29

From the example linked in the community repo https://gist.github.com/papr/ed35ab38b80658594da2ab8660f1697c#file-artificial_2d_pupil_detector-py-L73-L82

user-3cff0d 08 January, 2021, 17:47:50

👍

user-3cff0d 08 January, 2021, 17:47:54

Thanks!

user-7daa32 08 January, 2021, 19:47:00

While looking at the video in Player, I noticed that the confidence turns red. Does that mean the data is wrong, or can it still be used?

user-7daa32 08 January, 2021, 20:22:59

The confidence values keep turning red in Player.

user-7daa32 08 January, 2021, 20:23:04

What should I do?

user-3cff0d 08 January, 2021, 21:35:42

Setting the datum's method entry to 2d c++ doesn't seem to be working. None of my custom plugins seem to be passing info to the 3D detector even when they're the only ones enabled. To be sure, I downloaded the example artificial_2d_pupil_detector.py and set its method to 2d c++ and, likewise, it isn't performing any 3D detection. When I allow the vanilla 2D detector to operate, it does begin 3D detection, so everything is installed correctly at least.

papr 08 January, 2021, 21:40:54

Maybe the custom plugins are overwriting recent_events which is responsible for adding the datum to the events dictionary. I do not know for sure though.

The 3d detector is running though? The example code above deactivates all other detectors. Did you adapt it to only disable the 2d detector?

user-3cff0d 08 January, 2021, 21:45:19

I am using the example _stop_other_pupil_detectors method. Does that mean that the 3d detectors also inherit from PupilDetectorPlugin? Is there a better class that only encompasses 2d detectors?

papr 08 January, 2021, 21:47:23

Correct, all detector plugins do. You can check for plugin.name != "Pye3D".
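
A sketch of how this could look inside a custom detector plugin, adapted from the linked artificial_2d_pupil_detector example (the class name is hypothetical, the detection logic is omitted, and this fragment only runs inside Pupil Capture's plugin environment):

from pupil_detector_plugins.detector_base_plugin import PupilDetectorPlugin

class Custom2DDetectorPlugin(PupilDetectorPlugin):
    # ... detection methods omitted in this sketch ...

    def _stop_other_pupil_detectors(self):
        # deactivate other detector plugins, but keep the pye3d plugin alive
        plugin_list = self.g_pool.plugins
        for plugin in plugin_list:
            if (
                isinstance(plugin, PupilDetectorPlugin)
                and plugin is not self
                and plugin.name != "Pye3D"
            ):
                plugin.alive = False
        plugin_list.clean()  # remove deactivated plugins from the list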

user-dfb164 09 January, 2021, 00:16:29

Hi, I did a test with Pupil Mobile yesterday, recording for about 15 minutes. The eye cameras recorded well, but the world camera seems to have only dumped video for about the first minute. Does anyone know if there is any setting on the phone that I need to change to fix this? I am aware that Mobile is not maintained, but any clue about the possible cause would be helpful. Also, is there any plan to develop a new mobile app for recording in motion?

papr 11 January, 2021, 12:15:32

Unfortunately, I do not have any idea regarding the cause of your issue. Pupil Mobile has been replaced conceptually by the Pupil Invisible product. Pupil Core performs best in a controlled environment, where the subject does not have to move much.

user-7cbec7 09 January, 2021, 03:42:32

Hi guys, just want to make sure: does Pupil Core use the Intel RealSense R200 RGBD as its world camera?

papr 11 January, 2021, 12:20:38

There was a time, when you could configure your Pupil Core device to use a R200 camera as a scene camera. Intel deprecated software support a while back. I do not know on which operating systems the R200 camera still works. Do you have a headset with a R200 camera?

user-7cbec7 09 January, 2021, 03:42:38

👀

user-7cbec7 09 January, 2021, 03:44:02

because I saw this issue and it mentioned Intel RealSense R200 RGBD https://github.com/pupil-labs/pupil/issues/767

user-7cbec7 09 January, 2021, 03:54:06

I find the R200 only works on Ubuntu 14.04 and 16.04, not 18.04. If Pupil Core uses the R200, does that mean Pupil Core can't be used on Ubuntu 18.04? Appreciate any help.

user-19f337 10 January, 2021, 10:10:12

Hello, I wondered how exactly the recorded calibration is stored in the recording directory. It doesn't appear in the folder, but once I open the recording in Pupil Player and close it for the first time, it appears in the "calibrations" sub-folder. Can you please briefly explain how this works?

user-a50a12 11 January, 2021, 11:54:57

I have a question regarding recording. Am I correct that I always need a separate computer for capturing/recording the data? Are there any options for mobile situations if a desktop computer cannot be used (similar to the Invisible system)?

papr 11 January, 2021, 12:21:38

It is stored in the notify.pldata file as a notification. When you open the post-hoc calibration plugin, it searches the file for recorded calibrations.

user-19f337 12 January, 2021, 11:37:41

oh I see, thank you!

papr 11 January, 2021, 12:23:57

Currently, we only support Pupil Core usage via the Pupil desktop applications. These are supported on Windows 10, Linux 16.04 and later, and macOS High Sierra 10.12 - macOS Catalina 10.15.

user-a50a12 12 January, 2021, 10:10:55

Thanks, I saw the Pupil Mobile mentions in this forum, but now I get that Pupil Invisible would be the product to go to for mobile recording possibilities.

user-aaa87b 11 January, 2021, 12:45:21

Hi there! I’ve just installed last Core soft release on Ubuntu 20.04.1 and I’ve tried to plug in our old DIY headset to be used in a didactic lab. Unfortunately, cannot get access to both world and eye cameras, although the username has been added to plugdev. Any suggestions? Any specific driver to add? Thanks a lot.

papr 11 January, 2021, 12:46:52

Please go to the Video Source menu, select manual camera selection and check the "Activate Camera" selector. Are any cameras listed? If yes, are they listed as unknown?

user-aaa87b 11 January, 2021, 12:48:41

Yes! unknown @ Local USB.

papr 11 January, 2021, 12:51:57

In this case, the user is not yet correctly part of the plugdev group. Did you restart after adding the user to the group? Could you share the output of the groups command (when running the command in the Terminal)

user-aaa87b 11 January, 2021, 13:53:50

Yep! After reboot both cameras are recognized! Thanx a lot Papr.

user-98789c 11 January, 2021, 14:13:06

In Pupil Capture, what is the difference between T (validation) and C (calibration)?

papr 11 January, 2021, 14:14:37

A calibration creates a new gaze mapping function, while the validation is only meant for verifying the accuracy of a previous calibration.

user-f22600 11 January, 2021, 15:50:39

Hi, is it possible to run pupil sw from source on Mac Big Sur?

papr 11 January, 2021, 15:51:15

We cannot confirm it yet, but it requires at least Python 3.9.1. We should be able to confirm it in the coming days.

user-7cbec7 11 January, 2021, 18:51:11

Hi, thanks for your reply. I don't have a headset with an R200 camera. So, is Pupil Capture not supported on Ubuntu 18.04 since the R200 is not compatible with that OS?

papr 11 January, 2021, 19:05:03

Pupil Capture is supported on Ubuntu 18.04. It does not support the R200 camera itself though.

user-7cbec7 11 January, 2021, 19:24:44

Sorry, I am confused. The R200 camera is the world camera used in the Pupil Core binocular eye tracker. In order to make Pupil Capture work on Ubuntu 18.04, it's necessary to make the R200 camera work to get the world video input. Am I missing something?

papr 11 January, 2021, 19:39:32

And no, the R200 camera is not necessary to get world video output. The high-speed option does provide scene video as well. The USB-C mount option requires an additional camera of your choice that is not sold with the headset.

papr 11 January, 2021, 19:38:02

The Pupil Core headset currently only comes with two different scene camera configurations: high-speed scene camera and USB-C mount. Earlier configurations allowed the use of an R200 as a scene camera. You mentioned that your headset does not have such an R200 camera, correct? (I could confirm this if you shared an image of the headset.) Therefore, you should not have any R200 related problems. Your issue might be related to something else. Could you share details on the exact issue that you are having?

user-7cbec7 11 January, 2021, 19:31:09

Let me clarify a little bit: my problem is that I can't get world capture video, and I think it's likely because the R200 is not supported on Ubuntu 18.04.

user-3cff0d 11 January, 2021, 20:30:17

Hello! Is it possible to obtain data from the 3d model as an eye video is processed? Like, is there a file exported containing frame-to-frame data on the eyeball model?

papr 11 January, 2021, 20:32:20

For debugging purposes, you can also drop a recorded eye video onto the Pupil Capture eye window to use the video as realtime input.

papr 11 January, 2021, 20:31:16

Pupil Player exports pupil detection data to pupil_positions.csv via the Raw Data Exporter.

user-3cff0d 11 January, 2021, 20:37:17

Does using Pupil Capture also export a pupil_positions.csv file?

papr 11 January, 2021, 20:38:59

What is your exact use case? I can let you know my recommendation for it.

papr 11 January, 2021, 20:38:26

No, Pupil Capture stores recorded pupil data in an intermediate format which can be read by Player. If you are interested in real-time access, it might be easier to use the Network API https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
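
For reference, a minimal sketch of that Network API pattern (assuming Pupil Capture runs on the same machine with the default Pupil Remote port 50020), along the lines of the linked filter_messages.py helper:

import msgpack
import zmq

ctx = zmq.Context()

# ask Pupil Remote for the port of the data publisher
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# subscribe to real-time pupil data ("gaze." would give mapped gaze instead)
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.unpackb(payload, raw=False)
    print(topic.decode(), datum["timestamp"], datum["confidence"])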

user-3cff0d 11 January, 2021, 20:40:12

I'm actually looking to get gaze_positions.csv rather than pupil_positions.csv. I have a couple hundred short eye videos that I would like to extract those gaze positions from

papr 11 January, 2021, 20:40:51

Are those recorded with Pupil Capture?

user-3cff0d 11 January, 2021, 20:41:49

They are just mp4 files that I assembled from collections of frames as images

papr 11 January, 2021, 20:43:24

In order to get valid gaze data, you need a calibration. Just to make sure that we use the same terminology: Pupil data is the output of the pupil detectors in eye camera coordinates; gaze is the result of mapping pupil data to the scene camera coordinate system.

user-3cff0d 11 January, 2021, 20:45:20

To give a bit more context, I am not interested in obtaining gaze data relative to any scene. I would like non-calibrated per-eye gaze vector data from the 3d detector's model of the eyeball

user-3cff0d 11 January, 2021, 20:46:06

Essentially, orientation of the generated 3d model

papr 11 January, 2021, 20:56:49

To be honest, since you are using custom data, it might be easier to write a script that iterates over the video frames and calls the pupil detectors directly.

pip install pupil-detectors pye3d

import cv2
import typing as T

from pupil_detectors import Detector2D
from pye3d.camera import CameraModel
from pye3d.detector_3d import Detector3D

detector2d = Detector2D()

focal_length = ...  # TODO: read from eye cam intrinsics
width = ... # TODO: read from eye cam intrinsics
height = ...  # TODO: read from eye cam intrinsics
camera = CameraModel(
  focal_length,
  (width, height),
)
detector3d = Detector3D(camera)

video = ...  # TODO: read video
for ts, img in video:
  gray = ...  # TODO convert img to gray frame
  result2d = detector2d.detect(gray)
  result2d["timestamp"] = ts
  result3d = detector3d.update_and_detect(result2d, gray)

user-3cff0d 11 January, 2021, 20:59:44

Thank you, I'll look into that!

user-3cff0d 11 January, 2021, 22:29:07

It's giving me a KeyError: 'timestamp' at line 338 of pye3d\detector_3d.py, the line being

pupil_datum["timestamp"]

papr 11 January, 2021, 22:31:08

I adjusted the example such that the loop gets a timestamp for each frame and passes it to the result2d

papr 11 January, 2021, 22:30:01

You will have to set that field to something sensible before passing it to the 3d detector.

user-3cff0d 11 January, 2021, 22:31:38

Gotcha

user-3cff0d 11 January, 2021, 22:32:00

The timestamp can just be an integer, right?

papr 11 January, 2021, 22:33:27

It should be a float in seconds. It is used for the different timescales explained here https://docs.pupil-labs.com/developer/core/pye3d/
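
For example, when iterating over a plain mp4 with OpenCV, monotonically increasing float timestamps can be derived from the frame index and the video's frame rate (a sketch; the file name is hypothetical and real Pupil recordings would use their own timestamp files instead):

import cv2

cap = cv2.VideoCapture("eye_video.mp4")  # hypothetical eye video
fps = cap.get(cv2.CAP_PROP_FPS) or 120.0  # fall back if the container lacks fps metadata

frame_index = 0
while True:
    ok, img = cap.read()
    if not ok:
        break
    ts = frame_index / fps  # float seconds, monotonically increasing
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # result2d = detector2d.detect(gray); result2d["timestamp"] = ts; ...
    frame_index += 1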

user-3cff0d 11 January, 2021, 22:36:04

👍

user-3cff0d 11 January, 2021, 22:36:10

Thanks again!

user-3cff0d 11 January, 2021, 23:08:33

Do you have a specific method/library for calculating a gaze vector from the 3d result, or does it have to be done manually?

papr 11 January, 2021, 23:09:47

I think you are referring to result3d["circle_3d"]["normal"] since you said:

I am not interested in obtaining gaze data relative to any scene

user-3cff0d 11 January, 2021, 23:11:12

Is circle_3d the pupil, or the circle through the center of which the gaze vector travels?

papr 11 January, 2021, 23:11:43

Both

user-3cff0d 11 January, 2021, 23:12:52

Okay, great! Also, what's the difference between result3d["sphere"] and result3d["projected_sphere"]?

papr 11 January, 2021, 23:14:06

sphere is in 3d camera coordinates, projected_sphere is the projection of that sphere into image coordinates (green circle in the Pupil Capture eye windows)

user-3cff0d 11 January, 2021, 23:14:41

Aaah okay, that makes sense

user-3cff0d 11 January, 2021, 23:28:02

Hopefully just one more question: regarding the output of gaze_positions.csv, what are the eye_center1_3d columns as opposed to the eye_center0_3d columns, and the same for gaze_normal_1 as opposed to gaze_normal_0? Also, what does the base_data column represent?

papr 11 January, 2021, 23:29:52

Again, gaze_positions.csv contains gaze as in "pupil data mapped to scene camera coordinates". gaze_normal_0/1 are circle_3d->normal of each eye, in scene camera coordinates. eye_center0/1_3d is the sphere->center in scene camera coordinates. gaze_positions.csv is not what you are looking for. base_data refers to the pupil data from which the gaze datum was constructed.

user-7cbec7 12 January, 2021, 01:01:12

Hi, I might need your help to find out the exact issue. The problem is that I can't get world capture video input and the world scene stays grey. I thought the problem is that the Pupil Core is using the R200 camera sold with the headset, which is not compatible with Ubuntu 18.04. (Why do I think it's an R200 camera? Because I saw a similar issue on GitHub: https://github.com/pupil-labs/pupil/issues/767)

Chat image

papr 12 January, 2021, 10:28:07

"I don't have a headset with R200 camera."

There must have been a misunderstanding, since your headset does have an R200 scene camera. Your original issue description is therefore correct. You need a Pupil Capture version with support for R200 cameras on Ubuntu.

user-7cbec7 12 January, 2021, 01:01:19

Chat image

user-7cbec7 12 January, 2021, 01:04:11

From your reply, I guess I can just use the high-speed option to get the world scene from the camera sold with the headset. If that's the case, I am not sure why I can't get the world scene after installing librealsense and everything else from source.

user-7cbec7 12 January, 2021, 01:09:55

Another thing: after I installed librealsense, I connected the Pupil Core with my lab and tested the example code; however, the example code returned "No device connected".

papr 12 January, 2021, 10:29:31

This does not sound good. It is a prerequisite that librealsense recognizes the R200 camera. Please contact info@pupil-labs.com in this regard, referring to our conversation.

user-3cff0d 12 January, 2021, 01:37:38

Got everything working great; thanks a bunch for all your help, papr!

user-2ab393 12 January, 2021, 07:18:00

@papr I have a question to ask you, why can't the results of blink detection be visually displayed on the screen?

papr 12 January, 2021, 10:13:50

We disabled the blink visualization in Pupil Capture due to a technical problem and its limited use. You can build a custom plugin that visualizes real-time blink data if you need it. If you are referring to something else, please let me know.

papr 12 January, 2021, 10:39:27

My colleagues from the hardware team have also pointed out that you should make sure to connect the headset directly to the computer using a USB 3.0 cable without USB hubs or cable extenders or adapters.

user-7cbec7 12 January, 2021, 16:12:36

Thanks again for your reply. I will check that.

user-19f337 12 January, 2021, 11:45:03

And if I understand correctly, the optional pupil.pldata contains the pre-recorded pupil detections, right?

papr 12 January, 2021, 11:45:41

Correct

user-19f337 12 January, 2021, 11:46:55

Perfect, thanks !

user-7daa32 12 January, 2021, 16:01:16

Hello, the newly released version is being downloaded in a PDF format. Can anyone please remind me of the compression tool we have been using?

papr 12 January, 2021, 16:03:26

The latest Windows release is a .rar file. You can use WinRAR or 7-Zip to decompress it, as noted at the bottom of the release notes.

https://www.win-rar.com/predownload.html?spV=true&subD=true&f=winrar-x64-590.exe

https://www.7-zip.org/

user-7daa32 12 January, 2021, 16:04:38

Thanks so much

user-7daa32 12 January, 2021, 17:42:14

I'm seeing something like this in the algorithm mode for the first time. The blue refuses to erase. Do you know what it is?

Chat image

papr 12 January, 2021, 17:43:09

Could you please post a picture in normal camera mode, disabling the algorithm mode?

user-7daa32 12 January, 2021, 17:52:02

Chat image

papr 12 January, 2021, 17:54:41

I cannot see an immediate reason for the detected edges at the right. You can try decreasing the intensity range. Do you encounter false positive 2d pupil detections (blue circle) in that right area?

user-7daa32 12 January, 2021, 17:57:03

I don't think I did

papr 12 January, 2021, 17:57:54

Then in this case, these lines are nothing to worry about.

user-7daa32 12 January, 2021, 17:58:40

What about the white striated lines?

user-7daa32 12 January, 2021, 17:59:08

Chat image

papr 12 January, 2021, 17:59:58

These are automatic areas of interest. They are only active at resolutions larger than 200x200.

user-3cff0d 12 January, 2021, 18:00:01

Hello again! When experimenting with Pupil Labs detection with AI-created eye images, I've noticed that it seems to be unable to detect a pupil without the presence of an iris. Is this known behavior?

papr 12 January, 2021, 18:01:03

This must be an implicit behavior of the algorithm. The algorithm does not explicitly expect an iris.

user-3cff0d 12 January, 2021, 18:07:05

I see, thank you

user-7daa32 12 January, 2021, 18:12:54

What about the red background? And the whole thing is not smaller.

Chat image

user-7daa32 12 January, 2021, 18:15:49

Maybe I should change the resolution to 400x400 and auto mode, I just realized.

papr 12 January, 2021, 18:16:31

This is the debug view of the previous 3d detector in Pupil v2.6 and earlier. Didn't you say that you downloaded the latest release? The debug view displays intermediate data that is used internally by the detector. It does not have any special meaning outside of the detector.

user-7daa32 12 January, 2021, 18:18:52

Yes, I downloaded the new release. I have data collected with 2.6. I am just trying to establish confidence in my work before proceeding.

user-7daa32 12 January, 2021, 18:20:00

You mean the debug visualizer doesn't have special meaning?

papr 12 January, 2021, 18:20:57

Not to the user, no. "debug" relates to software debugging in this case, not data debugging.

user-7daa32 12 January, 2021, 18:24:26

Sorry, I'm seeing a 30-60 frame rate in the eye window. I have seen papers use different options. Any implications? Thanks.

papr 12 January, 2021, 18:26:41

More data is always better, but requires more resources. It is a trade off that you need to make depending on your computer. Try running it at max speed and check the fps graph at the top left. If the displayed number is lower than your setting, your resources are too limited

user-7daa32 12 January, 2021, 18:24:58

Chat image

user-7daa32 12 January, 2021, 18:29:21

Lower, but it fluctuates. Does that mean I should set it to the max? The FPS is up to 90 for one eye and less than 60 for the other. Both are set to a 120 frame rate.

papr 12 January, 2021, 18:33:58

are they running at the same resolutions?

user-7daa32 12 January, 2021, 18:34:45

Yes .. 400x400

papr 12 January, 2021, 18:36:12

Decreasing the resolution can help with performance.

user-7daa32 12 January, 2021, 18:36:36

Seems the FPS goes higher when I minimize the ROI.

papr 12 January, 2021, 18:36:57

yes, because the area considered for detection is lower

user-7daa32 12 January, 2021, 18:37:50

Chat image

papr 12 January, 2021, 18:39:31

Yeah, looking at the CPU graphs, you are running at the limits of your resources. I would recommend getting a faster laptop for more stable frame rates.

user-7daa32 12 January, 2021, 18:40:40

Thanks. Can you recommend one?

papr 12 January, 2021, 18:42:08

Unfortunately, nothing specific. I would recommend something with 4 CPU cores @ ~3GHz. More cores are not as beneficial.

user-7daa32 12 January, 2021, 18:53:03

Does that sum up to 3GHz?

Chat image

papr 12 January, 2021, 18:53:54

No, the sum is not important. You need around 3 GHz per core.

papr 12 January, 2021, 18:53:30

Yeah, that is pretty low for pupil detection

user-7daa32 12 January, 2021, 18:55:18

Thanks. I will let my boss know.

user-7daa32 12 January, 2021, 19:59:58

Presently having an issue with calibration. Please, what is this?

Chat image

papr 12 January, 2021, 20:02:43

As the error message says, there was not sufficient reference data. What calibration were you using?

user-7daa32 12 January, 2021, 20:05:40

Chat image

papr 12 January, 2021, 20:06:35

Depending on the distance, that might be too small

user-7daa32 12 January, 2021, 20:05:49

Chat image

user-7daa32 12 January, 2021, 20:08:19

We are trying to do a preliminary study to find the best distance from the target. I am presently starting with ~60 inches from the target.

papr 12 January, 2021, 20:10:20

During the calibration you should see if the marker is being detected in the scene video preview. You do not need to have the headset on for that

user-7daa32 12 January, 2021, 20:12:56

The targets are on the wall. I just calibrated after many trials... got 0.179 angular accuracy and 0.06 precision... Are these too small or too high?

papr 12 January, 2021, 20:13:43

Please checkout the best practices for tips in this regard https://docs.pupil-labs.com/core/best-practices/

user-7daa32 13 January, 2021, 04:23:19

Thanks

user-83ea3f 13 January, 2021, 09:03:44

Hi, thanks for your answer; it finally works! Now I have one question for the Pupil team: is Pupil Core able to detect pupil activity without the world view camera?

papr 13 January, 2021, 09:57:29

You can start the application with the --hide-ui flag to hide all windows.

user-a98526 13 January, 2021, 14:00:29

Hi @papr, I have a question for you. What algorithm does the head pose tracker plugin use to estimate the head pose?

papr 13 January, 2021, 14:02:50

We use Apriltag detection and bundle adjustment for the head-pose tracking.

user-b7849e 13 January, 2021, 16:23:22

Hi, I have 2 questions for the Pupil team. Q1: Does Pupil Core give 3D coordinates of the eyes with respect to a computer screen, so that the distance between the eyes and the screen can be computed? Q2: If the answer to Q1 is yes, I need information on how to calibrate Pupil Core to make it possible to compute the 3D distance. Since your answer was delayed, I have an additional question. Q3: Is it possible to install the library or APIs that return the eye images, world image, and 2D & 3D coordinates locally, or do we need internet access? Thanks! /Hamed

user-7daa32 13 January, 2021, 17:41:57

Hello

The values shown for angular accuracy and precision: are they error values, or what the names imply? What values are considered best practice? How would you know if the errors are too high?

user-7daa32 14 January, 2021, 01:05:12

Why are the annotation timestamps negative and decreasing? Why are there negative timestamps?

Chat image

user-a98526 14 January, 2021, 02:48:20

Hi, thanks for your answer. I'm trying to use the PnP method combined with apriltags to detect head movement, but this requires the apriltag size information. I think the method you use is very interesting and useful.

user-83ea3f 14 January, 2021, 07:25:39

Hey team, I am trying to use a PowerPoint slideshow to check pupil movement activity. However, when I turn on the Microsoft PowerPoint slideshow, the surface tracker plugin does not function. Do you know why?

Chat image

user-83ea3f 14 January, 2021, 07:29:43

This is the picture when I close the PowerPoint slide show mode.

Chat image

user-26fef5 14 January, 2021, 10:17:13

@user-83ea3f you have zero white border when you start the fullscreen slideshow - you need some white border around the full marker for the apriltag detector to detect it. Your second screenshot shows exactly that - there is sufficient white border.

papr 14 January, 2021, 10:21:19

Pupil time has an arbitrary beginning, which can also be negative. These numbers are not decreasing but increasing. Bear in mind that they are negative: -2 is smaller than -1.

user-7daa32 15 January, 2021, 00:41:00

Okay. Thanks

user-7daa32 15 January, 2021, 00:41:38

Can Pupil Capture measure the distance between two target points?

papr 15 January, 2021, 12:01:44

Could you give an example of what you mean by target point and what type of distance you are talking about?

user-7daa32 15 January, 2021, 14:20:28

I have two objects on the wall. I mean the distance, in meters or centimeters or whatever unit, between these objects, and the distance in the same unit between the wall and the user. I actually measured that using a meter rule.

papr 18 January, 2021, 10:17:47

That is only possible if you use the head pose tracker and mark your objects with apriltag markers https://docs.pupil-labs.com/core/software/pupil-player/#analysis-plugins

Please be aware that the head pose tracker assumes that all markers are fixed in relation to each other. Therefore, you cannot use it to measure distances of moving markers.

user-0dec4c 15 January, 2021, 15:32:45

Hey all,

I recorded some data through LSL using Pupil Capture 1.23.4 and the current pupil_lsl_plugin. When analysing the data, I got negative timestamp differences and a not perfectly even sampling rate. Any suggested workarounds to this?

Best, Julius

user-562b5c 17 January, 2021, 21:47:43

Hi all, I have just recently started using Pupil Core; I still have the version 2.6 software. While analysing the data using Pupil Player, I created an export and am trying to understand the data in the generated csv files. For example, in the pupil_positions csv file I want to understand the meaning of the different column headers; is there some documentation explaining the data in these csv files? I checked out https://docs.pupil-labs.com/core/ and https://docs.pupil-labs.com/core/terminology/#pupil-positions but they are only partially helpful. Thanks.

papr 17 January, 2021, 21:48:50

See this for more information https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter

user-562b5c 17 January, 2021, 21:50:59

Thanks! Guess I missed that page.

papr 18 January, 2021, 10:25:17

LSL records the realtime gaze streams (monocular left, monocular right, binocular). Each is monotonically increasing in time by itself, but not when mixed. We recommend to sort the data post-hoc by timestamp. Pupil Player employs the same strategy.

Our cameras do not guarantee even sampling rate. To work around it, we timestamp everything in order to align data streams post-hoc.

user-0dec4c 18 January, 2021, 12:12:23

Perfect, thanks for the quick response

user-7daa32 18 January, 2021, 16:36:55

Chat image

papr 18 January, 2021, 16:40:33

Not sure what you are referring to by "feature". The orange lines visualize the calibration error. The green line visualizes the calibration area (area of highest gaze accuracy).

Based on this image, I would recommend extending the calibration area vertically.

Btw, I would recommend using larger apriltag markers at that distance.

user-7daa32 18 January, 2021, 16:37:11

Can you please check what this feature means?

user-7daa32 18 January, 2021, 16:52:20

Thanks. Sorry, how can one extend the calibration area? The picture was taken when the world camera was not focused on the stimuli. The apriltag markers are not really important for the study; I placed those markers there to see how the head pose tracking works. I have the bigger ones in case I want to create surfaces. The calibration area is always like that and does not cover all parts of the stimuli. Do I need to make it cover the whole area? Thanks for your response. If Pupil Labs has a teaching workshop, my university can pay for me to attend. Please let me know if you have one. Thanks.

papr 18 January, 2021, 16:58:43

We offer support contracts that include video support sessions. You can find them here: https://pupil-labs.com/products/support/

Could you let me know which calibration choreography you are using? You do not need to cover the complete field of view of the camera, but a bit more vertical coverage should result in a better result if the subject moves their head up or down.

user-7daa32 18 January, 2021, 16:56:25

Before learning that we can use head pose tracking to measure the distance between targets, I was using a meter rule to measure the actual distance between targets and the distance from the targets to participants. I am still looking for papers that recommend these metrics as best practices.

user-7daa32 18 January, 2021, 17:02:04

Thank you. For now, the eye movements are wholly horizontal. I am using the single marker. This is for a stimulus placed on the wall.

Chat image

papr 18 January, 2021, 17:11:19

Ok, so you are mimicking the eye movement of your experiment during the calibration. In other words, you move the calibration marker in the same way as you would move your stimulus. Do I understand that correctly?

user-7daa32 18 January, 2021, 17:47:42

Stimuli are placed horizontally in a straight line

papr 18 January, 2021, 17:48:40

ok, and how do you move your calibration marker? Do your subjects use a head rest?

user-7daa32 18 January, 2021, 17:50:14

I have it placed on the wall

Chat image

papr 18 January, 2021, 17:50:34

So you are asking your subjects to fixate on it and move their head?

user-7daa32 18 January, 2021, 17:51:52

Subjects will move their head during calibration but not during recording... We are still not sure if the head can be moved during the actual experiment. We are still in the preliminary stage. No chin rest for now, in case we need it.

papr 18 January, 2021, 17:54:54

Ok, since your subjects could move their head in relation to your stimulus, it is important that you increase the vertical calibration area. You can do so by instructing the subjects to fixate the calibration marker and to move their head outwards in a spiralling motion (more horizontal than vertical movement though due to the camera's aspect ratio).

user-7daa32 18 January, 2021, 19:22:52

The spiral motion is for calibration while the horizontal is for validation

user-7daa32 18 January, 2021, 18:01:11

Thanks

user-23a5ae 18 January, 2021, 18:16:48

Hi all, I am new to this forum. I work with an app to measure hand-eye coordination. Users are asked to trace boundaries of objects on a tablet screen while we track their gaze. Tracing is done by touch gestures or stylus. We currently use a Tobii screen-based tracker that often creates problems due to blocking by the users' hands. We are looking for a wearable tracker and are considering Pupil Core. Our current app is built on top of .NET. I would greatly appreciate any input on whether Pupil Core is suited for gaze tracking in our app and compatible with .NET. Thanks in advance.

papr 18 January, 2021, 19:29:29

In any case you would need to setup your screen as a surface/AOI, see our surface tracking documentation https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

Should you run Windows on the tablet, you might be able to run Pupil Capture (our software for realtime gaze estimation) on the tablet itself (assumes that you can connect the headset to the tablet via USB). This will likely result in limited sampling rates due to the limited resources of the tablet. Instead, you can connect the headset to a dedicated computer running Pupil Capture and receive the realtime gaze via our Network API https://docs.pupil-labs.com/developer/core/network-api/

papr 18 January, 2021, 19:24:07

What operating system are you running on the tablet?

user-23a5ae 18 January, 2021, 20:09:47

Thanks. This is great news. The tablet is using windows OS.

user-7daa32 18 January, 2021, 20:05:32

How can one stop an annotation from showing in Pupil Player twice? For example, the "start" annotation shows twice or three times.

papr 18 January, 2021, 20:06:10

This means that you are sending it multiple times. Is that intended?

user-7daa32 18 January, 2021, 20:06:35

Nope

user-7daa32 18 January, 2021, 20:07:03

Not even displaying once since I have been using it

papr 18 January, 2021, 20:08:38

I misunderstood. A notification shows every time you seek to the frame that this annotation is closest to.

user-7daa32 18 January, 2021, 20:08:40

I need to record the time at each annotation

papr 18 January, 2021, 20:09:30

Each annotation should have a timestamp already. Either the one from pressing the button or from the sending source in case of remote annotations.

papr 18 January, 2021, 20:09:10

I thought you saw the same annotation multiple times in Capture.

user-7daa32 18 January, 2021, 20:09:28

Player, not Capture.

user-7daa32 18 January, 2021, 20:10:29

I expect to get just one annotation for one target after viewing it. Each target should have just one annotation notification.

papr 18 January, 2021, 20:11:43

Could you share an example recording with [email removed] please? Ideally, also add a screen recording of the issue such that it is clear what the issue is.

user-7daa32 18 January, 2021, 20:13:15

I'll try to create a link. The recording files are usually very large.

papr 18 January, 2021, 20:14:32

The recording does not need to be long or super accurate. For sharing, I have made good experiences with https://wetransfer.com/ or Google Drive.

user-7daa32 18 January, 2021, 20:25:15

I'm trying WeTransfer

papr 18 January, 2021, 20:29:35

This seems to have worked well

user-7daa32 18 January, 2021, 20:39:17

yes. I am sending one to see if that recording is okay.

papr 18 January, 2021, 20:41:55

How do you send annotations to Capture during a recording? It looks like you are sending multiple ones in quick succession which is why Player displays multiple ones. Also your scene video frame rate is super low. Are you running something else on the computer or a custom plugin?

user-7daa32 18 January, 2021, 20:47:05

Sent by pressing the keyboard shortcut. That is an almost one-minute video for one block of the experiment. Maybe I should do all the blocks in one video, but then I would need to set up again and recalibrate. I am not running anything else.

papr 18 January, 2021, 20:49:19

Ah, we talked about your laptop being too slow a few days back, I remember now.

papr 18 January, 2021, 20:48:18

So, you have Capture focused and use the built-in keyboard shortcuts?

user-7daa32 18 January, 2021, 20:49:36

I looked at the wall. I have the screen covered and my fingers on the shortcut keys

papr 18 January, 2021, 20:50:42

Well, somehow, hitting the key triggers multiple events. You are not holding the button down for multiple seconds, are you?

user-7daa32 18 January, 2021, 20:52:52

Maybe I held it down for too long, I don't know. As for the slowness of the laptop, does that affect it too? I have yet to get a new one

papr 18 January, 2021, 20:53:30

It should not affect the button presses. But we have never tested the UI at such slow frame rates.

papr 18 January, 2021, 20:54:19

Nonetheless, you can simply post-process the annotations.csv file and remove duplicates after the export.
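For example, a minimal pandas sketch of that post-processing step, assuming the exported annotations.csv contains label and timestamp columns and that repeats of the same key press arrive within a fraction of a second:

```python
import pandas as pd

df = pd.read_csv("exports/000/annotations.csv").sort_values("timestamp")
# treat annotations with the same label that arrive within 0.5 s of the
# previous one as accidental repeats of a single key press
dt = df.groupby("label")["timestamp"].diff()
deduplicated = df[dt.isna() | (dt > 0.5)]
deduplicated.to_csv("annotations_deduplicated.csv", index=False)
```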

user-7daa32 18 January, 2021, 20:55:25

Okay. The first annotation should be the one to use

user-7daa32 18 January, 2021, 20:56:02

Thanks

user-562b5c 18 January, 2021, 23:00:45

Hi all, has anyone had experience using Pupil Core on participants wearing prescription glasses? Did you ask them to wear the Core on top of their prescription glasses? It doesn't seem stable and slips often. Or is there another trick? Thanks.

user-83ea3f 19 January, 2021, 01:55:39

Hi, I have a question. 1. Is there any way to synchronize the gaze-on-surface timestamps with my local computer time? 2. What is the difference between x_norm and x_scaled?

papr 19 January, 2021, 17:42:17

All timestamps are in pupil time (https://docs.pupil-labs.com/core/terminology/#timing). See this tutorial on how to shift pupil time to unix epoch post-hoc: https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
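A condensed sketch of the approach from that tutorial, assuming the recording's info.player.json contains the start_time_system_s and start_time_synced_s fields and that the surface export has a gaze_timestamp column (replace <name> with your surface name; exact file and column names may differ slightly between versions):

```python
import json
import pandas as pd

with open("recording/info.player.json") as fh:
    info = json.load(fh)

# offset between the wall clock (Unix epoch) and Pupil time at recording start
offset = info["start_time_system_s"] - info["start_time_synced_s"]

gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_<name>.csv")
gaze["unix_timestamp"] = gaze["gaze_timestamp"] + offset
```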

user-a98526 19 January, 2021, 01:58:11

Hi @papr, I can get eye images and world images through recv_world_video_frames_with_visualization.py, but because there is no timestamp, I can't match these images with the gaze data. Is there a way to solve this problem?

papr 19 January, 2021, 17:43:19

the msg object has a timestamp field which you can use for that purpose. msg["timestamp"]
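For example, a sketch of reading that field from the frame subscription (assuming the Frame Publisher plugin is active in Capture, as the helper script requires):

```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("frame.world")

parts = sub.recv_multipart()  # [topic, metadata, raw image bytes, ...]
msg = msgpack.loads(parts[1], raw=False)
print(msg["timestamp"])  # Pupil time, same clock as the gaze 'timestamp' values
```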

user-f87a58 19 January, 2021, 09:37:02

Hello everyone, apologies for posting here a potentially off topic question - could you recommend a market place where I could sell my Pupil Headset?

user-234ee8 19 January, 2021, 17:36:15

Hello, is the link to the new Core update that was to be released on 1/18 available yet?

papr 19 January, 2021, 17:44:10

Our latest release can be found here https://github.com/pupil-labs/pupil/releases/tag/v3.0

What feature/bugfix are you looking for?

user-0ee84d 19 January, 2021, 17:41:48

After going through the documentation, it is clear that the optical centre of the scene camera is the origin. How do I map the origin of Pupil Core's world space to the model space? @papr

papr 19 January, 2021, 17:45:19

Could you please remind me what "model space" is in your case?

user-0ee84d 19 January, 2021, 17:45:29

Let’s say a 3d model

papr 19 January, 2021, 17:46:22

A 3d model in which context? A 3d model of the environment in which the user is moving? Are you talking about the head pose tracker?

user-0ee84d 19 January, 2021, 17:48:21

I would like to map the origin of the gaze_3d in the world space to model space with reference to the 3d model of the environment in which the user is moving

user-0ee84d 19 January, 2021, 17:46:41

3d model of the environment in which the user is moving .

papr 19 January, 2021, 17:50:02

And just to clarify, were you planning on using a VR environment or the head pose tracker feature?

user-0ee84d 19 January, 2021, 17:50:54

I’m using the head pose tracker feature in pupil core which basically gives me a gaze_3d_ and norm_position_2d

papr 19 January, 2021, 17:52:46

This means, in your export you get head_pose_tracker_poses.csv, which includes the head pose for each world frame. You can use the pose to transform the gaze_point_3d from gaze_positions.csv (contains an index column to match world frames) into 3d model coordinates.

user-0ee84d 19 January, 2021, 17:54:38

Thanks!

user-0ee84d 19 January, 2021, 17:56:44

I was manually calculating the head positions using solvePnP, which basically gives me the camera position (homogeneous matrix). Could you please give me some additional information on how I can use this to transform the gaze_point_3d? So I have the camera position with respect to the 3d model's origin and the gaze position from Pupil Core @papr

papr 19 January, 2021, 17:58:53

I do not know the necessary math to do the transformation, unfortunately. I will relay your question to a colleague who should be able to respond to it 🙂
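For reference, a rough sketch of the standard rigid-body inversion (not an official Pupil Labs implementation), assuming solvePnP returned the pose of the 3d model expressed in scene camera coordinates and that gaze_point_3d is in scene camera coordinates (mm); all values below are made-up examples:

```python
import numpy as np
import cv2

rvec = np.array([0.1, -0.2, 0.05])             # example rotation vector from solvePnP
tvec = np.array([10.0, 5.0, 200.0])            # example translation vector from solvePnP
gaze_point_3d = np.array([12.0, -3.0, 450.0])  # example value from gaze_positions.csv

# solvePnP yields X_cam = R @ X_model + t, so invert the transform:
R, _ = cv2.Rodrigues(rvec)
gaze_in_model = R.T @ (gaze_point_3d - tvec)
print(gaze_in_model)
```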

user-0ee84d 19 January, 2021, 17:59:35

Thank you! I’m looking forward to knowing it! 😊

user-234ee8 19 January, 2021, 18:07:50

I was told there would be a 3.1 release on 1/18 for Big Sur compatibility?

papr 19 January, 2021, 18:20:37

I was not aware that we announced a fixed release date for that. We are still working on Big Sur compatibility for the bundled application. A few hours ago, we had the first success to run Capture from source on Big Sur using the develop branch https://github.com/pupil-labs/pupil/tree/develop

user-10fa94 19 January, 2021, 18:55:03

hi! I am using the Pupil Core to extract pupil diameter based on the 3d model - how much do eyelashes affect model stability, and is there a way to help correct for errors caused by this? I struggle to get good readings on my own eyes (confidence mostly remains below 0.7) but am able to get stable readings on a colleague's eyes (confidence consistently above 0.8). Looking at the eye videos, the main difference I see is the increased presence of eyelashes in my video vs. my colleague's. Thank you in advance for your help 🙂

user-562b5c 19 January, 2021, 21:13:01

What a coincidence!! Just 45 minutes ago I was doing a pilot and faced a similar if not the same issue. The other pilots went perfectly, with confidence in the 0.9s, but this participant had really long upper and lower eyelashes, and after trying to adjust the eye tracking camera from all possible angles it was impossible to get confidence above 0.5. The participant was writing something on paper, so they were looking down in that scenario. From the eye cam I could see that the upper and lower eyelashes were mostly touching each other, which is why the eye tracker couldn't focus on the pupil. While looking forward or looking at a PC screen sitting on a chair, the confidence level came back to the 0.9s. I would be grateful if someone could help in this matter.

nmt 20 January, 2021, 10:06:55

@user-10fa94 @user-562b5c Eye lashes are one of several physiological parameters that can negatively affect pupil detection (necessary for core's eye modelling). Put simply, they can obscure the view of the pupil from the eye camera (e.g. as @user-562b5c described when the eye lashes almost touch each other). In this case, the confidence values can decrease leading to model instability. This affects all eye trackers that employ pupil detection, but with Core, you can make use of the adjustable eye cameras, e.g. by positioning the cameras more 'underneath' the eyes, so they effectively look up into the eyes. The orange camera mount extenders might come in useful. We can also recommend wearers to avoid eye makeup.

nmt 20 January, 2021, 10:12:59

@user-1b930f @user-562b5c It is also worthwhile spending time prior to data collection to ensure you get good pupil detection at all ranges of eye movements you will be assessing. E.g. if the subject will be writing, check your pupil detection during writing and adjust the eye cameras as necessary.

user-562b5c 20 January, 2021, 20:58:22

[email removed]

user-562b5c 20 January, 2021, 20:59:20

@nmt also do you have any tips about using the Core on participants that wear prescription glasses

nmt 21 January, 2021, 12:04:29

Other members of the community and our team have put on the Pupil Core headset first, then eye glasses on top. You can adjust the eye cameras such that they capture the eye region from below the glasses frames. This is not an ideal condition, but does work for many people. Core will also work with contact lenses! So these are a preferable option for vision correction when using Core.

user-be53ea 21 January, 2021, 08:47:17

Hello, I have a question. Do you use the camera-eye distance as an input to calculate theta and phi in the 3d eye model? If the camera is the origin and it faces the positive z direction, I thought that the distance between the camera and the eye would be needed to convert a 2d camera image point to a 3d polar coordinate point.

papr 21 January, 2021, 09:46:28

For phi and theta, the center of the eye ball is the origin. They represent the eye ball orientation, independent of its location within the scene camera coordinate system.

user-a334dc 21 January, 2021, 09:10:13

Hi everyone, is the Pupil Core eye tracker compatible with .NET? I am working on an application that allows drawing the gazed locations on the screen with eye tracking. From what I saw on the developer site, the packages are mostly Python-related. Could someone please help me with this?

user-0dec4c 21 January, 2021, 09:25:53

Hey, I tried recording data using LSL and Pupil Capture 3.0.7 when the following error occurs:

"World: Error extracting gaze sample: 0", which shoulf stem from the push_gaze_sample function in the pupil_LSL_relay plugin. Any ideas how to solve the issue?

The LabRecorder does show a stream, and when I start the recording after calibration, the error occurs.

Best Julius

papr 21 January, 2021, 09:50:44

Thanks for letting us know. We will push a fix latest by next week.

The issue is related to https://github.com/pupil-labs/pupil/pull/2068/files#diff-1db06b7632c7441082988af78c11f0d28b6371a07779e1db309fef09c4622d04L296-L297

papr 21 January, 2021, 09:49:24

To access data in real time, it is best to use our network api https://docs.pupil-labs.com/developer/core/network-api/

Even though our examples use Python, the API can be used by any language that has bindings for zeromq and msgpack.

user-a334dc 21 January, 2021, 22:34:01

Thank you.

user-0dec4c 21 January, 2021, 11:04:10

Perfect, thanks

user-562b5c 21 January, 2021, 12:24:09

thanks @nmt

user-a334dc 22 January, 2021, 00:01:49

Is there a possibility to draw on the screen while gazing with the eye tracker? For example, one should be able to look at a particular picture and draw or write something on the screen?

wrp 22 January, 2021, 04:52:36

@user-a334dc you could draw with your mouse while looking at a picture on screen with something like this: https://github.com/pupil-labs/pupil-helpers/blob/master/python/mouse_control.py

user-a334dc 22 January, 2021, 15:06:33

thank you

user-0dec4c 22 January, 2021, 10:56:35

Hey, the new LSL-plugin works with Capture 3.0.7. Thanks for the fix

papr 22 January, 2021, 13:40:21

Great, thanks for testing 👍

user-0dec4c 22 January, 2021, 10:57:14

Is the new plugin backward compatible with older Capture versions?

papr 22 January, 2021, 13:40:00

It should be

user-a98526 22 January, 2021, 11:37:57

hi @papr, I want to use fixation information in my customized plugin, so I used the following code:

```python
if ('frame' in events) & ("depth_frame" in events) & ('gaze' in events):
    print('events::::::::::::::::::::::,', events)
    frame = events['frame']
    depth = events['depth_frame']
    self.depth_img = depth.depth
    if ('fixations' in events):
        fixations = events['fixations']
        print('fixations*********************:', fixations)
```

But the fixations displayed are an empty list, like this: fixations***: []. My plugin order is 0.5, which is greater than the fixation detector plugin's order of 0.19.

papr 22 January, 2021, 13:41:29

Do you see fixations in the world video preview? They are visualized as yellow circles. If not, did you calibrate? Fixations require gaze and gaze is only available after calibration.

user-a98526 22 January, 2021, 13:49:57

I see fixations in the world video preview.

papr 22 January, 2021, 13:54:06

Does the fixation ID go up as well? (Displayed next to the circle)

user-a98526 22 January, 2021, 13:54:15

Chat image

user-a98526 22 January, 2021, 13:54:19

yes

papr 22 January, 2021, 13:55:42

I can't see an immediate reason from your or our own code why this does not work. We will have to investigate this.

papr 22 January, 2021, 13:57:40

Could you please try reducing your plugin order to 0.191?

user-a98526 22 January, 2021, 13:59:01

I got the fixation information. I think this problem happens because I use the duration to compute things; if a fixation is empty, it causes the program to fail, so I thought I didn't get the gaze information. Just now I got non-empty fixation information, and then got empty fixation information again, which made me understand the error in my program.

user-a98526 22 January, 2021, 13:59:41

just like this

Chat image

papr 22 January, 2021, 14:03:22

Actually I am not sure what I am seeing. It looks like you remove fixations from the list and then the list is empty. I see the printout of a fixation in the screenshot, just before you print the list. This means that your code does more than in your example.

user-a98526 22 January, 2021, 14:00:39

Is there any way to filter these empty fixations?

user-a98526 22 January, 2021, 14:05:19

```python
confidence = gp['confidence']
if ('fixations' in events):
    fixations = events['fixations']
    print('fixations*********************:', fixations)
    time = fixations['duration']
```

user-a98526 22 January, 2021, 14:06:52

This is part of my code. I mean, if there is no gaze, the fixation I get is an empty list, which will cause the code to fail.

papr 22 January, 2021, 14:07:49

Remember fixations is a list in this example with potentially multiple fixations. You are accessing it as if it was not a list but a dictionary. Try time=fixations[0]["duration"]. This will only work if there is an actual fixation though
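A minimal sketch of guarded access inside a plugin's recent_events, assuming the plugin order is higher than the fixation detector's so that the events dict has already been populated:

```python
def recent_events(self, events):
    fixations = events.get("fixations", [])
    if not fixations:
        return  # no fixation reported for this world frame
    for fixation in fixations:
        duration = fixation["duration"]
        print("fixation duration:", duration)
```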

papr 22 January, 2021, 14:09:15

Also, let's take this discussion to software-dev

user-331121 22 January, 2021, 18:09:27

Hi @papr, I was wondering if there is any way we can get a camera pose (position and orientation) for Pupil-core

user-331121 22 January, 2021, 18:11:31

@user-3cff0d

user-26fef5 22 January, 2021, 18:59:03

@user-331121 you can use the built-in head pose tracker plugin, which uses AprilTag fiducial marker detection to track the head pose.

user-ecbbea 22 January, 2021, 19:38:03

Hello, I'm getting the following error when launching Pupil Capture - can anyone help me out? It was doing this on 2.6 and I just upgraded to 3.0 and it still does it. The capture window is just blank white and doesn't respond.

Chat image

user-ecbbea 25 January, 2021, 19:59:35

just wondering if I could get some help with this when someone has some time

user-4e51e6 23 January, 2021, 18:11:37

Hi, I have a question about the PI recording format described in this document: https://docs.google.com/spreadsheets/d/1e1Xc1FoQiyf_ZHkSUnVdkVjdIanOdzP0dgJdJgt0QZg/edit#gid=254480793

user-4e51e6 23 January, 2021, 18:12:28

I was trying to read the gaze data with Python. What is the character encoding format for the gaze data in "gaze ps1.raw"? I tried standard latin-1 and utf-x. They result in decoding errors or unreadable text.

user-a334dc 24 January, 2021, 04:33:21

Hey, can you help with the price of the Pupil Core in USD, and how many days does delivery to the United States take?

papr 25 January, 2021, 10:20:40

Please contact info@pupil-labs.com in this regard.

user-10fa94 24 January, 2021, 04:56:03

I have been following https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_remote_control.py for remote control. Is there a way to run both calibration and validation remotely? I found a previous thread from last year saying it needs to be done through notifications; is this still the case, or has there been an update with some of the new releases? Thank you in advance for your help! 🙂

papr 25 January, 2021, 12:40:27

This should be still the case. If you could link the thread, I could check.
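For reference, a sketch of sending such a notification over the same Pupil Remote socket used in pupil_remote_control.py. The calibration.should_start subject starts calibration; a validation trigger would follow the same notification pattern with its own subject (check the Capture source for the exact name, which is not confirmed here):

```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

def notify(notification):
    """Send a notification dict to Pupil Remote and return Capture's reply."""
    topic = "notify." + notification["subject"]
    payload = msgpack.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()

notify({"subject": "calibration.should_start"})
```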

user-4f103b 25 January, 2021, 03:56:11

Hi, can anyone help me with "Annotation", please?

user-4f103b 25 January, 2021, 03:57:41

I am adding annotations manually in the Player software. The exported file shows a timestamp, but the duration is 0.

papr 25 January, 2021, 12:37:26

All annotations added via the Player UI have a duration of 0.0, as they are only associated with a single frame. If you want to model durations, I suggest creating two explicit annotations, one for the start and one for the end, and using them instead.
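A minimal sketch of that post-processing, assuming the export's annotations.csv has label and timestamp columns and that you used paired labels such as the hypothetical 'trial_start' / 'trial_end':

```python
import pandas as pd

df = pd.read_csv("exports/000/annotations.csv").sort_values("timestamp")
starts = df[df["label"] == "trial_start"]["timestamp"].reset_index(drop=True)
ends = df[df["label"] == "trial_end"]["timestamp"].reset_index(drop=True)
durations = ends - starts  # seconds, in Pupil time
print(durations)
```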

user-37e73e 25 January, 2021, 10:20:15

Does anyone know how to read the surface_definitions_v01 file? I'd like to add a surface by editing this file instead of clicking on tags in a frozen image.

papr 25 January, 2021, 10:21:27

See "Other Files" https://docs.pupil-labs.com/developer/core/recording-format/#other-files

user-37e73e 25 January, 2021, 10:48:24

That was really helpful. There I get a dict with reg_markers -> containing marker IDs and corner coordinates, and also registered_markers_dist -> with IDs and corner coordinates. How are these distances computed, and distances to what exactly are they?

marc 25 January, 2021, 12:42:36

@user-37e73e The dist actually stands for distorted. The locations of the marker corners are calculated both in the original distorted scene camera image as well as the undistorted scene image.

user-37e73e 25 January, 2021, 12:43:11

ah that makes way more sense ^^ Thanks a lot!

user-37e73e 25 January, 2021, 12:52:03

Is it possible to define a surface from a static picture from another, undistorted camera and save it into such a file?

user-f0127b 25 January, 2021, 14:22:53

Hello, we have this issue during the installation from source. We already saw someone getting the same issue, tried the same fix, and reinstalled everything. We followed the instructions from the pupil-labs GitHub for Windows, but we still get the same issue. Do you have any idea what the problem could be? Also, when we copy the .dll files from ffmpeg to pupil_external, there is no postproc-55.dll; is this normal?

Chat image

papr 25 January, 2021, 18:57:35

Please see this message https://discord.com/channels/285728493612957698/446977689690177536/798573545377497138

user-0ee84d 25 January, 2021, 18:55:28

@papr @marc how accurate is the gaze_3d position? After applying the camera position retrieved from solvePnP to the gaze_point_3d position, the positions fluctuate and are inaccurate. What is the tolerance level of the gaze_3d position?

papr 25 January, 2021, 18:56:45

You can measure the accuracy in degrees using the Accuracy Visualizer plugin.

user-0ee84d 25 January, 2021, 18:57:23

Thank you!

user-f0127b 25 January, 2021, 19:40:27

we already did that, sorry for taking your time

papr 25 January, 2021, 19:42:10

And postproc-55.dll was not part of download? How did you check that it is missing?

user-f0127b 25 January, 2021, 19:45:13

I don't see postproc here or anywhere in the directory, I'm sorry to annoy you

Chat image

papr 25 January, 2021, 19:49:29

Don't worry 🙂 Sorry if my question was not clear. How do you know that the file postproc-55.dll is missing? Did you read about the file when you searched for the error message[1]? Or did you use a tool to find out?

[1] Import Error: DLL load failed: ...

user-f0127b 25 January, 2021, 19:59:22

Oh, okay. No, the specified file is not in the error, but I just saw that it is missing from the folder, contrary to the manual on GitHub. Maybe the error comes from somewhere else.

user-ecbbea 25 January, 2021, 19:59:03

Just as a note: postproc-55.dll is not included in the lgpl release - only the gpl

papr 26 January, 2021, 13:12:18

@user-f0127b did you download the gpl release? It should contain the file.

papr 25 January, 2021, 19:59:44

Ah, that makes sense

user-f0127b 26 January, 2021, 13:03:32

Hey, so do you have a solution? Or do I need to give you further information?

papr 25 January, 2021, 20:02:04

mmh, I am not sure. It sounds like there is no network interface in your computer. Can you confirm that? Maybe, alternatively, there is security software blocking it?

user-ecbbea 25 January, 2021, 20:21:57

I can confirm there is a network interface, and I allowed the firewall exception that popped up when the application ran. I don't have any extraneous security software running on my PC, beyond what is baked into Windows 10.

user-ecbbea 25 January, 2021, 20:22:54

The base installation of python on this machine is 3.8.3. Is that a problem? I don't know what version Pupil Labs software runs on

papr 25 January, 2021, 20:28:34

No, this is not a problem. The firewall pop-up is expected, too. This is the line that fails: https://github.com/zeromq/pyre/blob/fa0db96ddc43fe68d922ddf945f7bda3bc064605/pyre/zhelper.py#L527

Can you share the capture.log file such that we can check if it reveals anything? Looks like one of the installed interfaces does not support unicast. But I cannot tell for sure.

if you are running from source, you could try commenting out the print statements around that line and check what information you get.

user-ecbbea 25 January, 2021, 21:34:51

here's my capture.log file in pupil_capture_settings

capture.log

papr 25 January, 2021, 21:36:03

sadly, it does not contain any information to further identify the problem 😕

user-ecbbea 25 January, 2021, 21:43:23

Hmm no worries - i'll keep poking around and see if I can figure anything out

user-f0127b 26 January, 2021, 13:13:43

I did, but unfortunately it doesn't work...

papr 26 January, 2021, 13:21:56

OK, then it is time to check what is actually missing. Please see this message https://discord.com/channels/285728493612957698/446977689690177536/796747481693028443 and apply it to the detector_2d file

user-f0127b 26 January, 2021, 14:22:30

@papr I can see that the file opencv_world320.dll is reported as missing for the app, but I can find it in the .package_data directory. What should I do?

user-0ee84d 27 January, 2021, 01:08:48

Does Pupil Core have a built-in IMU? I'm just curious to access raw sensor data if it does...

nmt 27 January, 2021, 09:16:04

Pupil Core doesn't have an onboard IMU

papr 27 January, 2021, 09:27:16

I checked. It definitely comes with the wheel. Please run pip install --only-binary=:all: pupil-detectors --force-reinstall and check the output for errors

user-f0127b 27 January, 2021, 09:42:26

I don't see errors in the reinstallation... I still get the same errors when I try to launch it

Chat image

papr 27 January, 2021, 09:48:49

ok, great. Could you please check the site-packages folder again and check if .package_data includes the dll file now?

papr 27 January, 2021, 09:56:02

@user-f0127b I think it is normal that Dependencies.exe marks the opencv dll as missing, as we add the .package_data directory dynamically to the PATH (https://github.com/pupil-labs/pupil-detectors/blob/master/src/pupil_detectors/__init__.py). But the directory must contain the dll file.

Can you check Dependencies.exe again if anything else is missing?

user-f0127b 27 January, 2021, 10:00:22

yes it does

papr 27 January, 2021, 10:00:59

ok, perfect

user-f0127b 27 January, 2021, 10:03:27

I started DependenciesGUI.exe and loaded the detector_2d.cp36-win_amd64.pyd file; opencv_world is still marked as missing for it

papr 27 January, 2021, 10:04:12

Yes, this is expected. See my message above. Is there anything else marked as missing?

user-f0127b 27 January, 2021, 10:04:29

no, it's the only thing missing

papr 27 January, 2021, 10:05:01

ok, can you open the opencv dll in the dependencies application? Maybe one of its dependencies is missing.

user-f0127b 27 January, 2021, 10:10:03

It says that imports are missing in combase.dll, and that api-ms-win-core-winrt-string-11-1-0.dll, api-ms-win-core-wow64-11-1-0.dll, and api-ms-win-core-wow64-11-1-1.dll are missing in OLEAUT32.dll

papr 27 January, 2021, 10:12:18

These are all Windows dll files. They should be present.

user-f0127b 27 January, 2021, 10:28:22

I just updated Windows; it doesn't work. I'll try to find a solution and will tell you if I find one. Thanks a lot for your help!

user-be53ea 27 January, 2021, 13:31:19

Are 'phi' and 'theta' the same as the spherical coordinates of the eyeball shown here? Is the center the center of the eyeball? I would also like to know what the 'z' axis is and how the 'x' and 'y' axes are defined.

Chat image

nmt 28 January, 2021, 14:47:26

'phi' and 'theta' indeed represent the eyeball orientation in spherical coordinates.

The origin corresponds to the centre of the eyeball

The coordinate system of the eye model is the eye camera coordinate system: https://docs.pupil-labs.com/core/terminology/#coordinate-system

x: horizontal, y: vertical, z: optical axis. So if you looked directly at the eye camera, you would get phi=-π/2 and theta=π/2

user-98789c 28 January, 2021, 11:23:34

Is Pupil Core compatible with the open source software "OpenSesame"?

user-0ee84d 28 January, 2021, 13:11:53

@papr is Pupil Player compatible with the Network API? That is, will I be able to retrieve the images and gaze data through the Network API from Pupil Player while it plays back my offline recordings?

papr 28 January, 2021, 14:52:05

No, that is not possible.

papr 28 January, 2021, 14:52:49

There is no official support from our side. It is possible that community members have built scripts/tools that allow you to import Pupil Core data.

user-0ee84d 29 January, 2021, 00:05:52

I have tried to search but I couldn’t find one. Any examples would be highly appreciated

user-a09f5d 28 January, 2021, 22:47:43

Hi... I have a question about timestamps. I will be using the pupil core to record eye movement during an experiment in which subjects are asked to fixate on a series of visual stimuli generated by python (PsychoPy). For analysis I need to be able to precisely match the time of stimulus presentation with the correct frame(s) from the pupil core recording.

How does pupil capture save the timestamp that is included in all of the exported data files? Do you have any recommendations for the best/most accurate way to save the timestamp within my experiment's python code so that I can easily match it with the corresponding frame from the eye tracker recording?

user-ac85c2 28 January, 2021, 23:23:18

Hello, when I try to calibrate, at the end it shows the message: plugin Gazer3D failed to initialize

papr 29 January, 2021, 10:30:56

There are several possible causes for that. Could you provide a bit more information?

user-be53ea 29 January, 2021, 02:05:20

If the z-axis is the optical axis, why is theta=π/2? Shouldn't it be zero? If you could add a little sketch of the coordinate system to the picture, I would really appreciate it!!

nmt 29 January, 2021, 11:17:05

Remember that the z axis of the camera points along the viewing direction of the camera, and the coordinate system of the eyeball model is the eye camera coordinate system.

Now think of the viewing direction of the eyeball as a vector in cartesian coordinates. When looking directly into the camera, the vector would be close to (x=0, y=0, z=-1)

If you transform this vector to spherical coordinates, you get:

phi: -1.5707963267948966, theta: 1.5707963267948966

user-da621d 29 January, 2021, 03:09:42

Is there a C++ version of the code for receiving pupil positions from the hardware?

user-da621d 29 January, 2021, 05:15:50

I use C++ to receive data from the eye tracking hardware and got this data, but how do I decode it?

Chat image

papr 29 January, 2021, 10:17:36

You will need to decode the message content with msgpack. Check https://msgpack.org/ for c++ bindings.

user-da621d 29 January, 2021, 05:36:31

Hi, thank you, is anybody here😀

user-da621d 29 January, 2021, 10:16:29

hi😩

user-da621d 29 January, 2021, 10:28:18

thank you I am about to try it

user-da621d 29 January, 2021, 10:33:37

thank you very much, I solved it

user-da621d 29 January, 2021, 10:33:46

😘

user-be53ea 29 January, 2021, 10:52:39

May I get a detailed (or rough) doc of the spherical coordinate convention? I'm really sorry!

papr 29 January, 2021, 10:55:49

This is how we convert cartesian to spherical coordinates https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/geometry/utilities.py#L14-L19
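In short, a sketch reproducing the conversion from the linked file (the coordinate system is the eye camera's: x horizontal, y vertical, z along the optical axis):

```python
import numpy as np

def cart2sph(x):
    # cartesian (x, y, z) -> (phi, theta), following the linked pye3d utility
    phi = np.arctan2(x[2], x[0])
    theta = np.arccos(x[1] / np.linalg.norm(x))
    return phi, theta

# looking straight into the camera, i.e. along -z:
print(cart2sph(np.array([0.0, 0.0, -1.0])))  # (-pi/2, pi/2)
```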

user-0ee84d 29 January, 2021, 11:07:45

@papr is there any open source implementation to parse saved pupil records? I couldn’t find one

papr 29 January, 2021, 11:09:00

If you are referring to the intermediate recording files, see our docs for details: https://docs.pupil-labs.com/developer/core/recording-format/
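For example, a minimal sketch for reading the intermediate .pldata files, assuming the layout described in the recording-format docs (a msgpack stream of (topic, serialized payload) pairs, with timestamps stored in a matching _timestamps.npy file):

```python
import msgpack
import numpy as np

def load_pldata(path):
    data = []
    with open(path, "rb") as fh:
        for topic, payload in msgpack.Unpacker(fh, raw=False, use_list=False):
            data.append((topic, msgpack.unpackb(payload, raw=False)))
    return data

gaze = load_pldata("recording/gaze.pldata")
timestamps = np.load("recording/gaze_timestamps.npy")
print(gaze[0][0], timestamps[0])
```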

user-da621d 29 January, 2021, 12:48:45

How do I convert the msgpack::object to a string or JSON? I want to extract the pupil position.

papr 29 January, 2021, 12:51:12

Please consider the c++ msgpack bindings' documentation on how to access fields within the msgpack::object.

user-da621d 29 January, 2021, 12:49:33

msgpack::object is really similar to JSON, but I can't decode it.

papr 29 January, 2021, 12:53:01

The data that you receive depends on your subscription. In this case, it is a log message. If you only want to receive pupil data, subscribe to pupil

user-da621d 29 January, 2021, 12:56:36

This is the data when I only subscribe to pupil

Chat image

user-da621d 29 January, 2021, 14:06:45

I still can't decode this msgpack::object

papr 29 January, 2021, 14:14:17

Please understand that I do not have any c++ experience. I can answer questions about the content of that message but not how to extract it in c++.

user-da621d 29 January, 2021, 14:07:04

could you give me some methods

user-da621d 29 January, 2021, 14:11:28

I want to convert the msgpack::object to a vector, but it fails

Chat image

papr 29 January, 2021, 14:12:23

See this tutorial https://github.com/msgpack/msgpack-c/wiki/v2_0_cpp_unpacker

user-da621d 29 January, 2021, 14:17:11

thank you very much 😋

user-be53ea 29 January, 2021, 14:54:28

So, like this? I thought theta should be measured from the z axis, but from your explanation it seems like this is what was intended..!

Chat image

nmt 02 February, 2021, 12:16:30

Sorry for the delay. This looks correct. Theta is the polar angle measured from the (vertical) y axis to the eyeball direction vector, and phi is the azimuthal angle of the vector's projection onto the x-z plane, measured from the x axis. This is consistent with phi=-π/2 and theta=π/2 when looking straight into the camera along -z.

user-7daa32 29 January, 2021, 18:40:05

Hi

I would like to know the importance of "validation" after calibration. How would you know whether you got better angular accuracy and precision?

I started using version 3.0.7 yesterday and noticed changes in the algorithm mode and debug visualizer. I saw two colored lines (yellow and maybe red) that were not in the previous versions. The debug visualizer now shows a larger image of the pupil, unlike before. Do you have a range of values for good accuracy and precision?

nmt 02 February, 2021, 12:27:21

Required accuracy and precision will depend on your use case. For example, in a highly controlled study where a subject is reading very small text, the limits of each region of interest (ROI) are small. In this case you would want the highest accuracy possible, close to the physiological limits. On the other hand, for a less controlled investigation involving a subject gazing at objects in a room, the limits around each ROI are bigger. In this case, less accuracy is needed. Check out the best practice docs here: https://docs.pupil-labs.com/core/best-practices/#best-practices. In particular look under the 'Choose the Right Gaze Mapping Pipeline' heading.

user-7daa32 29 January, 2021, 18:41:56

Another question, please. What are the best practices for setting the maximum dispersion and minimum duration values for the fixation detector?

user-0ee84d 31 January, 2021, 14:57:30

@papr apart from using markers, is there any other way of calibrating if I have the image points and the object points that are in the line of sight of the scene camera?

papr 01 February, 2021, 09:10:00

I am not sure if I understand the question. Could you give an explicit example?

End of January archive