core


user-b8f52e 01 December, 2020, 11:24:55

Hi there, is there any chance to make the Network API plugin available for Player? We want to make recordings and then connect via ZMQ to the playback.

user-b8f52e 01 December, 2020, 13:45:41

@papr Sorry for being impatient, but do you have a quick answer on that?

user-594d92 01 December, 2020, 12:03:25

Hi @papr, the gaze point coordinates from the first picture show a large error when drawn onto the second picture. Is this because the screen captured by the plug-in (Surface Tracker) is not a rectangle?

Chat image

user-594d92 01 December, 2020, 12:03:32

Chat image

papr 01 December, 2020, 12:08:09

@user-594d92 could you please specify what you have visualized in the second picture?

user-594d92 01 December, 2020, 13:35:17

I want to use the coordinates of the gaze points on the first image and then draw them on the second image. As you can see, the gaze point is on the cup, but the point drawn by the coordinates is away from the cup.

papr 01 December, 2020, 14:05:37

@user-b8f52e This is currently not planned and I am not able to assess how much work that would be at the moment.

user-b8f52e 01 December, 2020, 14:08:19

thank you

papr 01 December, 2020, 14:07:09

@user-594d92 are you drawing all mapped gaze points? Or only a few? What do you use as your data source?

user-594d92 01 December, 2020, 14:19:49

Sure, I drew all mapped gaze points. The data is from a file, with the following path: C:\Recordings\2020.11.19\004\exports\001\surfaces\gaze_positions_on_surface_Surface 2\

papr 01 December, 2020, 14:30:15

@user-594d92 This was the information I was looking for. Thank you. This is the correct file to use. Would you be able to share this recording with [email removed]? From the pictures alone I am not able to tell what is going wrong.

In the meantime, could you please check if this tutorial works for you? https://github.com/pupil-labs/pupil-tutorials/blob/master/02_load_exported_surfaces_and_visualize_aggregate_heatmap.ipynb You should be able to replace the cover image with your own.

user-594d92 03 December, 2020, 14:25:45

https://drive.google.com/file/d/1LNDB0FpyU95y3OrALct6kYhC9bJUKX6b/view?usp=sharing I have uploaded the recording. What I want to do is to draw the mapped gaze points on the background picture, but I don't know why there is a large error in the drawing of the second picture. Thank you very much!

user-594d92 03 December, 2020, 14:04:53

@papr Thanks for your advice, but I don't know how to share the recording video. I know I can share the exported data here.

user-74c497 01 December, 2020, 15:10:43

Hello the community, I have an issue with my eye tracker... One of the cameras is disconnected from the device. I tried changing the resolution and switching the cameras, but I still have the problem. It reconnects when I touch the cables 😬 maybe it's a mechanical issue? Thanks for your advice.

Chat image

papr 01 December, 2020, 15:32:04

@user-74c497 Please contact info@pupil-labs.com in this regard

user-a09f5d 01 December, 2020, 20:26:14

Hi

Quick question about the eye cameras on the Core headset. What is the wavelength band used by the infrared cameras? I want to try recording eye movements through LCD shutter glasses, but before buying any glasses I wanted to check that the glasses transmit the IR light from the eye cameras.

Thanks!

user-f1866e 01 December, 2020, 21:23:28

Is there any way to use a Raspberry Pi with Pupil Core, or to use a Raspberry Pi instead of a cellphone?

user-d444bc 01 December, 2020, 23:34:40

Hi @papr, I have cloned pupil-2.6 and tried to execute run_capture.bat with the Pupil Core headset, but the world process shuts down due to an ERROR as shown in the picture. Do you have any hint about this error?

Chat image

user-a98526 02 December, 2020, 03:25:07

@papr Hi, a version error occurred when I ran Pupil 2.6 from source. I configured the environment according to "Dependencies for Windows", but the page "Download FFMPEG v4.0 Windows shared binaries" shows Not Found, so I used the .dll files from the previous version (2.1).

user-a98526 02 December, 2020, 03:26:40

(pupil36) E:\pupil\pupil2.6\pupil\pupil_src>python main.py capture
world - [INFO] numexpr.utils: Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
world - [INFO] numexpr.utils: NumExpr defaulting to 8 threads.
world - [INFO] launchables.world: Application Version: 2.6.19
world - [INFO] launchables.world: System Info: User: YB, Platform: Windows, Machine: MSI, Release: 10, Version: 10.0.18362
world - [ERROR] pupil_detector_plugins.detector_3d_plugin: This version of Pupil requires pupil_detectors >= 1.1.1. You are running with pupil_detectors == 1.1.0. Please upgrade to a newer version!
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "E:\pupil\pupil2.6\pupil\pupil_src\launchables\world.py", line 164, in world
    from pupil_detector_plugins.detector_base_plugin import PupilDetectorPlugin
  File "E:\pupil\pupil2.6\pupil\pupil_src\shared_modules\pupil_detector_plugins\__init__.py", line 16, in <module>
    from .detector_3d_plugin import Detector3DPlugin
  File "E:\pupil\pupil2.6\pupil\pupil_src\shared_modules\pupil_detector_plugins\detector_3d_plugin.py", line 42, in <module>
    raise RuntimeError(msg)
RuntimeError: This version of Pupil requires pupil_detectors >= 1.1.1. You are running with pupil_detectors == 1.1.0. Please upgrade to a newer version!

world - [INFO] launchables.world: Process shutting down.

user-2ab393 02 December, 2020, 09:27:14

Hello @papr After opening the "Surface Tracker" plugin, how do I get surface data in real time?

papr 02 December, 2020, 09:30:04

@user-a98526

RuntimeError: This version of Pupil requires pupil_detectors >= 1.1.1. You are running with pupil_detectors == 1.1.0. Please upgrade to a newer version!

You can do this with: pip install --upgrade pupil-detectors. Please follow these instructions and let us know if you continue having issues.

I can also recommend running the bundle instead of running from source. This will avoid any of these setup issues.

papr 02 December, 2020, 09:32:01

@user-2ab393 Are you familiar with the network api? https://docs.pupil-labs.com/developer/core/network-api/ This example uses it to receive the surface-mapped gaze in realtime: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py
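
A minimal sketch of the same idea in Python, assuming Pupil Capture runs locally with Pupil Remote on its default port 50020 and that a surface has been defined in the Surface Tracker plugin:

import zmq
import msgpack

# Ask Pupil Remote for the SUB port of the IPC backbone
ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

# Subscribe to all surface events ("surfaces.<name>")
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("surfaces.")

while True:
    topic, payload = sub.recv_multipart()
    msg = msgpack.loads(payload)
    # Each surface datum carries the gaze mapped onto that surface
    for gaze in msg.get("gaze_on_surfaces", []):
        print(topic.decode(), gaze["norm_pos"], gaze["confidence"])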

papr 02 December, 2020, 09:34:17

@user-d444bc Which Python version are you using?

Did you know that you can run from bundle and do nearly everything that you can do with running from source? Similarly to @user-a98526, you could avoid these type of setup issues. You can download it here: https://github.com/pupil-labs/pupil/releases/latest#user-content-downloads

user-d444bc 02 December, 2020, 12:52:31

I am using Python 3.6.0, and I am just trying to add some functions to test my experiments.

user-f195c6 02 December, 2020, 10:38:18

Hi everyone! I'm doing a study which involves Pupil Core and PsychoPy. I want to match both data streams so I can analyze eye movements according to specific stimuli. However, it has been a bit complicated for me to convert their timestamps to the same format... Also, I added a plugin in Pupil Player to visualize the recording time and noticed that it doesn't match the time in the info.player file... What do these times refer to?

papr 02 December, 2020, 10:39:50

@user-f195c6 I guess the first resource to look at would be https://docs.pupil-labs.com/core/terminology/#timing

papr 02 December, 2020, 10:40:18

@user-f195c6 Afterward, I would recommend having a look at this: https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb

user-f195c6 02 December, 2020, 11:31:12

Thank you very much!!! 🙏

user-19f337 02 December, 2020, 11:15:38

Hello, I've got a problem. I recently changed my fisheye camera to the normal camera provided. I also recalibrated the world intrinsics, as described on the Pupil Labs website.

My problem here is that I have this recording with surfaces defined as in my screenshot. You can clearly see that the gaze and fixations are inside the surface. But in the exported csv file, almost all of my data has values bigger and smaller than expected. I'll send you the corresponding export file so you can check what I mean. But I don't really know where it comes from, and I didn't have this problem using the fisheye camera. I'm copy-pasting the surface definitions from other files which use the same layout. I'm using Pupil Player 2.6 and I can give more information if required. It happens on other recordings using the same surface definitions too. Do you have an idea where this problem could come from?

Chat image

user-19f337 02 December, 2020, 11:16:06

Chat image

user-19f337 02 December, 2020, 11:16:25

gaze_positions_on_surface_RB06_BouleBill_15_P39.csv

papr 02 December, 2020, 11:24:39

@user-19f337 I have two questions: Did you record the recording with the new intrinsics? Also, did you define the surface after changing the intrinsics or before?

papr 02 December, 2020, 11:30:03

@user-19f337 If you want, I can have a more detailed look. For that, please share the recording with [email removed]

user-19f337 02 December, 2020, 11:36:40

I recorded using the new intrinsics, but the surface definition is a copy-paste from another recording which maybe didn't have the correct intrinsics when defined. I will redefine the surfaces and update you when done. 🙂

user-19f337 02 December, 2020, 15:26:23

Works fine now, thanks!

user-1d05f7 02 December, 2020, 12:27:08

Hi community! I am new to Pupil Core and am using two Core headsets in a social interaction experiment with a mother and a baby. I am having trouble recording both participants simultaneously in Pupil Capture. Could anyone help me with this?

papr 02 December, 2020, 12:45:34

@user-1d05f7 The recommended setup would be to use two separate computers, one for each headset. Additionally, you should activate the Time Sync and Pupil Groups plugins. This allows you to record synchronized recordings.

user-1d05f7 02 December, 2020, 12:46:51

if it's not possible to record from two computers, can I record from the same computer?

papr 02 December, 2020, 12:49:58

Yes, theoretically, it is possible when running the software twice on the same computer. But for one, the computer might not have sufficient computational resources, and secondly, the camera auto-selection does not work in this case. You will have to manually set up the headset cameras for each software instance. In other words, the software is not designed to run twice on the same computer with two separate headsets.

user-1d05f7 02 December, 2020, 12:53:17

Ok, thanks. Are there instructions for manual setup of the cameras?

papr 02 December, 2020, 13:22:42

@user-d444bc Please be aware that we require Python 3.6.1 or higher. The difference between 3.6.0 and 3.6.1 is that the latter comes with a newer typing version which could be the reason for your issue.

user-d444bc 02 December, 2020, 13:31:52

Thank you very much for your answer. I wasn't aware of that requirement. So after updating the Python version, I should be able to run it without the problem.

papr 02 December, 2020, 13:32:35

@user-d444bc hopefully, yes 🙂

user-a09f5d 02 December, 2020, 14:50:53

Hi @papr My apologies if you have already seen my earlier question, but I wanted to ask again just in case you missed it. Are you able to tell me what wavelength band is used by the infrared eye cameras please? I want to try recording eye movements through LCD shutter glasses, but before buying any glasses I wanted to check that the glasses transmit the IR light from the eye cameras. Thanks!

papr 02 December, 2020, 14:51:52

@user-a09f5d Apologies for not responding yet. I forwarded the question to our hardware team. I will forward their response to you as soon as I hear back from them.

user-a09f5d 02 December, 2020, 14:53:01

Perfect! Thank you very much. I figured that was the case but wanted to check on the off chance. Thanks again.

user-3d024c 02 December, 2020, 15:07:49

Hi, I have a question regarding the "gaze history" feature (in vis polyline plugin). When I try to use this, it runs the preprocessing fast/fine, but gets to about 20-23% of calculation and then the app stops responding. It reliably gets stuck at around the same percentage regardless of the duration I select, so any insight would be appreciated. I'm on Mac OS.

papr 02 December, 2020, 15:08:01

@user-a09f5d The wavelength is 860nm

user-a09f5d 02 December, 2020, 15:19:59

Thanks for that @papr . I assume that 860nm is the peak wavelength? Were you able to find out the wavelength range from the eye camera (eg. 830nm-890nm with the peak at 860nm)?

papr 02 December, 2020, 15:10:03

@user-3d024c could you please share the ~/pupil_player_settings/player.log file after reproducing the issue? I hope it has hints about the issue.

user-3d024c 02 December, 2020, 15:13:53

It's currently stuck at 20%, not responding, but hasn't crashed yet, so there's nothing relevant in the file. I've attached it anyway.

player.log

papr 02 December, 2020, 15:16:51

@user-3d024c you are right that the log does not indicate a crash. I was not able to reproduce the issue with a recording of mine. Could you please share the recording with [email removed] such that we can try to reproduce the issue?

user-3d024c 02 December, 2020, 15:27:07

Update: I let the app not respond for a while instead of quitting and it spontaneously completed calculation/began responding again. Sorry to bother you, thanks!

papr 02 December, 2020, 15:21:19

@user-a09f5d That is the wavelength that our IR LEDs emit. From my understanding, this is more or less a constant.

papr 02 December, 2020, 15:29:53

@user-a09f5d I will try to find out more information about the eye camera filter range though.

user-a09f5d 02 December, 2020, 15:34:38

Sorry, my bad! I was actually asking about the distribution of wavelengths around the peak at 860nm, since most LEDs don't only emit at a single wavelength, but rather a narrow band of wavelengths around a peak. For example, light spectra for LEDs usually look something like this (see attached). In this example you would say the LED emits between 900nm-1000nm with a peak at approx. 940nm.

Chat image

papr 02 December, 2020, 15:36:35

@user-a09f5d ok, I think I understand. I do not think that I will get this information for our LEDs but I have again forwarded your question, pointing out that you are actually interested in the wavelength range of the IR filter, not the IR LED wavelength range. Can you confirm that I understood your question correctly?

user-a09f5d 02 December, 2020, 15:43:17

So I'm actually interested in both, as they both affect the light that is actually emitted and used by the eye cameras. Thanks for forwarding my question on. Even if you can't find out the extra information, knowing that the LED emits at 860nm is extremely useful, so thank you for that.

user-a09f5d 02 December, 2020, 15:46:54

In the future I would like to measure the emission spectra of the eye camera LED for myself. If I am able to borrow the equipment and take the light measurements I would be more than happy to send you the results in case you find them useful or to help with any future questions from others.

papr 02 December, 2020, 15:47:27

@user-a09f5d An independent verification of the wavelength numbers would be very helpful!

user-a09f5d 02 December, 2020, 15:50:09

No worries! If I am able to measure it I will be sure to let you know. Thanks again for your help. Please let me know if you are able to find out any more information.

user-52b71c 02 December, 2020, 23:04:07

Hello, apologies if this has been asked before. I just wanted to verify that the max dispersion slider when detecting fixations does not refer to a radius (contrary to other algorithms). Thank you!

papr 03 December, 2020, 07:53:45

That is correct.

user-a98526 03 December, 2020, 01:26:32

Hi @papr, a new problem occurred after I ran pip install --upgrade pupil-detectors. The reason I run from source is that I am still using the RealSense plugin. Thank you very much for your help.

(py36pupil26) E:\pupil\pupil2.6\pupil\pupil_src>python main.py capture
Traceback (most recent call last):
  File "main.py", line 33, in <module>
    from version_utils import get_version
  File "E:\pupil\pupil2.6\pupil\pupil_src\shared_modules\version_utils.py", line 8, in <module>
    import packaging.version
ModuleNotFoundError: No module named 'packaging'

user-a98526 03 December, 2020, 01:28:41

I am trying to install these packages to solve the problem.

user-a98526 03 December, 2020, 01:58:38

I found that the error was due to my incorrectly running pip install -r requirements.txt. But when I use this command, the following happens:

user-a98526 03 December, 2020, 01:58:53

Chat image

user-a98526 03 December, 2020, 01:59:55

As shown in the figure, I have installed Shadow

Chat image

user-a98526 03 December, 2020, 02:28:11

I tried to solve these problems by manually installing different .whl files. Finally, I successfully ran pupil_capture. However, it is still unable to connect to the RealSense camera. The plug-in error message is as follows:

user-a98526 03 December, 2020, 02:28:50

world - [INFO] numexpr.utils: Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
world - [INFO] numexpr.utils: NumExpr defaulting to 8 threads.
world - [INFO] launchables.world: Application Version: 2.6.19
world - [INFO] launchables.world: System Info: User: YB, Platform: Windows, Machine: MSI, Release: 10, Version: 10.0.18362
world - [WARNING] plugin: Failed to load 'realsense2_backend'. Reason: 'cannot import name 'load_intrinsics''

papr 03 December, 2020, 14:05:57

@user-594d92 the easiest way to share it is to zip and upload the recording folder to a file sharing service, e.g. Google Drive and share the link to the data in the email

papr 03 December, 2020, 14:31:53

@user-594d92 could you also share the image that is displayed on the monitor? i.e. the undistorted surface target image

user-594d92 03 December, 2020, 14:35:41

This is the picture displayed on the monitor

Chat image

papr 03 December, 2020, 14:45:45

@user-594d92 It looks like it is not using the correct intrinsics. You said that you changed the lens and re-estimated the camera intrinsics, correct? Somehow they did not get applied to this recording.

papr 03 December, 2020, 14:47:31

You can check this by exporting the recording with the iMotions exporter. It should export an undistorted video but instead it is completely distorted inwards. This is why the surface mapper maps the gaze towards the middle.

Chat image

papr 03 December, 2020, 14:48:38

In Capture, you can check the current intrinsics by enabling the "show undistorted image" button in the intrinsics estimation plugin menu

user-594d92 03 December, 2020, 14:59:50

Thank you very much for your suggestion. I will try again according to your suggestion

user-5ef6c0 03 December, 2020, 20:01:26

Hello. One of my eye cameras stopped working. Can I still get a robust eye detection and gaze point with just one eye?

papr 04 December, 2020, 12:36:55

It will be less robust, especially if you look away from the working eye camera.

user-5ef6c0 03 December, 2020, 20:03:28

Also, what is the best way to troubleshoot the eye camera? A study participant touched the frame and it stopped working. It seems there was a little static discharge. If I disconnect and connect the camera, the Windows USB sound plays, but then Pupil Capture says it cannot connect to the device.

papr 04 December, 2020, 12:36:17

Please contact info@pupil-labs.com in this regard.

user-5ef6c0 03 December, 2020, 20:03:57

This has happened in the past, and usually closing Pupil Capture, unplugging the eye tracker, and disconnecting and reconnecting the camera did the trick... but not this time

user-a98526 04 December, 2020, 04:28:06

Hi @papr, I want to know if there is a way to use the RealSense plugin in Pupil 2.6. I found that there are differences between the two versions of the code; can I directly add load_intrinsics to Pupil 2.6?

papr 04 December, 2020, 12:35:39

@user-764f72 Please create a fork of the RealSense plugin gist that is compatible with the latest Pupil release and make a PR to the community repo, linking both gists and indicating which version to use with which Pupil release. I think you only need to update the camera_model imports. Please also link the version here for @user-a98526 to test.

user-e94c74 04 December, 2020, 05:17:53

Hi @papr, first, thank you for the kind replies to everything I've asked so far. I am currently analyzing eye captures of about an hour, and it is hard to interpret timelines (e.g. blink detection) in Pupil Player. I want to ask if you have future plans to make an adjustable timeline window. Thanks 😄

user-908b50 04 December, 2020, 22:00:46

I am getting the following error on my computer when loading pupil player:

message.txt

user-895483 05 December, 2020, 14:48:49

ohhhhh

papr 04 December, 2020, 23:14:05

@user-908b50 You are running a newer version of the software on an older recording. This should have been updated automatically; I guess that you did not pull the tags correctly. Nonetheless, simply delete the lookup files. They will be regenerated correctly.

user-895483 05 December, 2020, 15:54:42

can you share with me what you did?

user-908b50 18 December, 2020, 06:17:07

Sorry for the delay in getting back to you! Busy with final exams and a term paper. Just getting back to my data now. You'll have to go in and delete the eye.lookup & world.lookup files. That should get your files processed.

user-8711fc 06 December, 2020, 11:12:25

Help. I ran run_capture after installation and it gives an error:

File "main.py", line 33, in <module>
    from version_utils import get_version
ImportError: cannot import name 'get_version'

papr 07 December, 2020, 08:47:31

@user-8711fc I see you also wrote an email to [email removed] I will respond to you via email from there.

papr 07 December, 2020, 11:21:01

@user-a98526 @user-764f72 updated the community repository with an updated version of the RealSense plugin. Please use the appropriate plugin based on your installed Pupil version: https://github.com/pupil-labs/pupil-community

user-a98526 07 December, 2020, 13:38:08

Thanks for your help.

user-98789c 07 December, 2020, 20:59:30

Hi everyone, I need to send a trigger from my MATLAB script to Pupil Core about my trial number, stimulus onset/offset, etc. Do you know what this trigger should be, in terms of MATLAB functions and its arguments?

papr 07 December, 2020, 21:26:09

@user-98789c Please have a look at our matlab helper scripts https://github.com/pupil-labs/pupil-helpers/tree/master/matlab Triggers are called annotations in Pupil Core software. You can read about them in our documentation. https://docs.pupil-labs.com/developer/core/network-api/#remote-annotations

papr 07 December, 2020, 21:27:19

Alternatively, there are ways to call python functions from matlab. We will be able to support you much better in case of problems if you use Python instead of Matlab.
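
For illustration, a minimal Python sketch of sending one annotation, assuming Pupil Capture runs locally with Pupil Remote on its default port 50020 and the Annotation plugin enabled (the "trial_start" label is hypothetical):

import time
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")

# Current Pupil time, used to timestamp the annotation
req.send_string("t")
pupil_time = float(req.recv_string())

# Open a PUB socket for sending the annotation to the IPC backbone
req.send_string("PUB_PORT")
pub_port = req.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the subscribers a moment to register the new peer

annotation = {
    "topic": "annotation.trial_start",
    "label": "trial_start",
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_multipart([
    annotation["topic"].encode(),
    msgpack.dumps(annotation, use_bin_type=True),
])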

user-a68470 08 December, 2020, 02:52:27

Hi everyone. I want to know if the Pupil Core can record eye movements over videos rather than static images? I notice that most of the research in my domain focuses on the analysis of static images. Also, I built a DIY product; however, the world camera does not work even though I tried deleting the Pupil Core device and installing it again in Device Manager. When I test the world camera with a media player it works, BUT it does not work with Pupil Core.

papr 08 December, 2020, 10:38:38

Pupil Capture is designed to predict gaze on real-time video. Pupil Player can predict gaze for recorded video. Is this what you are looking for?

Regarding the driver issue, it looks like the cameras are not correctly installed in the libusbk driver category. Please make sure to run the latest Pupil release and to connect the headset before starting Pupil Capture, for the automatic driver installation to work.

user-a98526 08 December, 2020, 09:53:59

Hi @papr, I want to know what I should do to get RealSense's depth_frame, similar to recv_world_video_frames_with_visualization:

papr 08 December, 2020, 10:41:54

The depth_frame is not being published on the IPC, therefore it is currently not possible to subscribe and receive it. You would have to modify the plugins code to publish it.

user-a98526 08 December, 2020, 09:54:08

if topic == 'frame.world':
    recent_world = np.frombuffer(msg['raw_data'][0], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)
elif topic == 'frame.eye.0':
    recent_eye0 = np.frombuffer(msg['raw_data'][0], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)
elif topic == 'frame.eye.1':
    recent_eye1 = np.frombuffer(msg['raw_data'][0], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)

user-98789c 08 December, 2020, 10:57:29

URGENT: Pupil Capture says camera is already in use or blocked. What have I done to it? What should I do to fix it?

papr 08 December, 2020, 10:58:32

@user-98789c Which operating system do you use? Have you tried restarting the computer and making sure that there is no other program accessing the camera?

user-98789c 08 December, 2020, 10:59:11

Windows 10 Pro on Lenovo ThinkPad

papr 08 December, 2020, 11:00:03

@user-98789c Please try restarting the computer as a first step. Make sure the headset is connected and start Capture. Does the issue remain?

user-98789c 08 December, 2020, 11:18:11

I restarted the laptop and it still does not work. I have been trying to interact from MATLAB with Pupil Capture, maybe there I have done something wrong?

papr 08 December, 2020, 11:19:49

@user-98789c Could you please go to the Video Source menu in Capture, enable "Manual camera selection", click "Activate camera", and check which cameras are listed? Do you see 3x "unknown"?

user-ae4005 08 December, 2020, 12:13:01

Hi there, I have an experiment running in PsychoPy to which I connected the Pupil Labs eye tracker via the Network API. This all works. But so far I have been doing the calibration before I started recording, and I haven't used any validation (I read the "best practices" way too late). So now I've added code so that calibration starts after recording starts. Now my two questions: 1) I don't see how I can remotely start the validation. When I look here (https://docs.pupil-labs.com/developer/core/network-api/#pupil-remote), it only shows other commands but none that would start the validation process. Is it possible to do it via remote control? 2) I'm still very new to eye tracking, so it's not really clear to me which values are good and which aren't. What should the validation look like? And how can I improve the accuracy if it's not good enough?

user-ae4005 09 December, 2020, 07:22:20

Is there anyone who could help? I found the right accuracy values now (1.5 - 2.5 degrees for 3D). But I'm still not sure how to run the validation via remote control and how I could improve the accuracy if it's not good (above 2.5 degrees)...

user-7daa32 08 December, 2020, 18:15:40

Hello, I think I have asked this question before.

Nothing happens whenever I press the annotation hotkeys on the keyboard. Why?

papr 08 December, 2020, 18:18:34

@user-7daa32 Have you frozen the surface again? Remember that logs are hidden while the surface is frozen.

user-7daa32 08 December, 2020, 18:19:46

I need to press the annotation hotkeys while recording. I didn't freeze anything. I am recording

papr 08 December, 2020, 18:20:10

The window needs to be active in order to register the key.

user-7daa32 08 December, 2020, 18:21:47

I want to press S and F on the keyboard while I am recording

Chat image

papr 08 December, 2020, 18:22:18

@user-7daa32 are those upper-case? You might need to hit shift+S

user-7daa32 08 December, 2020, 18:24:45

Wow! Thank you. Better to use lower case

user-f195c6 08 December, 2020, 18:22:39

Hello! Hope everyone is ok! I have a question which might be stupid, but is there any problem with travelling with the Pupil Core glasses? I mean, do I need some kind of document to show at the airport which declares the type and purpose of this device... I'm a bit afraid of the inspection zone

mpk 08 December, 2020, 18:31:54

Hi, Pupil Core has CE and FCC certification. You should be able to import it without issues.

papr 08 December, 2020, 18:24:09

@user-f195c6 I do not think so. But I will forward this question to our operations team. They know all about customs.

user-f195c6 08 December, 2020, 18:25:11

Thank you! Let me know, then 😊

papr 08 December, 2020, 18:28:41

I am being told to please contact info@pupil-labs.com in this regard. 🙂

user-e8e825 08 December, 2020, 18:26:05

Hello, I have a query about the licensing. Is the Pupil Core software GPL or LGPL?

papr 10 December, 2020, 17:02:48

@user-da7dca then this is probably an intrinsics issue. Please estimate the intrinsics https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation

papr 08 December, 2020, 18:29:59

@user-e8e825 LGPL 3 https://github.com/pupil-labs/pupil/blob/master/COPYING.LESSER

user-7daa32 08 December, 2020, 19:54:38

https://photos.app.goo.gl/Q1h8eJHpnmy5C1Hy9

I only added START Annotation once but it appears twice on player. Why?

papr 09 December, 2020, 08:07:37

Unfortunately, you do not seem to have granted viewing access for people with the link. Therefore, I am currently not able to access the image.

user-7daa32 09 December, 2020, 01:10:12

There are overlapping fixations

user-3df524 09 December, 2020, 00:22:30

Any experience in combining the VR add-on with the Valve Index - is this possible?

user-7daa32 09 December, 2020, 02:03:36

Do the right and left eyes have their own individual data?

papr 09 December, 2020, 08:13:12

Pupil data is generated for each eye separately. Gaze data tries to combine it (not always possible). See our documentation for details about the contained data: - https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format - https://docs.pupil-labs.com/core/software/pupil-player/#gaze-positions-csv

user-292135 09 December, 2020, 04:29:11

Hi, I am annotating data with a Pupil Player plugin. Pupil Player allows me to annotate at roughly the world camera frequency. However, I am annotating based on eye camera images and want to annotate at a higher frequency. Is it possible to annotate at a higher frequency than the frequency at which world camera images are recorded? I assume only eye camera images would be updated when no new world camera image exists.

papr 09 December, 2020, 08:28:15

You have not been forgotten. I had to look up how it works. Unfortunately, it is not as easy as starting a calibration. You will have to send a {'subject': 'validation.should_start'} notification. https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_remote_control.py#L56-L62
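
For illustration, a minimal Python sketch of sending that notification through Pupil Remote, assuming Capture runs locally on the default port 50020:

import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")

# Pupil Remote forwards messages sent as "notify.<subject>" + msgpack payload
notification = {"subject": "validation.should_start"}
payload = msgpack.dumps(notification, use_bin_type=True)
req.send_multipart([("notify." + notification["subject"]).encode(), payload])
print(req.recv_string())  # Pupil Remote acknowledges the notification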

user-ae4005 09 December, 2020, 11:02:54

Thank you! That really helps.

Do you also have any tips on how I can improve the angular accuracy if I get values above 2.5?

papr 09 December, 2020, 08:45:01

Player will run the playback frame-by-frame for the world video but not for the eye video. Even if there was no world video, the artificial frames would use interpolated timestamps. I think your best chance is to pretend that one of the eye videos is the world video. Switch the videos, do the annotations, and switch back. When replacing, make sure to replace all files (*.mp4, *_timestamps.npy, *_lookup.npy, and *.intrinsics). This will give you eye-frame-accurate annotations, but it requires some manual renaming between sessions.
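
A minimal sketch of that swap in Python, assuming the standard recording file layout (the recording path is a placeholder):

from pathlib import Path

recording = Path("path/to/recording")  # hypothetical recording folder

def swap(world_stem, eye_stem):
    # Swap world.* and eye0.* so Player treats the eye video as the world video
    for suffix in (".mp4", "_timestamps.npy", "_lookup.npy", ".intrinsics"):
        world_file = recording / (world_stem + suffix)
        eye_file = recording / (eye_stem + suffix)
        if not (world_file.exists() and eye_file.exists()):
            continue  # e.g. the eye video may not come with an .intrinsics file
        tmp = world_file.with_name(world_file.name + ".tmp")
        world_file.rename(tmp)
        eye_file.rename(world_file)
        tmp.rename(eye_file)

swap("world", "eye0")  # annotate in Player, then call swap() again to revert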

user-292135 09 December, 2020, 23:34:49

Thanks!

user-835f47 09 December, 2020, 09:26:09

Hi, when I connect the device it keeps making the connecting and disconnecting sound, and when I run pupil_capture.exe it detects the eye cameras but not the world camera. Do you have an idea what is happening? Is it a hardware issue? My setup is an Intel RealSense D415 camera mounted on top of the eye tracking device, and I'm using Windows 10. Thanks in advance!

papr 09 December, 2020, 09:29:45

@user-835f47 I am sorry to hear that. Are you using the Realsense plugin for Capture already?

user-835f47 09 December, 2020, 09:38:12

I don't think so. Can you explain to me how to add this plugin?

papr 09 December, 2020, 09:40:50

@user-835f47 See our documentation on how to add plugins in general: https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin

Afterward, download the corresponding RealSense D400 Source plugin for your Pupil Capture version: https://github.com/pupil-labs/pupil-community#plugins

Please read the readme contained with the plugin link. It contains important information.

papr 09 December, 2020, 09:43:07

If you have questions, @user-a98526 is using the plugin as well.

papr 09 December, 2020, 09:43:59

Regarding the disconnect sounds, I would start by using Intel's RealSense Viewer to check if the camera works as expected.

user-835f47 09 December, 2020, 09:44:37

Thanks a lot!

papr 09 December, 2020, 11:06:59

@user-ae4005 This is often due to inaccurate pupil detection. Unfortunately, it is not easy to give improvement tips without a negative example. If you have one, please share it with us and we will be able to give recommendations.

user-ae4005 09 December, 2020, 11:10:12

Okay, I don't have any examples right now and it also only happened a few times so far. If it comes up again I'll save the values and will get back to you! Thank you!

papr 09 December, 2020, 11:12:28

Specifically, we would need the recording. But you are already recording all calibrations, so you are on the safe side.

user-ae4005 09 December, 2020, 11:13:37

Yes, making the changes right now!

user-ae4005 09 December, 2020, 13:48:23

@papr I'm struggling with the automation of the calibration in PsychoPy... The calibration starts, but it skips right to the next routine (without finishing the calibration). I think I just need to keep checking if the calibration has finished so that I can tell PsychoPy to only continue to the next routine once the calibration is finished. Do you know how I can do this?

papr 09 December, 2020, 13:50:18

@user-ae4005 you can wait for the "calibration.successful" or "calibration.failed" notification and continue afterward.

user-ae4005 09 December, 2020, 13:55:06

How exactly do I get this notification? Do I first need to send a notification? Or do I just check "calibration.successful" in psychopy?

papr 09 December, 2020, 13:58:10

@user-ae4005 You subscribe to "notify.calibration" similar to this script https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
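
A minimal self-contained sketch of that pattern, assuming Pupil Capture runs locally with Pupil Remote on its default port 50020:

import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")

# Subscribe to calibration notifications before starting the calibration
req.send_string("SUB_PORT")
sub_port = req.recv_string()
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("notify.calibration")  # matches .successful, .failed, ...

req.send_string("C")  # start calibration
req.recv_string()

while True:
    topic, payload = sub.recv_multipart()
    subject = msgpack.loads(payload)["subject"]
    if subject in ("calibration.successful", "calibration.failed"):
        print(subject)
        break  # continue to the next experiment routine from here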

user-ae4005 09 December, 2020, 14:50:26

I followed the code from the link you sent me, but I'm getting stuck with subscribing to the notifications:

pupil_remote.setsockopt_string(zmq.SUBSCRIBE, 'notify.calibration')

with the following error: zmq.error.ZMQError: Invalid argument

My "sub" is called "pupil_remote" and the eye tracker connects well and I receive other notifications (using "notify") so I'm not sure why this is not working... Do you have any idea what I'm doing wrong?

user-ae4005 09 December, 2020, 14:01:26

@papr Thank you! I'll try it out now 🙂

papr 09 December, 2020, 14:52:18

@user-ae4005 How do you initialize pupil_remote? Please be aware that you need an additional SUB socket compared to previously, where you only used a REQ socket.

user-ae4005 09 December, 2020, 14:55:33

Hmm, okay. I guess that might be the problem. This is how I'm initializing pupil_remote now:

# Setup zmq context and remote helper
ctx = zmq.Context()
pupil_remote = zmq.Socket(ctx, zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')

pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub_socket = zmq.Socket(ctx, zmq.PUB)
pub_socket.connect("tcp://127.0.0.1:" + str(pub_port))
user-ae4005 09 December, 2020, 14:56:16

@papr So that means I need to set up another sub, just like in the example you sent? And not work with pupil_remote

papr 09 December, 2020, 14:56:26

similar to the pub_socket, you need a sub_socket. See the linked script for details.

user-ae4005 09 December, 2020, 15:26:12

So I did that now but I'm still getting the same error.

# open a sub port to listen to pupil
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()
sub_socket = zmq.Socket(ctx, zmq.PUB)
sub_socket.connect("tcp://127.0.0.1:" + str(sub_port))

sub_socket.setsockopt_string(zmq.SUBSCRIBE, 'notify.calibration')

And the error:

sub_socket.setsockopt_string(zmq.SUBSCRIBE, 'notify.calibration')
  File "C:\Program Files\PsychoPy3\lib\site-packages\zmq\sugar\socket.py", line 201, in set_string
    return self.set(option, optval.encode(encoding))
  File "zmq\backend\cython\socket.pyx", line 435, in zmq.backend.cython.socket.Socket.set
  File "zmq\backend\cython\socket.pyx", line 279, in zmq.backend.cython.socket._setsockopt
  File "zmq\backend\cython\checkrc.pxd", line 25, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Invalid argument

Any mistakes you can see here?

user-ae4005 09 December, 2020, 14:56:59

Okay, my mistake. I thought I could use the socket I already set up. Thanks again!

papr 09 December, 2020, 14:56:52

You will still need pupil_remote as entrypoint

user-a98526 09 December, 2020, 14:57:03

@user-835f47 I have been using the Realsense plug-in for a long time, and currently I am customizing the plug-in to obtain depth information about Realsense. If you have any questions, I think we can communicate with each other.

user-835f47 12 December, 2020, 15:51:38

Thanks! If I have problems or questions, I will ask you.

papr 09 December, 2020, 15:26:58
- sub_socket = zmq.Socket(ctx, zmq.PUB)
+ sub_socket = zmq.Socket(ctx, zmq.SUB)
user-ae4005 09 December, 2020, 15:27:46

Oh no... Sorry. And thanks!

user-b2640a 09 December, 2020, 17:18:54

Is Discord the best place to ask for help, or is there a separate support email? vr-ar seems a bit quiet.

user-83ea3f 10 December, 2020, 02:57:21

Hi, I am a newbie developer and have questions about how the program runs. I am looking at the source files. As I understand it, in pupil-master/pupil_src/main.py, execution starts from launcher(). But after that, I didn't get how the Pupil window (World window) is opened. In the launcher() function, I don't think there is any GUI code, so I am wondering how the World window gets opened.

user-da7dca 10 December, 2020, 16:03:28

Hello everyone, I have a weird issue with the calibration step... there seems to be a huge disparity between horizontal gaze tracking (which works really well after calibration) and vertical gaze tracking (which doesn't seem to sense vertical movement at all). I already played with the outlier threshold to check whether the vertical calibration dots are recognised, which seems to be the case... What would be your suggestion to solve this problem? Thanks in advance!

papr 10 December, 2020, 16:05:57

@user-da7dca Did you change the lens by any chance?

user-da7dca 10 December, 2020, 16:46:28

not really... also pupil detection is really stable in all eye rotation scenarios

papr 10 December, 2020, 16:06:57

@user-83ea3f The launcher launches any of the launchables in the respective folder (which ones depend on the application)

user-da7dca 10 December, 2020, 16:57:05

I use a custom solution for the world cam (RealSense 305); however, I also tried with an off-the-shelf Logitech webcam, which suffers from the same error. Also, one weird thing: when using the 2D model instead of 3D, everything seems to work fine.

user-da7dca 10 December, 2020, 17:20:05

Will try tomorrow 👍 However, I already have a calibration from MATLAB read in, so I'm not sure if this will fix it.

papr 10 December, 2020, 17:27:36

@user-da7dca I do not see how that relates to each other. Could you provide a bit more context?

user-da7dca 10 December, 2020, 17:31:31

I did a full calibration of the RealSense via MATLAB and exported the distortion parameters to Pupil's calibration settings file, which also worked out, since I could visualize the undistorted image in Pupil Capture.

papr 11 December, 2020, 12:45:57

Ah, thanks for the clarification. Makes sense.

user-7daa32 10 December, 2020, 23:58:37

Why is the start timestamp in the fixation file different from that in the annotation file?

papr 11 December, 2020, 12:45:18

Because they are based on different time sources. Annotations use world timestamps. Fixations use gaze timestamps.

user-7daa32 11 December, 2020, 17:14:16

Hello,

Does that also mean that there will be two calculated results, one for each eye?

user-7daa32 11 December, 2020, 13:16:13

The difference is minimal

user-8f35b9 14 December, 2020, 12:06:43

Hi what type of data is produced when we use Pupil Invisible for shopper research? Do we get heat maps?

user-8f35b9 14 December, 2020, 12:07:27

It's not clear what data will be available if we buy and use Pupil Invisible.

nmt 14 December, 2020, 14:23:52

Hi Zak, please see my response from [email removed]

user-a98526 15 December, 2020, 13:25:26

Because I didn't know what to modify to generate the depth-image topic, I modified realsense2_backend.py to send the depth_image (not the depth_frame). I used sockets and the UDP transmission method. After testing, it does not lose packets when sending on one computer. At the same time, a release-start command is added to ensure integrity at the receiving end.

The depth image is compressed and processed in other ways to make the transmission fast enough.

user-a98526 15 December, 2020, 13:25:30

The accuracy of the received depth image remains basically unchanged (100 points are randomly sampled without error)

user-a98526 15 December, 2020, 13:28:29

This is my modified backend.py.

realsense2_backend.py

user-a98526 15 December, 2020, 13:29:37

This is the receiving .py for testing

receive2.py

user-a98526 15 December, 2020, 13:31:53

This is the correct file

receive2.py

user-a98526 15 December, 2020, 13:32:59

You must resize the depth image to 1280×720 before using it. If there is a sending error, you can modify the compression part of the splitting code (img_split).

user-a98526 15 December, 2020, 13:37:37

These are the results

Chat image

user-a98526 15 December, 2020, 13:38:16

I tested it on Pupil 2.1

papr 15 December, 2020, 13:39:16

@user-a98526 Feel free to provide the code via a repository and link it in https://github.com/pupil-labs/pupil-community with a description

user-a98526 15 December, 2020, 13:44:53

It is a great honor to submit my code to the repository. I uploaded the wrong receiving code again; this is the final correct receiving code.

reveive3..py

user-300817 17 December, 2020, 09:26:28

Hi there. I'm new to the Pupil Labs software and I'm currently setting up an external eye camera, as there is a slight hardware problem preventing me from using the Pupil Labs glasses camera. I am interested in getting the 3d eye normal x,y,z data from the post hoc algorithm in Pupil Player, and I have two questions.

1) Is there information on lens distortion correction for the Pupil Labs eye camera that should be considered when passing videos from an external camera through Pupil Player for pupil detection?

2) We record the eye in the dark, so the pupil will be quite big. I'm trying to get a feel for how much the 3d model algorithm struggles when the pupil is obscured by the top eyelid. I know we try to get the user to open their eyes as wide as possible, but if the pupil is occluded by, say, 1/5, is that going to be a problem? Or are other factors going to make it difficult to say? Would adjusting the ROI to not include the obscured region help?

user-594d92 17 December, 2020, 11:34:29

@papr Hi, when re-estimating the camera intrinsics, although I have tried many times, I find it difficult to get an undistorted image when enabling the "show undistorted image" button in the intrinsics estimation plugin menu. I want to know whether there are some tricks to re-estimating the camera intrinsics; maybe my procedure is wrong. Besides, I want to know what "move your headset at different angles" means in https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation. Sometimes I try to keep my head static when re-estimating the camera intrinsics.

user-430fc1 17 December, 2020, 11:52:13

Hello, I've noticed the pupil data for each eye in a binocular recording doesn't always have the same timestamp. I've been using pd.merge_asof(left_eye, right_eye, left_index=True, right_index=True, suffixes=('_l','_r'), direction='nearest') to keep binocular data in a single dataframe. Is this a valid approach or would you recommend keeping the data for each eye separate, with its original index?

papr 17 December, 2020, 12:40:49

@user-430fc1 depends on what your index is. If it is timestamps it is likely correct. If it is just the sequence number it is definitely not correct, as the eye cameras operate independently of each other. I will have to look up that function in detail though to confirm the above.

user-430fc1 17 December, 2020, 12:43:41

@papr I've been using this with pupil_timestamps as the index. Seems to work OK. The 'direction' argument has 'backward', 'forward' and 'nearest' options. I have assumed that 'nearest' would make the most sense, and that it should not matter which eye is on the left.

papr 17 December, 2020, 12:44:29

What happens if multiple timestamps are matched as nearest?

user-430fc1 17 December, 2020, 12:46:26

Hmmm, good question. The resulting index always seems to be without duplicates
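
A toy demonstration of the behaviour in question: pd.merge_asof is a left join, so every left-eye row is matched to exactly one nearest right-eye row (a right row may be reused, but never produces duplicate result rows). The timestamps below are made up:

import pandas as pd

left_eye = pd.DataFrame(
    {"diameter_l": [3.0, 3.1, 3.2]},
    index=pd.Index([0.000, 0.010, 0.020], name="pupil_timestamp"),
)
right_eye = pd.DataFrame(
    {"diameter_r": [2.9, 3.0]},
    index=pd.Index([0.005, 0.016], name="pupil_timestamp"),
)

# One output row per left row; the result keeps the left index,
# so it stays free of duplicates
merged = pd.merge_asof(
    left_eye, right_eye,
    left_index=True, right_index=True,
    direction="nearest",
)
print(merged)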

user-908b50 18 December, 2020, 13:04:38

So my pyglui has started to freeze again. I have been at it for hours. Starting and re-starting but it freezes on the surface tracker plugin & mid-way during fixation detector processing.

user-908b50 18 December, 2020, 13:05:09

Kindly suggest ways to troubleshoot. I pulled my tags correctly.

papr 18 December, 2020, 13:06:29

@user-908b50 how do you know it is pyglui freezing? Did you inspect the logs? Could you share them with us?

user-908b50 18 December, 2020, 13:18:52

This is the terminal output. Will share the log too!

message.txt

papr 18 December, 2020, 13:27:36

@user-908b50 There is no sign of a Python exception, i.e. it is likely the legacy marker detection setup issue again. Make sure the cv2 module you are using has been compiled with TBB=ON. The easiest way to do that is to compile from source and symlink it:

git clone https://github.com/opencv/opencv
cd opencv
# checkout a stable version, e.g. 4.5.0
git checkout 4.5.0
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=RELEASE -DWITH_TBB=ON -DWITH_CUDA=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=ON ..
make -j2
sudo make install
sudo ldconfig
user-908b50 18 December, 2020, 13:35:38

Can't access my log atm from the e-drive. But okay, I can re-install! I had to uninstall the above and re-install the old way because of cv2 import errors.

papr 18 December, 2020, 13:36:56

The old way being installing OpenCV via apt-get or pip install opencv-python? Unfortunately, for Ubuntu, these install methods do not provide TBB support. 😕

user-908b50 18 December, 2020, 13:37:52

No, the instructions for Ubuntu 18 with sudo apt libs. Yeah, unfortunate! Fingers crossed!!

user-a98526 19 December, 2020, 07:40:03

Hi @papr, I want to know how to get gaze information in the RealSense or other plugins. I don't know if one plugin can use the results of other plugins.

user-292135 22 December, 2020, 23:36:53

Hi. I recorded 400x400 px images with the eye cameras, and the pupil identification results look better. However, I found a "WARNING No camera intrinsics available for camera eye1 at resolution (400, 400)!" message. So my questions are: A. Do I get any degradation, rather than improvement, of gaze accuracy using the higher resolution? B. If so, why is there no default intrinsic for 400x400 px?

user-3cff0d 23 December, 2020, 03:32:19

Does anyone know where the user-specific plugin directory might fall on Windows 10?

nmt 23 December, 2020, 11:21:59

Hi Kevin. You can find detailed information on where to find the plugin directory here: https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin As an example, my Capture plugins are located here: "C:\Users\Neil\pupil_capture_settings\plugins".

user-3cff0d 23 December, 2020, 11:29:59

Hi, thank you! I did end up locating the folders. Regarding plugins that extend PupilDetectorPlugin, how do I actually make the separate "Eye" windows in, say, Pupil Player use that plugin? I've put my custom pupil detection plugin in all 3 plugin folders (Service, Player, Capture), and while I can enable it in the main "World" window, I do not see any effect in the "Eye" windows.

nmt 23 December, 2020, 15:01:30

@user-3cff0d were you able to run the example pupil detector plugins, e.g. https://gist.github.com/papr/ed35ab38b80658594da2ab8660f1697c ? Note that you will need Pupil v2.6 or higher to run these custom pupil detectors.

user-3cff0d 23 December, 2020, 15:41:47

I am currently running from source on the latest version pushed to master. I'll try the example plugins, thanks!

user-42b39f 24 December, 2020, 11:38:23

Hi all, Capture 2.6.19 will not open (here is the log file). I am starting it on a MacBook with Big Sur 11.0.1.

nmt 24 December, 2020, 11:42:38

There are several issues preventing Pupil desktop apps running on Big Sur that relate to PyOpenGL. These depend on Python 3.8.7, which is not released yet. As soon as Python 3.8.7 is officially released, we will begin the update process

user-42b39f 24 December, 2020, 11:39:25

capture.log

user-e69bd5 24 December, 2020, 13:15:29

Hi, when I run Pupil Capture, the pupil detection and world cam do not work, whereas detect eye0 and detect eye1 work correctly.

1.txt

nmt 29 December, 2020, 14:35:00

Hi @user-e69bd5, please contact info@pupil-labs.com in this regard

user-467cb9 25 December, 2020, 12:55:19

Merry Christmas, everyone! @papr I have a question for you: in my app I receive video frames and messages in msgpack, and there is some kind of delay in displaying the norm_pos gaze, but the video frames arrive in realtime. I don't think it is a problem with converting the gaze position to pixels in my app. Is there any algorithm which can help me get and display a good realtime gaze pointer, something like in Pupil Capture?

user-467cb9 27 December, 2020, 09:56:41

To get the gaze point position, should I just multiply norm_pos[0] by the frame width and norm_pos[1] by the frame height? 😉

user-42b39f 25 December, 2020, 16:42:59

Thanks, I have installed Python 3.9.1 via brew. Does that have something to do with it? Should I downgrade to a previous version?

nmt 29 December, 2020, 14:32:14

Hi Chris. Downgrading in this instance will not help. We will release an update in due course, but this is dependent on the fact that Python 3.8.7 is not yet fully supported on macOS 11 Big Sur

user-d8853d 26 December, 2020, 12:10:52

Hello, precision is calculated as the Root Mean Square (RMS) of the angular distance (in degrees of visual angle) between successive samples during a fixation. What does "successive samples during a fixation" mean here? Does it mean the visual angle between estimated gaze and actual targets? So RMS is the square root of the mean square of the accuracy?

user-594d92 26 December, 2020, 14:11:39

Hi @papr After re-estimating the camera intrinsics, I checked the current intrinsics by enabling the "show undistorted image" button in the intrinsics estimation plugin menu, but I get a picture like this. I want to know whether it makes a difference to getting the right positions of my gaze. Besides, how can I get the right intrinsics?

Chat image

user-98789c 27 December, 2020, 07:12:26

Hi everyone, I am going through the data structure in my recordings, and I have some questions, all of which might be quite basic, but I'd appreciate some clarification about them. Thank you.

  1. What is the relation between relative and absolute time range?
  2. How long is each world frame?
  3. How can gaze point coordinates be negative? Should they not always be positive? Is it in accordance to some surface?
  4. What is base-data in fixation.csv? What can be extracted from it?
  5. How does the 3d c++ method work? Are there other options to choose for the pupil size calculation methods, used by Core? Where can I read about them?
  6. What is the relation between timestamp and pts?
  7. How is confidence calculated?
  8. How are x and y positions in eye image frames normalized?
  9. What information can be extracted from ellipse-..., sphere-..., circle-3d-..., projected-sphere-...?
  10. What information is included in the .csv file for each surface? What do img-to-surf-trans, surf-to-img-trans, etc. mean?
  11. What are tokens?
  12. In an experiment where we want to investigate people's confidence in their decision making using their pupil size (based on some hypotheses about an existing relationship), what exact .csv files (or other types of files) should information be extracted from?
  13. Are there ready python scripts available, only for the purpose of demonstrating what can be done with the data?
user-98789c 29 December, 2020, 13:10:55

I would appreciate some guidance about my questions 🙂

user-d8853d 27 December, 2020, 17:00:15

pixel_x = int(norm_pos_x * pixel_width)
pixel_y = int((1.0 - norm_pos_y) * pixel_height)

user-467cb9 28 December, 2020, 11:04:56

Are norm_pos_x and norm_pos_y the values in the norm_pos array here?

Chat image

user-339920 28 December, 2020, 08:35:07

hello, I have a problem installing 'pyglui' on Windows 10, Python 3.6 (32-bit):

ERROR: Command errored out with exit status 1:
 command: 'C:\Anaconda3\envs\pupil32\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\IVCLAB\desktop\pyglui\setup.py'"'"'; __file__='"'"'C:\Users\IVCLAB\desktop\pyglui\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
 cwd: C:\Users\IVCLAB\desktop\pyglui\

is there any solution for this??

nmt 31 December, 2020, 08:39:57

Please see reply from [email removed] in this regard

user-467cb9 28 December, 2020, 11:19:28

or here?

Chat image

user-d8853d 28 December, 2020, 11:25:12

Yes, the second one is for the gaze data. That gives you the position with respect to the world view.

user-467cb9 28 December, 2020, 11:25:56

ok, I will try using it

user-467cb9 28 December, 2020, 11:32:59

Is the frequency of sending gaze.3d.01. 30 fps?

user-467cb9 28 December, 2020, 13:14:55

It doesn't work correctly. Is there any method/algorithm to display the correct gaze position (like in Pupil Capture) in realtime?

user-a98526 29 December, 2020, 02:03:23

Hi @nmt, I want to know how to get gaze and fixation information in the RealSense or other plugins. I don't know if one plugin can use the results of other plugins. In my plugin, I use gaze = events["gaze"], but it doesn't seem to work.

nmt 29 December, 2020, 16:27:58

Hi @user-a98526. You should be able to access gaze and fixation data (if the online fixation detector plugin is running) in your custom plugin via the recent_events method. E.g.

class access_pupil_data(Plugin):
    ...

    def recent_events(self, events):
        fixation_pts = [
            pt["norm_pos"]
            for pt in events.get("fixations", [])
        ]
        gaze_pts = [
            pt["norm_pos"]
            for pt in events.get("gaze", [])
        ]
user-83ea3f 29 December, 2020, 08:41:24

Hi there, I have installed the Intel RealSense D400 series camera. But for some reason I get "World: Calibration requires world capture video input." I've checked the camera through the Intel RealSense Viewer, and it works. Would you let me know the process for using the world view camera? Thanks in advance

Chat image

user-a98526 29 December, 2020, 09:24:56

@user-83ea3f You'd better use the Pupil version 1.x, which provides RealSense support plug-ins.

user-a98526 29 December, 2020, 09:30:32

https://github.com/pupil-labs/pupil-community, here you can find some useful information

user-a98526 29 December, 2020, 09:30:47

Chat image

nmt 29 December, 2020, 17:51:19

Hey @user-98789c. Apologies for the delay over the xmas period.

  1. What is the relation between relative and absolute time range? Look at this discord message for an overview of relative and absolute time ranges. https://discord.com/channels/285728493612957698/285728493612957698/729648460449579028

  2. How long is each world frame? There is a small variance for the duration of each world frame. However, the average over all frames should be equal to ~1 / FPS. This will differ as the FPS can be set by the user. E.g. if the world camera FPS is 50Hz, each frame will be ~0.02 s.

  3. How can gaze point coordinates be negative? Should they not always be positive? Is it in accordance to some surface? Which gaze points are you referring to? For example, norm_pos_x/y coordinates will not be negative as they are defined as normalised coordinates in the world camera coordinate system, where (0,0) is the bottom left corner, (1,1) is the top right corner, and (0.5, 0.5) is the image centre.

  4. What is base-data in fixation.csv? What can be extracted from it? This is the "timestamp-id timestamp-id ..." of pupil data that the gaze position is computed from.

  5. How does the 3d c++ method work? Are there other options to choose for the pupil size calculation methods, used by Core? Where can I read about them? The 3d C++ method is a novel approach to single-camera, glint-free 3D eye model fitting including corneal refraction. You can read about it here: https://perceptual.mpi-inf.mpg.de/files/2018/04/dierkes18_etra.pdf

For pupil size estimation, the other option provided by Pupil Capture is 2d Pupil Size measured in pixels. This, obviously, is not as robust as the 3d eye model.

  6. What is the relation between timestamp and pts? Please elaborate on what you mean by pts.
user-98789c 06 January, 2021, 22:22:41

About question number 6: in world_timestamps.csv we have one column for timestamps in seconds and another column for pts. I'm guessing it stands for pupil timestamp? And what is their relation?

nmt 29 December, 2020, 17:53:15
  8. How are x and y positions in eye image frames normalized? See here for details of the coordinate systems of each camera: https://docs.pupil-labs.com/core/terminology/#coordinate-system. The x and y positions in the eye image frames are normalized using the height and width of each respective frame.

  9. What information can be extracted from ellipse-..., sphere-..., circle-3d-..., projected-sphere-...? You can see explanations for all of these data here https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter

  10. What information is included in the .csv file for each surface? What do img-to-surf-trans, surf-to-img-trans, etc. mean? You can read about the surface tracker export files and what they contain here https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker

nmt 29 December, 2020, 17:53:50
  11. What are tokens? You likely will not need to interact with these files. But here is a brief description: available data is announced between different plugins. Data (e.g. pupil positions or gaze) is produced by some plugins and consumed by others. Data consumers need to know if data producers provide new data or if the data producer changed (e.g. the user switched from Pupil From Recording to Offline Pupil Detection). All announced data has a unique token, so listeners know if they receive multiple announcements for the same data and can ignore redundant announcements.

  12. In an experiment where we want to investigate people's confidence in their decision making using their pupil size (based on some hypotheses about an existing relationship), what exact .csv files (or other types of files) should information be extracted from? Look under the heading ‘diameter_3d’ in pupil_positions.csv.

  13. Are there ready python scripts available, only for the purpose of demonstrating what can be done with the data? With the wealth of raw data that Pupil Core exposes to the user, there are many options for analysis with Python scripts. It would be worth checking out the Pupil Community repository for ideas: https://github.com/pupil-labs/pupil-community

nmt 29 December, 2020, 17:58:33

@user-98789c I will need to liaise with the team about question 7 - confidence

user-467cb9 29 December, 2020, 18:05:35

@nmt Is there code or an algorithm somewhere that allows displaying the correct gaze position on video from the world camera? I wrote some code in C#; I can connect to Pupil Capture through ZeroMQ, but I cannot get the same gaze position in real time as in Pupil Capture. I'm reading the norm_pos array in the gaze.3d.01. message.

nmt 29 December, 2020, 18:21:10

Hi @user-467cb9. The method that @user-d8853d described should work. It essentially describes the denormalize function. What are you drawing/rendering the gaze point with?
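
Written out as a small Python helper (the C# version is analogous):

def denormalize(norm_pos, frame_width, frame_height):
    # Pupil's normalized coordinates have their origin at the bottom left,
    # image pixel coordinates at the top left; hence the flipped y axis.
    x = int(norm_pos[0] * frame_width)
    y = int((1.0 - norm_pos[1]) * frame_height)
    return x, y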

user-467cb9 29 December, 2020, 18:23:31

I'm using EmguCV (OpenCV for C# library)

user-467cb9 29 December, 2020, 18:24:04

the circle() method on a Mat class object

user-467cb9 29 December, 2020, 18:24:18

I can see video from the main camera in real time

user-467cb9 29 December, 2020, 18:25:19

but my gaze pointer moves slowly and doesn't have the same position as in Pupil Capture...

user-467cb9 29 December, 2020, 18:25:41

Does Pupil Capture send the gaze position in real time?

user-467cb9 29 December, 2020, 18:26:02

With a frequency of at least 30 fps?

nmt 30 December, 2020, 08:52:07

@user-467cb9 Depending on your settings, the gaze positions could be available at a higher FPS than the world camera, so you could get multiple gaze points for every world camera frame. Because your gaze point is moving slowly, it could be that you aren't drawing all of the gaze points available at each world index, but rather spreading them out over the world camera frames.
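
A minimal Python sketch of that idea: drain every queued gaze message before rendering each world frame, instead of reading a single message per frame (which lets the queue back up and the drawn gaze lag behind). It assumes a connected SUB socket named sub, subscribed to gaze, as in the earlier snippets:

import zmq
import msgpack

def drain_gaze(sub):
    gaze_points = []
    while True:
        try:
            topic, payload = sub.recv_multipart(flags=zmq.NOBLOCK)
        except zmq.Again:
            return gaze_points  # queue empty: draw all of these this frame
        gaze_points.append(msgpack.loads(payload)["norm_pos"])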

nmt 30 December, 2020, 15:49:38

@user-98789c 7. How is confidence calculated? The confidence value is a weighted average of 2d confidence and a confidence value that measures how well the model fits to the current pupil observation. 2d confidence is measured from the number of pixels on the circumference of the fitted ellipse.

End of December archive