core


user-e1d9a6 01 January, 2020, 21:18:47

Hey! I'm using this software to do the same thing as https://github.com/pupil-labs/pupil/issues/1500

user-e1d9a6 01 January, 2020, 21:19:54

But I have a small problem. I have left a message below. Would anyone please help me with this? Thank you so much for your help!!!

user-c5fb8b 02 January, 2020, 08:50:20

Hi @user-e1d9a6 I answered your issue on GitHub: https://github.com/pupil-labs/pupil/issues/1780

user-dae891 02 January, 2020, 21:52:51

Hi, I get two gaze points in the same frame when I use Pupil Player to do offline gaze calibration. Does anyone have any idea?

Chat image

wrp 03 January, 2020, 02:39:52

Hi @user-dae891 there are likely multiple gaze positions per world video frame, especially if the world video is capturing at 30Hz and eye cameras at 200Hz - this is to be expected. BTW what version of Pupil Player are you using?
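
For illustration, at 200Hz eye cameras and a 30Hz world camera you would expect roughly 200/30 ≈ 6-7 gaze datums per world frame. A minimal sketch for counting them from a Pupil Player export is below; the file and column names are taken from the Raw Data Exporter docs and should be verified against your own export.

```python
# Count gaze datums per world frame from a Player export (assumed layout).
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
world = pd.read_csv("exports/000/world_timestamps.csv")

# Assign each gaze timestamp to the world frame that precedes it.
frame_idx = np.searchsorted(world.iloc[:, 0].values,
                            gaze["gaze_timestamp"].values) - 1
print(gaze.groupby(frame_idx).size().describe())
```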

user-dae891 03 January, 2020, 23:15:52

Seems like if I press the button "activate gaze", two gaze positions appear. I am using version 19

user-dae891 03 January, 2020, 23:16:05

@wrp

user-abc667 04 January, 2020, 00:53:55

@wrp @papr I'm using the headset with its wide FOV camera. The camera intrinsics estimation -> show undistorted image does a very nice job of straightening lines. Is this something that can be applied offline/during playback as well, or does the original recording need to be done with this on? I ask mainly because it appears to be a sizable load on the CPU and we have to run with a laptop that is adequate but not amazingly powerful. Thanks.

user-5ef6c0 05 January, 2020, 00:49:55

Hello everyone. I received my pupil core a couple weeks ago, but just now got the chance to use it. It seems one of my eye cameras is upside down. Is there an easy way to fix this?

user-5ef6c0 05 January, 2020, 00:57:01

Fixed

papr 06 January, 2020, 09:36:05

@user-dae891 It looks like you are referring to the "activate gaze" button in the gaze mapping menu of the Offline Calibration plugin. This plugin allows you to calibrate multiple gaze mappers for the same time span, e.g. one with manual correction and one without. This can be used for comparison between two different gaze mappers. But this is a special case. Usually you only want one gaze mapper per time span. You can deactivate or delete gaze mappers from the same menu to remove the additional gaze data.

papr 06 January, 2020, 09:39:23

@user-abc667 The Camera Intrinsics Estimation plugin has the option to undistort the live video stream, yes. But the undistorted images are only overlaid; they are not written to the video file. You can use the iMotions Exporter in Pupil Player, which exports a video file containing the undistorted video.

papr 06 January, 2020, 09:57:08

@user-b37f66 1) The human eye uses multiple mechanisms to gaze at a specific distance. Unfortunately, most of these cannot be observed directly using cameras. Our depth estimation is based on only one of these mechanisms: vergence (https://en.wikipedia.org/wiki/Vergence). The issue with vergence is that small errors in gaze estimation result in bigger depth estimation errors the further the gaze target is away.

Usually, depth estimation up to 1.5 meters should be fairly accurate, but it loses accuracy quickly at distances beyond 1.5 meters.

At what distance are you trying to estimate depth?
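
A back-of-the-envelope sketch of why vergence-based depth degrades with distance (illustration only, not Pupil Labs' actual pipeline; the interpupillary distance and gaze error are assumed values):

```python
# Vergence geometry: a target at distance d subtends 2*atan(ipd / (2*d)).
# Propagating a fixed per-eye gaze error shows the depth error growing fast.
import math

ipd = 0.063                      # assumed interpupillary distance (m)
gaze_error = math.radians(0.5)   # assumed 0.5 deg gaze estimation error

for d in (0.5, 1.5, 3.0):
    vergence = 2 * math.atan(ipd / (2 * d))
    # depth recovered if the vergence angle is underestimated by the error
    d_est = ipd / (2 * math.tan((vergence - gaze_error) / 2))
    print(f"{d:.1f} m target -> {d_est - d:+.2f} m depth error")
# prints roughly +0.04 m at 0.5 m, +0.39 m at 1.5 m, +2.13 m at 3.0 m
```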

papr 06 January, 2020, 10:09:19

@user-b37f66 2) The Depth Accessor plugin is just a template/starting point for developers. You will have to modify it according to your needs.

user-ddf37a 06 January, 2020, 12:55:46

Hello, where do I find the starting time for every individual recording?

papr 06 January, 2020, 13:05:50

@user-ddf37a In the info.json file (or, in old recordings, info.csv)
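
A minimal sketch for reading the start time programmatically; the exact filename and key names vary by recording version (info.csv in old recordings, info.player.json in recordings opened by newer Players), so verify them against your own recording:

```python
# Read the recording start time (filename and key are assumptions).
import json
from datetime import datetime, timezone

with open("recording/info.player.json") as f:  # example path
    info = json.load(f)

# "start_time_system_s" is assumed to be a Unix timestamp in seconds.
start = datetime.fromtimestamp(info["start_time_system_s"], tz=timezone.utc)
print("Recording started at", start.isoformat())
```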

user-119591 06 January, 2020, 13:11:54

I need help: I need to control Pupil Capture from Matlab to synchronize it with video, but I have no idea how to do it. Can you help me with it somehow?

user-119591 06 January, 2020, 13:12:23

Can I somehow change the application code to make it record as soon as it starts?

user-c5fb8b 06 January, 2020, 13:14:25

Hi @user-119591 we have a couple of scripts for remote controlling Pupil from Matlab. Take a look here: https://github.com/pupil-labs/pupil-helpers/tree/master/matlab You can send e.g. a command to start recording to Pupil. Look at the examples in https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/pupil_remote_control.m specifically.

user-119591 06 January, 2020, 13:17:11

thanks!!!

user-119591 06 January, 2020, 14:22:31

https://zeromq.org/download is this what I need for the Matlab part?

papr 06 January, 2020, 14:23:23

@user-119591 please follow the readme at the bottom of the pupil-helper link above

papr 06 January, 2020, 14:23:35

it links you to the necessary repositories

user-119591 06 January, 2020, 14:37:28

At some point it says 'download ZMQ' but doesn't say exactly how. I will try what I found; hopefully it will work.

papr 06 January, 2020, 14:38:20

@user-119591 are you referring to "Make sure you have ZMQ 4.0.x installed." in https://github.com/fagg/matlab-zmq ?

user-119591 06 January, 2020, 14:42:43

yeah

papr 06 January, 2020, 15:22:54

@user-119591 are you on windows or mac/linux?

papr 06 January, 2020, 15:23:50

What they mean by that is that you install zmq via one of these ways: https://zeromq.org/download/#windows

user-119591 06 January, 2020, 15:26:22

windows

papr 06 January, 2020, 15:27:07

This will install a library file (ending with .a on macOS/Linux or .lib on Windows). Afterwards, edit the config.m file to include the path to this file.

A word of advice: the Matlab setup is a bit of a hassle to set up. If you have the option to implement your script in Python, I would recommend switching to it.

papr 06 January, 2020, 15:27:53

Also, the Matlab setup has not been tested on Windows yet, and we are not able to give support on it because we do not have access to a Windows Matlab environment.

papr 06 January, 2020, 15:28:18

Btw see the example config file for windows: https://github.com/fagg/matlab-zmq/blob/master/config_win.m

user-119591 06 January, 2020, 15:33:03

I saw in the Matlab code: zmq.core.ctx_new(); that is a class, no? But there is no zmq class.

user-5ea855 06 January, 2020, 16:01:33

Anybody using the Core system with monkeys' eyes?

papr 06 January, 2020, 16:03:02

@user-119591 I think that call resolves to this: https://github.com/fagg/matlab-zmq/blob/master/lib/%2Bzmq/%40Context/Context.m#L9-L17

papr 06 January, 2020, 16:03:33

@user-5ea855 You can check our citation list for relevant papers: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?ts=576a3b27#gid=0

user-5ea855 06 January, 2020, 16:08:05

So the Core system does work on monkey pupils?

papr 06 January, 2020, 17:21:50

@user-5ea855 This paper evaluates multiple eye tracking algorithms for their accuracy on non-human primates, including the Pupil Core pipeline: https://www.sciencedirect.com/science/article/pii/S0165027016301467

user-5ea855 06 January, 2020, 17:25:56

@papr Thanks!

user-5ef6c0 06 January, 2020, 19:29:48

Hello everyone. I am designing a study to record gaze data during a creative model-making task using blocks (in the context of architectural model-making). My aim is to: a) assess whether participants make volitional look-ahead eye movements before grabbing and/or placing the blocks; and b) determine if they spend time just "reflectively" looking at their creations. In my mind, this seemed simple enough to execute, but now I don't even know where to begin with the calibration process and so on (e.g. should I use markers? Do offline calibration?). Anyway, is there anyone here willing to share their wisdom and have a quick conversation to point me in the right direction?

wrp 07 January, 2020, 04:03:33

@user-5ef6c0 welcome to the Pupil community 👋 Your study sounds very interesting, thanks for sharing! Some notes and questions:

Setup - I am assuming that the majority of the model making activity will be happening on a table top, correct? If so, then you want to make sure that the scene camera of your Pupil Core headset is adjusted to point more downward to capture the action. I would also recommend that you test out eye camera angles so that you are capturing the predominant area of movements of the eyes for this task.

Calibration - I would suggest that you try single marker calibration with a printed/manual marker: https://docs.pupil-labs.com/core/software/pupil-capture/#single-marker-calibration - I would also suggest that you start recording before calibration. This way you can re-calibrate post-hoc if needed.

Markers and surfaces - Markers in your scene would help you define Areas of Interest (what we call "surfaces" in Pupil Capture and Player). This could help you to analyze where the participant is looking. Given your task, you might have markers on the table top that could be used to define an area where blocks are stored (e.g. model making material) vs area where model is built so that later you can understand how much time was spent searching for the "next piece" vs building the model. There could surely be other uses of markers, but this is just a quick example.

Hope this helps 😸

user-3259bd 07 January, 2020, 14:34:25

@papr I wanted to know up to which version we can expect the same recording format as attached. I use this structure (with folder names as numbers - 001, 002, etc.) as input to my code; that's why I wanted to know.

Chat image

user-5ef6c0 07 January, 2020, 16:22:02

@wrp Thank you so much for your comments. I will reply to your comments in more depth later after I have read about the things you mention. For now, I am having issues with pupil detection and with calibration. Have tried different things without much luck. I have followed the online guide instructions. Are there any other resources available?

user-abc667 07 January, 2020, 16:23:01

@papr "You can use the iMotions Exporter in Pupil Player which exports a video file containing the undistorted video." Perfect, thanks.

user-b37f66 07 January, 2020, 20:43:05

Hi and thank you for the answer @papr . 1) I'm trying to estimate objects at depths of 500 mm - 1500 mm. I'm afraid that my screen marker calibration isn't right. 2) Can you direct me how I can access the depth data of Realsense frames? In the description of the plugin it is written: "Example plugin that shows how to access the 16 bit depth data of Realsense frames in Capture" https://github.com/pupil-labs/pupil-community

user-5ea855 07 January, 2020, 20:57:33

Can I use a 3rd-party webcam with Pupil Capture?

user-abc667 08 January, 2020, 00:13:18

@papr Further on using the iMotions Exporter: I selected the iMotions plugin and the system said it was launched. I did an export, but got nothing different from an export without that plugin. In the iMotions plugin control window there is a Tasks list, but it's empty and it's not obvious how to add things to it. What am I missing? Thanks.

papr 08 January, 2020, 07:48:18

@user-5ea855 Third-party cameras - Pupil Capture supports third-party USB cameras that fulfill the following criteria:
- UVC compatible [1] (chapters below refer to this document)
- Support Video Interface Class Code 0x0E CC_VIDEO (see A.1)
- Support Video Subclass Code 0x02 SC_VIDEOSTREAMING (see A.2)
- Support for the UVC_VS_FRAME_MJPEG (0x07) video streaming interface descriptor subtype (see A.6)
- Support the UVC_FRAME_FORMAT_COMPRESSED frame format

[1] http://www.cajunbot.com/wiki/images/8/85/USB_Video_Class_1.1.pdf
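
A quick way to sanity-check a third-party camera is pyuvc (https://github.com/pupil-labs/pyuvc), the library Pupil Capture uses for camera access; a rough sketch:

```python
# List UVC devices and try to grab a frame from the first one.
# A camera that does not show up here, or cannot deliver MJPEG frames,
# will not work in Pupil Capture.
import uvc

devices = uvc.device_list()
for dev in devices:
    print(dev["name"], dev["uid"])

cap = uvc.Capture(devices[0]["uid"])  # try the first camera
frame = cap.get_frame_robust()        # raises if no frame can be fetched
print(frame.width, frame.height)
cap.close()
```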

papr 08 January, 2020, 07:54:35

@user-abc667 There should be an iMotions subfolder in the export. But from your description it looks like your recording was not eligible for the iMotions export. 😕 In that case, you should see a warning pop up when you start the export. The exporter requires 3d gaze data to be available and currently does not support Pupil Invisible recordings.

Please let us know if there is no warning and your recording actually fulfils these requirements so we can help to resolve the issue.

user-d9a2e5 08 January, 2020, 14:43:30

I downloaded ZeroMQ, but it is version 4.3.2; should it work properly with Matlab?

papr 08 January, 2020, 14:44:03

@user-d9a2e5 I do not know. I can only recommend to follow the instructions as closely as possible.

user-d9a2e5 08 January, 2020, 14:45:21

I can't download the 4.0 version; can you tell me whom I need to talk to about the Matlab toolbox part?

user-c5fb8b 08 January, 2020, 14:48:29

@user-d9a2e5 You can at least try if it works with 4.3.2 (this shouldn't take too long). If it does not work, you will have to contact the people who made the matlab zmq port. The best way to communicate would be to open an issue on the GitHub repository: https://github.com/fagg/matlab-zmq/issues

user-d9a2e5 08 January, 2020, 14:50:12

I tried to make it work, but I am not sure I downloaded zmq right.

user-c5fb8b 08 January, 2020, 14:56:45

@user-d9a2e5 As papr already mentioned, we unfortunately cannot provide support for MATLAB issues on Windows. Is using MATLAB an absolute necessity for you? Otherwise I'd recommend you switch to Python, as the ZMQ bindings there work like a charm.

user-d9a2e5 08 January, 2020, 14:58:09

It's not an absolute necessity and I will change to Python if nothing works 🙂 thanks

user-eaf50e 08 January, 2020, 15:03:54

Hello! Is it possible to obtain the pose (orientation + position) of AprilTags?

papr 08 January, 2020, 15:06:12

@user-eaf50e to my knowledge, they are currently not published via the network API, so you would have to write a plugin to access them

papr 08 January, 2020, 15:06:47

alternatively, you can use the apriltag detector library directly, without Capture/Player

user-eaf50e 08 January, 2020, 15:11:36

Okay, then I would need to run from source, right?

user-d9a2e5 08 January, 2020, 15:12:13

If I would like to use Python to control Pupil Capture, what do you recommend I do?

user-d9a2e5 08 January, 2020, 15:14:27

I am new to Python, never used it, so I am not sure where to begin 😮

papr 08 January, 2020, 15:26:15

@user-d9a2e5 you can check out our helpers, which are examples for how to access the network api via python. e.g. this script to start and stop recording: https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_remote_control.py

papr 08 January, 2020, 15:27:19

The Matlab examples are mirrored implementations of the Python examples. You will need to install zmq and msgpack to run the example script: pip install zmq msgpack
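
For reference, a condensed version of that pupil_remote_control.py helper (start and stop a recording via Pupil Remote on its default port):

```python
# Start and stop a Pupil Capture recording over the network API.
import time
import zmq

ctx = zmq.Context()
socket = ctx.socket(zmq.REQ)
socket.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

socket.send_string("R")      # "R" starts a recording
print(socket.recv_string())
time.sleep(5.0)              # record for five seconds
socket.send_string("r")      # "r" stops it
print(socket.recv_string())
```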

papr 08 January, 2020, 15:28:01

@user-eaf50e not necessarily. Even though it can help to run from source during development, the plugin can be run using the bundle, too

papr 08 January, 2020, 15:28:46

@user-d9a2e5 @user-eaf50e I would recommend reading https://docs.pupil-labs.com/developer/core/overview/ to both of you 🙂

user-d9a2e5 08 January, 2020, 15:36:08

Thanks! 🙂 I will try to use it.

user-abc667 08 January, 2020, 15:48:23

@papr We are indeed using 2d gaze detection as we want the most accurate gaze location -- we're tracking people working on a paper maze form on the table, and <1deg means locating the gaze within a specific maze path. (I hit return by accident before finishing.) Will try the 3rd party idea. Also: we record the calibration too; can we re-analyze everything offline but with 3d gaze location, just so we can run the iMotions exporter?

papr 08 January, 2020, 15:50:18

@user-abc667 you can make a copy of the iMotions exporter code, remove all the unnecessary code, and run it as a third-party plugin. This way you can get around the requirements

user-abc667 08 January, 2020, 15:59:25

@papr Thanks; hacking someone else's code is always a time sink, but we'll do it if need be. What about this: as we're recording the calibration, can we re-analyze everything offline but with 3d gaze location, just so we can run the iMotions exporter? All I really want from it is the rectified image so we can do image analysis on individual frames. tnx.

papr 08 January, 2020, 16:01:41

@user-abc667 yes, that would be possible

user-eaf50e 08 January, 2020, 16:02:09

Thanks 🙂 yes, I read it, but I'm a bit lost on how to access, from a plugin, e.g. x.detect(img, estimate_tag_pose=False, camera_params=None, tag_size=None), so I can set estimate_tag_pose = True without writing a new surface detection method.

papr 08 January, 2020, 16:02:54

@user-abc667 in case that you did not record a calibration, you can set some dummy natural feature calibration points to create a dummy calibration. See this example: https://www.youtube.com/watch?v=mWyDQHhm7-w&list=PLi20Yl1k_57rlznaEfrXyqiF0sUtZMMLh&index=3

papr 08 January, 2020, 16:03:29

@user-eaf50e In this case just run from source and change that line 🙂

user-abc667 08 January, 2020, 16:34:31

@papr Thanks, will keep that in mind.

user-aaa87b 08 January, 2020, 17:40:51

Could somebody please explain the difference between the 2D and 3D pupil detectors in Capture? What are the appropriate uses of the different detectors? Thank you!

papr 08 January, 2020, 17:42:03

@user-aaa87b Please see the references in our terminology section: https://docs.pupil-labs.com/core/terminology/#pupil-positions

user-aaa87b 08 January, 2020, 17:42:58

@papr Thank you, I'll check that!

user-5ef6c0 08 January, 2020, 18:16:04

How can I check if my Pupil Core has a 120Hz or 200Hz eye camera?

user-5ef6c0 08 January, 2020, 18:17:02

I ordered 200Hz, but in the Pupil Capture eye window, the max frequency in the dropdown menu is 120

user-5ef6c0 08 January, 2020, 18:17:21

Also, the image looks kinda blurry, but I haven't dared to refocus it as I thought it was the 200Hz camera

papr 08 January, 2020, 18:17:22

@user-5ef6c0 please change the resolution to 192x192 pixels

user-5ef6c0 08 January, 2020, 18:17:46

that did the trick, thank you.

papr 08 January, 2020, 18:18:15

The blurriness also comes partially from upscaling the image. 200x200 is actually quite tiny 🙂

user-5ef6c0 08 January, 2020, 18:19:56

How does resolution affect detection? 400x400 means more accuracy?

papr 08 January, 2020, 18:21:51

@user-5ef6c0 Actually, detection is better with 200x200 in some cases*. There is not much difference in detection other than the increased frame rate for the 200x200 resolution.

* in cases with large pupils (e.g. dark VR environment)

user-5ef6c0 08 January, 2020, 18:22:17

@papr I see, thank you for the clarification.

user-5ef6c0 08 January, 2020, 20:15:48

@papr In terms of eye camera focusing/sharpness, how does this look? 192x192

Chat image

papr 08 January, 2020, 20:48:00

@user-5ef6c0 looks pretty good already. I would recommend repositioning the cameras slightly such that the eye is more centered in the field of view. This way the risk of the pupil leaving the field of view during eye movements is lower. I would also slightly increase the exposure time to brighten up the image a bit, thereby increasing the contrast between pupil and environment.

user-bc8b26 09 January, 2020, 03:48:19

We noticed that the camera line is cut. Could you repair it?

wrp 09 January, 2020, 03:49:51

@user-bc8b26 please send an email to sales@pupil-labs.com with details about your Pupil Core headset (ideally order ID if possible) and our team will follow up regarding repair/replacement options.

user-588603 09 January, 2020, 12:23:06

Hey guys. I'm trying to install and compile the source code on Windows 10 according to this: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md When first starting run_capture.bat, it told me that the "torch" Python dependency couldn't be found. Afaik there is no instruction to install torch in the first link. So I went to https://pytorch.org/get-started/locally/, selected "Stable 1.3, Windows, pip, Python 3.6, CUDA: None" (my PC doesn't have an NVIDIA card, just some lazy CPU-based GPU) and ran:

pip3 install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https://download.pytorch.org/whl/torch_stable.html

Now run_capture did build and launch, but the eye processes crash with:

eye1 - [INFO] numexpr.utils: NumExpr defaulting to 8 threads. eye1 - [ERROR] launchables.eye: Process Eye1 crashed with trace: Traceback (most recent call last): File "C:\work\pupil\pupil_src\launchables\eye.py", line 168, in eye from pupil_detectors import Detector_2D, Detector_3D, Detector_Dummy ImportError: cannot import name 'Detector_2D'

I suspect that the detector relies on torch and my torch install is wrong. What would be the right way to install torch?

user-588603 09 January, 2020, 12:25:38

Addendum: I did modify eye.py, which is the reason for the whole endeavour; I need additional code in eye.py. So line 168 might be different in your copy; for me it's the line "from pupil_detectors import Detector_2D, Detector_3D, Detector_Dummy". The code I added to eye.py is working; I already tested it on other eye tracker systems we have. I'm now just trying to set up system 3 and have problems.

user-588603 09 January, 2020, 12:48:33

As far as I understand, "pip install pupil-detectors" should have placed (and has already placed) the 2d detectors in C:\Python36\Lib\site-packages\pupil_detectors\detector_2d, and they are there. I don't understand why they can't be found. The system path contains "C:\Python36\Scripts\", "C:\Python36\", "C:\work\pupil\pupil_external", and "C:\Python36\Lib\site-packages\torch\lib" related to Pupil Labs atm.

papr 09 January, 2020, 13:07:07

@user-588603 The detectors do not actually depend on torch. In fact, we have removed the torch requirement completely.

user-588603 09 January, 2020, 13:07:39

So I was building old code?

papr 09 January, 2020, 13:08:11

@user-588603 I think this is the case.

user-588603 09 January, 2020, 13:08:19

I'll start from scratch. Thanks for now.

papr 09 January, 2020, 13:09:53

@user-588603 The pip pupil detectors installation should be working via a wheel

papr 09 January, 2020, 13:10:19

So if this is not working, there might just be something wrong with that single step

user-588603 09 January, 2020, 13:14:38

taking a look

user-588603 09 January, 2020, 13:39:21

@papr It is working - one of the rare occasions where something actually works! 🙂 Thanks a lot!

user-d9a2e5 09 January, 2020, 14:41:08

I tried to combine the Matlab and Python code that you gave me and it worked! Thanks :)!

user-aaa87b 09 January, 2020, 14:45:03

@papr I've read the two papers referenced in the link you gave me, and it's OK, thank you. However, I'm still unsure about which detection method should be used and why. For instance, our experiment consists just of the observation of pictures shown on a monitor in front of the subject. Would using the 2D or the 3D method make any difference? Thanks!

user-b8b425 09 January, 2020, 16:10:54

@papr Hi, I'm trying to use multiple eye trackers simultaneously (we need them to be highly synchronized) for our research. Could you please tell me: if I use the network API, would every tracker be highly synchronized? Or is there an easy way to manage them on a single Linux machine (e.g. 4 eye trackers connected to one Linux machine)? (I cloned your repo and read part of it. I found it is not easy to write my own version of "main.py" using some key components, as everything is based on plugins interacting via "g_pool" and multi-threading events. I am not able to find comprehensive documentation either. Besides, I think it would also be interesting if I could mount the eye tracker on a Raspberry Pi, but it seems unrealistic; do you guys have any idea about it?)

user-c5fb8b 09 January, 2020, 16:53:27

@here 📣 Pupil Software Release v1.21 📣 This release adds support for Pupil Invisible audio recordings and fixes a few stability issues.

Check out the release page for more details and downloads: https://github.com/pupil-labs/pupil/releases/tag/v1.21

user-f497a5 09 January, 2020, 17:12:27

Well done and a huge thanks to the Pupil Labs team for v1.21, especially the Invisible audio part. Yet to update, but looking forward to this feature. Sometimes you just have to give credit where it is due. Happy New Year to all at Pupil Labs HQ helping us shine at what we do with eye tracking technology. Cheers.

user-bcbb4e 09 January, 2020, 18:42:59

Hello! I am having difficulty opening the Core apps for Mac. My Mac tells me that it won't open because the app might be malicious software. What should I do? It also shows the following:

user-bcbb4e 09 January, 2020, 19:02:29

Chat image

user-abc667 10 January, 2020, 00:57:29

@papr Next step in the adventure (trying to export intrinsics-corrected video via the iMotions exporter). I dropped onto Player a video with a calibration at the start and 2D pupil info. I set Player to Offline Pupil Detection, 3d, and Gaze from Recording. Offline Pupil Detection runs to completion, but when I try to export with the iMotions export plugin selected, I get the message "Currently the iMotions export only supports 3D gaze data." In the resulting export folder I get pupil_positions.csv with data in all the columns (i.e. including the 3d info). The gaze_positions.csv, however, has 2d data. Sounds like this is the problem? Should I have used "Gaze from offline calibration"? Looked at it but couldn't quite figure it out. If this is the issue, how do I generate 3D gaze data from my video? Give me a hint and I'll go dig into the documentation as appropriate. Thanks as always.

papr 10 January, 2020, 07:07:13

@user-abc667 correct, gaze loaded from recording is not overwritten by offline pupil detection. You will have to calibrate. Please checkout our YouTube tutorials on this topic.

[email removed] in case that you did not record a calibration, you can set some dummy natural feature calibration points to create a dummy calibration. See this example: https://www.youtube.com/watch?v=mWyDQHhm7-w&list=PLi20Yl1k_57rlznaEfrXyqiF0sUtZMMLh&index=3

papr 10 January, 2020, 07:09:15

@user-bcbb4e yes, this is a known issue. We are working on integrating Apple's notarization into our bundling procedure such that we can avoid this issue. Until then, right click the application and then click Open. A similar dialogue will show, but with the option to start the application.

user-bcbb4e 10 January, 2020, 13:37:49

Thank you, I got it running on my mac. I have a couple students who could not open it either on their PCs. Is there a solution for the PC?

papr 10 January, 2020, 13:38:19

@user-bcbb4e What is the exact issue on the Windows computer?

papr 10 January, 2020, 13:39:37

@user-bcbb4e And please let us know which version your students are using. We recommend to use the most recent release https://github.com/pupil-labs/pupil/releases/latest

user-bcbb4e 10 January, 2020, 13:44:13

I will find out, but I think it just didn't open, and of course there were some security-issue pop-ups. I think they were using Windows 10 and downloaded v1.21. I believe it was the latest on either GitHub or your main website.

papr 10 January, 2020, 13:45:17

@user-bcbb4e If this is the case, they will have to add an exception to their security software in order to start the application. Apologies for the inconvenience. 😕

user-bcbb4e 10 January, 2020, 13:46:11

It's all good. I did tell them to stop at a certain point in case their antivirus was picking something up.

user-abc667 10 January, 2020, 17:00:24

@papr Thanks again. Two questions: (a) given that my video includes a (single marker) calibration, is the idea that I go back through that and do what you did in the video you pointed me to? (b) If we decide to do the undistortion ourselves using the world.intrinsics file, is there documentation on that file's format? I realize it's pretty simple, but it would save some trial and error to know what info is there and how it is encoded. I checked the documentation and didn't see it. Thanks.

papr 10 January, 2020, 17:05:50

@user-abc667 if there are markers already, you can detect them automatically as shown in the 2nd (?) video of the Playlist. But it is not necessary to perform an accurate calibration if you only want to perform the iMotions export for the video.

papr 10 January, 2020, 17:07:05

@user-abc667 The format is not officially documented, correct. But you can read the file using msgpack to get an idea of its structure. The important parts are the camera matrix and the distortion coefficients, which can be passed to OpenCV to undistort single images.
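
A hedged sketch of that workflow; the top-level key layout (resolution strings mapping to "camera_matrix" and "dist_coefs") is an assumption to verify against your own file:

```python
# Load world.intrinsics via msgpack and undistort a frame with OpenCV.
import msgpack
import numpy as np
import cv2

with open("world.intrinsics", "rb") as f:
    intrinsics = msgpack.unpack(f, raw=False)

model = intrinsics["(1280, 720)"]        # pick your recording's resolution
K = np.array(model["camera_matrix"])     # 3x3 camera matrix
D = np.array(model["dist_coefs"])        # distortion coefficients

img = cv2.imread("frame.png")            # any exported world frame
cv2.imwrite("frame_undistorted.png", cv2.undistort(img, K, D))
```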

user-a10852 10 January, 2020, 19:13:29

Hello, can external eye video sources also be used in Pupil Player for pupil detection? Thanks

user-c629df 10 January, 2020, 23:29:54

I'm running experiments with Psychtoolbox in MATLAB and Pupil Labs. I hope to control the Pupil software and record trial numbers in the eye tracking data by integrating MATLAB code into the experiment. Any suggestions on how I can achieve this and what repository I should look at? @papr

user-abc667 11 January, 2020, 00:52:25

@papr Just FYI, finally got iMotions to export, using circle locations in the recording to generate a 3d mapping; no interaction needed given the presence of the target in the video. A clear improvement when the barrel distortion is removed. Thanks again for your help.

user-b37f66 11 January, 2020, 22:36:43

Hi @papr , posting my follow-up again; I'm really looking forward to your response. 1) I'm trying to estimate objects at depths of 500 mm - 1500 mm. I'm afraid that my screen marker calibration isn't right. 2) Can you direct me how I can access the depth data of Realsense frames? In the description of the plugin it is written: "Example plugin that shows how to access the 16 bit depth data of Realsense frames in Capture" https://github.com/pupil-labs/pupil-community Thanks.

user-c5fb8b 13 January, 2020, 08:10:11

Hi @user-c629df we have a couple of scripts for remote controlling Pupil from Matlab. Take a look here: https://github.com/pupil-labs/pupil-helpers/tree/master/matlab You can send e.g. a command to start recording to Pupil. Look at the examples in https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/pupil_remote_control.m specifically.

A word of advice: the Matlab setup is a bit of a hassle to set up. Also, it has not been tested on Windows yet, and we unfortunately are not able to give support on it because we do not have access to a Windows Matlab environment.

user-c5fb8b 13 January, 2020, 08:14:34

Hi @user-a10852 Do you just want to detect pupils in images, or do you want to use all the other functionality of Pupil Player (calibration, mapping gaze to world video, ...) as well? If you just want pupil detection, you can use our pupil detectors, which we wrapped into a standalone Python library: https://github.com/pupil-labs/pupil-detectors

papr 13 January, 2020, 08:17:28

@user-a10852 To add to @user-c5fb8b 's statement: You can follow our documentation regarding the recording format: - https://docs.pupil-labs.com/core/software/recording-format/#pupil-core - https://docs.pupil-labs.com/developer/core/recording-format/ and create your own recording based on your externally recorded videos.

user-4b473e 13 January, 2020, 11:19:22

Hello, Just got a new Core device, and one of the cameras looks like shown here. Is there some hardware issue?

Chat image

user-755e9e 13 January, 2020, 12:46:30

@user-4b473e this issue can be resolved by selecting "check stripes" in the UVC Manager eye window. Please also contact [email removed] with all the information regarding the issue and your order; this way we will be able to offer you an appropriate solution.

user-d9a2e5 13 January, 2020, 13:22:57

Can I control Pupil Player?

papr 13 January, 2020, 13:31:22

@user-d9a2e5 Could you elaborate on the means that you want to control player with? Do you mean controlling it remotely using a network api?

user-d9a2e5 13 January, 2020, 13:37:13

I mean controlling it with Python, to get the Excel file with pupil size and time data

user-d9a2e5 13 January, 2020, 13:38:07

I want to make a script with which I can control the recording and maybe even get the data, all in one go. If it is not possible, that's okay too

papr 13 January, 2020, 14:23:08

@user-d9a2e5 There are some scripts by the community that export csv files from the Capture recordings directly without opening Player at all, e.g. https://github.com/tombullock/batchExportPupilLabs
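
If you end up using Player's normal export instead, pupil size over time lives in pupil_positions.csv; a minimal pandas sketch (column names per the Raw Data Exporter docs):

```python
# Extract pupil diameter over time from a Player export and save to Excel.
import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")
size_over_time = df[["pupil_timestamp", "diameter", "diameter_3d"]]
print(size_over_time.head())
size_over_time.to_excel("pupil_size.xlsx", index=False)  # needs openpyxl
```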

user-d9a2e5 13 January, 2020, 14:24:30

Thanks! I will check it out.

user-bbfd3f 13 January, 2020, 15:13:44

Hello everyone, does anyone have experience with recording outside while using the Pupil Core headset? For example, tips on what type of place makes the calibration most reliable for the eye tracker?

papr 13 January, 2020, 15:39:23

@user-bbfd3f Recording outside comes with a series of challenges for the Core pipeline. The biggest issue is that there are a lot of IR reflections, especially on sunny days, which can interfere with pupil detection. The second issue is slippage. Being outside implies some kind of movement, which will likely result in slippage of the Core headset in relation to the subject's head. This can decrease gaze mapping accuracy over time.

Further challenges are:
- Changing light conditions: under-/overexposed eye images can result in very poor pupil detection
- Limited amount of movement: since you have to connect the headset to a computer running Pupil Capture, the subject's freedom of movement is limited by the cable length

papr 13 January, 2020, 15:41:29

@user-bbfd3f I would highly recommend using Pupil Invisible instead, which performs better than Pupil Core with respect to these challenges.

user-bbfd3f 13 January, 2020, 15:43:56

Thank you for the information papr!!

user-057596 13 January, 2020, 18:50:51

Hi, a wire connecting one of the pupil cameras of our Pupil Core glasses has come loose, so we aren't able to get any streaming or data from this camera. Is there a wiring diagram that our engineer could use to put the wire back in the right location? Thanks, Gary

user-abc667 14 January, 2020, 00:09:32

@papr (a) Where do I find documentation/best practices for optimizing pupil capture using the UVC Manager and UVC Source windows? (b) At times we see a lot of bright yellow in the eye cam view. Is there an adjustment that will help with this? Thanks.

user-bbfd3f 14 January, 2020, 09:29:18

Sometimes all 3 cameras give a grey screen and say: 'Capture initialisation failed'. Does anyone know how to solve this?

user-c5fb8b 14 January, 2020, 09:34:01

Hi @user-bbfd3f we had an issue like that in the past where an update to the libusbk drivers solved it. Which operating system are you on? It could also be a hardware issue. If we can rule out a driver issue, you should contact info@pupil-labs.com and we can help you with the faulty hardware.

user-bbfd3f 14 January, 2020, 09:51:54

I'm on Windows; I'm sure it is a computer problem and not an eye tracker problem

user-c5fb8b 14 January, 2020, 09:58:59

@user-bbfd3f do you have other computers available to test it on? How often does this problem occur? If you experience the issue next time, please share the log file with us. You can find it in your home folder > pupil_capture_settings > capture.log Note that this file will get overwritten every time you restart capture, so make sure to send or copy it after the issue occurred.

user-bbfd3f 14 January, 2020, 15:28:55

@user-c5fb8b Thanks again for your help, the problem seems to be solved. The main issue at this moment is when I go outside with the ET and the laptop. For this I put the laptop in my backpack. But even though my energy saver is configured so that the laptop should continue to work after shutting the lid, the recordings do not continue to run after I put the laptop away. Does anybody know how to keep recording with the laptop shut?

user-2143a5 14 January, 2020, 16:07:29

I tried this question over in software-dev, but perhaps it's best asked here: can Pupil Capture record gaze overlaid onto a screen recording, as opposed to the world camera view? Thanks for your help!

mpk 14 January, 2020, 16:20:24

@user-057596 please email info@pupil-labs.com with your hardware issue.

user-abc667 14 January, 2020, 23:45:41

@wrp (a) Where do I find documentation/best practices for optimizing pupil capture using the UVC Manager and UVC Source windows? Perhaps I overlooked it, but didn't see it in the docs. (b) At times we see a lot of bright yellow in the eye cam view. What is this and is there an adjustment that will help with this? Or does it not matter? Thanks.

user-c5fb8b 15 January, 2020, 09:17:54

@user-abc667 Regarding a) the UVC settings: from my experience the default settings are usually sufficient to yield the best results. We don't have explicit documentation on how to fine-tune these settings. If you are experiencing accuracy problems with pupil detection, please write us and share screenshots; then we can assist you in optimizing your setup. Regarding b) the bright yellow areas in the eye cam: this might come from the algorithm view that is used to investigate the pupil detector. The last release, v1.20, had an issue where the algorithm view would be displayed instead of the normal eye image. This did not affect performance in any way, however; it was just a visualization issue. Please upgrade to v1.21, as this should fix the issue.

user-c5fb8b 15 January, 2020, 09:23:53

@user-2143a5 Can you elaborate on what you mean specifically by "record gaze overlaid onto a screen recording..."? The gaze needs to be related to some real-world "thing" that you look at, otherwise it becomes meaningless. Note that the gaze needs to be calibrated with reference points in the real-world recording that you want to reference. Normally this is the world camera view. If you were to substitute this somehow, then you will also need to do a calibration via this other video. It's not yet fully clear to me what the exact use case would be. However, you can also open an existing recording in Player and just overlay another video by enabling the "Video Overlay" plugin. The gaze won't be correlated to that overlay though. Please elaborate so I can better point you in the right direction.

papr 15 January, 2020, 13:29:02

@user-abc667 To elaborate on @user-c5fb8b's statement: I would recommend reducing the eye camera's exposure time (either manually or setting it to automatic) in the eye windows' UVC Source menus.

@user-2143a5 Pupil Capture is designed to be used with Pupil Core, a head-mounted eye tracker. If I understand you correctly, you might be talking about a remote eye tracking setup where one calibrates gaze specifically in respect to a computer screen. As @user-c5fb8b mentioned already, Pupil Capture calibrates in respect to the world camera. In order to get gaze in respect to a computer screen, you have to set it up as a "surface" in the Surface Tracker plugin. It allows you to map gaze to planar areas. See our documentation: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
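
A minimal sketch of receiving screen-mapped gaze over the network API, modeled on pupil-helpers' filter_gaze_on_surface.py; the surface name "screen" is whatever you named the surface in the plugin:

```python
# Subscribe to gaze mapped onto a surface named "screen".
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "surfaces.screen")

while True:
    topic, payload = sub.recv_multipart()
    msg = msgpack.unpackb(payload, raw=False)
    for gaze in msg.get("gaze_on_surfaces", []):
        if gaze["on_surf"]:
            print(gaze["norm_pos"], gaze["confidence"])
```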

user-0eb381 15 January, 2020, 15:11:08

Hi, just wondering if there's anyone at pupil available for a quick chat right now?

user-0eb381 15 January, 2020, 15:17:35

My main inquiries about the core model are: compatibility with Tobii software and portable capabilities.

papr 15 January, 2020, 15:19:42

@user-0eb381 I assume that the Pupil Core headset is not compatible with Tobii software.

user-0eb381 15 January, 2020, 15:23:33

Hi @papr, thanks for the speedy reply. Is there any way I could get my hands on a sample export file from the Pupil software to assess potential integration into our current framework?

user-0eb381 15 January, 2020, 15:34:07

Also, I'm currently working within a tight budget and I'd like to assess the feasibility of including the Core model in a mobile lab. Would the Core paired with an Android phone work identically to a Core hooked up to something like a Microsoft Surface Pro?

user-c5fb8b 15 January, 2020, 15:36:47

@user-0eb381 regarding the data export, here's a description of the fields in the csv files that you can export from a recording: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter If you want, I can make a quick recording and share the actual export. But I guess without context that won't offer much more information than the documentation?

user-c5fb8b 15 January, 2020, 15:37:55

Also you can actually download an example recording and open it in Pupil Player: https://docs.pupil-labs.com/core/software/pupil-player/#load-a-recording

user-0eb381 15 January, 2020, 15:38:30

Perfect! Thank you.

user-c5fb8b 15 January, 2020, 15:40:29

(You can find the download link for Pupil Player here at the top: https://docs.pupil-labs.com/core/ )

user-0eb381 15 January, 2020, 15:41:25

Does Pupil have an all-in-one data export file option? Containing eye tracking data points and relative times?

papr 15 January, 2020, 15:42:47

@user-0eb381 The raw data exporter linked above exports csv files for pupil and gaze data, including their positions and their absolute (!) timestamps

papr 15 January, 2020, 15:43:57

The link above lists all exported fields. Also, have a look at our terminology section, since it might differ from terms used by Tobii: https://docs.pupil-labs.com/core/terminology/

user-5ea855 15 January, 2020, 15:44:07

I tried the 200Hz eye camera, but on Windows 10 it runs at around 120-130Hz. Any way to improve it to make it work at 200Hz?

papr 15 January, 2020, 15:44:30

@user-5ea855 I assume that you explicitly set the framerate to 200Hz already? If not, please do so.

user-0eb381 15 January, 2020, 15:45:31

Thanks, I'll read that over.

user-5ea855 15 January, 2020, 15:49:12

Where can I set the framerate? In Pupil Capture?

papr 15 January, 2020, 15:49:36

@user-5ea855 Eye window > UVC Source menu > Framerate selector

user-5ea855 15 January, 2020, 15:49:53

got it! thanks!

papr 15 January, 2020, 15:50:04

@user-5ea855 You might need to set the resolution to 192x192 first

user-5ea855 15 January, 2020, 15:50:18

OK. Thanks.

user-5ea855 15 January, 2020, 15:50:31

Does Nexus 5x work with pupil-mobile?

user-0eb381 15 January, 2020, 15:51:03

This page is very informative; however, I'm having trouble finding a sample output file from the raw data exporter.

user-0eb381 15 January, 2020, 15:51:19

Is there one available that I'm not seeing?

papr 15 January, 2020, 15:51:25

@user-5ea855 Yes, it is known to work. You need to enable the OTG Storage option in the settings first if the cameras do not show up.

user-5ea855 15 January, 2020, 15:51:50

thanks for confirming it!

papr 15 January, 2020, 15:52:00

@user-0eb381 Open the sample recording in Player, hit e and there will be an exports/000 subfolder containing the exported csv files

papr 15 January, 2020, 15:52:57

@user-5ea855 A word of advice: the Nexus 5x is comparatively old already. It might be that you won't be able to record at 200Hz with it due to its limited CPU resources.

user-5ea855 15 January, 2020, 15:54:11

Thanks for advice. I will use it for testing purpose.

user-5ea855 15 January, 2020, 15:57:57

However, do I need to use an OTG cable with the Nexus 5x?

papr 15 January, 2020, 15:59:45

@user-5ea855 yeah, there are USB-C cables that are known to not work. @user-755e9e could you provide a link to a cable that is known to work?

user-755e9e 15 January, 2020, 16:02:38

@user-5ea855 a recommended cable is CHOETECH 2.0 high speed. https://www.choetech.com/product/hi-speed-usb-c-to-usb-c-cable-6.6ft-2m.html

user-5ea855 15 January, 2020, 16:03:32

got it! thanks!

user-0eb381 15 January, 2020, 16:24:37

Is there a way to log events during/post capture?

papr 15 January, 2020, 16:25:31

@user-0eb381 yes, https://docs.pupil-labs.com/core/software/pupil-capture/#annotations
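
For remote use, annotations can also be sent over the network API; a hedged sketch based on pupil-helpers' remote_annotations.py (requires the Annotation plugin to be enabled in Capture):

```python
# Send a timestamped annotation ("event marker") to Pupil Capture.
import time
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")

req.send_string("PUB_PORT")
pub_port = req.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)                  # give the subscriber time to connect

req.send_string("t")             # ask Capture for its current Pupil time
pupil_time = float(req.recv_string())

annotation = {
    "topic": "annotation",
    "label": "trial_start",      # example label
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_string("annotation", flags=zmq.SNDMORE)
pub.send(msgpack.dumps(annotation, use_bin_type=True))
```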

user-f1866e 15 January, 2020, 20:28:02

Can I use a second display for calibration?

user-14d189 15 January, 2020, 23:32:00

Happy New Year all! I have a question about Capture > Manual Marker Calibration vs Player > Gaze from Offline Calibration > Detect circle markers in recording. Are those the same function? (Win10, Pupil 1.13 + Pico Flexx app (currently the Pico Flexx app only works with Pupil 1.13))

user-14d189 15 January, 2020, 23:33:34

During calibration in Capture all markers are found, but using the circle marker detection only a couple of markers are found.

user-14d189 15 January, 2020, 23:34:08

Size does not matter; I tried at several distances.

user-14d189 16 January, 2020, 00:14:11

I got slightly better results by not using the Pico Flexx camera shift function.

user-abc667 16 January, 2020, 01:15:25

@user-c5fb8b @papr Thanks for replies. Will go with setting exposure to auto per @papr's comment and see how that works. Regarding the bright yellow -- yes indeed this is only in the alg view (should have mentioned that) and did not show up in the normal eye image. Glad to hear it's only a viz problem and does not affect accuracy. (I was guessing this was the system's way of displaying the infrared from the eye cameras. Is that right or is it something else? Just curious.)

papr 16 January, 2020, 07:55:56

@user-abc667 The yellow parts are just visualizations of very bright spots in the image. In dark environments it is very likely that the headset's own IR LEDs are visible as such bright spots. These bright spots are only problematic for the pupil detection algorithm if they lie exactly on the outline of the pupil.

user-6b3ffb 16 January, 2020, 15:18:01

Hello. I have a question regarding Pupil Mobile. I want to create a mobile application that will calculate some metrics from the gaze coordinates (e.g. fixations, blinks, etc.). Is it possible to create a mobile application that will receive the gaze coordinates? Do I need to run Pupil Mobile in the background and then start my app and receive the gaze points?

user-2143a5 16 January, 2020, 16:04:47

@papr & @user-c5fb8b Thank you very much for your responses! I was looking for a nudge in the right direction, and these answered my question perfectly. We are tracking in a simulated environment, with an observer seated in front of a screen. We are interested in being able to overlay the tracking data directly on the source video, mainly for the purpose of communicating the work to others. Thanks again for the information!

user-456d87 17 January, 2020, 11:54:12

Hi! What phones are currently recommended for Pupil Mobile? The list on GitHub ( https://github.com/pupil-labs/pupil-mobile-app ) looks outdated, and it seems from the issue tracker and discussion here that Android 9, but not 10, works fine (?). E.g., is the OnePlus 6 used for the Invisible product also recommended for Pupil Mobile?

mpk 17 January, 2020, 12:23:53

@user-456d87 Android 8 and 9 work; 10 does not. The OnePlus 6 is a great phone to use. You must make sure to turn on OTG and the application lock though.

user-456d87 17 January, 2020, 12:26:53

@user-b5166ep Thanks!

user-65d4fc 17 January, 2020, 13:59:15

To whom it may concern: we have a new two-camera Pupil Core system. If I put the glasses on centered, I do not see my pupil in the eye window; the only thing I see is the white of my eyeball. Is there a way to adjust both eye cameras so I can see, in both eye windows, the red circle that surrounds my pupil? Thanks.

user-606166 20 January, 2020, 00:14:24

I have a question about getting the camera distortion coefficients. In order for OpenCV to undistort an image, it requires both the camera matrix (which I am able to get) and the distortion coefficients (link: https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#void%20undistort(InputArray%20src,%20OutputArray%20dst,%20InputArray%20cameraMatrix,%20InputArray%20distCoeffs,%20InputArray%20newCameraMatrix) ). I would greatly appreciate it if someone could point me to where I can find the distortion coefficients in the software.

user-eaf50e 21 January, 2020, 09:40:54

@user-606166 Unfortunately I don't know where you can find the distortion coefficients, but I need the camera matrix as well. Could you tell me where you get it?

user-abc667 21 January, 2020, 15:42:48

@user-eaf50e It's in world.intrinsics, in msgpack format. (msgpack is like json, but allegedly faster; google it for lots of examples).

user-c87bad 21 January, 2020, 16:32:50

Hi! I read some data from the files fixations_on_surface_Surface 1.csv and gaze_positions_on_surface_Surface 1.csv. I wonder how you get the data on the surface without filtering out low-confidence and on_surf=False entries?

user-5ef6c0 21 January, 2020, 16:33:04

Hi @papr . After a short vacation, I'm resuming work on this. I still haven't been able to get robust eye detection, and I'm starting to think it may have something to do with my face lol. Anyway, I'll leave that aside for a second to work on my actual experiment setup. I am doing calibration with a manual marker, as you recommended. I am also defining surfaces using AprilTags. Regarding the calibration, should I leave the marker visible throughout the experiment? Should I remove it after calibration? Or is it irrelevant? I keep seeing that Pupil Player detects the marker every time it shows in the scene, and I am not sure whether that is making calibration more robust or messing up the calibration as time passes.

user-c87bad 21 January, 2020, 16:35:11

Also, I'd like to know the difference among the different resolutions in 'Sensor Settings'; is that only related to the world camera resolution? If I choose a different resolution, will that have any influence on the fixation positions or anything?

user-c87bad 21 January, 2020, 16:39:47

Another question: if the Pupil is used for a screen-based setup, is 2D detection better? Or can data from 3D detection, like norm_x and norm_y, which are actually 2D information, be used?

user-8e220c 22 January, 2020, 07:37:28

Hi there!

user-8e220c 22 January, 2020, 07:37:59

Question: the HTC Vive is out of stock... Is the Pupil Labs HTC add-on compatible with the HTC Vive Pro Eye?

wrp 22 January, 2020, 08:39:21

@user-8e220c The Pupil Labs eye tracking add-on for HTC Vive is compatible with Vive, Vive Pro, and Vive Cosmos (not Vive Pro Eye).

user-973483 22 January, 2020, 08:55:11

@user-8e220c Are you switching from the Pro Eye to the Pupil Labs add-on? Can I ask why? I'm currently researching both options; maybe you have some input that can help me pick the right eye tracking solution. Thanks

user-8e220c 22 January, 2020, 10:30:43

Thanks @wrp . @user-973483 : I'm sticking with Pupil Labs for now as I've invested quite a bit in developing for this platform. Let's see how far it takes me!

user-973483 22 January, 2020, 11:05:49

@user-8e220c thanks

user-ab1953 22 January, 2020, 11:38:25

Hi! We are missing the old batch exporter in the new Pupil Player version. Is this option still available? And does it work with Pupil Invisible data as well? Thanks

papr 22 January, 2020, 14:50:42

@user-ab1953 Hi 👋 Unfortunately, this plugin has been disabled for technical reasons for a while now. In case of Pupil Invisible recordings, it is possible to write a script that extracts the necessary data from the raw data without having to open the recording in Pupil Player. If you need to do offline post-processing, e.g. surface tracking, then there is no way around Player at the moment.

user-7b943c 22 January, 2020, 20:02:15

Is anyone else having a problem where one of the eye cameras fails to pick up anything? I fear it may be a hardware issue. Does Pupil Labs offer a repair service and/or a warranty? I am a bit scared to try and redo the wiring myself on such an expensive device.

papr 22 January, 2020, 20:04:03

@user-7b943c Please do not try to fix the wiring by yourself unless you know what you are doing. 🙂 We are happy to help. Please contact info@pupil-labs.com in this regard and we will come back to you.

user-5ef6c0 22 January, 2020, 20:18:14

Can Pupil Core be used while wearing glasses for correcting vision?

papr 22 January, 2020, 21:48:34

@user-5ef6c0 technically yes, but it is difficult to position the glasses such that 1) they do not obstruct the eye cameras, 2) they do not slip during a recording, and 3) do not cause slippage of the headset. We recommend using contact lenses.

user-90270c 22 January, 2020, 21:51:03

@wrp Has anyone else reported that the Pupil Labs eye tracking add-on for the Vive creates an overly toasty eye environment? I'm wondering if ours has a flaw. We also have a Vive Pro Eye and do not experience the heating and dry eyes that we experience with the inserts.

papr 22 January, 2020, 21:57:01

@user-90270c Some warmth from the add-on is normal. Especially, if you run it at 200Hz. I would not describe it as "overly toasty" though. Unfortunately, the add-on does not have the luxury of being able to hide all heat-producing components behind the display; assuming that this is the difference to the Vive Pro Eye.

user-5ef6c0 22 January, 2020, 22:26:07

@papr thank you for the clarification.

user-5ef6c0 23 January, 2020, 03:00:52

@user-7b943c @papr I experienced this issue today. I was using pupil core and had just opened pupil capture, and one eye camera stopped working out of the blue. I closed and reopened pupil capture and it didn't fix the issue, but then closed pupil capture again, disconnected and reconnected the eye tracker, and reopened pupil capture, and it worked. I will report if I have a similar issue again.

user-7b943c 23 January, 2020, 03:03:41

@user-5ef6c0 @papr Thank you for letting me know. Taking a closer look at our system, my team found that one of the wires was broken so that's probably the source of our issues.

user-7b943c 23 January, 2020, 03:04:02

Chat image

user-5ef6c0 23 January, 2020, 03:06:24

@user-7b943c happy to hear you found the issue. In our case, it happened while I was adjusting one of the cameras, so I thought it could be damaged... but a close inspection didn't reveal anything.

user-680f32 23 January, 2020, 10:02:14

Hello, are the authors of this paper involved with Pupil Labs, and do they use the Discord channel? https://www.researchgate.net/profile/Lech_Swirski/publication/264658852_A_fully-automatic_temporal_approach_to_single_camera_glint-free_3D_eye_model_fitting/links/53ea3dbf0cf28f342f418dfe/A-fully-automatic-temporal-approach-to-single-camera-glint-free-3D-eye-model-fitting.pdf I have some questions, as I want to modify the Core source code; I'd like to ask for some clarifications regarding the algorithm. Thanks.

user-680f32 23 January, 2020, 10:03:04

By the way, that is the paper Pupil uses for the eye tracking, right?

wrp 23 January, 2020, 10:06:03

@user-92dca7 maybe you can respond to this point above ☝🏽

papr 23 January, 2020, 10:15:57

@user-90270c Hey, short follow up. Could you let us know if you are using the 120Hz or the 200Hz Vive add-on?

user-c39205 23 January, 2020, 11:16:03

Hello, I am having issues with the accuracy. I am fixating somewhere and it is always quite a bit off. I always calibrate the Pupil Labs headset without moving my head, just my eyes. Confidence is usually above 90%. Pupil Capture v1.21.5. Any pointers?

user-680f32 23 January, 2020, 13:00:09

What's the minimum exposure time that the Pupil cam supports?

user-741ae5 23 January, 2020, 13:02:34

Hello! I'm trying to make a recording using Pupil Mobile, but the world camera is not being recorded, just the eye camera. On the PC everything works normally. Could it be something related to the application? I tried using different smartphones, but they all show the same problem.

user-92dca7 23 January, 2020, 13:18:23

Hi @user-680f32 , yes, the paper you cite describes a central aspect of the algorithm implemented in Pupil Capture. Feel free to ask your questions here or send me a pm.

user-680f32 23 January, 2020, 13:25:25

Thanks @user-92dca7 , you mean algorithm-related questions as well? I suppose if yes then it doesn't matter if you are one of the authors or not.

user-92dca7 23 January, 2020, 13:31:43

@user-680f32 Yes, you can also ask me questions related to the algorithm.

user-680f32 23 January, 2020, 14:00:52

Thanks @user-92dca7 , I'll pm you later.

papr 23 January, 2020, 14:04:30

@user-741ae5 Which version of Pupil Mobile and Android do you use? There is an issue in older Pupil Mobile versions where the world camera is not correctly detected on Android 9/10.

Alternatively, it might be possible that your phone does not support the h264 transcoding of the world video, which is turned on by default.

user-741ae5 23 January, 2020, 14:12:22

@papr I tested it on smartphones with Android 9 or higher, using Pupil Mobile version 1.2.3. Does the application work normally on previous versions of Android?

papr 23 January, 2020, 14:13:06

@user-680f32 The technical lower bound is 0.1 ms, but I doubt that you will get well-exposed images with that exposure time.

papr 23 January, 2020, 14:13:23

@user-741ae5 No, this version should be fine.

papr 23 January, 2020, 14:13:53

@user-741ae5 Is the world camera listed in the app? Does the preview work?

user-680f32 23 January, 2020, 14:14:49

Thank you @papr. Is that the hardware limit, or is it also the limit supported by the software?

user-741ae5 23 January, 2020, 14:18:10

@papr No! It is listed in the application, but in the preview the image is frozen, or it never finishes loading.

papr 23 January, 2020, 14:21:13

@user-741ae5 try disabling the h264 transcoding in the top left menu of the preview

papr 23 January, 2020, 14:23:23

@user-680f32 So from what I can see, the UVC interface allows values between 1 and 28 (at 200Hz), where the unit is 1/10ms. But I cannot guarantee that this translates to this exact exposure time on the hardware level.
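
For readers who want to script this rather than use the Capture UI, here is a minimal sketch using the pupil-labs pyuvc binding, which is what Pupil Capture uses internally. The control names match those shown in Capture's UI, but verify them against `cap.controls` on your own device; the device index is a placeholder.

```python
import uvc  # pupil-labs pyuvc binding

# Open the first connected UVC camera; pick the index of your eye camera.
devices = uvc.device_list()
cap = uvc.Capture(devices[0]["uid"])

# Controls are addressed by display name, as Pupil Capture does internally.
controls = {c.display_name: c for c in cap.controls}
controls["Auto Exposure Mode"].value = 1  # 1 = manual mode per the UVC spec
controls["Absolute Exposure Time"].value = 10  # unit is 0.1 ms -> 1 ms here

cap.close()
```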

user-680f32 23 January, 2020, 14:26:28

Thank you, I was simply guessing that more NIR light with a shorter exposure would reduce motion blur in the frames.

papr 23 January, 2020, 14:29:57

@user-680f32 I would agree with this intuition. Do you have an issue with motion blur in the eye image?

user-680f32 23 January, 2020, 14:32:41

Not yet, I'll let you know if that becomes an issue.

user-741ae5 23 January, 2020, 14:34:18

@papr I did that, and tried to make a recording. But the world camera only captured a few seconds, while the eye camera captured all of the footage.

papr 23 January, 2020, 14:34:37

But the preview works now?

user-741ae5 23 January, 2020, 14:34:52

@papr In the app everything works normally.

papr 23 January, 2020, 14:41:46

@user-741ae5 Could you try again and open the preview regularly to ensure that the camera is still working as expected?

user-741ae5 23 January, 2020, 15:01:17

@papr I tested it a few times, switching the settings. It only worked when I disabled h264 transcoding, and recorded with the world camera at the lowest possible resolution. But I need it to be recorded at a higher resolution (1280x720).

papr 23 January, 2020, 15:02:44

@user-741ae5 May I ask what phone you are using? Have you seen this list of supported devices? https://github.com/pupil-labs/pupil-mobile-app/#supported-hardware

user-741ae5 23 January, 2020, 15:03:46

@papr In most attempts, the preview remained working, but the world camera only captured the first few seconds.

user-741ae5 23 January, 2020, 15:05:33

@papr We used a Samsung J6. We've done recordings before and we didn't find any problems, so I thought it was strange to be facing these problems now.

user-741ae5 23 January, 2020, 15:06:49

@papr We did a project about 6 months ago. About 40 recordings, using the same hardware, and everything went well.

papr 23 January, 2020, 15:07:35

@user-741ae5 Is this still the same phone that you are using now?

user-741ae5 23 January, 2020, 15:08:37

@papr Yes! We use it just for that. It was in the box since the last project.

papr 23 January, 2020, 15:11:14

Could you please check which Android version is installed on the device?

user-741ae5 23 January, 2020, 15:11:34

Android 9

user-741ae5 23 January, 2020, 15:11:50

Pupil Mobile 1.2.3

user-741ae5 23 January, 2020, 15:13:43

@papr We tried to use the beta version of Pupil Mobile too, and the same errors happened.

papr 23 January, 2020, 15:14:06

The beta version is equivalent to the production version at the moment.

user-741ae5 23 January, 2020, 15:15:36

@papr Can the bandwidth factor influence this recording?

papr 23 January, 2020, 15:31:56

@user-741ae5 maybe, but the default value should be fine.

user-d6cfc0 23 January, 2020, 15:41:39

One of our eye cameras has stopped working and now comes up with the note "Ghost capture, Capture initialization failed." We tried restarting, disconnecting and reconnecting the eye tracker, and using another computer, but the problem is still there. What should we do about this?

user-c5fb8b 23 January, 2020, 15:43:48

Hi @user-d6cfc0 this sounds like a hardware issue. Please contact info@pupil-labs.com for support!

user-5ef6c0 23 January, 2020, 15:54:13

I have a couple general questions concerning calibration methods. First, what are the criteria to determine which method is better suited for a given application? For example, I will conduct an experiment where people will be making a physical model on a table. Does it matter if I use screen calibration or manual marker calibration or single marker calibration? If I did manual or single marker calibration, would it matter if I displayed the marker horizontally or vertically? At what distance should I display the markers? Should they be on top of the table? Please see attached video for further reference

papr 23 January, 2020, 15:59:33

@user-5ef6c0 That looks very well calibrated already!

user-5ef6c0 23 January, 2020, 16:00:52

@papr yes, but I think it could be better if I knew a little more. I think in this particular one I was getting low confidence on one of the eyes, and when I did a test looking at my thumbnail, the fixation landed on my thumb's knuckle.

papr 23 January, 2020, 16:01:45

@user-5ef6c0 At the end it also looks as if there was a drift to the left. This can happen due to slippage of the headset.

user-5ef6c0 23 January, 2020, 16:04:07

Yes, but again, I think here I never really got a good calibration because of low confidence on my eyes, due to wrong eye-tracker positioning.

user-c5fb8b 23 January, 2020, 16:05:02

@user-5ef6c0 can you send us an image from the eye camera?

user-5ef6c0 23 January, 2020, 16:05:45

In any case, I'm thinking of three calibration alternatives: a) having a computer on the table and doing screen-based calibration; b) having a target lying on the table and doing single marker calibration; c) displaying the manual marker in 9 different locations. In the last case: should I place the markers horizontally or vertically? Would it be enough to just sample the table, as I don't really expect people to look much outside of it, or should I still cover a larger portion of the visual field?

user-c5fb8b 23 January, 2020, 16:05:51

@user-5ef6c0 for calibration you should use the single marker calibration in this setup. The markers should be clearly visible, so best oriented towards the camera.

user-c5fb8b 23 January, 2020, 16:07:17

@user-5ef6c0 generally you want to sample points from the space where your subjects will look at. Note however that people find it much harder to "follow" a moving target accurately. Often you get better results with displaying a fixed marker in the middle and asking subjects to move their head while keeping the marker fixated.

user-5ef6c0 23 January, 2020, 16:08:33

@user-c5fb8b perfect, I understand. I guess that the target would have to be at a ~30deg angle from the table surface. Would that improve accuracy versus having the target just flat on the table?

user-c5fb8b 23 January, 2020, 16:10:12

Well it won't make it worse. I'm not sure how big the effect will be though.

user-5ef6c0 23 January, 2020, 16:11:48

@user-c5fb8b got it. One last question: Could the fiducial markers be used to increase calibration accuracy post-hoc using natural features? Can a second calibration done offline be used to improve the calibration done in the recording? Otherwise, I have no use for the fiducial markers as I won't be defining surfaces anymore

user-c5fb8b 23 January, 2020, 16:14:55

@user-5ef6c0 you cannot "improve" an existing calibration offline, only replace it. From our experience a natural features calibration with the markers on the table will be more error prone. Reasons being: 1) there's no clear salient center that subjects can focus on and 2) you will have to manually click the targets, which also introduces more error compared to having an automatic target detection as for the circle markers.

user-5ef6c0 23 January, 2020, 16:15:55

@user-c5fb8b that makes sense. Thank you for all your help. I will do another video and will share it later.

user-bda130 23 January, 2020, 19:43:40

@papr hello - we were running a pilot study using Pupil Mobile. The recording seemed to go smoothly and we were able to load the video and audio into Player. However, when I instruct the application to run pupil detection, this error comes up. What exactly does that mean? And how can we fix it/prevent it from happening in future sessions?

Chat image

papr 23 January, 2020, 19:45:45

@user-bda130 It looks like there is an issue with the recorded eye timestamps. Could you share the recording with data@pupil-labs.com so that we can have a look?

user-bda130 23 January, 2020, 19:51:18

@papr absolutely! Thanks for taking a look at the issue

user-14d189 23 January, 2020, 22:15:49

@user-5ef6c0 These videos might be interesting for you. https://www.youtube.com/watch?v=mWyDQHhm7-w

Chat image

user-14d189 23 January, 2020, 22:18:24

https://www.youtube.com/watch?v=aPLnqu26tWI&t=131s

user-97997c 23 January, 2020, 23:55:30

Hi, Intel recently (1 hour ago) released a new RealSense SDK with some interesting bug fixes. How can I embed it into the Pupil Capture GUI?

papr 24 January, 2020, 10:01:22

@user-97997c Are you referring to https://github.com/IntelRealSense/librealsense/tree/v2.32.1 ? In this case, you have to run Pupil Capture from source: https://github.com/pupil-labs/pupil#installing-dependencies

user-97997c 24 January, 2020, 10:02:47

got it, thank you!

user-c87bad 24 January, 2020, 15:26:39

Hi! I read some data from the files fixations_on_surface_Surface 1.csv and gaze_positions_on_surface_Surface 1.csv. I wonder how you get the data on the surface without filtering out the low-confidence and on_surf=False rows? Can anyone answer that, please?

user-c87bad 24 January, 2020, 15:27:07

Also, I'd like to know the difference among the different resolutions under 'Sensor Settings' - is that only related to the world camera resolution? If I choose a different resolution, will that have any influence on the fixation position or anything else? I'd appreciate answers to these as well 😟

user-e7102b 25 January, 2020, 05:14:53

Hello, I've run into an issue when attempting to record audio in Pupil Capture. I have a DIY setup with 3 Logitech C930e webcams and a conference microphone connected to a Mac mini (the webcams occupy three USB ports, the microphone is connected to the audio-in port). The problem is that the webcams have built-in microphones, and these seem to interfere with the recording. If I just connect a single webcam and create a recording in Pupil Capture (latest release), then I'm able to record perfect audio. However, if I connect multiple cameras and create a recording, the audio file seems to be double the duration of the actual recording and when played back consists of unintelligible screeching feedback-like sounds. I've tried changing the audio input settings in Pupil Capture and also playing around with the microphone settings in macOS, and I either record no sound at all, or horror-movie sound effects. Has anyone else encountered an issue like this, and if so, how did you resolve it? Thanks!

user-7505e4 25 January, 2020, 17:25:38

Hi all, we are new to Pupil Invisible and we already ran a successful test. We tried to run another test and now the app on the mobile is stuck on 'syncing templates'.

user-680f32 27 January, 2020, 14:08:14

Is there any comparison done somewhere between the Tobii, SMI and Pupil Labs eye tracking latency, precision and accuracy?

papr 27 January, 2020, 14:17:47

@user-680f32 Please see our citation list https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?ts=576a3b27#gid=0 As far as I know there are some comparisons in there.

user-680f32 27 January, 2020, 14:18:18

@papr 👍

user-92dca7 27 January, 2020, 14:19:59

Hi @user-680f32, from that list, the following, for example, will be of interest to you:

user-92dca7 27 January, 2020, 14:20:00

https://www.biorxiv.org/content/10.1101/299925v1.full.pdf

user-92dca7 27 January, 2020, 14:20:32

https://peerj.com/articles/7086/

user-92dca7 27 January, 2020, 14:21:16

The first is a comparison of Pupil Core with eye trackers by SMI and Tobii. The second compares Pupil Core to a stationary remote eye tracker (EyeLink 1000).

user-680f32 27 January, 2020, 14:22:43

@user-92dca7 👍

papr 27 January, 2020, 14:28:23

@user-c87bad

> Hi! I read some data from file fixations_on_surface_Surface 1.csv and gaze_positions_on_surface_Surface 1.csv. I wonder how you get the data on the surface without filtering the low confidence and on_surf=False?

I am not sure if I understand the question correctly. Would you like to know how the software does something specific, or how you can access the data using the exported csv files?

> Also, I'd like to know the difference among different resolution on 'Sensor Settings', is that only related to the world camera resolution? If I choose different resolution, will that have any influence on the fixation position or anything?

Different camera resolutions require different camera intrinsics. These influence the gaze and surface mapping results. The effect should be small, though, as long as the camera intrinsics are well fit for both resolutions. See more information on estimating the intrinsics here: https://docs.pupil-labs.com/core/terminology/#camera-intrinsics

user-8779ef 27 January, 2020, 21:53:06

@papr Hey Pablo, would it be possible to get a spec sheet for the V2 HMD cameras?

user-69f1f2 27 January, 2020, 22:03:42

Hello, first-time poster. I was wondering if anyone else had issues in the beginning with their imaging devices driver not installing while the cameras and libUSBk device drivers installed fine. I've walked through the Windows troubleshooting a few times with no success.

papr 27 January, 2020, 22:07:54

@user-8779ef I think this should be possible. Please contact info@pupil-labs.com with this request.

wrp 28 January, 2020, 03:15:37

@user-69f1f2 regarding driver installation for Pupil Core eye tracking headsets on Windows 10: Drivers should be installed on their own if you right click on pupil_capture.exe and run as admin. Do you have admin permissions on your machine? What do you see in the Device manager?

user-c87bad 28 January, 2020, 10:33:35

@papr Thank you for the reply! 😀 So if I choose the 2D detection and mapping mode, the camera intrinsics are not that important? Because I read 'The camera intrinsics contain camera matrix and lens distortion information. They are used in 3d gaze mapping to correctly transform 3d pupil data to 3d gaze data.' And how can I decide which resolution is best for me? About the gaze file, I am just curious about the content inside, like why data with 'on_surface == False' appears and why low-confidence data appears. 🙂

papr 28 January, 2020, 11:03:14

@user-c87bad Even if you use 2d gaze data, the camera intrinsics will be used to correct camera lens distortion during surface marker detection. So the gaze might not change when changing the resolution, but the surface location might.
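
To illustrate what that correction step does, here is a minimal sketch, assuming placeholder pinhole intrinsics. In practice you would use the camera matrix and distortion coefficients estimated by the Camera Intrinsics Estimation plugin for your exact camera and resolution, and note that the wide-FOV lens at high resolutions uses a fisheye model, which would need cv2.fisheye instead.

```python
import cv2
import numpy as np

# Placeholder pinhole intrinsics -- substitute the values estimated for
# your camera and resolution.
K = np.array([[830.0, 0.0, 640.0],
              [0.0, 830.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.4, 0.2, 0.0, 0.0, 0.0])  # distortion coefficients

frame = cv2.imread("world_frame.png")  # hypothetical exported world frame
undistorted = cv2.undistort(frame, K, D)
cv2.imwrite("world_frame_undistorted.png", undistorted)
```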

papr 28 January, 2020, 11:04:13

@user-c87bad We try to export as much data as possible because we do not want to assume any use cases. If you do not need low confidence data or data outside of the surface, you can remove it. But others might use it for their project.
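
If you do want to drop those rows yourself, a minimal pandas sketch might look like this. The column names match Pupil Player's surface exporter, and the confidence threshold of 0.8 is an arbitrary choice.

```python
import pandas as pd

df = pd.read_csv("gaze_positions_on_surface_Surface 1.csv")

# Keep only samples that are confident and actually fall on the surface.
# on_surf may be parsed as bool or string depending on the pandas version,
# so compare via str to be safe.
filtered = df[(df["confidence"] >= 0.8) & (df["on_surf"].astype(str) == "True")]
filtered.to_csv("gaze_on_surface_filtered.csv", index=False)
```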

user-2ff80a 28 January, 2020, 11:16:00

Hi, I'm trying to get eye gaze coordinates on a specified surface (which in my case is just a computer screen). After recording, I've tried processing the exported surface data from Pupil Player, as well as the regular data that is recorded. I'm trying to map the surface gaze coordinates onto a screenshot, but it's not working accurately. I can see in Pupil Player that the recording was fine and the gaze marker there shows where on the screen/surface I was looking, so the recording/data itself should be fine. Can anyone help me figure out which data I need and how I have to process it to extract accurate gaze coordinates?

user-2ff80a 28 January, 2020, 11:19:08

(btw im using pupil core with a single eye cam)

user-2ff80a 28 January, 2020, 11:20:39

"pupil w120 e120b"

user-a3b60b 28 January, 2020, 17:42:37

I installed the Pupil Labs software and tried to calibrate. Whenever I hit calibrate, I keep getting "calibration stopped" because there are not enough data points to calibrate. Please help here. I bought the Pupil Labs headset as I need to track subjects' gaze, and I am writing Python code that beeps when the subject moves away from the fixation point.
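
For the beep-on-deviation script described above, a minimal sketch using Pupil Capture's Network API might look like this. Pupil Remote listens on port 50020 by default; the fixation target, tolerance, and confidence threshold below are made-up values.

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the port of the data-subscription socket.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to all gaze topics.
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

TARGET = (0.5, 0.5)  # hypothetical fixation point in normalized coordinates
TOLERANCE = 0.1      # hypothetical allowed deviation

while True:
    topic, payload = sub.recv_multipart()
    gaze = msgpack.loads(payload, raw=False)
    if gaze["confidence"] < 0.8:
        continue
    x, y = gaze["norm_pos"]
    if abs(x - TARGET[0]) > TOLERANCE or abs(y - TARGET[1]) > TOLERANCE:
        print("\a", end="", flush=True)  # terminal bell as a stand-in for a beep
```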

user-69f1f2 28 January, 2020, 21:21:09

@wrp I am administrator thank you for replying

papr 29 January, 2020, 08:10:51

@user-a3b60b it looks like you are using the manual marker calibration and are displaying the "stop marker". This causes the calibration to stop immediately.

papr 29 January, 2020, 08:12:32

@user-a3b60b I would highly recommend to use the single marker calibration instead in your case. You only have to display a marker at single point without moving it. ~~I can link the documentation for it in a bit.~~ Edit: https://docs.pupil-labs.com/core/software/pupil-capture/#calibration-methods

papr 29 January, 2020, 08:13:20

@user-2ff80a could you share the recording with [email removed]? This way we can have a look at the data directly.

user-bc8123 30 January, 2020, 09:26:12

Hi, I have an issue regarding gaze_point_3d. I am trying to record data in the real world, using a binocular headset, by looking at targets placed at distances of about 500 mm up to 1000 mm, but the data returns gaze depth values that are way smaller (~130 mm). I have read some older questions about the problem of depth estimation but I wasn't able to get valid data. I also tried calibrating at various depths using manual calibration with no success (are there any suggested calibration depths?). I also noticed that changing the world camera resolution affects the resulting gaze_point_3d. Maybe someone who has had this problem could give me some advice?

papr 30 January, 2020, 12:03:54

@user-bc8123 Changing the world camera resolution changes the camera intrinsics. These have an impact on the 3d gaze estimation. Please recalibrate after changing the world camera resolution.

user-bc8123 30 January, 2020, 14:31:18

Thank you for the reply, I am trying that. I have also noticed that when I look at a flat screen from the center, the gaze depth decreases when I look to the right and increases when I look to the left. Is that normal? I think I am missing something.

user-d6e717 30 January, 2020, 21:43:24

I cannot seem to run the program on my HP laptop

user-d6e717 30 January, 2020, 21:44:36

When I use the troubleshooting instructions on the website and go into device manager, I cannot find any hidden devices for Pupil Capture

user-c6717a 31 January, 2020, 01:57:56

Hi, I'd like to use the Pupil Core with dual eye cameras to record pupil diameter change over time. After exporting the data, is this data written into gaze_positions.xls? Thanks!

user-c5fb8b 31 January, 2020, 08:05:02

@user-d6e717 What version of Windows are you using?

user-c5fb8b 31 January, 2020, 08:06:39

@user-c6717a we only export .csv files, but you can also easily import them into Excel if you want to. Diameter data will be exported to pupil_positions.csv Please have a look at the following link for an overview of what data we export: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter
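
For what @user-c6717a wants to do, here is a minimal sketch for plotting diameter over time from that export. The column names match Pupil Player's raw data exporter; the confidence threshold is an arbitrary choice.

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("pupil_positions.csv")
df = df[df["confidence"] >= 0.8]  # drop low-confidence detections

# "diameter" is in image pixels; "diameter_3d" (if the 3d detector ran)
# is in millimeters. Plot one trace per eye camera.
for eye_id, eye_df in df.groupby("eye_id"):
    plt.plot(eye_df["pupil_timestamp"], eye_df["diameter"], label=f"eye {eye_id}")

plt.xlabel("pupil_timestamp [s]")
plt.ylabel("pupil diameter [px]")
plt.legend()
plt.show()
```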

user-c5cc3a 31 January, 2020, 14:48:30

I apologize if this has been answered elsewhere; I've been searching and can't find anything similar (but as a novice, I've probably missed something!).

user-c5cc3a 31 January, 2020, 14:49:11

Hi all - We're trying to use surface tracker in an experiment in our lab, and we're having trouble getting Pupil player or Pupil Capture to detect our markers robustly.

user-c5cc3a 31 January, 2020, 14:50:13

We're using Apriltags from the tag36h11 0-23 images in the documentation

user-c5cc3a 31 January, 2020, 14:51:23

They're printed to be 2 cm square, which seems fairly large, on surfaces approximately 2ft from the world cam position.

user-c5cc3a 31 January, 2020, 14:52:50

The markers are usually detected initially, but flicker in and out of recognition (showing up highlighted in blue) over the course of the session. The surfaces we define with the markers then warp and shift, which is wreaking havoc with our AOI data.

user-c5cc3a 31 January, 2020, 14:53:29

Is there a way to make detection more robust, or to manually indicate to the tracker that there should be a marker present?

user-c5cc3a 31 January, 2020, 14:53:33

Thanks!!

user-c5cc3a 31 January, 2020, 14:54:49

In case it matters, we're running on Windows 10, with a single-eye camera, head-mounted setup.

user-26fef5 31 January, 2020, 16:33:35

@user-c5cc3a Hey there, since nobody from the Pupil devs has answered yet, here are a few hints on AprilTag and fiducial marker detection in general. A) Check your camera resolution; higher resolution usually helps when detecting fiducial markers. B) 2 cm at roughly 2 ft is not really that big, but should be sufficient. C) Check your lighting conditions, camera exposure (not too long and not too short), and auto-exposure settings; detection improves if auto-exposure is turned off (especially while moving). If you want to improve things easily, enlarge the markers.
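
To check marker detectability offline, one option is to run the same tag family over exported world frames and inspect each detection's decision margin; below is a minimal sketch using the pupil-apriltags Python binding. The frame file name is a placeholder.

```python
import cv2
from pupil_apriltags import Detector

# quad_decimate=1.0 processes the full-resolution image, which helps
# with small or distant tags at the cost of speed.
detector = Detector(families="tag36h11", quad_decimate=1.0)

frame = cv2.imread("world_frame.png")  # hypothetical exported world frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for det in detector.detect(gray):
    print(det.tag_id, det.center, det.decision_margin)
```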

user-d6e717 31 January, 2020, 18:37:42

@user-c5fb8b I am using Windows 10 and it is all fully updated software!

End of January archive