πŸ‘ core


user-31df78 01 December, 2019, 00:42:55

Hi, the world camera crashed partway through data collection today and I can't recover it. The eye cameras are fine, but the world camera shows up in Device Manager on Windows as some sort of unrecognized device with a port reset error. Upon trying to reconnect the cable, I get a warning from Windows about a USB malfunction. Same problem in Ubuntu, eye cameras initialize fine but the world camera stays gray.

user-36b0a3 01 December, 2019, 09:02:52

Is there any accuracy/precision data for the DIY headset?

user-2be752 01 December, 2019, 21:16:59

Hello! I'm having some trouble with one eye camera: it has problems detecting the pupil, I think because it detects something dark in the corner of the eye (but there is nothing weird there). It's eye 0 in the image attached, see the blue corner?

Chat image

papr 01 December, 2019, 21:18:06

@user-2be752 try the automatic exposure mode in the eye's uvc source settings

user-2be752 01 December, 2019, 21:20:18

@papr this happened now

Chat image

papr 01 December, 2019, 21:20:31

@user-2be752 Have you tried adjusting the detector's region of interest? You can do so by switching to the ROI mode in the eye's general settings and dragging the corners. Also, adjusting the intensity range in the detector settings might help.

user-2be752 01 December, 2019, 21:23:23

Ah, adjusting the ROI helped! The auto mode kind of made it worse

user-2be752 01 December, 2019, 21:23:38

It's interesting it only happens with one camera

user-2be752 01 December, 2019, 22:22:40

One more question: what is the marker cache appearing in Pupil Player?

user-c5fb8b 02 December, 2019, 08:18:04

@user-2be752 Player detects all surface marker positions across the video and stores this in the marker cache. This results in better playback/scrubbing performance since we don't have to run the detection again afterwards for every frame we look at. The cache will be recalculated when you change any detection parameters.

papr 02 December, 2019, 08:38:57

@user-2be752 to round off @user-c5fb8b 's statement: The marker cache includes the surface markers, e.g. Apriltags, which are used by the surface tracking plugin. https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

papr 02 December, 2019, 08:43:23

@user-36b0a3 The accuracy/precision is not primarily dependent on the hardware but the software and how well the subject is able to fixate the marker's center. The relevant hardware details are the eye camera position [1] and the frame rate [2].

[1] You should have a similar view of the eye as in the screenshot posted by @user-2be752 above.
[2] A frame rate of 60 Hz or higher is recommended when using the 3d eye model.

papr 02 December, 2019, 08:46:17

@user-31df78 Please contact info@pupil-labs.com in this regard. Please attach a screenshot of the error to the mail.

papr 02 December, 2019, 08:49:18

@user-e2056a If Player does not show the world video, then the video file is either missing or corrupt. Are you able to open the world video in VLC player?

You do not need Capture to define surfaces in Player. Here is an example recording which you can use as reference: https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing

papr 02 December, 2019, 08:50:46

@user-e7102b Yes, there is definitely a way. We will have to extract/refactor that functionality at some point.

papr 02 December, 2019, 08:53:25

@user-aaa87b You should find a capture.log file in the pupil_capture_settings folder in your home directory. Please provide a copy of it after reproducing the crash on calibration.

Since it is a DIY headset, do not forget to perform the camera intrinsics estimation procedure. https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation

papr 02 December, 2019, 08:56:26

@user-0767a7 Have you tried using Pupil Player v1.19? It includes a series of fixes for the fixation detection, including properly ignoring low-confidence data.

user-36b0a3 02 December, 2019, 08:56:43

@papr I am talking about the original Pupil Dev headset, built in 2014, using the Microsoft HD-6000 and Logitech C615 cameras mounted on the Shapeways frame.

user-36b0a3 02 December, 2019, 08:57:20

I have had to go into the Wayback Machine to find the documentation I consulted at the time...

user-ed70a0 02 December, 2019, 10:39:18

Hi! I have a problem with heatmap exporting. I'm using the most recent version of Pupil Player. Up until the previous version, the heatmap data was exported in png form, but not now. How can I export heatmap data to a png?

user-c5fb8b 02 December, 2019, 10:45:28

Hi @user-ed70a0 Are you getting any image export for the heatmap in a different format? Or are you getting no image at all? The export format did not change, it should still be png.

papr 02 December, 2019, 10:52:25

@user-ed70a0 Please be aware that Player resets to its default settings when being upgraded. This includes the list of loaded plugins. Surface Tracker is not loaded by default. Is it possible that the plugin was not loaded?

If it was, and the heatmaps are visible in the Player window then you should also get the heatmap as png export. I have just successfully exported the heatmaps as png using the demo recording I have linked above and Pupil Player v1.19 on macOS.

user-ed70a0 02 December, 2019, 11:21:43

@user-c5fb8b I don't get any image file...

user-ed70a0 02 December, 2019, 11:23:07

@papr I already set up the Surface Tracker plugin. But I don't get any image file...

user-c5fb8b 02 December, 2019, 11:26:04

@user-ed70a0
1. Do you have surfaces defined that you can see in Player?
2. Can you see the heatmap overlaid in Pupil Player?
3. Do you get the rest of the surface data export when exporting? I.e. is only the heatmap image missing, or is everything missing?

user-ed70a0 02 December, 2019, 12:25:20

@user-c5fb8b I can see the surface in Player and see the heatmap overlaid in Pupil Player. And I am only missing the heatmap image. All files except the heatmap images have been saved.

papr 02 December, 2019, 12:25:47

@user-ed70a0 Could you let us know what operating system and which version of v1.19 you are using?

papr 02 December, 2019, 12:26:22

Also, do you think it would be possible to share the recording with data@pupil-labs.com ?

papr 02 December, 2019, 12:26:55

Since we are not able to reproduce the issue with our own recording it would help us to use yours.

user-ed70a0 02 December, 2019, 12:30:37

@papr I'm using Windows, and the v1.19 build is 'pupil_v1.19-2-g4e38268_windows_x64'. And if possible, I'd like to share it.

papr 02 December, 2019, 12:32:17

@user-ed70a0 Before you do that, can you share the player.log file in the pupil_player_settings folder after performing an export?

user-ed70a0 02 December, 2019, 12:49:32

@papr Thank you for your help!

player.log

papr 02 December, 2019, 12:51:17

@user-ed70a0 Could you shutdown Player, move the recording to a folder that does not contain any non-ascii characters (in your case \렌즈\영재 렌즈3\) and try again?

papr 02 December, 2019, 12:52:38

My suspicion is that cv2.imwrite is not handling the path correctly.
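
A common workaround for this (a minimal sketch, assuming the standard OpenCV + NumPy stack; the path in the usage comment is just an illustration) is to encode the image in memory and let NumPy write the bytes, which sidesteps cv2.imwrite's non-ASCII path handling on Windows:

```python
import cv2

def imwrite_unicode(path, image):
    # cv2.imwrite can fail silently on non-ASCII paths on Windows,
    # so encode the image in memory and write the bytes via NumPy,
    # which uses Python's own (unicode-aware) file handling.
    ok, buf = cv2.imencode(".png", image)
    if not ok:
        raise IOError("PNG encoding failed")
    buf.tofile(path)

# usage (hypothetical path): imwrite_unicode(r"C:\Users\me\데이터\heatmap.png", heatmap)
```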

papr 02 December, 2019, 12:53:20

Also can you check if there are any unexpected folders in C:\Users\Beautiful\recordings\2019_11_21? They might include your heatmaps

user-ed70a0 02 December, 2019, 13:13:27

@papr I tried moving the recording to a folder that does not contain any non-ascii characters. I also checked the other folders where the images might have been stored, but no images were saved.

papr 02 December, 2019, 13:14:07

@user-ed70a0 Could you please share the newly generated log file?

user-ed70a0 02 December, 2019, 13:17:02

Thank you so much

player.log

papr 02 December, 2019, 13:17:52

@user-ed70a0 The recording path still includes non-ascii characters: C:\Users\Beautiful\Desktop\느시\exports\000. Please rename 느시.

user-ed70a0 02 December, 2019, 13:30:46

@papr Oh, I solved it. Renaming the folders to ASCII characters resolved the conflicts. Thank you very much!

user-aaa87b 02 December, 2019, 16:26:38

Has anyone succeeded in running the Player on a Debian Stretch system? It works quite nicely on Ubuntu, but I get errors on Debian. More specifically, the problem seems to be here: "GLFW window failed to create: GLX: GLX version 1.3 is required". I can't find this library in the Debian repository.

papr 02 December, 2019, 16:31:24

@user-aaa87b GLX relates to the installed version of OpenGL. Maybe this is easier to find.

user-aaa87b 02 December, 2019, 16:32:33

Could it be because I'm using X2Go to connect to the Debian system?

papr 02 December, 2019, 16:32:56

Yes, this is possible.

user-aaa87b 02 December, 2019, 16:34:35

Damn. Thank you papr.

papr 02 December, 2019, 16:35:25

@user-aaa87b I would give this a 60% chance that this is the issue. I would definitely try to upgrade the installed OpenGL version before giving up 🙂

user-aaa87b 02 December, 2019, 16:46:35

@papr Ok, solved. X2go is the problem. I'll have to work directly on the system console.

user-707c48 02 December, 2019, 19:18:05

Quick question: does the core use an IR pass filter on the eye camera?

user-707c48 02 December, 2019, 19:18:21

Or does it just record the visible spectrum?

user-0767a7 02 December, 2019, 20:48:05

@papr yes, I am using the latest version of the software. the data from the right eye is sketchy; it jumps up and down between 0% and 100% confidence as my participant turns his head obliquely and looks out of the corner of his eye. data from the left eye is consistently good though; a fixation will be stable when the confidence from the right eye is low, but then jump about a little bit when the confidence from the right eye jumps back up. So the right eye IS contributing, but not consistently. Can I tell pupil player to calculate fixations using ONLY the consistent signal?

papr 02 December, 2019, 22:01:58

@user-0767a7 you can rename the right eye video and timestamp file to _eye0*, run offline pupil detection on the remaining eye, recalibrate using the saved calibration, and rerun the fixation detector.
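
A minimal sketch of the renaming step, assuming the recording folder stores the right-eye files as eye0.* (video plus timestamps), as Pupil Capture recordings do; the folder path is hypothetical:

```python
import os

recording = "/path/to/recording"  # hypothetical recording folder

# Prefix the right-eye (eye0) files with an underscore so that
# offline pupil detection in Player only finds the remaining eye.
for name in os.listdir(recording):
    if name.startswith("eye0"):
        os.rename(os.path.join(recording, name),
                  os.path.join(recording, "_" + name))
```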

user-e7102b 02 December, 2019, 23:10:44

@papr I was able to get the batch exporter working for surface data. The updated scripts are here: https://github.com/tombullock/batchExportPupilLabs in case anyone needs them.

user-0767a7 03 December, 2019, 00:03:20

@papr thank you! I'll give that a shot.

user-94ac2a 03 December, 2019, 03:53:34

Instead of using the actual world camera in Pupil Capture, can I use my own screen output as the world camera?

wrp 03 December, 2019, 04:23:17

@user-94ac2a that is, in theory possible - but you would likely need to write your own video capture backend similar to: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/hmd_streaming.py

user-94ac2a 03 December, 2019, 06:59:23

@wrp Thanks. I will take a look

user-94ac2a 03 December, 2019, 13:38:01

How can I decrease Pupil Capture's CPU usage?

papr 03 December, 2019, 14:11:29

@user-94ac2a The part that uses the cpu most significantly is the pupil detection. If you do not require the detection to run in real time, you can disable it and analyse it in Pupil Player using the Offline Pupil Detection. You can disable realtime pupil detection in the general settings of the World window.
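
For reference, real-time detection can also be toggled programmatically. A minimal sketch, assuming the v1.x set_detection_mapping_mode notification and Pupil Remote listening on its default port:

```python
import zmq
import msgpack

# Connect to Pupil Remote (default port 50020).
ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

# "disabled" turns real-time pupil detection off; "2d"/"3d" turn it back on.
notification = {"subject": "set_detection_mapping_mode", "mode": "disabled"}
remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
remote.send(msgpack.dumps(notification, use_bin_type=True))
print(remote.recv_string())  # Pupil Remote acknowledges every request
```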

user-94ac2a 03 December, 2019, 14:15:19

@papr what if I am using pupil service?

papr 03 December, 2019, 14:16:04

@user-94ac2a Pupil Service does not support recording. Therefore, Offline Pupil Detection is not an option.

papr 03 December, 2019, 14:16:34

Pupil Service is only needed in low-latency real-time applications, though.

papr 03 December, 2019, 14:17:00

In summary: It depends on your use case if you can turn off real-time pupil detection or not.

user-94ac2a 03 December, 2019, 14:33:06

@papr is there a way to make pupil service consume less cpu?

papr 03 December, 2019, 14:36:14

@user-94ac2a You can try minimizing the windows to save the cpu that is required to draw the eye images. But as I said, the component that uses most of the CPU is the pupil detection.

user-94ac2a 03 December, 2019, 14:41:34

@papr thanks. But isn’t the pupil detection the core of pupil service?

user-94ac2a 03 December, 2019, 14:42:19

If I turn off detection, it will not track eyes?

papr 03 December, 2019, 14:45:36

@user-94ac2a correct. You could reduce the frame rate of the eye cameras to perform the pupil detection less often and therefore save CPU.

user-94ac2a 03 December, 2019, 14:46:24

@papr What is the minimum frame rate for service to work properly?

papr 03 December, 2019, 15:11:49

@user-94ac2a The 3d model is recommended to run with at least 60fps. The 2d pupil detection runs on a per-frame basis, i.e. the frame rate can have any value. Please be aware that other processing algorithms, e.g. fixation and blink detection work technically with less data, but it is more likely to miss events if the frame rate is lower.

user-0ec597 04 December, 2019, 14:04:05

Where could I get the code for eye tracking in C language?

user-c5fb8b 04 December, 2019, 15:28:40

@user-0ec597 A part of the pupil detection pipeline in Pupil is written in C++. The C++ code is wrapped with cython to be useable within the rest of our Python Application. You can find all code for the pupil detection in the pupil_detectors module of the Pupil source code: https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_modules/pupil_detectors This module is a mix of C++, cython and Python. There is also some external C++ code that we compile against in https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_cpp/include
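
For quick experiments, the detector can also be called directly from Python through that wrapper. A minimal sketch, assuming the standalone pupil-detectors package (pip install pupil-detectors) and a grayscale eye image on disk (the filename is hypothetical):

```python
import cv2
from pupil_detectors import Detector2D

detector = Detector2D()

# The detector expects an 8-bit grayscale eye image.
gray = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)
result = detector.detect(gray)

# The result mirrors what Pupil uses internally:
# an ellipse fit plus a confidence estimate.
print(result["confidence"], result["ellipse"])
```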

user-c37dfd 04 December, 2019, 15:44:20

@papr sorry for the delayed response. I was away for the thanksgiving holiday. Both say they are on Android version 9.

user-c37dfd 04 December, 2019, 17:01:45

@papr I was able to do a bit more troubleshooting and it seems like the connection between our older headset and the phone was the issue. I was able to use a different cord to work around the issue. Thanks!

user-abc667 04 December, 2019, 17:42:35

@papr The right eye camera in one of our headsets stopped responding -- just get the grey square. I suspect it may be a loose connection, as I hear the computer's connect/disconnect sounds when I jiggle the wires. I tried making sure the plug was firmly in the socket at the rear of the camera, and it is. So the problem may be in the wires. How do we go about getting this repaired/replaced? Thanks.

papr 04 December, 2019, 17:45:30

@user-abc667 please contact info@pupil-labs.com in this regard.

user-abc667 04 December, 2019, 17:46:07

@papr Will do. tnx.

user-abc667 04 December, 2019, 17:48:58

@user-755e9e Thanks for the suggestion. I had ordered from Shapeways, using PA12. Just got them today and trying them out. Looks like it might work well.

user-abc667 04 December, 2019, 19:33:13

@papr I noticed an earlier comment about reducing cpu load by turning off pupil tracking during recording, then doing it offline. I suspect I know the answer, but just to be sure -- it's the same tracking code, yes? So the result would be no different doing the tracking online vs offline, yes? The one problem I can imagine arises because we're doing manual marker calibration, moving the standard target around on the table top. With pupil tracking on, we get feedback right away about the quality of the calibration (and have repeated it when necessary). But if tracking is done offline, we won't know until too late that the calibration is bad, yes? Or can we have it on during calibration, check the result, and if ok, turn off for the rest of the experiment? (Lots of questions embedded here; thanks for your patience and advice.)

user-2be752 04 December, 2019, 21:45:54

Hello! I was calibrating through Pupil Player, and when I finished and tried to export the video and data, it output the message 'exporting dummy calibration'. What does this mean?

user-06ae45 05 December, 2019, 05:03:30

Hi! I'm having an issue when dropping a recording directory into pupil player on my Mac. When I drag the same recordings (have tried multiple with and without world camera used during recording) to pupil player on my Ubuntu machine where I record, they work and are recognized immediately. On the Mac, I get the error 'oops, this is not a valid recording'. I found this github issue (https://github.com/pupil-labs/pupil/issues/1143) and tried moving the directory directly to my Desktop, as well as checked the info.csv file for special characters, but no luck. Any advice? Thanks!

user-c5fb8b 05 December, 2019, 10:41:47

Hi @user-06ae45 are you using the same versions of Pupil Player on both machines? Can you check for the following files in the recording folder and send them over to me?
- info.csv
- info.player.json
Also, after failing to open the recording on the Mac, can you collect the player.log file and send it over? You will find it in your home folder > pupil_player_settings > player.log.

papr 05 December, 2019, 10:55:57

@user-06ae45 Also, if you use a recent version of Pupil Player, it will show more information than just "Oops this is not a valid recording". 🙂

papr 05 December, 2019, 10:57:43

@user-2be752 Could you provide the exact warning? The term "dummy calibration" is usually used in the context of camera_models/camera intrinsics estimation. In this case it means that no intrinsics were found for the specified camera/video file. This is normal for the eye videos/cameras.

papr 05 December, 2019, 11:00:40

@user-abc667 Please be aware of the difference between Pupil Detection and Gaze Mapping. Both can be done offline, but Gaze Mapping requires either that a valid calibration configuration was recorded, or that the calibration procedure is included in the recording itself (the more flexible option).

Yes, you are right that turning detection and mapping off removes the possibility to monitor the current performance. This is why I would recommend getting a better CPU rather than turning these features off.

user-06ae45 05 December, 2019, 16:03:51

Hi @user-c5fb8b & @papr Thanks for the quick responses! I just installed the latest version of pupil player on my mac, and now it works! Thank you for pointing me to it!

FWIW the player log from the previous version install (1.11) had the warnings:
2019-12-05 07:54:31,076 - player - [ERROR] player_methods: Could not read info.csv file: Not a valid Pupil recording.
2019-12-05 07:54:31,076 - player - [ERROR] launchables.player: '/Users/vashadutell/Desktop/000' is not a valid pupil recording

papr 05 December, 2019, 16:05:53

@user-06ae45 Mmh, weird. But it works as expected now, did I understand correctly?

user-06ae45 05 December, 2019, 16:06:33

@papr yes, it opened up immediately.

papr 05 December, 2019, 16:06:50

@user-06ae45 That is great to hear! 🙂

user-06ae45 05 December, 2019, 16:07:47

@papr & @user-c5fb8b Thanks a million! 🤩

user-e91538 05 December, 2019, 17:30:11

Hi everybody

user-e91538 05 December, 2019, 17:30:31

I need to ask a couple of pre-sales questions

user-e91538 05 December, 2019, 17:30:38

Am I in the right place?

papr 05 December, 2019, 17:33:45

@user-e91538 If they are technical questions, feel free to ask them here 👍 Questions regarding quotes and orders should be directed to sales@pupil-labs.com 🙂

user-067553 05 December, 2019, 19:32:05

Hi @papr, I'm using the blink detector plugin. I want to detect when the person has the eyes closed for more than 1s. Is it possible? I use binocular data.

user-abc667 05 December, 2019, 22:54:31

@papr Yes, gaze mapping != pupil detection; I typed too quickly. I know we have to record a calibration procedure at the start of the trial for each subject. Point taken about cpu speed. Thanks.

user-045b36 06 December, 2019, 10:44:29

Hello! I have a problem with the device. I run pupil_capture.exe while my Pupil glasses are on the table, doing nothing. After a while, the eye1 and eye0 windows stop responding and close. Why?

user-c5fb8b 06 December, 2019, 10:46:06

Hi @user-045b36 can you please send us the log-file after this happened? You can find it in your home folder > pupil_capture_settings > capture.log

user-045b36 06 December, 2019, 12:01:45

sorry

capture.log

user-c5fb8b 06 December, 2019, 12:41:27

@user-045b36 I can't see anything in the log, not even that the eye windows close. The log always only stores the last run of capture and is overwritten when you restart capture. Can it be that you restarted capture after you experienced the issue? If so, please make sure to reproduce the error and then send me the log file without restarting capture in between. Thank you!

user-045b36 06 December, 2019, 12:52:00

@user-c5fb8b I will try to reproduce the error later and send you log-file

user-c5fb8b 06 December, 2019, 12:53:20

Ok thanks!

user-479baa 07 December, 2019, 12:18:13

Hello! I have a question about the tech specs of Pupil Core. According to https://pupil-labs.com/products/core/tech-specs, gaze precision is 0.02. I want to know what the 0.02 for gaze precision stands for, since the unit of measurement is not specified.

papr 07 December, 2019, 13:05:58

@user-479baa it is also measured in degrees.

user-c9d205 08 December, 2019, 12:17:32

Is there an option to cut the video in pupil player?

user-479baa 08 December, 2019, 15:17:41

@papr I appreciate it!

user-e7102b 08 December, 2019, 19:46:39

Hey @papr unfortunately I'm having some trouble with my batch exporter script: it only successfully exports surface data for ~50% of my pupil recordings. It seems like the error has something to do with functions in pandas treating the data as strings rather than numerical values. Perhaps this is a version-specific issue? Can you tell me what version of pandas you used when you wrote the surface extract function? (https://nbviewer.jupyter.org/gist/papr/87157c5da93d838012444f4f6ece6bcc) Thank you

user-e7102b 08 December, 2019, 21:06:05

Error report

Pupil_Labs_Surface_Import_Error.txt

user-c5fb8b 09 December, 2019, 08:11:03

Hi @user-c9d205 You can set an export range / trim marks in Player. You can either use the input field in the General Settings menu, or just drag around the start/end of the timeline. This won't cut the recording, as we never modify the original recording. But when exporting you will only get the selected slice.

user-94ac2a 09 December, 2019, 08:22:22

Would it be possible to stream the video from the local machine to another computer, do the computation there, and send the results back in real time?

user-81a601 09 December, 2019, 13:52:42

Hello guys, I got a pupil core last week, and I have a question about it

user-81a601 09 December, 2019, 13:53:21

if I want to record the session with Pupil Mobile, how can I calibrate the session for the user?

user-81a601 09 December, 2019, 13:53:45

Do I need to calibrate on the computer beforehand, and then connect the Pupil headset to the mobile to record the data?

user-c5fb8b 09 December, 2019, 14:00:03

Hi @user-81a601 If you record the calibration procedure as well, you can perform the full analysis process offline in Pupil Player. See the following playlist on youtube with tutorials on how to do the offline analysis: https://www.youtube.com/playlist?list=PLi20Yl1k_57rlznaEfrXyqiF0sUtZMMLh

user-81a601 09 December, 2019, 14:00:51

so I need to record the calibration with Core connected to the computer

user-81a601 09 December, 2019, 14:01:03

and then connect it to the mobile, and record the session

user-81a601 09 December, 2019, 14:01:15

then I can use these two recordings to do the analysis

papr 09 December, 2019, 14:03:10

@user-94ac2a Pupil Capture does not currently have the option to stream the video and receive the result from somewhere else. Usually, you would connect the eye tracker directly to the other device and just receive the detection result on the target device.

Technically, you can write a plugin that subscribes to the video data published by the Frame Publisher plugin and displays it using the HMD Streaming video backend. But this will add considerable delay to the detection procedure.

user-c5fb8b 09 December, 2019, 14:04:11

@user-81a601 No you would do a manual marker calibration. You record the calibration procedure and then your experiment with Pupil Mobile in one take. Then you transfer the recording to the PC and open it with Pupil Player. There you can run offline pupil detection and offline gaze mapping. Please have a look at the youtube tutorials I linked, maybe that shows a bit better what's happening.

user-81a601 09 December, 2019, 14:10:21

okay, I'll try it

user-81a601 09 December, 2019, 14:10:23

thanks in advance

user-c5fb8b 09 December, 2019, 14:11:45

No problem!

user-81a601 09 December, 2019, 18:19:06

hello there, another question

user-81a601 09 December, 2019, 18:21:07

can I use Pupil Core with people who use glasses or contact lenses?

user-81a601 09 December, 2019, 18:23:44

another question: does the surface tracker only work with AprilTags, or is it possible to track custom images?

user-81a601 09 December, 2019, 19:57:02

@user-c5fb8b it worked, but something is wrong: my gaze data is inverted in relation to my iris movement

user-81a601 09 December, 2019, 19:57:38

Does anyone know anything about it?

papr 09 December, 2019, 20:00:04

@user-81a601 The important question is if the gaze matches the actual locations that you were looking at. The pupil movement in the eye overlay might be just flipped.

user-81a601 09 December, 2019, 20:01:45

the gaze data is following my eye movement, but it seems mirrored

user-81a601 09 December, 2019, 20:02:21

is the gaze data generated from the offline pupil detection feed?

papr 09 December, 2019, 20:04:02

Pupil data (red circle on the eye overlay) is generated by the offline pupil detection, gaze data (green dot) is generated by the offline calibration

user-81a601 09 December, 2019, 20:04:54

right

user-81a601 09 December, 2019, 20:05:03

my pupil data looks right

user-81a601 09 December, 2019, 20:05:28

my gaze data also looks right, but when my pupil moves to the left, the gaze moves the same amount but to the right

papr 09 December, 2019, 20:07:16

@user-81a601 This is just a visualization issue. Do not forget that the eye cameras point in the opposite direction of the world camera. This causes the image to look mirrored. You can flip the eye overlay in the eye overlay menu on the right.

user-c5fb8b 10 December, 2019, 08:44:50

@user-81a601 Regarding surface tracking: Currently we only support marker based tracking and not area-of-interest (AOI) based tracking (e.g. tracking a specific image). We are working on AOI tracking support, but I cannot give you an estimate of when we will be able to ship this. For now you will have to place markers around the images that you want to track.

Regarding glasses on Pupil Core: We don't have prescription lens add-ons for Pupil Core (like we do for Pupil Invisible), but many members in our community and developers on our team wear prescription lenses. You can put on the Pupil Core headset first, then eye glasses on top. You can adjust the eye cameras such that they capture the eye region from below the glasses frames. This is not an ideal condition, but does work for many people.

user-c5fb8b 10 December, 2019, 09:00:23

@user-81a601 While glasses usually work fine (with the procedure described above), I'd recommend using contact lenses if you have the choice. These do not impact our software at all.

user-abc667 10 December, 2019, 14:57:15

I have a remote troubleshooting session starting in 4 min. Do I need to connect somewhere, or will they know where to find my machine?

user-abc667 10 December, 2019, 15:03:28

@papr I have a remote troubleshooting session starting now. Do I need to connect somewhere or will they know where to find my machine?

user-abc667 10 December, 2019, 15:22:21

Never mind, found the email with the contact info.

papr 10 December, 2019, 15:27:57

@user-abc667 Sorry for the delayed response. I hope everything has been cleared up now.

user-6cdb90 10 December, 2019, 16:15:58

Hi, I have a question about offline calibration and gaze mapping. I recorded files with Pupil Capture, not Pupil Mobile, and I know offline gaze mapping is used for files recorded with Pupil Mobile. But I am wondering: if I do offline calibration on my recorded files, does it improve the accuracy of the calibration even for recordings made in Pupil Capture? And is it worthwhile to run this offline process on all my recorded files in order to increase the accuracy of the previous files?

user-e7102b 10 December, 2019, 16:17:48

@papr I'm just following up on my batch exporter question from the weekend. The example surface export Jupyter notebook that you kindly provided seems to have issues loading certain datasets. I think it has something to do with the pandas functions being unable to read in string data, but I'm also wondering if it's possibly a pandas version issue, as running the code with different versions of pandas gives me different error messages. If you have any suggestions and/or can tell me what version of pandas you are using, I'd really appreciate it. This is the final hurdle I need to clear to have a fully functioning batch exporter. Thanks.

Pupil_Labs_Surface_Import_Error.txt
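
One way to guard against such type issues (a minimal sketch, assuming a v1.x surface export and its usual column names; both the path and the column list are illustrative) is to coerce the relevant columns to numeric explicitly instead of relying on pandas' version-dependent type inference:

```python
import pandas as pd

# Hypothetical path to one surface export produced by Pupil Player.
csv_path = "exports/000/surfaces/gaze_positions_on_surface_Surface1.csv"

df = pd.read_csv(csv_path)

# Force the columns used for arithmetic to numeric; unparseable rows
# become NaN and can then be dropped instead of breaking later math.
numeric_cols = ["world_timestamp", "x_norm", "y_norm", "confidence"]
df[numeric_cols] = df[numeric_cols].apply(pd.to_numeric, errors="coerce")
df = df.dropna(subset=numeric_cols)
```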

papr 10 December, 2019, 16:27:05

@user-6cdb90 You can apply manual offset correction during offline calibration. So yes, it definitely can increase accuracy. But accuracy won't increase by just running the offline pupil detection/calibration with default settings, which is equivalent to your pre-recorded data.

user-6cdb90 10 December, 2019, 16:30:09

@papr Thank you very much for your response. So how can I apply this manual offset correction during offline calibration?

papr 10 December, 2019, 18:02:52

@user-e7102b Sorry, I did not have time to look into this further yet. Could you write an email to data@pupil-labs.com such that it does not get lost? I will come back to you as soon possible.

papr 10 December, 2019, 18:03:15

@user-6cdb90 Check out the "Gaze Mapper" submenu. There is an option to set values for x and y offsets.

user-c37dfd 10 December, 2019, 19:14:47

Hi. I was playing with the generic video overlay plugin in pupil player. I have dyadic data (synced in pupil capture) that I would like to manipulate and export simultaneously. I see that I can pull one worldview on top of another full recording directory, but this isn't exactly what I would like. I am wondering if there is any functionality for having both full recording directories running simultaneously in pupil player? We've gotten around this issue in the past by exporting the videos separately from pupil player and then syncing in final cut pro, but if I could skip that step and do everything in pupil player that would be really great. Thanks!

papr 10 December, 2019, 19:44:38

@user-c37dfd How about exporting one recording and adding the exported video as overlay? Or is that what you tried already?

user-e7102b 10 December, 2019, 20:04:33

@papr - thank you - I just sent an email.

user-a98526 11 December, 2019, 03:12:37

@papr Thank you for your previous help. I want to know: can I get gaze position data in real time from Core?

user-a98526 11 December, 2019, 03:23:20

And what is the function of the USB-C mount? Can I buy the high-speed camera and the USB-C mount together?

wrp 11 December, 2019, 03:28:52

@user-a98526 yes, you can stream data in real-time. You would subscribe to gaze topic. See this script for example on how to subscribe and filter messages: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
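
The core of that example boils down to the following sketch (assuming Pupil Remote on its default port 50020):

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the subscription port.
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

# Subscribe to all gaze messages.
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

while True:
    topic, payload = sub.recv_multipart()
    gaze = msgpack.loads(payload, raw=False)
    print(topic, gaze["norm_pos"], gaze["confidence"])
```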

wrp 11 December, 2019, 03:31:31

@user-a98526 Pupil Core USB-C scene camera mount does not ship with any scene camera. This configuration is intended to be used for those who want to add their own scene sensor. Some researchers who require depth data for the scene, use the Pupil Core USB-C scene camera configuration with a RealSense R400 series depth sensor. However, the configuration is designed to accommodate other USB-C sensors that you might want to use/develop/prototype with.

user-a98526 11 December, 2019, 03:37:08

Yes, I need these two, but I can't pick them at the same time

wrp 11 December, 2019, 04:28:22

@user-a98526 what does "these two" refer to? Please could you clarify?

user-119591 11 December, 2019, 09:29:27

can you please help me?

Chat image

user-c5fb8b 11 December, 2019, 09:31:55

Hi @user-119591 Sorry that you are experiencing this issue. It will be fixed with the next release of Pupil. Until then, you can make it work by downloading and running vc_redist.x64.exe from the official Microsoft support page: https://support.microsoft.com/en-ca/help/2977003/the-latest-supported-visual-c-downloads After this, Pupil should start as expected!

user-119591 11 December, 2019, 09:32:22

thanks! C:

user-a98526 11 December, 2019, 13:49:14

@wrp

Chat image

user-a98526 11 December, 2019, 13:49:53

I mean I want these two at the same time

papr 11 December, 2019, 13:54:12

@user-a98526 I think there is a misunderstanding. These are actually two mutually exclusive options.

1) The high speed camera option is a Pupil Core frame with eye cameras and a fully integrated camera, i.e. the world camera cannot be disconnected. 2) The USB-C mount option is a Pupil Core frame with eye cameras and a USB-C connector that can be connected to a USB-C camera of your choice. I think the visualisation on the website shows this nicely.

Addition to 2) The USB-C mount option does not include a world camera.

user-a98526 11 December, 2019, 14:17:21

Thanks! I think I may need depth information. So the Intel RealSense D400 series sensors can get depth information and scene information at the same time?

papr 11 December, 2019, 14:25:07

@user-a98526 Correct. Please be aware that Intel does not provide pyrealsense on macOS. pyrealsense is the Python module that Pupil Capture uses to access the RealSense cameras. In other words: Pupil Capture does not work with the Intel RealSense D400 series on macOS.

papr 11 December, 2019, 14:26:23

Also, please be aware that Pupil Capture does not save the raw depth data during a recording. Instead it saves the RGB color stream and a colored representation of the depth stream in two separate video files.

user-141bcd 11 December, 2019, 14:26:46

@papr can the "Reset 3D model" function in Capture be triggered via remote notification?

papr 11 December, 2019, 14:33:29

@user-141bcd Not at the moment, no.

user-141bcd 11 December, 2019, 14:35:25

kk, thx

user-067553 11 December, 2019, 16:03:17

Hi, I'm using the pupil_positions.csv generated by Pupil Player to understand when the user has their eyes closed (I recognize it as confidence drops in both eyes). Is this the correct way, or are there better ways to do that? Thanks in advance

papr 11 December, 2019, 16:04:19

@user-067553 That would be the correct way. The blink detector does that for you already btw. Just activate the plugin, wait for it to run, export and checkout the csv file exported by the plugin.
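
Once exported, prolonged closures can be read straight out of that file. A minimal sketch, assuming the v1.x blinks.csv layout with start_timestamp/end_timestamp/duration columns (the export path is illustrative):

```python
import pandas as pd

# blinks.csv lives in the export folder, e.g. exports/000/blinks.csv
blinks = pd.read_csv("exports/000/blinks.csv")

# Timestamps are in seconds, so eye closures longer than 1 s are
# simply the detected blinks whose duration exceeds 1.0.
prolonged = blinks[blinks["duration"] > 1.0]
print(prolonged[["start_timestamp", "end_timestamp", "duration"]])
```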

user-b8789e 11 December, 2019, 16:27:01

Hello everyone, is it possible to do heatmap analysis without QR codes?

user-b8789e 11 December, 2019, 16:28:27

I just found iMotions platform that can do it, but costs almost $9000

user-067553 11 December, 2019, 16:28:40

Thanks @papr, I'm not sure about the meaning of the onset confidence threshold and offset confidence threshold shown in the plugin interface. If I only want the blinks.csv file, do I still have to export everything?

user-c5fb8b 11 December, 2019, 16:31:05

Hi @user-b8789e Currently we only support marker based tracking and not area-of-interest (AOI) based tracking (e.g. tracking a specific image). We are working on AOI tracking support, but I cannot give you an estimate of when we will be able to ship this. For now you will have to place markers around the images that you want to track.

user-b8789e 11 December, 2019, 16:34:13

@user-c5fb8b does the Invisible do that?

user-b8789e 11 December, 2019, 16:36:19

@user-c5fb8b is it possible to test the alpha version when available?

user-b8789e 11 December, 2019, 18:22:59

Hello, is there no software that could help Pupil Core export heatmaps?

marc 11 December, 2019, 18:43:42

Hi @user-b8789e ! In Pupil Capture and Pupil Player you can use the Surface Tracker plugin for AOI Tracking. See the documentation here: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

user-b8789e 11 December, 2019, 18:45:08

@marc Thank you. But for this kind of project, we cannot use any AprilTags or markers around the AOI.

user-b8789e 11 December, 2019, 18:46:05

That's why I'm asking if there's any other software that works with Pupil Core.

marc 11 December, 2019, 18:48:49

I think Gaze Intelligence distributes software for marker-free AOI tracking. Their tools are compatible with Pupil Core recordings. https://gazeintelligence.com/

user-abc667 11 December, 2019, 19:54:19

@papr Yes, finally found the right email in my email avalanche and did the troubleshooting session.

user-abc667 11 December, 2019, 19:58:30

@papr Just finished defining 16 surfaces (paper forms) using AprilTags 36h11, and when I start up the program again it complains: "surface_tracker.surface_serializer: You have loaded an old and deprecated surface defn." I get one warning msg for each surface defined. Is this a bug, or is there something wrong with using the 36h11 version of the tags? Thanks.

papr 11 December, 2019, 20:15:24

@user-abc667 did you do that in Capture or Player?

user-abc667 11 December, 2019, 20:53:37

@papr in Capture. Is that wrong? I defined them with live images of the forms.

papr 11 December, 2019, 21:05:52

@user-abc667 no :) there should be two surface definitions files in the pupil_capture_settings folder. One with a _v1 suffix, one without. Can you confirm this?

user-abc667 11 December, 2019, 21:42:49

@papr No, I have just the _v01 suffix file. I had been using the legacy square markers and thought that the right way to get rid of them was simply to move the existing _v01 file out of that directory and define the new surfaces with the AprilTag markers. Did I screw things up?

papr 11 December, 2019, 21:47:25

@user-abc667 no, you should not get a warning if you only have a _v1 file. Just to be sure: Did you get the "deprecated surface definition" in Capture or Player?

user-abc667 11 December, 2019, 21:48:02

@papr in Capture

papr 11 December, 2019, 21:49:55

You should be on the safe side if you have just redefined the surfaces. I will look into possible reasons for the warning when I am back in the office.

user-abc667 11 December, 2019, 21:51:33

@papr ok, thanks. Meanwhile I will press forward.

papr 11 December, 2019, 21:51:46

As long as there is a surface_definitions_v1 file, your are good to go.

user-abc667 11 December, 2019, 21:52:24

@papr OK. Also, I just downloaded and ran 1.19.2 and got the same warning msgs.

papr 11 December, 2019, 21:52:59

@user-abc667 yes, this is probably a mistake on our side.

user-771cfd 12 December, 2019, 01:11:08

Hi, I would like to ask a question about replacement of the cables. My camera cable broke recently, and I found some wires that seem like replacements. Are they really replacements, and should I open the black box of the original one? I'm being careful since it's very fragile. Thanks very much!

user-771cfd 12 December, 2019, 01:11:25

Chat image

user-771cfd 12 December, 2019, 01:11:36

Left is the broken one, right is the replacement(?)

wrp 12 December, 2019, 01:50:53

Hi @user-771cfd please email sales@pupil-labs.com and our hardware team can provide you with assistance.

user-119591 12 December, 2019, 09:40:01

Does the Pupil Player application give me pupil size in an Excel file?

user-119591 12 December, 2019, 09:50:17

never mind thanks! 😄

user-b7d6e5 12 December, 2019, 12:23:44

Hi, I just bought a Pupil Core and tried to set it up today. However, it's extremely difficult for me to get the right view of the eye in the eye camera. The attached image is the best I can get by adjusting. Besides the slider and ball joint, is there any other place where I can make adjustments? And what is the orange arm extender for? How can it be used?

Chat image

marc 12 December, 2019, 12:37:04

Hi @user-b7d6e5 ! Please see this section of the documentation on how the eye cameras can be adjusted: https://docs.pupil-labs.com/core/hardware/#headset-adjustments The orange extender arms are necessary for some face geometries to get a good view of the eye. Please see this section on how to use them: https://docs.pupil-labs.com/core/hardware/#eye-camera-arm-extender

user-c5fb8b 12 December, 2019, 13:03:53

@here 📣 Pupil Software Release v1.20 📣 This release contains a few stability fixes and deprecates the fingertip calibration method. Additionally we have externalized the pupil detectors into a standalone library, significantly simplifying the setup procedure for running Pupil from source.

Check out the release page for more details and downloads: https://github.com/pupil-labs/pupil/releases/tag/v1.20

user-4d0769 12 December, 2019, 18:13:26

@papr When I run pupil_capture it says that it doesn't have a calibration for the eye cameras and it loads the dummy one (with focal length=1000.0). If it's not calibrated, how can it find the actual eye center in mm? Is there a way to calibrate the eye cameras manually?

papr 12 December, 2019, 18:16:08

@user-4d0769 We assume a fixed size for the eyeball. This allows us to infer the pupil size in mm.

papr 12 December, 2019, 18:18:05

We do not calculate the pupil size from a single image, but based on the 3d model, which requires a series of images.

user-4d0769 12 December, 2019, 18:22:34

thanks!

user-2798d6 12 December, 2019, 19:47:40

Hello - are there any known issues with version 1.20 not opening on Macs with Catalina? I'm getting an error message saying the software needs to be updated because Apple can't check for malicious software.

papr 12 December, 2019, 20:33:28

@user-2798d6 this is a general issue on Catalina. Right click the application, click open, and it will give you the option to open the app normally.

papr 12 December, 2019, 20:34:33

In order to prevent the dialog, one needs to notarize the app with Apple. We are working on integrating this step into our deployment.

user-067553 13 December, 2019, 08:32:08

I want to detect prolonged eye closures of more than 1 s. I'm trying to do it by modifying the blink detector plugin. Do you have any suggestions/plugins/instruments for doing that?

marc 13 December, 2019, 09:44:19

@user-067553 The blink detection plugin detects the start/onset and end/offset events of blinks. It should immediately allow you to also detect prolonged eye closures, which will simply have onset and offset events that are further apart in time.

user-067553 13 December, 2019, 10:46:00

Thank you @marc, do you know if the filter length measures the length of the eye closure, or the analyzed segment in which confidences are measured (as a window)?

marc 13 December, 2019, 10:56:00

@user-067553 The latter is correct. The confidence signal is convolved with a kernel/filter, which yields a signal that measures blink-like movements. The filter length specifies the temporal width of the kernel.
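
A toy illustration of that filtering step (not the plugin's actual code; the kernel shape and thresholds are simplified assumptions):

```python
import numpy as np

def blink_response(confidence, filter_length):
    # Kernel: +1 over the first half, -1 over the second half. Correlating
    # it with the confidence signal gives large positive values where
    # confidence drops sharply (blink onset) and large negative values
    # where it recovers (offset). filter_length is in samples.
    half = filter_length // 2
    kernel = np.concatenate([np.ones(half), -np.ones(half)]) / filter_length
    return np.correlate(confidence, kernel, mode="same")

# usage: onsets where response > onset_threshold,
#        offsets where response < -offset_threshold
```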

user-d20f83 13 December, 2019, 16:03:54

Hi, is it possible to stream the audio using pupil mobile?

user-9ee9c8 13 December, 2019, 16:26:45

Hi, I tried to do the camera intrinsics estimation, but it doesn't seem to work. And I think some parts are missing on your help/support page, as the text refers to step 6, but only step 1 is visible on the support page: https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation

user-9ee9c8 13 December, 2019, 16:29:49

There's only step 1, but this is not enough as it is not clear how to use the pattern. Could you explain the process in more detail?

Chat image

wrp 16 December, 2019, 02:36:45

Hi @user-9ee9c8 thanks for spotting this error. I have just made the fix and it should be online within the next minute or two.

user-d18d24 16 December, 2019, 04:09:16

I purchased Pupil labs hardware for the HTC Vive. What options do I have in terms of different software that can be used to analyze eye tracking data? Thanks in advance.

wrp 16 December, 2019, 05:16:47

Hi @user-d18d24 you can use Pupil Player for visualization and lightweight analysis.

user-9a94be 17 December, 2019, 10:10:41

Hi! I started using Pupil with VR, so I have considered using Pupil Remote so I can record from another computer and the performance is not affected on the computer running VR. Is this the correct approach? I also have no clue about how to do this if this is the case. I would appreciate any help, since I am lacking most of the technical skills. Thank you so much!

papr 17 December, 2019, 10:25:14

@user-9a94be Correct, the recommended setup would be to run Pupil Capture on a separate computer. You would insert the VR add-on into your VR headset, and connect the add-on to the computer running Pupil Capture. You can use our Unity plugin to communicate with Pupil Capture through Pupil Remote. Please see our documentation https://docs.pupil-labs.com/developer/vr-ar/ or the 🥽 core-xr channel for more information.

user-9a94be 17 December, 2019, 10:28:19

@papr Thanks so much for the quick reply. I will start exploring in that direction 🙂

user-274c94 17 December, 2019, 11:29:44

Hi! I am using the Pupil Core to measure gaze in a driving simulator. Because of performance problems when running both the Pupil software and the simulator at the same time, I tried running the Pupil software on a separate laptop, which works fine. The only problem is the calibration. I want to do screen marker calibration, and of course on the screen where the simulator software is running, to have the area calibrated that I need for this screen. I wrote my own script to show the markers used in manual marker calibration on the screen and to start and stop the calibration using Pupil Remote. However, the result of the calibration seems not to be as good as with screen marker calibration. Is this the right way to go, or is there another method to do such a "remote calibration"?

papr 17 December, 2019, 11:34:30

@user-274c94 This implementation/workflow sounds exactly right!

papr 17 December, 2019, 11:38:44

@user-274c94 Maybe, you could try performing a single-marker calibration instead? It would reduce the complexity of the remote calibration code and might improve accuracy if performed correctly.
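
The start/stop side of such a script is only a few lines. A minimal sketch using Pupil Remote's single-character commands (the IP address is hypothetical; drawing the markers is up to your own code):

```python
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://192.168.1.10:50020")  # laptop running Pupil Capture

def send(cmd):
    remote.send_string(cmd)
    return remote.recv_string()  # Pupil Remote replies to every request

send("C")  # start the currently selected calibration
# ... show/animate the calibration markers on the simulator screen ...
send("c")  # stop the calibration
```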

user-ca31eb 17 December, 2019, 14:10:05

Hi, I'm trying to get used to the kit for research purposes, but I can't load Pupil Capture properly. Every time I try, it gives an error message saying a file is missing and closes the application. I believe it was "win drv". Could someone please tell me what I'm missing? I'm using windows 10

user-c5fb8b 17 December, 2019, 14:52:39

Hi @user-ca31eb You might have to run Pupil Capture as administrator initially for setting up the necessary drivers. Can you give that a try?

user-ca31eb 17 December, 2019, 15:10:10

@user-c5fb8b I'll try that. I think the drivers are already installed but I'm not sure. Thank you!

user-5529d6 17 December, 2019, 17:19:33

Hi there, I just upgraded to v1.20. In Pupil Player, turning on "legacy square markers" in the offline surface tracker plugin causes the software to crash in Ubuntu 18.04

user-bda130 17 December, 2019, 17:51:09

@papr Hi, we are looking to buy a USB-C to USB-C cable for Pupil Mobile. We bought one eye tracker in 2017 and one in 2019 and need a USB-C cable that works for both. Should we get USB-C 2.0 or 3.0?

user-670869 17 December, 2019, 22:03:20

Hi. I'm setting up the Pupil Core device for the first time. To set up the pupil detection, I read that I should set the min/max values for pupil size. What are the conventional values for this?

user-670869 17 December, 2019, 23:33:55

Also, is semantic gaze mapping possible with the Pupil Core system?

wrp 18 December, 2019, 04:56:58

@user-bda130 the company choetech makes reliable USB-C to USB-C cables. You can find them readily available on Amazon or other online shops.

wrp 18 December, 2019, 05:00:41

@user-670869 You might not need to set any of the min/max values for pupil size. The first recommendation is to physically adjust Pupil Core headset so that you are getting a good image of the eye and check confidence values in the world window while moving your head so you can sample different eye positions.

"Semantic gaze mapping" is, I believe, SMI's term for markerless AOI tracking. Pupil Core software does not have markerless AOI tracking. Instead, Pupil Core software has implemented marker based AOI tracking that we call "surface tracking". You can read more about this feature in the docs: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

wrp 18 December, 2019, 05:01:11

@user-5529d6 I will try to reproduce the behavior you note re legacy square markers with Pupil Player v1.20.

user-b37f66 18 December, 2019, 08:44:16

Hi, I'm trying to record videos in order to use the gaze 3d distance for my research. The problem is that the gaze distance is mostly not accurate. I used the default screen marker calibration at a distance of ~500 mm from the screen, as I read in the doc. What am I doing wrong? Thanks, Yogev

user-95b5d7 18 December, 2019, 09:16:15

Hello everyone, I am having problems using the Pupil Core ETG in very dark environments, such as inside flight simulators. The ETG are not able to locate the markers correctly, especially those to identify the AOI. Did any of you find any solution to make them more accurate in dark environments? Thanks in advance!

papr 18 December, 2019, 09:57:28

@user-5529d6 We might have found a possible issue that could cause such a crash. While we are investigating it, please try restarting Pupil Player with default settings from the General Settings menu.

user-e10103 18 December, 2019, 10:41:37

Could you help me? I can't open the film!!!!

Chat image

papr 18 December, 2019, 11:35:55

Hey @user-e10103 it looks like the main window is positioned off screen. Please close Player, delete the user_settings_* files in Home directory -> pupil_player_settings, and start Player again.

user-e10103 18 December, 2019, 11:54:49

@papr thanks! Another question: how can I see the eye tracking, and which application can help me to see that?

papr 18 December, 2019, 12:05:23

@user-e10103 Do you mean the eye tracking for a given recording? How was the recording created? Did you use Pupil Capture or Pupil Mobile? If you are looking for a software to see the eye tracking in real-time and make recordings, use Pupil Capture. See our documentation for details: https://docs.pupil-labs.com/core/#getting-started

user-e10103 18 December, 2019, 12:11:18

but there's no red dot on my screen

Chat image

user-e10103 18 December, 2019, 12:25:23

And how can I get more ref points?

Chat image

user-5529d6 18 December, 2019, 12:43:47

@papr Just letting you know that resetting Pupil Player's settings to default did not fix the legacy marker bug. Same thing for downgrading to v1.19

papr 18 December, 2019, 12:49:01

@user-5529d6 Could you share the player.log file in Home directory -> pupil_player_settings after reproducing the crash?

user-5529d6 18 December, 2019, 12:50:58

yes give me a minute

papr 18 December, 2019, 12:51:12

@user-e10103 The ref points are the concentric circles that are shown on the screen during a calibration. They need to be visible in the field of view of the scene camera. The center dot shows green if they are detected, and red when they are not.

user-5529d6 18 December, 2019, 12:56:44

@papr Ok I am confused now.
1. I uninstalled 1.20
2. Installed 1.19
3. Tested, it crashed the same way
4. Uninstalled 1.19
5. Installed 1.20
6. Reloaded the same recording session and the marker cache will not load anything
7. Loaded another session and this time it did not crash when selecting legacy markers

I have a backup of the other session, I will try with the first one again to see if this is an issue with that specific session. I didn't do anything special when collecting the data though.

user-5529d6 18 December, 2019, 13:24:53

@papr update
1. The software just hangs, and based on the timestamps of the player.log file, there is no entry when the crash occurs, which is odd. (The most recent entry is many minutes old.)
2. I just found out that if the surface tracker plugin is activated AND the marker type is already set to legacy marker when I start Pupil Player, it will not crash and caches the markers correctly.
3. The bug occurs when selecting the legacy marker in the drop-down list.

user-c5fb8b 18 December, 2019, 14:36:33

Hi @user-5529d6 would it be possible to share the recording with data@pupil-labs.com so we can take a look?

user-5529d6 18 December, 2019, 14:46:47

I will double check to make sure it does not contain any sensitive stuff before sending it, but there shouldn't be a problem.

user-5529d6 18 December, 2019, 14:53:25

@user-c5fb8b I just sent a link to download it from google drive.

user-c5fb8b 18 December, 2019, 15:05:24

@user-5529d6 Thanks, we will take a look at this!

user-670869 18 December, 2019, 16:29:32

@wrp Thank you!

user-670869 18 December, 2019, 17:27:32

Can we get gaze maps from the Pupil Player? I just want to see some visualizations.

user-c37dfd 18 December, 2019, 17:32:29

@papr Thanks for the suggestion! I think that would get me closer to what I am trying to do, but I think the videos would still be overlapping in a way that would obstruct some of what I will be trying to code on the backend. I do appreciate the idea though

user-c37dfd 18 December, 2019, 19:55:48

A while back I indicated I was getting some errors in the Pupil Mobile app regarding an unknown sensor identity and internal Android issues. I did some troubleshooting and determined it was our older headset (USB-micro USB) causing the issue (the errors never occur with our newer headset, USB-USB-C). I had thought changing the connection between the older headset and the phone had resolved the issue, but it appears that was a temporary fix, as I am now getting the same errors as before. I have a few questions:
1) Is there any way we can update the older headset that would resolve these errors?
2) Our newer headset is still not the newest model. We are wondering if there are any specs different from our newer headset (purchased in ~2016/2017).
3) Is there potential to order an older version of a headset if the newest version is significantly different from our current setup?

wrp 19 December, 2019, 02:38:15

@user-670869 Do you mean "heat maps" when you say "gaze maps"? If so, you can generate heatmaps with the surface tracker. You might want to try loading this demo recording in Pupil Player and enabling the Surface Tracker plugin. https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing

wrp 19 December, 2019, 02:39:41

@user-c37dfd re hardware - could you send an email to sales@pupil-labs.com with your order id(s) so that we can best help you with hardware questions.

user-e3b669 19 December, 2019, 12:59:46

Hi, I just installed the new version (1.20) and I can't find the "Scan Path" plugin. It's mentioned in the user guide, and I would expect to find it among the "Vis" plugins.

papr 19 December, 2019, 13:01:22

@user-e3b669 unfortunately, it has been disabled due to technical reasons for a while now. We have an idea for a solution but are still evaluating its technical feasibility.

user-e3b669 19 December, 2019, 13:03:04

Thanks, I will try to be patient 🙂

user-5529d6 19 December, 2019, 15:17:04

@papr @user-c5fb8b Some update on the legacy marker bug. I got it to work with the following procedure:
1. Open Pupil Player 1.20
2. Turn on surface tracker
3. Select legacy marker - it crashes
Repeat 1-3 once, it crashed. Repeat 1-3 again, it works. I have to do this individually for each session for it to work.

user-c5fb8b 19 December, 2019, 15:21:38

Hi @user-5529d6 we have been able to reproduce this on one of our machines, but not on others. We are still not sure about the cause of this issue, but are actively investigating. Could you try running Player from command line (do you know how to do that?) and send us the output from there when reproducing the issue? We noticed there might be additional debug info that is not written to the player.log file.

user-5529d6 19 December, 2019, 15:23:00

@user-c5fb8b Yes I can do that. I am fetching some more data that I need to process and send you the output

user-c5fb8b 19 December, 2019, 15:24:11

Thanks!

user-5529d6 19 December, 2019, 16:27:51

Also, unrelated to the legacy marker issues: is there any plan to do something smarter about RAM management in Pupil Player? When it reaches the RAM limit it just shuts down automatically. I would normally expect things to slow down considerably while memory is shuffled and cleared, but not a total crash of the software.

user-5529d6 19 December, 2019, 16:28:09

I have had it happen with anywhere between 8 and 64 GB of RAM.

user-e2c411 19 December, 2019, 16:32:59

Hey, does anyone have experience with the Groups plugin and with recording from two Pupil Core headsets? Do you need two PCs to record them both? And can things like gaze convergence and gaze latency between the two recorded gaze streams be shown in Pupil Player, or what can be shown in Player at all? Thanks already for your help!

user-5529d6 19 December, 2019, 16:35:18

@user-e2c411 No prior experience with recording from two Pupil Core headsets simultaneously, but based on my experience with one: if you are using high-resolution and high-frame-rate settings for both the world and eye cameras, you will need a serious computer to record everything without dropping frames.

user-e2c411 19 December, 2019, 16:36:39

What specs would you consider as serious?

user-5529d6 19 December, 2019, 16:50:04

I am sure papr can comment on the required specs and how they should scale as the number of Pupil Core headsets increases.

We are currently using this setup for collection, which I would not call a serious computer, but it has been sufficient so far:
- Ubuntu 18.04
- AMD Ryzen 7 3800X 8-core processor
- 64 GB of RAM
- fairly standard SSD
- GeForce GTX 1080

recording 400x400 eye cameras at 120 Hz and a 1280x720 world camera at 60 Hz (these are from memory).

With this configuration we are not dropping frames, but turning on online surface recognition results in an immediate drop in frame rate.

We also tried the following configurations:
- AMD Ryzen 7 1700X, 8 cores, 3.6 GHz, 32 GiB RAM, SSD, Windows 10: ~25% frame drop (online surface tracking turned on)
- AMD Ryzen 7 1700X, 8 cores, overclocked to 4.1 GHz, 32 GiB RAM, SSD, Windows 10: less than 5% frame drop (online surface tracking turned off)
- i5-4690K, 4 cores, 3.5 GHz, 32 GiB RAM, HDD, Ubuntu 18.04: ~13% frame drop (online surface tracking turned off)
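
As an aside, frame drop rates like these can be estimated after the fact from the recording's own timestamps. A minimal sketch, assuming the recording folder contains world_timestamps.npy and a nominal frame rate of 60 Hz (adjust fps to your settings):

```python
# Estimate dropped world frames from inter-frame intervals.
import numpy as np

fps = 60.0                        # nominal world camera frame rate
ts = np.load("world_timestamps.npy")
dt = np.diff(ts)

expected = 1.0 / fps
# Intervals spanning more than one frame period indicate dropped frames
dropped = int(np.sum(np.round(dt / expected)) - len(dt))
print(f"{len(ts)} frames recorded, ~{dropped} dropped "
      f"({dropped / (len(ts) + dropped):.1%})")
```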

user-5529d6 19 December, 2019, 16:51:07

Not sure how much any of this helps you though.

user-e2c411 19 December, 2019, 17:00:00

Okay thanks a lot!

user-b37f66 19 December, 2019, 20:11:08

Hello, I'm asking again because I didn't get any response.

I'm trying to record videos in order to use the 3d gaze distance for my research. The problem is that the gaze distance is mostly inaccurate. I used the default screen marker calibration at a distance of ~500 mm from the screen, as described in the docs. What am I doing wrong?

wrp 24 December, 2019, 13:20:12

@user-5529d6 @user-e2c411 You should be able to use any Intel i5 or i7 series CPU with 8-16 GB of RAM. You do not need a serious graphics card (the integrated graphics in a laptop are fine, for example).

user-abc667 24 December, 2019, 15:14:17

@wrp @papr I'm running the latest release (though this has happened with prior releases) and have been trying to calibrate manually, using the v0.4 marker on the end of a 26" stick, which we use to move the marker around on the tabletop. Previously this worked without much trouble, moving the target around slowly for about 2 min. (This is like your single marker calibration method, but moving the target instead of the head.) But lately I have routinely been getting lots of pings with error messages saying the target is moving too quickly, and moving slowly enough would make the calibration take 10 or 15 min. (a) Is there some setting I'm missing? (b) The docs say that when using the manual marker you should "select Marker display mode > manual", but I can't find that anywhere in any menu. Am I overlooking it somewhere?

user-abc667 24 December, 2019, 15:20:40

@papr @wrp I'm using the Pupil headset with its wide-FOV camera. The Camera Intrinsics Estimation -> show undistorted image option does a very nice job of straightening lines. Is this something that can be applied offline/during playback as well, or does the original recording need to be made with this on? I ask mainly because it appears to be a sizable load on the CPU, and we have to run with a laptop that is adequate but not amazingly powerful. Thanks. (And yes, I know it's late in your day on Christmas Eve; I thought I'd get this in for attention when you can. Thanks.)
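
While that question waits for an answer: undistortion can in principle also be applied to the recorded world video offline with OpenCV, provided the camera matrix and distortion coefficients are extracted from the recording's intrinsics first. A rough sketch under that assumption (the intrinsics below are placeholders, and a wide-angle lens may require OpenCV's fisheye model instead of cv2.undistort):

```python
# Offline undistortion sketch; K and dist are placeholder values, not the
# actual calibration of the wide-FOV camera.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])  # placeholder camera matrix
dist = np.zeros(5)               # placeholder distortion coefficients

cap = cv2.VideoCapture("world.mp4")  # world video from the recording folder
ok, frame = cap.read()
while ok:
    undistorted = cv2.undistort(frame, K, dist)
    cv2.imshow("undistorted", undistorted)
    if cv2.waitKey(1) == 27:         # Esc to quit
        break
    ok, frame = cap.read()
cap.release()
cv2.destroyAllWindows()
```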

user-aaa87b 27 December, 2019, 14:07:28

Just a quick question: I'm using Pupil in an experiment with a sequence of 40 different images. In the corners of each image I'm going to insert 4 AprilTags. Do I have to use 4 different tags for each image (that sounds like 160 different tags)? In other words, should each image have a unique combination of tags, or could I reuse the same 4 tags in a different order across 4 different images (which would make 40 different tags)? Thanks a lot.

user-b37f66 28 December, 2019, 21:55:45

Hi, I still didn't get any response. @papr @wrp I would really appreciate your help. 1. I'm trying to record videos in order to use the 3d gaze distance for my research. The gaze distance I get is mostly inaccurate. I used the default screen marker calibration at a distance of ~500 mm from the screen, as described in the docs. How can I calibrate the device to get accurate gaze depth data? 2. @papr I tried to use the Depth Frame Accessor plugin as you suggested, but I don't understand where I can find the depth data after activating the plugin. The log file only contains a repeating message: "- world - [INFO] depth_frame_accessor: Depth values: shape=(480, 640), dtype=uint16"
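
For context, the stock Depth Frame Accessor example only logs each frame's shape and dtype; it does not write the depth data anywhere. A hypothetical extension, assuming the plugin hands you the depth image as the (480, 640) uint16 numpy array the log refers to (the save_depth_frame helper is illustrative, not part of Pupil):

```python
# Hypothetical helper to dump depth frames received inside the plugin.
import os
import numpy as np

def save_depth_frame(depth, timestamp, out_dir="depth_frames"):
    """Save one uint16 depth frame to an .npy file named by its timestamp."""
    os.makedirs(out_dir, exist_ok=True)
    np.save(os.path.join(out_dir, f"depth_{timestamp:.6f}.npy"), depth)

# Later, offline:
# import glob
# depth = np.load(sorted(glob.glob("depth_frames/*.npy"))[0])
# RealSense depth values are typically 1 mm per unit.
```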

user-39f4f5 30 December, 2019, 02:43:37

Hi everybody, I am very new to this field. Nice to meet you.

user-39f4f5 30 December, 2019, 02:44:29

I just want to know the exact data structure or data shapes we can get from recordings made with the Pupil Core, so that we can decide whether to start our project with this nice application! Does the data consist of gaze coordinates for all frames (i.e., all timestamps)?

user-c5fb8b 30 December, 2019, 08:15:45

@user-aaa87b Hi, do you mean you have 40 images that are presented to subjects and you want to track gaze on these images? How are these images presented? Are they printed out or presented on a screen?

user-aaa87b 30 December, 2019, 08:24:41

@user-c5fb8b Hi and thank you for the answer. Yes, each subject is presented with 40 images on a screen using Psychopy.

user-c5fb8b 30 December, 2019, 08:25:02

@user-39f4f5 Hi! Yes, you can get gaze coordinates and timestamps for every frame. Beyond that, there is a lot more data that we export, and additional plugins that you enable will export even more. You can see a description of the common data fields in the Raw Data Exporter section of the docs: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter

Please note that the general workflow is not to read the data straight from a recording. Our recordings store the same data, but in a very efficient format that is not human readable. To get at the data, you would open the recording in Pupil Player first. There you can also fine-tune pupil detection after the fact and e.g. select only the timespans that interest you. Then you can use the Raw Data Exporter plugin to export all the data referenced in the docs as CSV files. Does that answer your question?
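
To make the last step concrete, a minimal sketch of reading the export with pandas, assuming the Raw Data Exporter wrote its files to exports/000/ inside the recording folder:

```python
# Load the gaze export produced by Pupil Player's Raw Data Exporter.
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")

# Each row is one gaze datum: a timestamp plus normalized world-image
# coordinates (norm_pos_x/norm_pos_y in 0..1, origin bottom-left).
print(gaze[["gaze_timestamp", "norm_pos_x", "norm_pos_y", "confidence"]].head())
```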

user-c5fb8b 30 December, 2019, 08:58:02

@user-aaa87b By "insert 4 apriltags" do you mean just "paint" them onto the image?

The normal workflow would be to stick 4 markers (printed on paper) to the edges of the screen. For your case this means you would have to split the gaze data up by image manually later. You can of course also "paint" the markers onto the images themselves.

If you add different AprilTags for each image, you can distinguish the images by their tags. But this means you will have to define 40 surfaces. We have seen performance issues in the past with a lot of surfaces, especially on Windows machines. If you choose to go that route, please test it first to make sure this approach is viable for your setup. If you experience performance issues with 40 surfaces, you will have to fall back to the first method.
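
Since the images are presented with PsychoPy, "painting" the markers on can also happen at presentation time. A minimal sketch, assuming AprilTag PNGs saved locally (file names, sizes, and positions are placeholders; keep a white margin around each tag so detection stays reliable):

```python
# Present a trial image with four AprilTag markers in the screen corners.
from psychopy import visual, core

win = visual.Window(fullscr=True, color="white", units="norm")

tag_files = ["tag_00.png", "tag_01.png", "tag_02.png", "tag_03.png"]  # placeholders
corners = [(-0.9, 0.9), (0.9, 0.9), (-0.9, -0.9), (0.9, -0.9)]

tags = [
    visual.ImageStim(win, image=f, pos=p, size=(0.15, 0.2), interpolate=False)
    for f, p in zip(tag_files, corners)
]
stimulus = visual.ImageStim(win, image="trial_image.png")  # placeholder image

stimulus.draw()
for tag in tags:
    tag.draw()     # draw tags last so they stay fully visible
win.flip()
core.wait(5.0)     # placeholder presentation duration
win.close()
```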

user-c5fb8b 30 December, 2019, 09:17:50

@user-abc667 Regarding the calibration: What you want to select is Calibration Method -> Single Marker Calibration in the Calibration Plugin (the one with the circle marker icon). Then further below you will find Marker display mode which you should set to manual. I assume you were using the Manual Marker Calibration method, which is something different!

user-c5fb8b 30 December, 2019, 09:20:01

@user-abc667 @user-b37f66 Regarding your questions to @papr and @wrp: They are both currently on vacation, but we will come back to you as soon as possible!

user-aaa87b 30 December, 2019, 09:49:19

@user-c5fb8b Thanks a lot for the answer; however, I'm not quite sure I've got it right. Our experiment goes like this: the subject is shown a first, single image, and is then shown a second image composed of two separate pictures to choose from. This sequence is repeated 20 times with different images. I'm interested in tracking the subjects' gaze during the exploration of the first image and during the decision process as well (second image).

user-c5fb8b 30 December, 2019, 09:59:25

@user-aaa87b Well, in theory this should be no problem. Again, I would recommend making a recording and defining 40 surfaces in Player to test for performance issues. If that does not work, the best solution depends on your experiment. Essentially: the fewer surfaces you define, the more work you have to do manually. If the "presentation areas" of the first and second image are the same size, you could use just 4 markers to define a single surface to track; then you have to figure out yourself which image the subjects look at. Or you define two surfaces, one for the first image and one for the second (or even three, with two separate surfaces for the second image); then you only have to figure out which stage you are in. All of these are viable approaches, but as I said: check the surface performance.
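
For the single-surface variant, the manual splitting step could look roughly like this, assuming the surface export plus a trial log whose onset/offset timestamps share a clock with the recording (column names are placeholders):

```python
# Split gaze-on-surface data into per-image trials using a trial log.
import pandas as pd

gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Screen.csv")
trials = pd.read_csv("trial_log.csv")  # placeholder: image_name, onset, offset

for _, t in trials.iterrows():
    in_trial = (gaze["gaze_timestamp"] >= t["onset"]) & \
               (gaze["gaze_timestamp"] < t["offset"])
    per_image = gaze[in_trial & gaze["on_surf"]]
    print(t["image_name"], len(per_image), "gaze samples on surface")
```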

user-aaa87b 30 December, 2019, 10:17:04

@user-c5fb8b OK, I'll check that. Thank you very much for your help, and best wishes for the new year!

user-abc667 30 December, 2019, 16:24:58

@user-c5fb8b Thanks so much for the advice on the calibration. As I suspected, I did indeed have the wrong thing selected. Single marker works fine, and of course I found the Marker Display Mode. Happy to wait for @papr and @wrp to return for an answer to the second question (about intrinsics). Hope they and you have a great new year.

user-854a19 30 December, 2019, 21:24:57

Hey guys, what are the system requirements for the Pupil Labs eye tracker?

user-c5fb8b 31 December, 2019, 08:04:12

Hi @user-854a19, the key specs are CPU and RAM. We suggest at least an Intel i5 CPU (i7 preferred) with a minimum of 8 GB of RAM (16 GB is better if possible). We support macOS (minimum 10.12.0), Linux (minimum Ubuntu 16.04 LTS), and Windows 10.

user-854a19 31 December, 2019, 11:30:40

@user-c5fb8b Thank you very much. A happy new year.

End of December archive