core


user-c5fb8b 01 October, 2020, 07:00:29

Hi @user-c563fc, Pupil currently only runs with Python 3.6 on Windows.

user-c5fb8b 01 October, 2020, 07:01:26

@user-c563fc please make sure to follow the setup instructions closely. They should tell you to use Python 3.6. https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md

papr 01 October, 2020, 07:02:15

@user-b292f7 The fixation detector is based on a minimum-duration-maximum-dispersion algorithm. In your case, I would guess that the maximum dispersion is set to 1.6 degrees

user-b292f7 01 October, 2020, 10:01:33

and where can I change it to 3 for example?

papr 01 October, 2020, 10:05:37

@user-b292f7 In the "Fixation Detector" menu on the right.

user-908b50 01 October, 2020, 10:15:58

So I am not having the best time with the surface tracker plug-in. It worked for the first export, but Player won't recognize the legacy square markers for at least 2 other files. Would you recommend just copy-pasting the surface definition file into the other data folders?

papr 01 October, 2020, 10:17:00

@user-908b50 Have you tried reducing the minimum marker perimeter?

papr 01 October, 2020, 10:17:52

@user-908b50 I would recommend copying the surface definitions if they are the same across multiple recordings. But if the markers are not being detected, the surface cannot be tracked.

user-908b50 01 October, 2020, 10:19:13

@papr I will try that! I wanted to keep the min perimeter the same for all recordings. Does it matter? The surface is the same, and I am using the same width x height pixel dimensions for all data.

papr 01 October, 2020, 10:20:47

The perimeter can be set individually for each recording. It is a trade-off between marker size and false positive marker detections.

user-908b50 01 October, 2020, 10:21:33

So lower perimeter, higher chance of false positives? I am going to try copy pasting the surface definitions first.

user-908b50 01 October, 2020, 10:22:16

Is there a way to double check the settings used during an export?

papr 01 October, 2020, 10:24:25

Correct. Unfortunately, the surface tracker does not export its configuration

user-908b50 01 October, 2020, 10:25:20

Yes, that is very unfortunate! I had been playing around with the settings between different exports, so I am a little unsure about the min perimeter I used.

papr 01 October, 2020, 10:26:09

You should see the effect of any change to the parameter in real time, though.

user-908b50 01 October, 2020, 10:27:50

true, but that won't give me much information about the export I already have, right?

user-908b50 01 October, 2020, 10:35:34

nvm, I am using the default, recommended perimeter of 60, figured it out.

user-908b50 01 October, 2020, 11:58:01

Re-using the surface definition did not help, and neither did changing the minimum perimeter. I get this error:

Background Video Processor - [INFO] camera_models: Loading previously recorded intrinsics...
/home/fiza/pupil/pupil_src/shared_modules/square_marker_detect.py:174: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
  contours = np.array(contours)

I think this was because I unticked "inverted markers" in an attempt to re-start finding the surface markers. Please correct me if I am wrong! Anyway, the square legacy markers are not detectable.

user-c563fc 01 October, 2020, 15:52:14

@user-c563fc please make sure to follow the setup instructions closely. They should tell you to use Python 3.6. https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md @user-c5fb8b I followed the setup instructions and now I am getting this error: ImportError: DLL load failed: %1 is not a valid Win32 application.

user-c563fc 01 October, 2020, 16:01:16

and this link in the instructions is also broken: "Download FFMPEG v4.0 Windows shared binaries from ffmpeg"

user-c563fc 01 October, 2020, 17:41:29

@user-c5fb8b I followed the setup instructions and now I am getting this error: ImportError: DLL load failed: %1 is not a valid Win32 application. @user-c563fc I solved the issue: it was because of the FFMPEG DLL files. That FFMPEG build is not maintained anymore, so I had to go to the Web Archive to download the old 2018 version of FFMPEG to find those DLLs.

user-d8853d 01 October, 2020, 18:01:54

Hi guys, just a general question. While doing the DIY build, we need to remove the IR filter from the lens. Why is that? Doesn't the software work without removing the IR filter?

user-908b50 02 October, 2020, 00:13:21

Just a question on the different Pupil Player versions! Since I collected data using version 1.11 and now I'm using v2.4.3 to analyze data, do the performance gains in the Pupil Labs software translate to post-hoc offline processing and analyses as well?

papr 02 October, 2020, 07:39:23

@user-908b50 are you able to share one of the non-detectable recordings with us? I can have a look if you want.

papr 02 October, 2020, 07:39:51

@user-908b50 also, which performance improvements are you talking about in particular?

papr 02 October, 2020, 07:41:39

@user-d8853d The software works best on IR images because the pupil is much darker relative to its surroundings than it is in visible light. Removing the filter allows IR light to pass onto the sensor. Usually, IR light is blocked so that only visible light is captured.

user-7daa32 02 October, 2020, 13:23:37

Hello

Apart from manually starting and stopping the system, is there a way to tell the system to start, pause or stop?

Like something that makes sound signals?

user-c5fb8b 02 October, 2020, 13:24:48

Hi @user-7daa32, what do you mean with "the system"? The whole application? Pupil Capture or Pupil Player? The calibration?

papr 02 October, 2020, 13:25:46

@user-7daa32 You can start and stop recordings and calibrations via the network api https://docs.pupil-labs.com/developer/core/network-api/#pupil-remote
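
For a quick impression, a minimal sketch of those commands (assuming Pupil Capture runs locally with Pupil Remote on its default port 50020; the single-letter commands are from the linked docs):

import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

pupil_remote.send_string("C")  # start a calibration ("c" stops it)
print(pupil_remote.recv_string())

pupil_remote.send_string("R")  # start a recording
print(pupil_remote.recv_string())
pupil_remote.send_string("r")  # stop the recording
print(pupil_remote.recv_string())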

user-690703 02 October, 2020, 13:28:40

Hello! I'm a new user of Core; we are just beginning to set up an eye-tracking experiment with it, and I am quite lost regarding a few things. After calibrating, whichever method I use, the tracking marker is always shifted to one side. For example, if the wearer says she is looking at my face, based on the captured video it's like she is in fact looking at someone sitting next to me. Probably there is an easy solution to this, I just can't find the source.

papr 02 October, 2020, 13:30:18

@user-690703 What accuracy is reported in the accuracy visualizer menu?

user-c5fb8b 02 October, 2020, 13:31:00

Hi @user-690703, welcome to the community! Did you have a look at the best practices section in our docs? It's a good starting point for new users: https://docs.pupil-labs.com/core/best-practices/

papr 02 October, 2020, 13:32:13

A warm welcome from my part as well 🙂

user-690703 02 October, 2020, 13:37:00

Thank you! 🙂 Yes, I've checked out the documentation, and I guess I'm still missing something.

user-690703 02 October, 2020, 13:37:49

@papr Should I look at the Angular accuracy?

papr 02 October, 2020, 13:38:03

@user-690703 yes please

user-690703 02 October, 2020, 13:39:00

It's 2.39 in this case.

papr 02 October, 2020, 13:40:28

@user-690703 That is fairly high. As a reference, one degree corresponds roughly to the width of your thumb at arm's length if you hold your arm straight in front of you.

papr 02 October, 2020, 13:41:09

@user-690703 I think it would be easiest for us to give feedback if you shared an example recording of you calibrating with [email removed] Just hit the R button before starting to calibrate.

user-690703 02 October, 2020, 13:41:33

If I use the screen marker choreography, it's much better, close to 1, but we are examining faces during interaction, so I figured another choreography would be much better in this case.

user-690703 02 October, 2020, 13:42:23

Thank you, I'll do that!

papr 02 October, 2020, 13:42:54

@user-690703 Feel free to record the calibration with which you have trouble getting good accuracy.

user-7daa32 02 October, 2020, 13:48:12

@user-7daa32 You can start and stop recordings and calibrations via the network api https://docs.pupil-labs.com/developer/core/network-api/#pupil-remote @papr Thanks. This means I will have to write code to do this.

What the system should start or stop would be the calibration and the recording; that is just a goal given by our group. Thanks for the link. Assuming the information from that link is the solution, it means that I would have to write code, which I can't do. I will see if my PI understands it, and I will also ask what he wants to start or stop.

I thought we wouldn't have a problem by just hitting the C and R buttons

user-908b50 02 October, 2020, 18:27:58

@user-908b50 are you able to share one of the non-detectable recordings with us? I can have a look if you want. @papr There are quite a few recordings! I will email one or two to you. I shouldn't think ethics would have a problem since they are de-identified... fingers crossed! Could you kindly re-share your email? Except for the first one, every recording is now coming up as undetectable for whatever odd reason. In terms of performance gains, I meant fixation detection and pupil positions. Since I collected the data using an older version, is it worth struggling through the surface tracking to export data using the newer version?

papr 02 October, 2020, 18:33:41

@user-908b50 data@pupil-labs.com

papr 02 October, 2020, 18:34:08

@user-908b50 let me check out the recordings first. I will let you know.

user-020426 02 October, 2020, 21:36:26

Hi all, I'm having an issue with getting my Pupil Capture application to recognise a video source. I have the Pupil Labs HTC Vive add-on plugged in via a USB 3.0 connection, I'm using Windows 7, and I can see the Pupil Cam1 ID0 and ID1 in my device manager under imaging devices. Should I expect any other devices to pop up in my manager? I attempted to uninstall the drivers and open Pupil Capture as an administrator to reinstall the drivers but had no luck. (Let me know if I should direct this to the vr-ar chat)

papr 03 October, 2020, 11:05:36

@user-020426 Hey, unfortunately, we do not support Windows versions other than Windows 10. Please upgrade or use one of the other supported operating systems: Ubuntu 18.04 or newer, or macOS High Sierra 10.13 or newer.

user-020426 03 October, 2020, 11:37:27

Ahhh, that will explain my issue. Thank you for letting me know @papr, and I'll look at upgrading now.

user-467cb9 03 October, 2020, 19:40:38

Hello, I've just started working with Pupil Labs Core for my engineering diploma. When I started Pupil Core as administrator (after driver installation for the cameras by Windows 10), one of the messages in the console reads: "video_capture.uvc_backend: Hardware timestamps not supported for Logitech Webcam C930e. Using software timestamps." And a second one: "pyre.pyre_node: Group default-time_sync-v1 not found". In the Pupil Core application I only have an image from the main camera, but the cameras for both eyes don't work. What should I do?

user-467cb9 03 October, 2020, 19:42:07

After reconnecting via USB again, I got the message "pyre.pyre_node: Peer None isn't ready" for both eyes.

user-467cb9 03 October, 2020, 20:09:04

I've tried uninstalling and installing the drivers for the device according to the instructions on your website https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting

user-467cb9 03 October, 2020, 20:11:53

With my second Pupil Labs Core (not Pro) I have problems too, but in this case neither the main camera nor the eye cameras work. For all devices I get the message "video_capture.uvc_backend: Could not connect to device! No images will be supplied."

user-a6e660 05 October, 2020, 14:00:07

Hello there,

user-a6e660 05 October, 2020, 14:02:15

I have a question. Is there any way to read the data of the "Pupil Core" eye tracking glasses directly from a computer via USB? Or do you need remote access via an API?

papr 05 October, 2020, 14:24:03

@user-467cb9 This looks like the camera drivers for the eye cameras were not installed correctly. Which version of Pupil Capture and which hardware configuration do you use?

papr 05 October, 2020, 14:25:19

@user-a6e660 The glasses only provide the camera images via USB. The eye tracking results are generated by software on the computer. To access the results, you can either build a plugin for the software or access it via the Network API (recommended).

user-a6e660 05 October, 2020, 14:41:37

Thanks for the answer. I have already read up on access to the real-time data (IPC backbone, PUB-SUB pattern of ZeroMQ for one-to-many communication). However, I could not successfully implement this in the project. Is there any other way to read the data as easily as possible and evaluate it afterwards?

papr 05 October, 2020, 15:06:00

@user-a6e660 If you do not need to access the data in real-time, the recommended work flow is to export the recording to CSV files with Pupil Player: https://docs.pupil-labs.com/core/software/pupil-player/
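
Once exported, the CSV files can be read with standard tooling. A minimal sketch (the exports/000/gaze_positions.csv path and column names follow the Player export layout; assumes pandas is installed):

import pandas as pd

# Player writes exports into the recording folder, e.g. exports/000/
gaze = pd.read_csv("recording/exports/000/gaze_positions.csv")
print(gaze[["gaze_timestamp", "norm_pos_x", "norm_pos_y"]].head())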

user-7daa32 05 October, 2020, 17:23:48

Hello

I think the other times I wasn't clear on what I wanted, or really wanted, because I didn't understand it from my PI.

Here it is:

We have a starting point. While at the start point, the system will tell a participant to look at A (maybe in the form of a sound; the first sound should mean this), and then we will have a sound signaling the fixation on the area of interest (just like in the screen marker calibration process, where the stop signal is when the marker moves to another spot). Next is a dead time while the participant returns to the starting point. The process will then start again for another area of interest to be searched for. All of this will be in one video for that particular participant. I don't know how feasible this is.

user-b7ea86 06 October, 2020, 07:23:16

Yes @user-a6e660, I currently have the same problem... I would like to read out the data in real time, but I don't have a solution for this problem. I cannot successfully install the network API on my computer. Does anyone have any instructions or tips on how I can best do this? The instructions on Pupil Labs are very vague... Thanks.

papr 06 October, 2020, 07:32:24

@user-b7ea86 Let me try to clarify: The network API works based on a server-client principle where Pupil Capture is the server and your script/program/experiment is the client. The communication requires 2 library dependencies: zeromq and msgpack. Both are available for a multitude of programming languages.

If you were to use Python as your programming language of choice, you would have to do the following things on the client computer:

  1. Install Python
  2. pip install pyzmq
  3. pip install msgpack==0.5.6
  4. Run the example script (https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_remote_control.py)
  5. Go from there.

Therefore, you cannot install the Network API itself on your computer, only its dependencies. I hope this has clarified your questions.

Please be aware that you can run the client script on the same computer as Pupil Capture. If you run them on separate computers, you will have to set the server's IP address in the client script.
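
As a concrete illustration, here is a minimal sketch of such a client that subscribes to real-time gaze data (assuming Capture runs on the same machine; the SUB_PORT request and the "gaze." topic follow the linked docs and helper scripts):

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

pupil_remote.send_string("SUB_PORT")  # ask Pupil Remote where data is published
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")  # use "pupil." for raw pupil data instead

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.unpackb(payload, raw=False)
    print(topic, gaze["timestamp"], gaze["confidence"])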

papr 06 October, 2020, 07:36:46

@user-7daa32 Pupil Capture can not be used to create visual or auditory stimuli. Usually, you would use an experiment-building software like PsychoPy. After building the experiment, you would integrate the network API into your experiment code such that the experiment can control Pupil Capture remotely, including sending timestamps to Capture when it presents stimuli to the subject (often called triggers or annotations, see https://docs.pupil-labs.com/developer/core/network-api/#remote-annotations)
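
To sketch the trigger/annotation part (payload keys follow the linked remote-annotations docs; this assumes the Annotation plugin is active in Capture and reuses the same REQ socket pattern as above):

import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

pupil_remote.send_string("PUB_PORT")  # where Capture listens for published messages
pub_port = pupil_remote.recv_string()
publisher = ctx.socket(zmq.PUB)
publisher.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB socket time to connect before sending

pupil_remote.send_string("t")  # current Pupil time, used as the annotation timestamp
pupil_time = float(pupil_remote.recv_string())

annotation = {"topic": "annotation", "label": "stimulus_onset",
              "timestamp": pupil_time, "duration": 0.0}
publisher.send_string("annotation", flags=zmq.SNDMORE)
publisher.send(msgpack.packb(annotation, use_bin_type=True))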

user-430fc1 06 October, 2020, 07:40:25

@user-b7ea86 and @user-a6e660 - here's some of my code that may help you along. There's a Python class for Pupil Core and another for grabbing data in real time. It may not be exactly what you need, but I hope you find it useful. You could use it to grab 10 s of pupil data as follows:

p = PupilCore()
pg = PupilGrabber(p, 'pupil.1.3d', sec=10)
pg.start()
sleep(10)
data = pg.get('diameter_3d')

It would be great to get some feedback on this if you do find it useful, as this is my own development against the remote helper and Network API.

jtm_pupil.py

papr 06 October, 2020, 07:42:18

@user-430fc1 Do you need the R/r commands for this to work? If not, I would remove them to make this an even more minimal example.

papr 06 October, 2020, 07:43:07

@user-430fc1 Also, in case you have a repository for this code, feel free to link to it in https://github.com/pupil-labs/pupil-community

user-430fc1 06 October, 2020, 07:43:09

@papr I suppose not - I always assumed they were necessary

papr 06 October, 2020, 07:43:44

R/r start and stop recordings. If you just want real-time data without a recording, you do not need to start a recording.

user-430fc1 06 October, 2020, 07:45:54

@papr this is part of a larger repo that will be made available before long. I'll be sure to link it when the time comes.

user-b292f7 06 October, 2020, 09:47:20

Hi, can I change the maximum dispersion to more than 4.9? I tried to do so and it doesn't work.

user-1768fa 06 October, 2020, 09:48:41

Hi everyone: I would like to ask what the horizontal and vertical extents of the scene camera's FOV are?

user-1768fa 06 October, 2020, 09:48:54

thank you!

papr 06 October, 2020, 09:50:25

@user-b292f7 No, this is not possible if you want to use the fixation detector via the UI. More than 4 degrees is usually considered a very large dispersion, not necessarily representing fixations.

papr 06 October, 2020, 09:52:17

@user-1768fa

# wide-angle lens on high-speed camera
FOVinDeg(resolution=(1920, 1080), horizontal=139, vertical=83)
FOVinDeg(resolution=(1280, 720), horizontal=99, vertical=53)
FOVinDeg(resolution=(640, 480), horizontal=100, vertical=74)

user-b292f7 06 October, 2020, 10:17:44

Is there a limit for the duration of the fixations? It seems that I need more than 4 degrees for dispersion.

Chat image

user-b292f7 06 October, 2020, 10:19:36

and it looks the same with 4.9

Chat image

papr 06 October, 2020, 10:20:41

@user-b292f7 By increasing the dispersion limit, you are grouping smaller fixations into bigger ones. This is why you see fewer fixations with a higher dispersion limit.

papr 06 October, 2020, 10:20:56

I suggest giving the default values a try. 🙂

user-1768fa 06 October, 2020, 10:21:38

@papr Hi... Do you have a construction diagram of the scene camera? I think I accidentally broke it...

user-1768fa 06 October, 2020, 10:21:57

Chat image

papr 06 October, 2020, 10:24:18

@user-1768fa If you are lucky @user-755e9e might be able to help you with that. But I cannot make any promises.

user-1768fa 06 October, 2020, 10:27:36

thank you sir @papr and @user-755e9e , can't wait for your replying..

user-b292f7 06 October, 2020, 10:28:21

Thank you. When I used the default values, it looks the same, with 1.5 at the top; maybe I need to change something else...

user-c5fb8b 06 October, 2020, 10:33:07

@user-b292f7 what specifically are you trying to achieve? Or what was your expected result when looking at the fixations?

user-1768fa 06 October, 2020, 10:41:59

hi sir @papr, if it is possible, may I buy a brand new scene camera?

papr 06 October, 2020, 10:42:46

@user-1768fa please contact info@pupil-labs.com in this regard

user-b292f7 06 October, 2020, 10:48:30

@user-c5fb8b I am trying to compare data before and after learning, looking at the parameters (dispersion, duration, diameter...). It seems that I have longer fixation durations than the default... so I am playing with that as well...

user-c5fb8b 06 October, 2020, 10:54:49

@user-b292f7 ok, as @papr mentioned above, the higher you set the maximum dispersion, the longer the detected fixations will become, which might not be what you actually want. If I recall correctly, fixations are usually very short, on the order of a few hundred milliseconds. The higher you set the maximum dispersion, the more likely you are to detect something as a "fixation" which actually is not one. That being said, I'm no eye-movement researcher and I would recommend checking with the relevant literature.

user-b292f7 06 October, 2020, 11:03:09

Thank you @user-c5fb8b and @papr. When I use the defaults I get something that seems not good, so I tried changing the duration and dispersion limits, but maybe that's not the right direction...

Chat image

user-c5fb8b 06 October, 2020, 11:04:21

@user-b292f7 what exactly do you mean by "seem not good"? How do you judge whether your settings result in good or bad fixation detections?

user-b292f7 06 October, 2020, 11:09:42

It's because my partners and advisor said that I need to change something until I don't see artificial dispersion at the top of the graph (the same with duration).

user-1768fa 06 October, 2020, 11:40:23

excuse me, may I ask for the specifications of this camera? @papr

user-1768fa 06 October, 2020, 11:41:29

Chat image

user-1768fa 06 October, 2020, 11:41:46

Maximum resolution, FOV and fps...

user-1768fa 06 October, 2020, 11:45:23

and I could not contact @pupil-labs.com ..

user-6b3ffb 06 October, 2020, 11:47:34

Hi, I would like to know if prescription glasses affect the pupil diameter calculated by the software. We are going to design an experimental protocol.

papr 06 October, 2020, 11:48:04

@user-1768fa You can find resolutions and frame rates on our website. The latest FOV numbers are those that I posted above. What do you mean by you could not contact [email removed]

user-1768fa 06 October, 2020, 11:50:11

hi @papr, there are 2 kinds of cameras; I cannot identify which one is the 100° FOV one and which one is the 60° FOV one.

papr 06 October, 2020, 11:51:25

@user-1768fa These FOV values are out of date and will be updated in the near future. Please see the values I posted above. They refer to the wide-angle lens, which is the one used in the pictures that you have posted so far.

user-1768fa 06 October, 2020, 11:51:28

and is it the email address [email removed]

papr 06 October, 2020, 11:51:35

@user-1768fa yes

user-1768fa 06 October, 2020, 11:56:16

thank you!! @papr

mpk 06 October, 2020, 12:02:38

@user-6b3ffb we recommend having the eye camera below or inside the prescription lenses.

user-1ccccf 06 October, 2020, 14:37:04

@papr Recently, I have been reading and thinking about the gaze calibration code, but I am confused about https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L203-L213 . For the eye_camera_to_world_matrix T = [R t], I don't know why we need to calculate t = t' + R*(-sphere_center_pos) in get_eye_cam_pose_in_world. The t' is eye_hardcoded_translation, and it is also the eye camera center position in the world.

The t may translate the sphere_center to the eye camera position in https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L253-L254 , because R*sphere_center + t = R*sphere_center + t' + R*(-sphere_center_pos) = t'. Thus, the s0_center coincides with the eye camera center position in the world, t'. I don't know where my misunderstanding is. Please point it out. Thank you very much.
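
To make the question concrete, here is a small numeric sketch (illustrative values and names, not the actual gazer_headset.py code):

import numpy as np

R = np.eye(3)                                # eye-camera-to-world rotation (example)
t_hardcoded = np.array([20.0, 15.0, -20.0])  # t', the eye_hardcoded_translation
sphere_center = np.array([0.5, -0.2, 35.0])  # sphere center in eye camera coordinates

t = t_hardcoded + R @ (-sphere_center)       # as in get_eye_cam_pose_in_world
print(R @ sphere_center + t)                 # equals t_hardcoded, i.e. the sphere
                                             # center maps onto t', as derived above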

papr 06 October, 2020, 14:39:25

@user-1ccccf Btw, we found a reason why the confidence is not consistent across OS. The reason is that the 3d detector is built for running in realtime, and updates models based on time passed in realtime instead of timestamps. This means that the model will update less often compared to the total amount of frames if the detection runs faster.

Edit: This is at least one possible reason for the different outcomes.

user-1ccccf 06 October, 2020, 14:41:10

The Figure.

Chat image

user-1ccccf 06 October, 2020, 14:54:18

@papr Thank you for the explanation. I found a reason why the confidence is higher on Linux. But it sounds incredible.

After getIntersectedCircle finishes in https://github.com/pupil-labs/pupil-detectors/blob/master/src/singleeyefitter/EyeModel.cpp#L116-L117 , the unprojectedCircle value is equal to the circle value on Linux, which causes observationFit.value = 1 in calculateModelObservationFit. I was stunned by what I observed, but it did happen. The value is normal on Windows.

user-1ccccf 06 October, 2020, 14:56:31

I also think it should be impossible, because the unprojectedCircle is a const value. But it did happen on Linux. It happened on my computer at least.

papr 06 October, 2020, 14:58:51

@user-1ccccf ok, thank you. I will have a look at both the finding and your question above. Unfortunately, I will not be able to come back to you in this regard today.

user-1ccccf 06 October, 2020, 15:01:35

OK, thank you very much. Just reply to me at your convenience

user-1ccccf 06 October, 2020, 15:13:08

@user-1ccccf Btw, we found a reason why the confidence is not consistent across OS. The reason is that the 3d detector is built for running in realtime, and updates models based on time passed in realtime instead of timestamps. This means that the model will update less often compared to the total amount of frames if the detection runs faster.

Edit: This is at least one possible reason for the different outcomes. @papr Yes, I also noticed the problem. So I often use the real time as the timestamp. Therefore, I just need to keep the time at 30 FPS for updating models. You mentioned the real time here: https://github.com/pupil-labs/pupil-detectors/blob/master/src/singleeyefitter/EyeModel.cpp#L151-L152

user-c563fc 06 October, 2020, 15:17:29

How do I add a world video in Pupil Player?

user-c563fc 06 October, 2020, 15:19:37

for me, the world video is a separate screen recording

papr 06 October, 2020, 15:31:52

@user-c563fc hey, please be aware that the Pupil software is not meant for remote eye tracking, which seems to be your goal :)

user-c563fc 06 October, 2020, 15:50:10

Yeah, because I am using the HTC Vive, and now I want to use the gameplay as the world video; that gameplay is a separate screen-recording video.

papr 06 October, 2020, 15:56:11

@user-c563fc ah, I understand. But you did not record it via the Screencast feature, did you? If not, you will have to generate timestamps for the externally recorded screen recording

papr 06 October, 2020, 15:57:44

Also, you will have to figure out the proper intrinsics to use for the virtual camera from which the scene was filmed

user-7daa32 06 October, 2020, 17:14:17

Sorry about that.

I am thinking of the best way for the system to tell the participant to perform different tasks (all in one video), and when to start and stop doing each task.

I hope it's clear now, sir

papr 06 October, 2020, 17:15:46

@user-7daa32 I still struggle with understanding what you actually need, to be honest. I feel like I cannot help you properly because of that.

papr 06 October, 2020, 17:17:15

Is it general advice on how to perform the experiment with the subject? Or are you missing specific functionality within Pupil Capture? Or is it something in addition to Capture, that you need?

user-7daa32 06 October, 2020, 17:26:59

Is it general advice on how to perform the experiment with the subject? Or are you missing specific functionality within Pupil Capture? Or is it something in addition to Capture, that you need? @papr

Sorry about that.

I am thinking of the best way for the system to tell the participant to perform different tasks (all in one video), and when to start and stop doing each task.

I hope it's clear now, sir

papr 06 October, 2020, 18:21:19

@user-7daa32 So, you want to automate the experiment instruction without programming, do I understand correctly? Have you had a look at https://www.psychopy.org/ before? It provides a graphical user interface for experiments, including playing sounds and displaying instructions and stimuli. You can use it without programmatically integrating Pupil Capture into it (in other words: without using the network api).

Keep in mind: the less you program, the more you will have to do manually after the fact. 🙂 Or in reverse: the more you program, the more time you will save when processing the recordings.

user-b292f7 06 October, 2020, 18:47:15

Hi, it's me again. Can you explain to me how Pupil decides what is a fixation?

user-467cb9 06 October, 2020, 19:02:04

@user-467cb9 This looks like the camera drivers for the eye cameras were not installed correctly. Which version of Pupil Capture and which hardware configuration do you use? @papr Hi there, I'm using the latest version, Pupil Capture 2.4.0, from your website

user-467cb9 06 October, 2020, 19:05:20

@user-467cb9 This looks like the camera drivers for the eye cameras were not installed correctly. Which version of Pupil Capture and which hardware configuration do you use? @papr What do you mean by "hardware configuration"?

papr 06 October, 2020, 19:43:55

@user-b292f7 See the documentation https://docs.pupil-labs.com/core/terminology/#fixations

papr 06 October, 2020, 19:45:23

@user-467cb9 Do you use a Pupil Core headset? With how many eye cameras (Monocular vs binocular setup)? What generation are they (120Hz vs 200Hz)?

user-7daa32 06 October, 2020, 21:14:03

@user-7daa32 So, you want to automate the experiment instruction without programming, do I understand correctly? Have you had a look at https://www.psychopy.org/ before? It provides a graphical user interface for experiments, including playing sounds and displaying instructions and stimuli. You can use it without programmatically integrating Pupil Capture into it (in other words: without using the network api).

Keep in mind, the less you program, the more you will have to do manually after-the-effect. 🙂 Or in reverse: The more you program, the more time you will save when processing the recordings. @papr Thanks. I will look at it

user-467cb9 06 October, 2020, 21:16:35

@user-467cb9 Do you use a Pupil Core headset? With how many eye cameras (Monocular vs binocular setup)? What generation are they (120Hz vs 200Hz)? @papr Yes, I use a Pupil Core headset. Both are monocular. I have two headsets. How can I check the generation of the cameras?

Chat image Chat image

papr 06 October, 2020, 21:20:38

@user-467cb9 Oh, these are the very old ones. I do not think that Capture ships auto-install drivers for these. You will definitely have to install the drivers manually. Please follow steps 1-7 from these instructions. The eye cameras will appear as "integrated camera" if I remember correctly.

user-467cb9 06 October, 2020, 21:21:44

Yes, integrated camera

user-467cb9 06 October, 2020, 21:22:00

So I'll start checking it

user-6e3d0f 06 October, 2020, 21:24:47

hi all, I wanted to ask if it's possible to get access to a 3D model of the Pupil Core glasses for usage in Unity (any Unity-compatible 3D model format), since we want to model the glasses in virtual reality. Is there already a model that I can get my hands on? Would really appreciate it :). Thanks!

papr 06 October, 2020, 21:26:12

@user-6e3d0f These are the only models that we provide officially: https://github.com/pupil-labs/pupil-geometry

user-6e3d0f 06 October, 2020, 21:27:00

Ah thanks, I didn't see that

user-6e3d0f 06 October, 2020, 21:27:28

and given the readme, I can build the 3D model from those CAD files, right?

papr 06 October, 2020, 21:29:23

@user-6e3d0f Check out the GitHub preview of the files. I think they might not be what you are looking for. They are meant as interface documentation for hardware add-ons.

user-6e3d0f 06 October, 2020, 21:30:45

Ah yes, that's what I thought now too when looking at the files. I want a 3D model of the glasses themselves so I can put them on an avatar in virtual reality. The idea is to just import a "ready" model of the Pupil Core glasses (https://pupil-labs.com/products/core/) and use it in Unity. Do you get what I mean?

papr 06 October, 2020, 21:31:33

@user-6e3d0f I think so but I do not know if I can help you with that. Try contacting info@pupil-labs.com in this regard.

user-6e3d0f 06 October, 2020, 21:31:47

I'll drop them an email. Anyways thanks for your help 🙂

user-1ccccf 07 October, 2020, 12:23:41

@papr Hi, my last question has been solved. Now I have two other IR eye cameras, which are different from the Pupil Labs ones. How can I measure the focal length of these IR eye cameras to replace the value in https://github.com/pupil-labs/pupil-detectors/blob/master/src/pupil_detectors/detector_3d/detector_3d.pyx#L47-L48 ? I also found that the focal length affects the z-coordinate of the 3D eyeball centers and pupil centers. But I don't know how to measure the focal length of my IR eye cameras.

papr 07 October, 2020, 12:26:21

@user-1ccccf if the IR cameras are compatible with Pupil Capture, you could select them as world camera and run the camera intrinsics estimation plugin. Be aware that you might need to print the circle pattern in such a way that it is visible to the IR camera.

user-1ccccf 07 October, 2020, 12:34:36

@papr Yes, but I found the IR camera may not capture an image of the computer screen. I wonder how the focal length of the integrated camera of Pupil Labs was measured.

papr 07 October, 2020, 12:44:22

@user-1ccccf Correct, that is why I suggest trying to print the pattern. Some printers are able to print it such that it becomes visible.

user-1ccccf 07 October, 2020, 12:51:57

@papr OK, I understand. Thank you very much. Btw, does it have any specific requirements for the printer? Can you provide more information?

papr 07 October, 2020, 12:52:53

@user-1ccccf Basically, you need paper that reflects IR light and ink that does not.

user-1ccccf 07 October, 2020, 12:53:56

@papr OK, thank you very much.

user-ee7c3e 07 October, 2020, 13:29:21

Is there any way to get the scene camera video on LabStreamingLayer? Our previous workaround (before using LSL) was to send a ZMQ message to start recording on Pupil Capture at the same time as we start recording on our acquisition software. Which is not robust at all, and requires some cleanup work (the acq software data & Pupil video aren't saved in the same folder, etc.)

papr 07 October, 2020, 14:34:11

@user-ee7c3e Not with the current implementation. You basically would need to extend pupil_capture_lsl_relay.py with another outlet that is responsible for streaming the video via https://github.com/sccn/xdf/wiki/Video-Compressed-Meta-Data

https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/pupil_capture_lsl_relay.py#L47 events is a dictionary containing the optional key frame. Its value is an instance of the pyuvc class Frame https://github.com/pupil-labs/pyuvc/blob/master/uvc.pyx#L82 You would need to extract the codec and raw data and push them to the video outlet.
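
A rough sketch of what such an additional outlet could look like (stream name, encoding, and the jpeg_buffer attribute are assumptions; the full XDF video metadata from the linked spec is omitted):

from pylsl import StreamInfo, StreamOutlet

info = StreamInfo(name="PupilSceneVideo", type="VideoCompressed",
                  channel_count=1, nominal_srate=30, channel_format="string",
                  source_id="pupil_capture_scene_video")
outlet = StreamOutlet(info)

def push_frame(frame):
    # `frame` would be the pyuvc Frame from events["frame"]; jpeg_buffer as the
    # location of the compressed data is an assumption to verify against uvc.pyx
    outlet.push_sample([bytes(frame.jpeg_buffer).hex()])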

user-ee7c3e 07 October, 2020, 14:37:18

@papr Thanks, that's what I started looking at, but I couldn't find the keys of the events variable, so thanks for the tip 😉 I don't think I'll go through with all that, but if I do I'll make sure to make a PR.

user-ee7c3e 07 October, 2020, 14:38:39

Related question: is the scene camera accessible via OpenCV? If so, could I use something like VideoAcq (https://bitbucket.org/neatlabs/videoacq/) to record it?

papr 07 October, 2020, 14:40:34

@user-ee7c3e If the camera is being used by Pupil Capture, it is usually not possible to access it at the same time using a different software. I think it is possible to access the cameras using OpenCV in general, but I do not know if you would have the same control over them as when using Pupil Capture.

user-56be96 07 October, 2020, 16:39:10

Hello - I downloaded the new Pupil apps for Mac, and when I open old recordings in Player, the calibration, fixation detection, etc. fail because there is "no gaze data available to find fixations". Is there a fix for this?

Hi all! Receiving this error under the same conditions: recorded on Pupil Mobile 1.2.3, viewing in 2.4 Player on Mac. Is there any way to fix it?

papr 07 October, 2020, 20:02:59

@user-56be96 Have you turned on offline pupil detection and run offline calibration successfully?

user-908b50 07 October, 2020, 20:38:56

@user-908b50 let me check out the recordings first. I will let you know. @papr Just sent you 3 recording folders! I am now having difficulties with both version 2.4.3 and 1.11. With the exception of one folder, the other 3 I have tried so far aren't recognized by 1.11. As a reminder, we collected the data using v1.11.

papr 07 October, 2020, 20:44:26

@user-908b50 I won't be able to have a look at this today. I will come back to you as soon as I had a look.

user-908b50 07 October, 2020, 20:44:56

@papr alright, please let me know as soon as you are able to. Thanks!

user-908b50 07 October, 2020, 20:45:40

Let me know if you need the raw files. The ones I sent are processed ones.

user-4ddeb2 10 October, 2020, 19:03:22

Hey, trying to figure out how to import the pupil size .pldata files into a jupyter notebook

user-4ddeb2 10 October, 2020, 19:03:54

This load_pl_datafile yields an error

user-4ddeb2 10 October, 2020, 19:04:11

NameError: name 'Serialized_Dict' is not defined

user-4ddeb2 10 October, 2020, 19:04:30

Anyone know the solution? I just want to access the pupil size time series.

papr 10 October, 2020, 19:05:40

@user-4ddeb2 try to use this version of the function: https://gist.github.com/papr/81163ada21e29469133bd5202de6893e
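
For reference, a minimal sketch of what that function boils down to (a .pldata file is a msgpack stream of (topic, payload) pairs, where the payload is itself msgpack-encoded; assumes the msgpack package is installed):

import msgpack

def load_pldata(path):
    with open(path, "rb") as f:
        for topic, payload in msgpack.Unpacker(f, raw=False, use_list=False):
            yield topic, msgpack.unpackb(payload, raw=False)

for topic, datum in load_pldata("pupil.pldata"):
    print(topic, datum["timestamp"], datum.get("diameter_3d"))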

user-4ddeb2 10 October, 2020, 20:10:33

Got it, thanks!

user-4ddeb2 10 October, 2020, 20:11:35

What's the time format of these timestamps? If I run datetime.fromtimestamp on them I get 1970 dates

user-4ddeb2 10 October, 2020, 20:16:14

347345.211961 example

papr 10 October, 2020, 20:21:11

@user-4ddeb2 Check our tutorial on how to convert the timestamps to datetimes https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
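
The gist of it as a sketch (key names per info.player.json in v2.x recordings; Pupil timestamps count from an arbitrary epoch, hence the 1970 dates):

import json
from datetime import datetime, timezone

with open("recording/info.player.json") as f:
    info = json.load(f)

# offset between the wall clock and the Pupil clock at recording start
offset = info["start_time_system_s"] - info["start_time_synced_s"]
pupil_ts = 347345.211961  # example timestamp from above
print(datetime.fromtimestamp(pupil_ts + offset, tz=timezone.utc))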

user-847ff0 11 October, 2020, 17:24:41

Hi! I'm new to Pupil and am thinking of getting the Pupil Core with USB-C mount to allow for prototyping at a later stage in my project. However, initially I just want to get a feel for the Core and its capabilities as originally designed. What scene cameras/sensors are best supported and/or most commonly used for the Pupil Core with USB-C mount?

user-c1bc31 12 October, 2020, 02:28:37

I am using the Core headset to detect the diameter_3d of the pupil. During the same recording, the model_id changed a few times.

user-c1bc31 12 October, 2020, 02:30:07

That gives me real trouble. Can I force it to use the same model_id during the whole recording? Of course, I will try to keep the subject's eye as controlled as we can.

wrp 12 October, 2020, 02:36:22

@user-c1bc31 Hi 👋 yes, you can 'freeze' the model. Go to each eye window in Pupil Capture and click "Freeze Model".

Chat image

user-c1bc31 12 October, 2020, 02:38:33

Thanks. I will try it.

user-c1bc31 12 October, 2020, 03:17:40

For the diameter_3d, sometimes I get measurements like 3.05E-07 mm. How come the value is extremely small? Pupil size should be around 2 mm to 10 mm.

user-da7dca 12 October, 2020, 08:41:33

Hey everyone, for a project I need to run the software on an NVIDIA Xavier. Does anyone have experience with installing the dependencies on ARM? Even though I could install everything (except for pupil-apriltags), I am not able to launch Pupil Capture or Service. Thanks in advance!

papr 12 October, 2020, 08:42:33

@user-da7dca Hey 👋 What's the error that you get when you start Capture?

user-da7dca 12 October, 2020, 08:46:19

Cython backtrace
0 0x0000007fa5e36268 in __GI___waitpid () at /build/glibc-kgYAZA/glibc-2.27/posix/../sysdeps/unix/sysv/linux/waitpid.c:30
1 0x0000007f6a558c10 in print_enhanced_backtrace () at /tmp/pip-install-rbtcz2kr/cysignals/build/src/cysignals/implementation.c:563
2 0x0000007f6a558d50 in sigdie () at /tmp/pip-install-rbtcz2kr/cysignals/build/src/cysignals/implementation.c:589
3 0x0000007f6a55b354 in sigdie_for_sig () at /tmp/pip-install-rbtcz2kr/cysignals/build/src/cysignals/implementation.c:164
4 0x0000007f6a55b318 in cysigs_signal_handler () at /tmp/pip-install-rbtcz2kr/cysignals/build/src/cysignals/implementation.c:262
5 0x0000007fa5fa56c0 in __kernel_rt_sigreturn ()
6 0x0000007fa5e0ec00 in strcmp () at /build/glibc-kgYAZA/glibc-2.27/string/../sysdeps/aarch64/strcmp.S:64
7 0x0000000000000000 in ?? ()
8 0x0000000000000000 in ?? ()

30 ../sysdeps/unix/sysv/linux/waitpid.c: No such file or directory.

papr 12 October, 2020, 08:47:34

@user-c1bc31 Given that you fit your eye model correctly, it is still possible that bad 2d pupil detection can cause bad 3d estimations. We recommend removing low confidence data to avoid this. As an additional post-processing step, you could remove data points that exceed your expected diameter bounds.
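
For example, on an exported pupil_positions.csv this could look like the following (a sketch; column names per the Player export, thresholds are arbitrary choices):

import pandas as pd

df = pd.read_csv("recording/exports/000/pupil_positions.csv")
df = df[df["confidence"] >= 0.6]               # drop low-confidence samples
df = df[df["diameter_3d"].between(2.0, 10.0)]  # keep physiologically plausible sizes (mm)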

papr 12 October, 2020, 08:49:01

@user-da7dca Is this everything? I am not sure if this is an issue with cysignals or if cysignals is not able to properly indicate what causes the issue.

user-da7dca 12 October, 2020, 08:53:32

this is the full dump I get when launching Core:

papr 21 February, 2022, 09:54:53

Looks like libglfw has an issue when calling strcmp(). This is a third-party dependency that we use for creating and managing windows for our UI. @user-878608 The issue might indeed be related to the arch used (aarch64). They are using an NVIDIA Xavier (see https://discord.com/channels/285728493612957698/285728493612957698/765132039229538314)

user-878608 20 February, 2022, 14:11:00

hey, may I ask what device you are using? It seems like aarch64 isn't very compatible?

user-da7dca 12 October, 2020, 08:53:43

message.txt

papr 12 October, 2020, 08:56:17

@user-da7dca unfortunately, I am not able to extract any helpful information from that 😕 How did you install cysignals?

user-da7dca 12 October, 2020, 08:59:01

@user-da7dca unfortunately, I am not able to extract any helpful information from that 😕 How did you install cysignals? @papr I could install it via pip

user-5e36de 12 October, 2020, 08:59:35

Hi ) My headset is very unstable, and even when it works at its best it is still inaccurate. After calibration it is still extremely inaccurate in point-of-view recognition. Is there an app which may correct that, please? Otherwise it's just a useless thing, unfortunately. For example: a person looks at an object of 1 m x 0.8 m from a distance of 2 m; after calibration, it misreports the view location within the object's confines. Thanks

papr 12 October, 2020, 09:01:48

@user-5e36de Hey 👋

My headset is very unstable, and even when it works at its best it is still inaccurate. Are you talking about the video connection? Does the video freeze when you move the headset?

papr 12 October, 2020, 09:03:48

After calibration it is still extremely inaccurate in point-of-view recognition @user-5e36de Please start a recording (hitting R) before calibrating, run a calibration, and share the recording with [email removed] such that we can give more precise feedback.

papr 12 October, 2020, 09:08:05

@user-da7dca ok. It is possible that there is a different module causing the segmentation fault but I am not able to tell which one. What is your issue with installing AprilTags?

user-da7dca 12 October, 2020, 09:10:46

When building the wheel via pip I get an error that I have an unsupported version of cmake, even though I tried several supported versions by building them from source. Since I don't need AprilTags in my setup, I bypass this problem by removing the imports in pupil_src.

papr 12 October, 2020, 09:16:17

@user-da7dca ok, understood. Sounds like a good solution. Based on the output, it looks like the issue does not happen on import but usage of the problematic module. My approach to debugging this would be to place several print-statements across service.py and try to narrow down the location of the crash.

user-da7dca 12 October, 2020, 09:17:32

alright thank you very much. will try this approach and come back with my findings 👍

user-5e36de 12 October, 2020, 09:56:09

@user-5e36de Please start a recording (hitting R) before calibrating, run a calibration, and share the recording with [email removed] such that we can give more precise feedback. @papr Hey, OK. I will try it again with Rec, then I will write to you and to [email removed] Thank you.

user-821b71 12 October, 2020, 11:13:19

Hey everyone, for a project I downloaded and installed the latest core software version (2.4) on a Windows 10 machine. Pupil Player and Pupil Service run fine. However, when running Pupil Capture I get the error in the console as shown in the attached picture. The world window pops up but stays blank, saying "Not Responding". I have tried to run pupil_capture.exe as admin, but the issue remains. I have looked up the error in Google but could not find any helpful information so far. If anyone has an idea how to solve this issue I would love to hear about it. Any help is appreciated. Thanks in advance

Chat image

papr 12 October, 2020, 11:59:53

@user-821b71 It looks like pyre (a network library that we use for various features in Pupil) is having trouble retrieving ~~its~~ an interface's unicast address. Unfortunately, the error message does not tell which interface is causing the issue. What network interfaces do you have on your computer?

user-821b71 12 October, 2020, 13:00:31

@papr Thanks for your reply. My computer has the following network interfaces: Ethernet adapter, WiFi adapter (currently connected), Bluetooth, and a Virtual Private Network adapter.

papr 12 October, 2020, 13:49:20

@user-821b71 I suspect that either the BT or the VPN is causing the issue. Can you disable the interfaces one by one and check each time whether Capture/Service starts correctly?

user-821b71 12 October, 2020, 15:00:56

@papr Thanks for the advice. I will try disabling the interfaces tomorrow, as it is a work PC and I am already home. I will keep you posted. Thanks anyway 👍

papr 12 October, 2020, 15:01:17

@user-821b71 Have a nice evening

user-8f5c75 12 October, 2020, 15:54:52

Hey. I can't adjust the cameras for my eyes. The program opens two separate windows for the right and left cameras, but the image displayed is from the front camera. How do I fix this?

papr 12 October, 2020, 15:55:40

@user-8f5c75 There should be three windows in total, can you confirm this?

papr 12 October, 2020, 15:57:20

Each window should be previewing one of the cameras (assuming a binocular (two eye cameras) headset)

user-8f5c75 12 October, 2020, 16:00:56

there are only three windows, but the image in them is from the front camera

papr 12 October, 2020, 16:02:04

@user-8f5c75 Do I understand correctly, that they are all displaying the same video? I am asking because I have not seen this issue before.

user-8f5c75 12 October, 2020, 16:02:32

Yes

papr 12 October, 2020, 16:02:57

@user-8f5c75 Could you share a picture of the headset for reference?

user-8f5c75 12 October, 2020, 16:04:08

Chat image

papr 12 October, 2020, 16:05:03

@user-8f5c75 Which version of Pupil Capture are you running?

user-8f5c75 12 October, 2020, 16:06:03

2.3, 2.4

papr 12 October, 2020, 16:07:53

@user-8f5c75 ok, great. Thank you for the information. 🙂 In one of the menus, please open the Video Source menu and enable "Enable Manual Camera Selection". Could you make a screenshot of the contents of the "Activate Camera" drop down menu?

user-8f5c75 12 October, 2020, 16:11:40

Chat image

papr 12 October, 2020, 16:14:11

@user-8f5c75 It looks like that either (1) the driver for your eye camera was not correctly installed or (2) there is a connection issue with your eye cameras. Also, given the reference picture above, there should be at least one additional entry.

Did the eye cameras work before or is this the first time using Pupil Capture?

user-8f5c75 12 October, 2020, 16:16:11

This is my first time using Pupil Capture

user-8f5c75 12 October, 2020, 16:18:03

Does this mean that the driver is installed for only one camera?

Chat image

papr 12 October, 2020, 16:19:26

@user-8f5c75 That is correct. Could you open the "Cameras" and "Imaging devices" sections? Are there any "Pupil Cam" entries?

user-8f5c75 12 October, 2020, 16:22:03

@papr There is no Pupil Cam in these sections

papr 12 October, 2020, 16:23:41

@user-8f5c75 ok, thank you. Please contact [email removed] in this regard with your order number and this link https://discordapp.com/channels/285728493612957698/285728493612957698/765247925483339817

user-8f5c75 12 October, 2020, 16:25:31

@papr Thank you very much, you helped me a lot!

user-2c338d 12 October, 2020, 17:27:42

Greetings!! I scanned the recent notes on CORE but am not finding info about my troubles. Our Player stopped working, so I went back to download the software. The download is only in the RAR format, and apparently they want to be paid for unzipping. Is there another method to obtain my updates? I heard that individual use is free, but I can't seem to locate how to get it. Any advice is appreciated! I love using this tool!

papr 12 October, 2020, 18:27:00

@user-2c338d At the bottom of the release notes, there is a link to WinRAR. You do not need to buy it. Simply click the blue download button to get it for free

user-6bc565 12 October, 2020, 18:27:54

@user-2c338d if you don't feel comfortable using WinRAR without eventually buying it, you can install 7-zip, a free program that does the same thing

papr 12 October, 2020, 18:28:46

@user-6bc565 @user-2c338d I can confirm that 7-zip works as well.

user-da7dca 13 October, 2020, 12:45:04

@user-da7dca ok, understood. Sounds like a good solution. Based on the output, it looks like the issue does not happen on import but on usage of the problematic module. My approach to debugging this would be to place several print-statements across service.py and try to narrow down the location of the crash. @papr OK, I think I found the line where the program is crashing: service.py, line 213. However, I have no idea what's wrong :/

papr 13 October, 2020, 12:51:47

@user-da7dca This is where all the plugins are initialized. Can you run Service with the --debug flag? This should give you more output. Specifically, which plugin is loaded at that time. If it does not work for you replace this logger.debug with a print statement https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/plugin.py#L382
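
For instance, something along these lines (illustrative only; check the actual variable names at the linked line):

# pupil_src/shared_modules/plugin.py, at the linked logger.debug call:
print(f"Loading plugin: {name} with settings {settings}", flush=True)  # flush avoids delayed output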

user-da7dca 13 October, 2020, 13:19:20

OK, I just used the --debug flag. I found out that it failed to load the fixation plugin; however, I don't think that is the real source of my error, since it failed to display any interface.

papr 13 October, 2020, 13:20:17

@user-da7dca try the print statement please. the logger output might be delayed

user-da7dca 13 October, 2020, 13:28:35

This is what I got

text.txt

papr 13 October, 2020, 13:31:45

@user-da7dca ok, nice. We are getting closer. Could you check if any of these import statements causes the crash? https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/service_ui.py#L12-L21

user-da7dca 13 October, 2020, 13:38:04

nope seems to be fine

papr 13 October, 2020, 13:40:27

@user-da7dca How did you install glfw?

user-da7dca 13 October, 2020, 13:41:16

from source via github

papr 13 October, 2020, 13:42:08

@user-da7dca Also, my bad. We knew the imports would be fine, as the error only appears after initializing Service_UI. Please check these lines https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/service_ui.py#L38-L177

user-da7dca 13 October, 2020, 14:07:56

ok, line 60 of service_ui.py seems to be the point where it crashes

user-8b7bfd 13 October, 2020, 15:39:09

Hello, I asked previously about how to get my coordinates into visual degrees and I was sent this https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L140-L148 which is helpful, but I'm not the strongest programmer. I tried tracing this line: vectors = capture.intrinsics.unprojectPoints(locations) to figure out how you read the camera intrinsics file and where the unprojectPoints function is defined, but I couldn't find it.

papr 13 October, 2020, 15:45:17

@user-8b7bfd This is where the required class is defined: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py

from camera_models import Camera_Model

camera = Camera_Model.from_file(
    directory_path_with_intrinsics_file,
    "world",  # given worlds.intrinsics file
    # depends on your world video resolution:
    camera_resolution,
)
camera.unprojectPoints(locations) 
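
And to go from the unprojected vectors to visual degrees, the angle between two of them can be computed like this (a sketch; the normalize keyword is an assumption, otherwise normalize the vectors manually):

import numpy as np

v = camera.unprojectPoints(locations, normalize=True)  # unit 3d direction vectors
cos_angle = np.clip(v[0] @ v[1], -1.0, 1.0)
print(np.degrees(np.arccos(cos_angle)))  # angular separation in visual degrees
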
user-8b7bfd 13 October, 2020, 15:58:21

thank you

user-da7dca 14 October, 2020, 08:04:31

I just noticed that I reported the wrong line, since I had already added some print statements. The actual source of the error is line 55 of service_ui.py (main_window = glfw.glfwCreateWindow(*window_size, "Pupil Service")). However, I don't know how to solve it. I have already built multiple versions of glfw with no problem at all :/

user-878608 12 February, 2022, 11:38:20

hey, I am facing the same issue as you. Did you manage to solve it?

papr 14 October, 2020, 10:24:32

@user-da7dca Please uninstall libglfw and try running the develop branch. We switched from our own glfw bindings to https://pypi.org/project/glfw/

Installing it via pip should provide you with a correctly built version of libglfw. (hopefully)

user-da7dca 14 October, 2020, 11:43:17

I tried it out with the glfw pip package and the develop branch, with the following results:

user-da7dca 14 October, 2020, 11:43:19

message.txt

user-908b50 14 October, 2020, 17:49:31

Hi, I have been trying to use the surface tracker plugin of the latest source release (through remote desktop), but the GUI freezes a lot more than before. Any suggestions?

papr 14 October, 2020, 17:49:52

@user-908b50 which branch are you using?

user-908b50 14 October, 2020, 17:51:02

@papr 2.4.9 (source)

papr 14 October, 2020, 17:51:32

@user-908b50 Please use the surface_tracker_fixes branch instead

user-908b50 14 October, 2020, 17:52:28

@papr so I revert back to the old 2.4.3 version and only update the surface tracker fixes?

papr 14 October, 2020, 17:53:15

@user-908b50 No, you should be able to check it out via git: git checkout surface_tracker_fixes unless you have made personal modifications.

papr 14 October, 2020, 17:53:34

You might need to run git fetch first

user-908b50 14 October, 2020, 17:54:11

Alright, let me see. I assumed updating it all would update surface tracker as well.

papr 14 October, 2020, 17:54:29

@user-908b50 I am not sure what you mean by "updating"

user-908b50 14 October, 2020, 17:55:17

I updated my own fork. And then pulled those changes locally.

user-908b50 14 October, 2020, 17:55:30

for pupil-labs/pupil

papr 14 October, 2020, 17:56:27

Ah, understood. I guess you are pulling from master. Have you made any special modifications to your fork?

user-908b50 14 October, 2020, 17:57:23

Not to pupil directly. My repository does contain other processing scripts that I have been working on.

papr 14 October, 2020, 17:58:23

ok. What is the output of git remote -v for you?

user-908b50 14 October, 2020, 17:59:15

origin   https://github.com/fiza09/pupil (fetch)
origin   https://github.com/fiza09/pupil (push)
upstream https://github.com/pupil-labs/pupil.git (fetch)
upstream https://github.com/pupil-labs/pupil.git (push)

user-3ec552 14 October, 2020, 18:01:18

hi

user-3ec552 14 October, 2020, 18:01:26

I have a pupil w120 e200b

user-3ec552 14 October, 2020, 18:01:38

I am trying to calibrate with Pupil Capture

user-3ec552 14 October, 2020, 18:01:49

but it always fails due to not enough pupil data

user-3ec552 14 October, 2020, 18:02:17

any information on how to fix it?

papr 14 October, 2020, 18:02:46

@user-908b50 try git merge upstream/surface_tracker_fixes. This will merge the surface tracker fixes into your fork.

papr 14 October, 2020, 18:03:34

@user-908b50 Should you get an error about glfwInit when running Pupil, you will need to install glfw via pip install glfw

papr 14 October, 2020, 18:04:28

@user-3ec552 Please make sure to adjust the eye cameras correctly https://docs.pupil-labs.com/core/#_3-check-pupil-detection

user-3ec552 14 October, 2020, 18:07:40

is it okay to wear a corrective lens?

papr 14 October, 2020, 18:08:10

@user-3ec552 contact lenses are ok, glasses are problematic if they occlude the pupil

user-3ec552 14 October, 2020, 18:08:55

I am wearing glasses

user-3ec552 14 October, 2020, 18:09:03

but they are not occluding the pupil

user-3ec552 14 October, 2020, 18:09:20

I can still see the pupil in the image

papr 14 October, 2020, 18:09:27

@user-3ec552 Could you share a screenshot of your eye windows with us?

user-3ec552 14 October, 2020, 18:10:07

Chat image

papr 14 October, 2020, 18:12:21

@user-3ec552 2d detection looks good. Your 3d eye model does not look like it is fit well enough. Try rolling your eyes until the green outline is as big as your eyeball (maybe slightly smaller) and the blue and red circles overlap as well as possible for all eye positions.

user-3ec552 14 October, 2020, 18:31:24

seems okay now

user-3ec552 14 October, 2020, 18:31:40

what does C T R mean on the left of the panel?

user-3ec552 14 October, 2020, 18:31:52

Three different types of calibration?

papr 14 October, 2020, 18:32:36

These are buttons. C is for calibration, T for testing / validation, and R is for recording.

papr 14 October, 2020, 18:33:48

@user-3ec552 I can generally recommend having a look at our Getting Started guide [1] and our Best Practices [2] if you have not done so yet.

[1] https://docs.pupil-labs.com/core/ [2] https://docs.pupil-labs.com/core/best-practices/

user-3ec552 14 October, 2020, 19:01:47

thanks

papr 14 October, 2020, 20:22:07

@user-908b50 Do you have an update for me on how the changes are working for you?

user-e94c74 15 October, 2020, 08:02:12

Hi, I'm a researcher from KAIST. I have a question about Blink detection from Pupil capture: how is the blink confidence calculated? Seems they are not the average of pupil detection confidence; the user guide (https://docs.pupil-labs.com/core/software/pupil-capture/#blink-detection) does not explain details about them. Where can I find more information about blink confidence?

user-4b966e 15 October, 2020, 11:36:40

Hi, I am having issues getting started. The world camera works, but both eye cameras don't supply any images. Tried the troubleshooting already (uninstall in the device manager). Did not help. Thanks for your support.

papr 15 October, 2020, 11:38:52

@user-e94c74 The blink detector convolves a step filter with the confidence signal. The resulting signal (green line in the Player timeline) peaks when there are very sharp drops or increases in the original confidence signal. When these peaks exceed the thresholds (yellow lines in the Player timeline), an onset/offset is detected. On- and offsets are aggregated to blinks.

papr 15 October, 2020, 11:39:23

@user-e94c74 You can find the implementation here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/blink_detection.py#L324
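
For anyone who wants to prototype this offline, here is a minimal sketch of the idea, assuming a 1-D array of pupil confidence values; the filter length and the 0.5 thresholds are illustration values, not the constants the Capture plugin uses:

import numpy as np

def blink_filter_response(confidence, filter_len=20):
    # Step filter: +1 for the first half, -1 for the second half.
    half = filter_len // 2
    step_filter = np.concatenate([np.ones(half), -np.ones(half)])
    # Correlating it with the confidence signal produces peaks at sharp
    # confidence drops (blink onsets) and dips at sharp recoveries (offsets).
    return np.correlate(confidence, step_filter, mode="same") / half

confidence = np.array([1.0] * 30 + [0.1] * 15 + [1.0] * 30)  # synthetic blink
response = blink_filter_response(confidence)
onset_indices = np.flatnonzero(response > 0.5)    # near the confidence drop
offset_indices = np.flatnonzero(response < -0.5)  # near the recovery
print(onset_indices, offset_indices)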

papr 15 October, 2020, 11:53:21

@user-4b966e Do I assume correctly that you are using a binocular 200Hz Pupil Core headset? Could you check the device manager and let me know how many "Pupil Cam" entries you can find? And in which category they are listed?

user-4b966e 15 October, 2020, 11:57:01

@papr Thanks for the quick reply. No, it is a 120 Hz device, I just looked it up on the receipt. In my device manager, I used to see 3 Pupil Cams: 1 active and 2 hidden cameras. Now, after having done the troubleshooting, I see only "Pupil Cam1 ID0" twice (once active and once hidden) under "libusbK USB Devices"

user-4b966e 15 October, 2020, 11:57:28

@papr Also tried on a second PC, same there

user-4b966e 15 October, 2020, 11:58:50

@papr but I do see ID1 twice and ID2 once (all entries hidden) below "Cameras"

papr 15 October, 2020, 14:11:12

@user-4b966e Mmh, ok. Please contact info@pupil-labs.com in this regard.

user-074809 15 October, 2020, 14:54:19

Hi there, I was wondering: for the 3d sphere x, y, z and circle x, y, z positions, what is the orientation of the x, y, and z axes (relative to the camera or to the headset)? I am trying to match it with a world space (which I have from 3d motion capture; I can track the headset and camera)

papr 15 October, 2020, 14:57:23

@user-074809 pupil data is relative to eye cameras, gaze data is relative to the scene camera.

papr 15 October, 2020, 14:58:17

For reference, in our head-pose tracker we assume the scene camera position to be the position of the head https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking

user-074809 15 October, 2020, 15:02:20

@papr ok, sorry, I meant to orient myself: is the x axis forward/back, the y axis left/right, and the z axis up/down? I am specifically looking at the xyz in the 3d model pupil detection, not the gaze location as mapped onto the scene camera video

papr 15 October, 2020, 15:06:01

@user-074809 we use the opencv 2d and 3d coordinate systems https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html

user-074809 15 October, 2020, 15:06:31

@papr perfect! thank you!

papr 15 October, 2020, 15:07:29

You're welcome!

user-020426 15 October, 2020, 16:19:27

Hi all,

I believe I'm having an issue with the Pupil LSL Relay. I can capture data from my EEG device and markers from Unity, which all look correct and hint at LSL working correctly. However, while the pupil_capture stream appears both in a pylsl script and when analysing the xdf file created via LabRecorder (https://github.com/labstreaminglayer/App-LabRecorder), its data entry fields are filled with zeroes.

I've now tried to read in the pupil capture data via the pupil-helpers lsl_inlet.py script (https://github.com/pupil-labs/pupil-helpers/blob/master/LabStreamingLayer/lsl_inlet.py), which yielded no samples:
C:\Users\Liam\Documents\python\kivy>lsl_inlet.py
INFO:__main__:Looking for Pupil Capture streams...
INFO:__main__:Connecting to Liam-PC
INFO:__main__:Recording at lsl-recording.csv
DEBUG:__main__:0 samples written
DEBUG:__main__:0 samples written
DEBUG:__main__:0 samples written
DEBUG:__main__:0 samples written
DEBUG:__main__:0 samples written
DEBUG:__main__:0 samples written
DEBUG:__main__:0 samples written

Again, the pupil_capture LSL stream has been created and can be seen by the script, though no data is produced. Has the LSL relay plugin been updated for the new version of Pupil Capture?

papr 15 October, 2020, 16:26:46

@user-020426 Starting with Capture v2.0, you need to calibrate first before gaze data is generated. And if I remember correctly, the LSL data is based on gaze data.

user-020426 15 October, 2020, 16:27:59

@papr If i'm solely using the VR/AR addon cameras in the Vive am i still able to run a calibration? Or can hmd-eyes work reliably with the LSL?

papr 15 October, 2020, 17:42:56

@user-020426 the current hmd-eyes version works independently of the lsl relay. The latter should be started before connecting though as it adjusts the Pupil clock

papr 15 October, 2020, 17:43:30

Use hmd-eyes to calibrate, afterward you should see data being published to lsl

user-020426 15 October, 2020, 18:11:25

@papr thanks for the quick response, i'll have another go when i've got some time over the weekend and drop back in here in case i come across any issues.

user-3cff0d 15 October, 2020, 19:01:06

Hello again! I was wondering, what's the difference between the blue circle + dot and the red circle + dot? I'm under the impression that the blue one is the result of the 2d pupil detector plugin. What might the red one be?

Chat image

papr 15 October, 2020, 19:42:27

@user-3cff0d Blue is the result of the 2d detector, correct. Red is the 3d detector result. You can use them as an indicator for how well your model is fit. Both the blue and red ellipses should overlap as much as possible at all possible angles. It looks like there is room for improvement in this particular case, as the eye model outline (green) is much larger than the eye ball. When well fit, it should be slightly smaller than the real eye ball in the image. We recommend rolling your eyes to get a good fit.

user-3cff0d 15 October, 2020, 20:00:47

Thanks! I'm actually running a forked version of the software with a neural network implemented into the 2d pupil detection but not the 3d pupil detection, thus the discrepancy (and the massive cpu usage)

papr 15 October, 2020, 20:02:29

@user-3cff0d Ah nice. We are working on supporting user pupil detection plugins. This should allow you to separate your implementation from the application and get rid of the fork. This is planned for the v2.6 release. 👍

user-3cff0d 15 October, 2020, 20:03:09

Oh, great! That's very exciting!

papr 15 October, 2020, 20:04:05

@user-3cff0d Do you have a reference on the NN that you are using? Or are you still working on it?

user-3cff0d 15 October, 2020, 20:09:04

I'm working with a research team at RIT to integrate the NN RITnet. However I'm unsure if I should be sharing the specifics just yet- I can get back to you after making sure

papr 15 October, 2020, 20:10:21

@user-3cff0d No worries. No need to share anything that is not ready for it. 🙂 Let us know once you have published your results/work. Keep your eyes on https://github.com/pupil-labs/pupil/releases regarding the mentioned changes.

user-3cff0d 15 October, 2020, 20:46:40

Is there an ETA for the v2.6 release?

papr 15 October, 2020, 20:48:05

@user-3cff0d ETA for v2.5 is next week; v2.6 should follow about 2 weeks after, if everything works well. I can let you know once we have a first implementation. You seem to work from source already, so you might be a great beta tester!

papr 15 October, 2020, 20:48:38

@user-3cff0d Where do you use your implementation primarily? Capture/Service or Player?

user-3cff0d 15 October, 2020, 20:50:37

At the moment I'm using more of the Service than anything else, since I haven't yet been fiddling a whole lot with gaze detection

papr 15 October, 2020, 20:51:27

@user-3cff0d ok, good to know. The initial step is to support these plugins for real time use. Afterward, we will extend the support to Player.

user-3cff0d 15 October, 2020, 20:55:36

Gotcha, that makes sense. I look forward to that!

user-c1bc31 16 October, 2020, 02:27:42

What do the blue circle and the red circle (around the pupil) represent? For the 3d model's pupil size detection, does it use the red circle or the blue circle to calculate the pupil size in mm? From my experience, the blue circle is more stable than the red circle. However, the size calculation seems to be based on the red circle. I can send you my recording folder if you are interested.

papr 16 October, 2020, 07:35:29

@user-c1bc31 Please see my response above https://discord.com/channels/285728493612957698/285728493612957698/766385521291034636

user-563b04 16 October, 2020, 20:01:56

Please help me, here is my problem: I am using pupil eye tracker hardware which writes out two files of interest: pupil_position.csv and pupil_timestamps.csv. Within pupil_positions.csv there is a column titled 'pupil_timestamps'.

The problem is: pupil_timestamps.csv does not have the same values as the column 'pupil_timestamps'.

user-563b04 16 October, 2020, 20:02:03

Can anyone explain to me why they are different

papr 16 October, 2020, 20:36:10

@user-563b04 I am not sure where the pupil_timestamps.csv file would come from. Can you confirm the file name?

user-563b04 17 October, 2020, 14:57:04

@papr The original filename was pupil_timestamps.npy, sorry. I converted to .csv to make it readable in matlab. So to restate my original problem: " I am using pupil eye tracker hardware which writes out two files of interest: pupil_position.csv and pupil_timestamps.npy. Within pupil_positions.csv there is a column titled 'pupil_timestamps'.

The problem is: pupil_timestamps.npy does not have the same values as the column 'pupil_timestamps'. Can anyone explain to me why they are different"

papr 17 October, 2020, 15:06:52

@user-563b04 First of all, the exported pupil_positions.csv only includes data from the export range, set by the trim marks. Therefore, it is possible that it contains fewer values than the pupil_timestamps.npy/csv file. Additionally, comparing floating-point numbers is always tricky, especially if one side is read from a text-based file. Use appropriate comparison functions instead of ==:

numpy.isclose(): https://numpy.org/doc/stable/reference/generated/numpy.isclose.html
Matlab's ismembertol: https://www.mathworks.com/help/matlab/ref/ismembertol.html
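
A concrete sketch of that comparison in Python; the paths and the column name "pupil_timestamp" are assumptions based on recent Player exports:

import numpy as np
import pandas as pd

# Intermediate timestamps written by Capture (binary, full precision)
raw_ts = np.load("pupil_timestamps.npy")

# Timestamps from the Player export (text-based, so values may be rounded)
exported_ts = pd.read_csv("exports/000/pupil_positions.csv")["pupil_timestamp"].to_numpy()

# Tolerant element-wise comparison instead of ==. Note: this builds an
# N x M matrix; for long recordings compare sorted arrays chunk-wise instead.
matches = np.isclose(exported_ts[:, None], raw_ts[None, :]).any(axis=1)
print(f"{matches.sum()} of {len(exported_ts)} exported timestamps match the raw file")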

user-563b04 17 October, 2020, 22:24:41

@papr pupil_positions.csv has a timestamp value of approx 4.5e5, whereas the pupil_timestamps.npy file has a timestamp value of approx 3.5e5. We export the entire length, so I am not sure why the trim marks would matter here. The most important question I am wondering about is what time format each uses: pupil time or system time.

papr 17 October, 2020, 22:34:54

@user-563b04 pupil software always exports pupil time. If you want you can share the recording with data@pupil-labs.com for us to review

user-563b04 17 October, 2020, 22:44:45

@papr ok what about sample rates? what sample rate does it export at?

papr 17 October, 2020, 22:52:34

@user-563b04 it exports the active pupil data within the export range. From your description it looks like the intermediate data has an offset compared to the export. I do not see any reason for that. The easiest way for me to find the cause for the issue would be to reproduce it using the recording.

papr 17 October, 2020, 22:53:24

There is no fixed export sampling rate.

papr 17 October, 2020, 22:54:09

@user-563b04 are you running pupil from recording or Post-hoc pupil detection?

user-563b04 17 October, 2020, 22:56:01

@papr we are running pupil player and drag and dropping a recording into the window and pressing 'e' for export. That is what generates pupil_positions.csv

user-563b04 17 October, 2020, 22:57:23

@papr On the left, this is the pupil timestamps column header I mentioned in pupil_positions.csv... On the right, this is the pupil_timestamps.npy in a csv file. Notice that the offsets are different AND the sequential increase is also different. Curiously, they are the same length

Chat image

papr 17 October, 2020, 23:06:49

@user-563b04 Interestingly, the Pupil timestamps on the right are duplicated, while on the left, each timestamp appears four times. How was the recording made? Did you use any type of time sync? How exactly did you convert the npy file to csv?

user-563b04 17 October, 2020, 23:08:11

@papr We did not use a time sync, I believe. We made the recording using Pupil Capture. To convert the npy file, I just read it using Python and wrote it to a csv file.

papr 17 October, 2020, 23:09:22

I really can only guess what is going wrong here. Please share the original recording with data@pupil-labs.com so I can have a look. You do not need to share the video files if you do not want to. I am only interested in the remaining raw data. Without it it is too difficult to tell.

user-563b04 17 October, 2020, 23:10:29

@papr I have just sent it to you

papr 17 October, 2020, 23:11:03

Thank you! I will come back to you via email once I have reviewed the recording.

user-c1bc31 19 October, 2020, 03:30:30

It seems that the detection of pupil size is more stable in the 2d model (blue circle) compared to the 3d model (red circle) under conditions of sudden changes of light intensity in the environment. We are recording the pupil response to a flashlight (used in photography). Can the 2d model measure the pupil size in units of mm?

user-14d189 19 October, 2020, 09:56:01

Hello again! I was wondering, what's the difference between the blue circle + dot and the red circle + dot? I'm under the impression that the blue one is the result of the 2d pupil detector plugin. What might the red one be? @user-3cff0d very nice high resolution image of the eye. Do you use different cameras?

user-a6e660 19 October, 2020, 12:01:42

Hello everybody, I'm currently writing a Python script to read the data in real time via the API, which works so far. The x and y coordinates are read out via norm_pos and output via the print function. I have shown the data in a plot for the x and y directions. However, if the eye position changes, I cannot make an exact statement because the measured values are extremely scattered. Does anyone have an idea why this could be, or how the values are to be interpreted? Greetings, Dominik

Chat image

user-a6e660 19 October, 2020, 12:02:37

Plot is from x-axis at different Eye Positions

papr 19 October, 2020, 12:06:37

@user-a6e660 Are you plotting pupil or gaze data? Also, do you split the data by eye id? Pupil data is relative to the eye camera and is therefore not comparable between eyes.

papr 19 October, 2020, 12:11:15

@user-a6e660 Checkout our notebook tutorial on loading and visualizing pupil data. Specifically the "Plot Pupil Positions" section might be of interest to you https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb
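
The core of that tutorial section as a minimal sketch for exported data; the file path and column names are assumptions based on recent Player exports:

import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")

# Pupil data is per eye camera, so plot each eye id separately
for eye_id, eye_df in df.groupby("eye_id"):
    plt.plot(eye_df["pupil_timestamp"], eye_df["norm_pos_x"], label=f"eye {eye_id}")

plt.xlabel("pupil time [s]")
plt.ylabel("norm_pos_x")
plt.legend()
plt.show()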

user-b259f6 19 October, 2020, 13:18:44

Hey everyone, I have a problem with the offset of my data compared to the really observed objects, and I don’t know if it’s possible to correct this error a posteriori, after recording. Could anyone help me?

The idea is to ask a participant to look at a map for about 2 minutes, and then to analyze what he looked at, for how long, etc. After recording, we built a heatmap with the QGIS software from the raw data. But we clearly see that there is a shift between what was stared at and what is shown on the heatmap. I can send the heatmap we built. For example, after discussing with the participant, he told us he looked at the top right part of the map, but the hot spots are more to the left. Is there any possibility to correct the raw data? (I can send you the raw data file we used.) I should add that we have the same problem using Pupil Player for the heatmap, which means that it comes from the data and not the software.

And my second question is: what does the file "surf_positions_surfaceN" represent? Can it be helpful to resolve my problem?

Thank you for your answers and have a nice day !

Chat image

papr 19 October, 2020, 13:20:56

@user-b259f6 If you recorded the calibration sequence, you can apply a manual offset correction in the "post-hoc calibration" of Pupil Player. https://docs.pupil-labs.com/core/software/pupil-player/#gaze-data-and-post-hoc-calibration

papr 19 October, 2020, 13:22:07

Alternatively, you can use this user plugin to apply a manual offset: https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433

See https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin on how to add user plugins.

user-b259f6 19 October, 2020, 13:23:28

Thank you for your so quick answer ! I'll look at this !

user-a6e660 19 October, 2020, 13:29:28

@papr Thank you very much for your answer. It works now!

user-563b04 19 October, 2020, 19:32:38

hi @papr , I have shared the raw data files with you on google drive!

user-3cff0d 19 October, 2020, 19:34:11

@user-14d189 Hi, yes, I'm fairly certain they're different cameras; however, I am not the one who took that particular video, so I'm not sure exactly what was used

user-594d92 20 October, 2020, 13:31:26

hello, I want to be able to run Pupil Core from source code. I have completed "Installing Dependencies" and "Clone the Repo", but there are some problems when we run Pupil.

In the command prompt, I activate the virtual environment, change into the 'pupil_src' directory, and then enter 'run_capture.bat'. An error occurred as shown in the figure. The last line says: PermissionError: [WinError 5] Access Denied

Does anybody know what the mistake is? Thank you very much.😁 @papr

Chat image

papr 20 October, 2020, 13:46:32

@user-594d92 you need to update the packaging module

user-594d92 20 October, 2020, 14:04:53

@papr Thank you. I'll have a try

user-594d92 20 October, 2020, 14:21:19

Hi again, may I ask which module I should update? is this?

Chat image

papr 20 October, 2020, 14:21:43

pip install packaging -U

papr 20 October, 2020, 14:22:24

The module that needs updating is called packaging. 🙂

user-594d92 20 October, 2020, 14:32:29

ohh, i know, thanks !😅

user-772ef5 20 October, 2020, 15:12:00

Hi. I just want to ask if this software has an SDK that we can use to implement it in our Android app?

user-c563fc 20 October, 2020, 18:32:46

Can someone tell me how the pupil diameter value in the graph is calculated?

Chat image

user-c563fc 20 October, 2020, 18:34:03

Because in the exported raw data I see values like 43,91654965, 42,3645332, 52,24551901 and so on

papr 20 October, 2020, 20:10:21

@user-772ef5 The Pupil Core network API requires a computer running Pupil Capture. You could remote control and receive data from Pupil Capture on your phone. But this will likely not fulfill your use case.

papr 20 October, 2020, 20:12:03

@user-c563fc The timeline data does not show outliers that were removed using Tukey's fences. https://github.com/pupil-labs/pupil/blob/70e93bcb81073f767173087b4137a5078835a282/pupil_src/shared_modules/pupil_producers.py#L204-L212

user-594d92 21 October, 2020, 12:47:21

Hi. I want to run from the source code, but what does this error mean? Thank you very much.

https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md I followed the procedure on this website ('Start Pupil' at the bottom). I entered "run_capture.bat", "run_player.bat" and "run_service.bat" from the command prompt, respectively. @papr 😀

Chat image

papr 21 October, 2020, 12:50:55

@user-594d92 It looks like there are version mismatches between your Python installation and some of the installed modules.

papr 21 October, 2020, 12:52:29

The bat files assume that you followed the instructions. It looks like you installed the Pupil dependencies in an Anaconda environment. I am not sure if the bat files can handle that correctly.

papr 21 October, 2020, 12:52:48

What is the output of python -V for you?

user-594d92 21 October, 2020, 13:02:58

@papr This looks a little messy...😅

Chat image

user-594d92 21 October, 2020, 13:03:10

and...

Chat image

papr 21 October, 2020, 13:05:48

@user-594d92 I meant upper case V but that is ok.

papr 21 October, 2020, 13:14:18

I was interested in the version number which is included in the second screenshot.

user-594d92 21 October, 2020, 13:16:51

This is the Python version of my virtual environment.

Chat image

papr 21 October, 2020, 13:28:42

@user-594d92 Could you try starting python -v (lowercase v) again, and afterwards call import numpy and import pyglui.ui? Please copy the output to a text-sharing service like https://gist.github.com/

user-594d92 21 October, 2020, 13:36:56

ok, I'll just do it. 😁

user-594d92 21 October, 2020, 13:57:59

I have uploaded it to this website. Thank you very much. @papr

papr 21 October, 2020, 13:58:22

@user-594d92 Could you share the link to the document with us?

user-594d92 21 October, 2020, 14:00:44

ohh...😅 https://gist.github.com/mimic777/d8ac3fc0fcac5914120fea62a7f05e0a/revisions

papr 21 October, 2020, 14:04:26

@user-594d92 What is your output for python -c "import numpy; print(numpy.__version__)"?

papr 21 October, 2020, 14:04:54

This comment suggests that upgrading numpy solves the issue: https://github.com/scikit-learn-contrib/hdbscan/issues/272#issuecomment-453958532

user-594d92 21 October, 2020, 14:26:45

ok,I will have a try. thank you.

user-430fc1 21 October, 2020, 15:24:52

For error handling, does anyone know if there is a recommended approach for checking if Pupil Capture is running or if Pupil Core is plugged in? For example, when trying to connect with ZMQ, raise an error message instead of having the script just hang.

papr 21 October, 2020, 15:44:35

@user-430fc1 If I remember correctly, socket.connect() does not block, correct?

papr 21 October, 2020, 15:53:28
import zmq

ctx = zmq.Context()
socket = ctx.socket(zmq.REQ)
socket.connect("tcp://127.0.0.1:50020")  # Pupil Remote, default port

socket.send_string("t")  # any command is fine here
# https://pyzmq.readthedocs.io/en/latest/api/zmq.html#zmq.Socket.poll
if socket.poll(1000) == 0:  # wait up to 1000 ms for a reply
    raise RuntimeError("Pupil application not reachable")
socket.recv_string()  # result can be discarded, but recv_string() must be called!
user-666fd7 21 October, 2020, 16:38:12

If I want to 3D print my own headset, are the files available for that?

user-6e3d0f 21 October, 2020, 16:54:37

Is there any way to check which eye cameras my Core glasses have? I only got them from the university, and the manual lists both focus 120 Hz and no-focus 200 Hz eye cameras; they look pretty much the same in the manual.

papr 21 October, 2020, 16:56:19

@user-6e3d0f You can share a picture of them with us if you want. I know them pretty well.

papr 21 October, 2020, 16:56:35

@user-666fd7 Check out https://docs.pupil-labs.com/core/diy/#getting-all-the-parts

user-6e3d0f 21 October, 2020, 16:57:06

Chat image

user-c563fc 21 October, 2020, 16:57:22

@user-c563fc The timeline data does not show outliers that were removed using Tukey's fences. https://github.com/pupil-labs/pupil/blob/70e93bcb81073f767173087b4137a5078835a282/pupil_src/shared_modules/pupil_producers.py#L204-L212 @papr I am not sure exactly what you mean. The graph shows a range between 5.1-7.0mm for pupil diameter, and the pupil diameter values are like this: 43,91654965, 42,3645332, 52,24551901. So the lower range value should be the minimum diameter value, but I cannot figure out the formula to convert values like 43,91654965, 42,3645332, 52,24551901 to 5.1mm

user-6e3d0f 21 October, 2020, 16:57:57

Chat image

user-6e3d0f 21 October, 2020, 17:03:55

@papr

papr 21 October, 2020, 17:32:33

@user-6e3d0f that is a 200hz camera

papr 21 October, 2020, 17:37:30

@user-c563fc The graph does not display all values. Therefore, the lowest displayed value is not the minimum value. The formula is implemented in the linked source code. There is also a link in the comment that explains the procedure: https://en.wikipedia.org/wiki/Outlier#Tukey's_fences
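
For illustration, a minimal sketch of Tukey's fences; the classic k = 1.5 factor is assumed here, see the linked pupil_producers.py lines for the exact constants the timeline uses:

import numpy as np

def tukey_fences_mask(values, k=1.5):
    # Keep values inside [Q1 - k*IQR, Q3 + k*IQR]; everything else is an outlier
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values >= q1 - k * iqr) & (values <= q3 + k * iqr)

diameters = np.array([43.9, 42.4, 52.2, 48.1, 250.0])  # last value: detection artifact
print(diameters[tukey_fences_mask(diameters)])  # artifact hidden, like in the timeline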

user-3cff0d 21 October, 2020, 19:12:04

Once pye3d is installed and being loaded with the text "Using refraction corrected 3d pupil detector." in the command window, how do I switch to using that plugin instead of the default 3d plugin?

user-3cff0d 21 October, 2020, 19:12:19

Or would that require modifying the code

papr 21 October, 2020, 19:12:45

@user-3cff0d You are using pye3d instead of the old 3d detector already then

papr 21 October, 2020, 19:13:01

Check the detector menu title in the eye window

user-3cff0d 21 October, 2020, 19:13:23

Chat image

user-3cff0d 21 October, 2020, 19:13:40

It just says "Pupil Detector 3D" at the moment

user-3cff0d 21 October, 2020, 19:14:31

Chat image

user-3cff0d 21 October, 2020, 19:14:44

Here's a screenshot of the actual command window where I see that message

papr 21 October, 2020, 19:18:04

@user-3cff0d Please try Restarting with default settings from the general settings.

user-3cff0d 21 October, 2020, 19:18:48

Yep, that fixed it. 🤦‍♂️

user-3cff0d 21 October, 2020, 19:18:50

Thanks!

user-46ec9c 21 October, 2020, 20:41:42

Hello

user-908b50 21 October, 2020, 21:07:22

I have just been trying out the new changes since I got back to town. I got your email and have updated my local repository to 2.5. My program freezes a lot (see pic). Is there a way to work around it? I have also pip installed glfw as per the newer release.

Chat image

papr 21 October, 2020, 21:08:01

@user-908b50 Does the bundle release or the source version freeze? Or both?

user-908b50 21 October, 2020, 21:09:04

@papr Both do! The source release freezes more. Actually, I have downloaded the bundle release too, but for some reason it takes me to v1.11 (which is also on my pc) and not v2.5.

papr 21 October, 2020, 21:10:07

But for some reason it take me to v1.11 (which is also on my pc) and not v2.5. Source or bundle?

user-908b50 21 October, 2020, 21:10:22

Bundle does that

user-908b50 21 October, 2020, 21:10:48

The above pic is a screen capture of the source version

papr 21 October, 2020, 21:10:56

Then it looks like the bundle is not correctly installed. Please use sudo dpkg -i <path to deb file> to install the deb files.

papr 21 October, 2020, 21:11:32

@user-908b50 When running from source and it freezes, does it continue after a while or do you see a traceback/error in the log messages?

user-908b50 21 October, 2020, 21:11:47

Is there a way to solve the errors with source? I like seeing the error messages pop up when working with source and I enjoy working with more transparency.

user-908b50 21 October, 2020, 21:12:12

@user-908b50 When running from source and it freezes, does it continue after a while or do you see a traceback/error in the log messages? @papr It continues after a long, long while. Let me share my messages.

papr 21 October, 2020, 21:12:43

The thing is that I do not know exactly what your modifications are. Therefore, I would like to make sure the issue is also present in the bundle, to ensure that the issue is on our side. 🙂

user-908b50 21 October, 2020, 21:13:20

ahh okay, yes let me correctly install the bundle again!

user-908b50 21 October, 2020, 21:14:06

Here is my terminal output

message.txt

papr 21 October, 2020, 21:14:47

@user-908b50 That is an issue with your opencv installation.

papr 21 October, 2020, 21:15:14

You need to build opencv with tbb support or else the background legacy marker detection crashes.

papr 21 October, 2020, 21:15:23

This issue should not be present in the bundle.

papr 21 October, 2020, 21:19:39

Note: Installing opencv via pip or anaconda does not include tbb support.

papr 21 October, 2020, 21:22:35

Also, you might want to update your fork to upstream master now that we have released. This should also include the v2.5 tag. 👍

user-908b50 21 October, 2020, 21:23:07

@papr Good to know why that happens (although it only happens for some recordings). I will take a look at opencv. It's frustrating to know that it wasn't installed correctly in the first place. I followed these instructions for opencv: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-ubuntu18.md. So I actually installed it using sudo. Let's see if there is a way to edit the current installation to include tbb support.

papr 21 October, 2020, 21:24:07

@user-908b50 Well, this document does not tell you to use Anaconda, which you use in your setup. 🙂

papr 21 October, 2020, 21:24:45

But no worries, we can check if your anaconda environment is using the correct opencv version.

user-908b50 21 October, 2020, 21:24:49

Also, you might want to update your fork to upstream master now that we have released. This should also include the v2.5 tag. 👍 @papr okay, so a correct merge should mean the application version must be 2.5.

user-908b50 21 October, 2020, 21:25:09

@user-908b50 Well, the this document does not tell you to use Anaconda which you use in your setup. 🙂 @papr that's right!

papr 21 October, 2020, 21:25:17

@papr so a correct merge should mean the application version must be 2.5. @user-908b50 Correct, if the git tags are pulled correctly.

papr 21 October, 2020, 21:26:24

What is your output for conda list | grep cv when executed in your pupil anaconda environment?

user-908b50 21 October, 2020, 21:27:50

What is your output for conda list | grep cv when executed in your pupil anaconda environment? @papr i don't get an output.

user-908b50 21 October, 2020, 21:28:43

conda list gets me the following packages.

message.txt

papr 21 October, 2020, 21:30:39

Interestingly, there is no opencv. What is the output for python -c "import cv2; print(cv2.__file__)"?

user-908b50 21 October, 2020, 21:32:07

Yes, you are right! There is no cv2. Could it have been deleted between updates? This is the output:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AttributeError: module 'cv2' has no attribute '__file__'

papr 21 October, 2020, 21:32:43

@user-908b50 Since Pupil is still running, the anaconda environment is getting it from somewhere else. Our goal is to find that place.

user-908b50 21 October, 2020, 21:35:26

@user-908b50 Since Pupil is still running, the anaconda environment is getting it from somewhere else. Our goal is to find that place. @papr yep, I know it is in the python3 installation: /usr/lib/python3/dist-packages/cv2.cpython-38-x86_64-linux-gnu.so

papr 21 October, 2020, 21:37:32

AttributeError: module 'cv2' has no attribute '__file__'
Mmh, that is unexpected. Please try the following: python -v -c "import cv2"

/usr/lib/python3/dist-packages/cv2.cpython-38-x86_64-linux-gnu.so
How do you know that for sure? The command above should confirm your statement. (Beware, there is a lot of output.)

If this is indeed the version that is being loaded, then our install instructions are incorrect and we will have to revert to the Ubuntu 16.04 OpenCV build instructions.

user-908b50 21 October, 2020, 21:38:15

@user-908b50 Correct, if the git tags are pulled correctly. @papr okay, that's strange! I will try pulling from Pupil Labs directly. I don't see anything wrong with my git pull.
(pupillabs) [email removed] git pull https://github.com/fiza09/pupil
From https://github.com/fiza09/pupil
 * branch HEAD -> FETCH_HEAD
Already up to date.
(pupillabs) [email removed] cd pupil_src
(pupillabs) [email removed] python main.py player --version
Pupil Player version 2.4.73 (source)

papr 21 October, 2020, 21:39:18

To update:

git fetch --all --tags
git merge upstream/master --ff-only
git describe --long
user-908b50 21 October, 2020, 21:41:52

Mmh, that is unexpected. Please try the following: python -v -c "import cv2" How do you know that for sure? The command above should confirm your statement. (Beware, there is a lot of output.)

If this is indeed the version that is being loaded, then our install instructions are incorrect and we will have to revert to the Ubuntu 16.04 OpenCV build instructions. @papr yes, I will re-install opencv. That command gets me different errors; basically, it seems like there is no such file or directory. It's odd, because when I try this, it tells me there is a cv2 file:
find /usr/ -iname cv2.*
/usr/lib/python3/dist-packages/cv2.cpython-38-x86_64-linux-gnu.so

user-908b50 21 October, 2020, 21:42:03

i will re-install opencv3.0

papr 21 October, 2020, 21:42:29

@user-908b50 No, please wait. Let's try to approach this systematically.

papr 21 October, 2020, 21:42:57

Please share the output of the above command.

user-908b50 21 October, 2020, 21:43:36

To update:
git fetch --all --tags
git merge upstream/master --ff-only
git describe --long
@papr I finally get this: git describe --long gives v2.5-0-g70e93bcb. It worked with the force installation. Thank you!

papr 21 October, 2020, 21:43:58

/usr/lib/python3/dist-packages/cv2.cpython-38-x86_64-linux-gnu.so
This file is expected to be there if you used apt to install opencv. The only issue is that we need to ensure it is actually being used. 🙂

user-908b50 21 October, 2020, 21:44:13

here it is!

message.txt

papr 21 October, 2020, 21:44:16

force installation
What do you mean by force installation?

papr 21 October, 2020, 21:46:44

@user-908b50 I think there is a small typo in your command. You typed -v twice. It needs to be -v -c 🙂

user-908b50 21 October, 2020, 21:47:17

What do you mean by force installation? @papr sorry, i meant fast-forward.

user-908b50 21 October, 2020, 21:48:01

thanks for catching that!

message.txt

papr 21 October, 2020, 21:49:28
extension module 'cv2' executed from '/home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so'
papr 21 October, 2020, 21:51:40

Now we need to figure out how it got there, if conda list does not even know that it is there.

user-908b50 21 October, 2020, 21:52:42

So I installed it using sudo apt and then followed the instructions here (https://stackoverflow.com/questions/37188623/ubuntu-how-to-install-opencv-for-python3/37190408#37190408) to get it to work with my program. In brief, I symlinked OpenCV.

papr 21 October, 2020, 21:53:58

Ah, ok, so /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so symlinks to /usr/lib/python3/dist-packages/cv2.cpython-38-x86_64-linux-gnu.so?

Can you confirm that by running ls -l /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so?

user-908b50 21 October, 2020, 21:55:18

seems like it: lrwxrwxrwx 1 fiza fiza 65 Aug 6 11:01 /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so -> /usr/lib/python3/dist-packages/cv2.cpython-38-x86_64-linux-gnu.so.

user-908b50 21 October, 2020, 21:55:46

the set-up worked! this is exactly what i did.

papr 21 October, 2020, 21:55:53

Ok, then I must apologize. Our documentation is wrong. Installing opencv via apt does indeed not work.

papr 21 October, 2020, 21:56:29

Please uninstall opencv via apt, and rebuild it using these instructions: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-ubuntu17.md#opencv

user-908b50 21 October, 2020, 21:56:59

Don't worry about it! The set-up doc does link to the Stack Overflow post. It's trial and error anyway.

papr 21 October, 2020, 21:57:41

Please share the output of the cmake -DCMAKE_BUILD_TYPE=RELEASE -DWITH_TBB=ON -DWITH_CUDA=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=ON .. before continuing with make -j2

papr 21 October, 2020, 21:58:23

We need to confirm that the Python installation is correctly found, such that we can symlink the self-built opencv version into the environment as you have done before.

user-908b50 21 October, 2020, 22:05:28

This is what I am doing because libopencv files cannot be found otherwise:
sudo apt-get autoremove opencv-data
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  libfprint-2-tod1 libllvm9 libnvidia-cfg1-440 libnvidia-common-440 libnvidia-compute-440 libnvidia-compute-440:i386 libnvidia-decode-440 libnvidia-decode-440:i386 libnvidia-encode-440 libnvidia-encode-440:i386 libnvidia-extra-440 libnvidia-fbc1-440 libnvidia-fbc1-440:i386 libnvidia-gl-440 libnvidia-gl-440:i386 libnvidia-ifr1-440 libnvidia-ifr1-440:i386 nvidia-compute-utils-440 nvidia-dkms-440 nvidia-kernel-common-440 nvidia-kernel-source-440 nvidia-utils-440 opencv-data xserver-xorg-video-nvidia-440
0 upgraded, 0 newly installed, 24 to remove and 70 not upgraded.
After this operation, 80.5 MB disk space will be freed.
Do you want to continue? [Y/n]

papr 21 October, 2020, 22:07:20

sudo apt remove -y python3-opencv libopencv-dev should do the trick

user-908b50 21 October, 2020, 22:19:47

Please share the output of the cmake -DCMAKE_BUILD_TYPE=RELEASE -DWITH_TBB=ON -DWITH_CUDA=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=ON .. before continuing with make -j2 @papr here it is!

message.txt

papr 21 October, 2020, 22:22:19

@user-908b50 I have to leave (eu timezone 😬).

--   Python 3:
--     Interpreter:                 /home/fiza/anaconda3/envs/pupillabs/bin/python3 (ver 3.8.5)
--     Libraries:                   /usr/lib/x86_64-linux-gnu/libpython3.8.so (ver 3.8.5)
--     numpy:                       /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/numpy/core/include (ver 1.19.1)
--     install path:                lib/python3.8/site-packages/cv2/python-3.8

This is a green light to proceed, though! After compiling has finished, run the remaining install instructions. It will tell you where it installs the cv2*.so file. You will have to symlink it in the same way as you did previously with the apt version.

papr 21 October, 2020, 22:23:23

Good luck! Make sure to delete the old symlink first and let us know how it went. 🤞

user-908b50 21 October, 2020, 22:23:26

@papr alright, thanks a lot for your help!! Good night =]. I will run both the source and bundle versions and let you know tomorrow.

user-908b50 22 October, 2020, 04:38:40

I'm still getting the same error as earlier (attaching the error message) with the source version. Pupil player (bundle) also freezes in a similar manner.

message.txt

user-908b50 22 October, 2020, 04:54:37

I ran the same checks as last time. I believe the symlink isn't working properly within the conda environment and I am not sure why that is.

message.txt

papr 22 October, 2020, 08:48:48

@user-908b50 Why do you suspect the symlink to not work as expected?

'cv2' loaded from '/home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so' This looks correct. What is the output of ls -l /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so ?

user-430fc1 22 October, 2020, 12:45:19

@user-430fc1 To be honest, I am not sure what causes the 3d ellipse to jump there. The model is stable. The 2d detection looks solid. I ran the offline pupil detection and the jump reduced to a single scene frame. It will take some time to look into this issue. @papr Were you able to investigate this any further? I've been running into the same issues quite frequently. 2d detection is fine, confidence is great, but the 3d model jumps around and is unstable.

user-908b50 22 October, 2020, 19:25:50

@user-908b50 Why do you suspect the symlink to not work as expected? This looks correct. What is the output of ls -l /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so ? @papr
(pupillabs) [email removed] ls -l /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so
lrwxrwxrwx 1 fiza fiza 60 Oct 21 20:27 /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so -> /usr/local/lib/python3.8/site-packages/cv2/python-3.8/cv2.so
I can't see why the GUI would freeze on the surface tracker plugin, right when I am selecting legacy markers. You suggested the last time this happened that this could be because of opencv. I believe conda list | grep cv2 does not output anything because conda was not used to install cv2 anyway. Anyway, the freeze also happens with the bundle version as well. The error messages on the terminal refer to pyglui and cpython packages.

user-908b50 22 October, 2020, 19:33:46

I did leave the program on overnight, though. What I can do is try it on Windows. I wonder if it's an Ubuntu 20 problem.

papr 22 October, 2020, 19:36:00

@user-908b50

I believe conda list | grep cv2 does not output anything
The reason for that is that you symlinked it, correct.

error messages on the terminal refer to pyglui and cpython packages.
Yeah, I noticed this part, too. Nonetheless, if I understood correctly, the problem is that there is an interaction between the background and foreground detection that causes the issue. The traceback might not be showing us the real reason for the issue.

Giving Windows a try should fix the issue. Unfortunately, I don't have an Ubuntu 20 instance to reproduce the issue.

user-908b50 22 October, 2020, 19:37:58

yep, conda was not used to install opencv*

user-908b50 22 October, 2020, 19:38:17

I have Windows too. I will give that a try today.

user-7daa32 23 October, 2020, 00:54:25

Hello everyone.

Please is this possible?

The system will command a participant to look at a starting point and then command the participant to look at AOI A

And then tell the participant to go back to the starting point

Then tell the participant to look at AOI B.

And then tell the participant to return to the starting point.

And on and on...

All these will be in one video

Is this possible?

Please if you have any idea, please let me know. Thank you

papr 23 October, 2020, 07:00:15

@user-7daa32 hi, yes, this is definitely possible using Psychopy or other experiment building software.

user-7daa32 23 October, 2020, 07:43:48

@user-7daa32 hi, yes, this is definitely possible using Psychopy or other experiment building software. @papr I am glad it is. Thanks. So I can Google it and download it. Do you have expertise with it, or do you know anyone who is using it or has used it? A published source might help. Thanks

user-7daa32 23 October, 2020, 07:48:18

Can Capture identify when the participant looks at A? The system (or Capture) needs to know when the participant has located an AOI in order to give further commands. I am trying to figure out how to flag times in the videos.

user-19f337 23 October, 2020, 19:43:41

Hello, I collected some pupil data from a participant. At the beginning I had a great model from both eyes. But during the capture, the eye model changed as I did not check the "Freeze model" button. Is it possible to freeze the one I had at the beginning and recompute the pupil data according to that model ?

papr 23 October, 2020, 22:26:06

@user-7daa32 You can do this by subscribing to surface-mapped gaze data but you will need to program this functionality.

papr 23 October, 2020, 22:27:06

@user-19f337 Yes, you can run the post-hoc pupil detection. Once the model is well fit, you can pause the detection, freeze the models and restart the detection with the frozen model.

user-19f337 23 October, 2020, 22:34:41

It worked, thanks !

user-7daa32 23 October, 2020, 22:53:45

@user-7daa32 You can do this by subscribing to surface-mapped gaze data but you will need to program this functionality. @papr thanks... I am not sure I understand this. You mean I will need to write a code to do this ? I am still using old version of pupil Lab software, is that a new features? Although, I have been using surface plugin.

papr 23 October, 2020, 22:55:19

@user-7daa32 Yes, you would have to write code. This is not a new feature. You would have to set up the AOIs as surfaces and subscribe to the surface-mapped gaze data (like in this example https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py) to check when the subject is gazing at the AOI.
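
A trimmed-down sketch of the linked helper, assuming Capture runs on the same machine with the default Pupil Remote port and a surface named "A"; the message fields are those used by recent surface tracker versions:

import msgpack
import zmq

ctx = zmq.Context()

# Ask Pupil Remote for the subscription port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to gaze mapped onto the surface named "A"
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "surfaces.A")

while True:
    topic, payload = sub.recv_multipart()
    surface_datum = msgpack.unpackb(payload, raw=False)
    for gaze in surface_datum["gaze_on_surfaces"]:
        if gaze["on_surf"] and gaze["confidence"] > 0.8:
            print("Subject is gazing at AOI A")  # trigger the next command here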

user-b292f7 25 October, 2020, 18:19:28

Hi, I want to ask you again something about the fixations. You recommend we use the default settings, but I still have something wrong in my data: I don't get a normal distribution, and I have tried a lot of playing with the settings. Please can you suggest what to do? And how can I send you a file?

papr 25 October, 2020, 18:20:35

@user-b292f7 You can send data to [email removed]

user-b292f7 25 October, 2020, 18:20:48

Thank you

papr 25 October, 2020, 18:21:13

@user-b292f7 Please attach an explanation on what you are analysing and what you are expecting.

user-594d92 26 October, 2020, 12:02:07

@papr hello. We used the AprilTags to track the surface, but no gaze point shows on the screen. So may I ask, do we still need to calibrate in Pupil Capture?

papr 26 October, 2020, 12:02:39

@user-594d92 Correct, you need to calibrate first before you get gaze data.

user-594d92 26 October, 2020, 12:22:09

I use 2 screens: I put the AprilTags and the stimulus images on screen no. 1, and screen no. 2 shows Pupil Capture. So do I need to calibrate screen no. 2 as well? Thank you @papr

papr 26 October, 2020, 12:25:27

@user-594d92 Please note, the calibration is independent of the surface tracking. Nonetheless, I would recommend doing the calibration on screen no. 1. and making sure that the Pupil Capture window is not visible in the scene video when doing the calibration. Else, you might get duplicated marker detections which can affect the calibration negatively.

user-594d92 26 October, 2020, 12:27:34

Thanks so much @papr I will try it soon 🙂

user-a6e660 26 October, 2020, 12:33:41

@user-a6e660 Are you plotting pupil or gaze data? Also, do you split the data by eye id? Pupil data is relative to the eye camera and is therefore not comparable between eyes. @papr I have now divided the data into EYE0 and EYE1 and saved the data to a text file, but the values are still not meaningful because they do not change. Here is the code... Does any of you see why there is no change? The data from the recording function seem to make sense, since a pupil deflection is detected there.

Chat image

papr 26 October, 2020, 12:35:38

@user-a6e660 which version of Capture do you use?

papr 26 October, 2020, 12:36:35

Also, could you attach one of the resulting text files? Ideally one for each eye

user-a6e660 26 October, 2020, 12:55:50

txtdata_EYE0.txt

user-a6e660 26 October, 2020, 12:55:51

txtdata_EYE1.txt

user-a6e660 26 October, 2020, 12:59:11

We are using version v2.4.0. The data in the text file correspond to a recording of 25 s:
-> 5 seconds straight ahead
-> 5 seconds left
-> 5 seconds right
-> 5 seconds up
-> 5 seconds down

It's strange that there is a lot more data in a recording. Any ideas?

papr 26 October, 2020, 13:41:47

@user-a6e660 Since v2.0, we run both detectors, 2d and 3d. Therefore, you should get 2 (detectors) x 2 (eyes) x FPS pupil datums per second. With 120 Hz eye cameras, for example, that is 2 x 2 x 120 = 480 datums per second, i.e. roughly 12,000 for a 25 s recording.

papr 26 October, 2020, 13:53:29

@user-a6e660 You can use the method key to differentiate the results from the two different detectors

papr 26 October, 2020, 13:57:11

@user-a6e660 Also, yes, it is indeed weird that your text files only include 140 samples each. I would suggest saving the timestamp, too, to check if your script drops intermediate results or if your script starts recording too late/stops too early.

user-b7ea86 26 October, 2020, 14:06:19

Hello @user-a6e660, is it possible that you are processing too much data with Python and the computing power of your computer is not sufficient? The data you store in pupil_position is very large; this can lead to runtime delays. Have a look at the task manager to see what percentage of your CPU is in use. @papr is there a way to get the data directly from the data stream instead of saving it all?

user-a6e660 26 October, 2020, 14:12:50

Yes @user-b7ea86, I looked at the utilization and it runs at 100%. It also looks as if the data are written to the serial monitor (Python shell) with an extreme delay. You can reduce the data stream to a certain extent, e.g. only save every 10th data frame.

user-a6e660 26 October, 2020, 14:14:56

@papr Thank you for the tips. How do I then apply the method in Python exactly?

papr 26 October, 2020, 14:18:46

@user-a6e660 I just saw that you are opening and closing the file handle for each received datum. This is definitely too slow, as predicted by @user-b7ea86. Just store the values in an array, and once you are done recording, serialize the array. Regarding the method, you can access it via pupil_position["method"]; it returns either 2d c++ or 3d c++.
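
A sketch of that buffering pattern with hypothetical file and variable names; the SUB-socket setup is omitted, and on_pupil_datum would be called for every received pupil message:

import csv

import msgpack

samples = []  # buffer in memory while recording

def on_pupil_datum(payload):
    datum = msgpack.unpackb(payload, raw=False)
    if datum["method"] == "3d c++":  # skip the parallel 2d detector results
        x, y = datum["norm_pos"]
        samples.append((datum["timestamp"], datum["id"], x, y))

# After recording, serialize once instead of reopening the file per datum:
with open("pupil_positions_realtime.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "eye_id", "norm_pos_x", "norm_pos_y"])
    writer.writerows(samples)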

papr 26 October, 2020, 14:19:33

Also, yes, I would not print the results to the shell as this can be very slow as well, depending on which terminal you use.

user-265907 26 October, 2020, 14:53:29

Hi, I just collected data using an eye tracker from Pupil. I applied the post-hoc protocol to get the Unix timestamps, but apparently something went wrong because I got timestamps from 2019. Has somebody had this problem? Has somebody figured out how to fix it?

papr 26 October, 2020, 14:54:06

@user-265907 Could you share the info.player.json file with us?

user-265907 26 October, 2020, 14:54:23

Yes, jus give me a moment

user-265907 26 October, 2020, 14:54:52

Info

info.player.json

user-3ede08 26 October, 2020, 21:41:37

Hi, I have recorded head pose data, as well as gaze, blink and fixation data. I am wondering why my recording files are all empty, e.g. 2020_26_10/000/gaze.pldata, 2020_26_10/000/gaze_timestamps.npy and so on. Is there any reason for this?

papr 26 October, 2020, 21:57:46

@user-3ede08 Have you made sure that you calibrated before you started your experiment? You should see the gaze point once the calibration was successful.

papr 26 October, 2020, 21:58:46

@user-265907 What post-hoc protocol are you referring to exactly? This one? https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb

user-3ede08 26 October, 2020, 22:03:20

Then I exported the data (using Pupil Player). There is a file called pupil_positions.csv which seems to contain data concerning the 2 eyes. I don't understand why there are always 2 pupil_timestamps corresponding to a single (the same number twice) eye_id. Suppose I would like to plot pupil_timestamp vs confidence for each eye; how would I use this file's data?

Chat image

user-3ede08 26 October, 2020, 22:04:31

@user-3ede08 Have you made sure that you calibrated before you started your experiment? You should see the gaze point once the calibration was successful. @papr I think so, because head pose data are available.

papr 26 October, 2020, 22:10:57

@user-3ede08 So which files are empty and which are not? Pupil positions has each timestamp twice, because we run 2d and 3d detector in parallel. Check the method column to see which is which.

user-3ede08 26 October, 2020, 22:28:40

There is a method_id column which alternately contains the numbers 1 and 6.

Chat image

user-3ede08 26 October, 2020, 22:29:06

When I looked at the first two values of pupil_timestamps, I thought each of them corresponded to one eye.

user-3ede08 26 October, 2020, 22:34:33

@user-3ede08 So which files are empty and which are not? Pupil positions has each timestamp twice, because we run 2d and 3d detector in parallel. Check the method column to see which is which. @papr I think the recorded files are empty, but there is data in the exports folder.

papr 26 October, 2020, 23:07:06

@user-3ede08 I think you are mistaking model_id for method

papr 26 October, 2020, 23:08:39

@user-3ede08 If the export contains data, and you did not run post-hoc detection, the intermediate files are not empty.

user-3ede08 26 October, 2020, 23:16:34

@user-3ede08 I think you are mistaking model_id for method @papr Oh yes, you are right.

user-3ede08 26 October, 2020, 23:26:16

For a single eye_id at a specific timestamp, we have the 2d and 3d models. But we don't have the 2d and 3d of the other eye_id at the same timestamp. Does it mean that it records data simultaneously?

Chat image

user-3ede08 27 October, 2020, 06:42:10

@user-3ede08 If the export contains data, and you did not run post-hoc detection, the intermediate files are not empty. @papr I got it, the problem was the calibration.

papr 27 October, 2020, 07:01:30

For a single eye_id at a specific timestamp, we have the 2d and 3d models. But we don't have the 2d and 3d of the other eye_id at the same timestamp. Does it mean that it records data simultaneously? @user-3ede08 correct

user-b292f7 27 October, 2020, 08:10:21

@user-b292f7 Please attach an explanation on what you are analysing and what you are expecting. @papr Hi can you please check if you got my emails?

papr 27 October, 2020, 08:19:48

@user-b292f7 I can confirm that we received your emails. I still need to review the data. I will come back to you via email.

user-b292f7 27 October, 2020, 08:40:29

Thank you!!

user-b259f6 27 October, 2020, 09:37:49

Hi, I'm sorry to come back to you again, but I still have a problem with "shifted" data. I tried what you told me, post-hoc calibration, and the plugin you gave me. It's a bit better, but I still have the problem. It seems that it's always the same for all participants: the data are systematically shifted towards the center. I send you a picture as an example: hotspots are not exactly on what the participant really stared at, they're a bit more centered... Is it because the target is too small and the device cannot measure such small variations in gaze? Is there any other solution? Thank you again!

Chat image

user-265907 27 October, 2020, 13:23:20

@papr yes, that one.

papr 27 October, 2020, 13:25:16

@user-265907 Interesting. Do you get this issue for all your recordings or only for this particular one?

user-265907 27 October, 2020, 13:26:38

Let me see, I just tried the protocol with one recording.

papr 27 October, 2020, 13:28:24

@user-b259f6 I think the target is sufficiently large. I think the center-bias might be related to the 3d calibration. Have you tried 2d gaze estimation already?

user-b259f6 27 October, 2020, 13:39:09

@papr yes I did it with 2d gaze estimation

papr 27 October, 2020, 13:48:03

@user-b259f6 Actually, I misunderstood your previous question regarding the target size and need to correct my previous response.

The target size depends on your calibration/validation accuracy. The lower the accuracy, the larger the target needs to be.

user-b259f6 27 October, 2020, 13:56:25

@papr ok, I understand. So if the problem persists even after a post-hoc calibration to try to get better accuracy, is there nothing to be done?

user-265907 27 October, 2020, 14:15:23

@papr this error only happens with that recording. Tomorrow I will collect more data and will try to pay attention, to identify the error.

user-765368 27 October, 2020, 15:10:25

Hi, is it possible to use the data export of Pupil Player without opening the Player? (I just want the pupil x and y positions.) (I'm new to this, sorry if the answer is too obvious.)

papr 27 October, 2020, 15:37:29

@user-765368 Not with first-party tools. Check out the "Scripts" section of our community work. It includes some example scripts that extract data from the intermediate data format without having to open Player for that. https://github.com/pupil-labs/pupil-community#scripts

user-6f397b 27 October, 2020, 16:26:50

Hi All, I am looking at getting an eye tracking solution and my choice seems to be between Pupil Labs and Tobii. I like the idea of Pupil Labs and open source, but some reviews suggest the headset moves a lot and calibration fails. Any advice or links to information on the two would be really helpful. Thank you.

user-8d1ce2 27 October, 2020, 19:50:03

Hello there! I'm new to Pupil Labs -- and eye tracking in general. My goal is to measure where individuals look as they walk along a 6 meter gait mat while they hold a tray with stable or tippy objects. I'm looking for advice on how to best calibrate the eye tracker for this task. I think during an experimental trial, they will look ahead toward the end of the mat as well as at the tray they are holding (particularly if the objects are tippy). Is it possible to calibrate for both the environment and the tray? Should I use the printed calibration markers and if so, how many should I use and what would be good locations to place them? Looking forward to learning from everyone!

user-c563fc 27 October, 2020, 20:05:38

hey @papr
Does this script you made still work with the current Pupil Player?

user-c563fc 27 October, 2020, 20:05:41

https://gist.github.com/papr/743784a4510a95d6f462970bd1c23972

papr 27 October, 2020, 20:07:38

@user-c563fc I think so, yes

user-c563fc 27 October, 2020, 20:16:07

@user-c563fc I think so, yes @papr and how does it work? Do I put the script in the recording folder and just run it?

papr 27 October, 2020, 20:26:35

@user-c563fc you run the script and pass the recording folders as arguments. The resulting CSV file will be written into each corresponding recording folder.
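
For example (the script name and paths below are placeholders, not the gist's actual file name):

python extract_script.py /path/to/recording_1 /path/to/recording_2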

user-594d92 28 October, 2020, 03:26:46

Hi 🙂 I got export data [gaze_positions.csv] from Pupil Player; how can I calculate pixel coordinates from the normalized coordinates?

user-03a2fe 28 October, 2020, 14:21:14

Hi! I want to use the online blink detection plugin in order to detect forced blinks and use them as human-machine interface input sources (i.e. discrete events that activate something, depending on the number of consecutive blinks). However, each time I blink the detector triggers a seemingly random number of onset and offset events (e.g. onset, onset, onset, offset, offset, offset). For me, 1 blink = 1 event (I don't care if it's onset or offset). I tried playing with the plugin parameters and managed to deactivate onsets or offsets by setting the threshold to one, but I still obtain more than one event per blink. Do you have any suggestions? 🙂 thanks

papr 28 October, 2020, 14:37:57

@user-6f397b I would suggest writing to info@pupil-labs.com in this regard. Also, I can recommend having a look at the publications citing Pupil Core https://pupil-labs.com/publications/ Maybe there are projects similar to yours which you can use as a reference.

papr 28 October, 2020, 14:48:05

@user-8d1ce2 I think using single marker calibration with physical markers makes the most sense for you. Please make sure that the eye cameras are set up such that the pupil is visible in both situations. Especially when looking down, the pupil might not always be easily visible. Adjusting the scene camera downwards before doing the calibration might also be necessary.

papr 28 October, 2020, 14:51:16

@user-594d92 You need to know the video frame size in order to denormalize the positions. See Cell 10 of our frame extraction tutorial for details: https://nbviewer.jupyter.org/github/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb
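
In short, denormalization is a scale plus a y-axis flip. A minimal sketch (the frame size values are examples; read the real ones from your own world video):

# Pupil's normalized coordinates have their origin at the bottom left,
# while image pixel coordinates start at the top left, so y must be flipped.
frame_width, frame_height = 1280, 720  # example values

def denormalize(norm_x, norm_y, width, height):
    x_px = norm_x * width
    y_px = (1.0 - norm_y) * height  # flip the y axis
    return x_px, y_px

print(denormalize(0.5, 0.5, frame_width, frame_height))  # (640.0, 360.0)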

papr 28 October, 2020, 15:39:51

@user-03a2fe A blink consists of an onset and an offset. Due to the real-time aspect, you might get multiple detections of the same event (onset/offset). Your deduplication strategy depends on your requirements. Lowest possible latency -> use the first encountered onset. Highest accuracy -> use the onset with the highest confidence.

That said, I do not think it is easily possible to differentiate "forced" and "natural" blinks.
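
A minimal deduplication sketch for the lowest-latency strategy (the event field names "type" and "timestamp" and the 0.3 s gap are illustrative assumptions, not the plugin's exact API):

def deduplicate_onsets(events, min_gap=0.3):
    # Keep only the first onset of each blink; onsets arriving within
    # min_gap seconds of the last accepted one count as duplicates.
    blinks, last_onset = [], None
    for ev in events:
        if ev["type"] != "onset":
            continue
        if last_onset is None or ev["timestamp"] - last_onset > min_gap:
            blinks.append(ev)
            last_onset = ev["timestamp"]
    return blinks

For the highest-accuracy strategy, you would instead group nearby onsets and keep the one with the highest confidence.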

user-c563fc 28 October, 2020, 18:23:33

@papr in this photo the range is given as 2.6-5.1 mm. Does mm here mean millimetres?

Chat image

papr 28 October, 2020, 18:28:31

@user-c563fc correct

papr 29 October, 2020, 18:28:44

@user-c563fc I think I need to correct myself. I think the y axis limits are -2.6 (not +2.6) to 5.1. The term "y axis limits" might be more accurate than "Range" here.

user-b259f6 30 October, 2020, 13:01:27

@papr Hi, I'm really sorry to insist, but I really can't find a solution. Whatever I try, the fixations are not exactly where they should be; there is still a shift, for all my participants. Do you think you could have a look at an extract of my data, to see whether anything can be done? If it's possible for you, I can send you whatever you need, of course. Thank you for all your answers

papr 30 October, 2020, 13:04:50

@user-b259f6 Please send one of the recordings that is especially difficult to correct to [email removed] We can have a look at it next week.

user-b259f6 30 October, 2020, 13:05:12

Thank you so much, I'll do this!

End of October archive