💻 software-dev


user-66a3ee 03 February, 2020, 12:21:48

Hello, I am trying to create annotations via Pupil Remote, sending the annotations from MATLAB via UDP (the Pupil Middleman method). I have also tried the MATLAB helpers from GitHub. However, the annotations I have sent do not appear anywhere in the data. The only way that I can see annotations is if I manually add them via the Capture GUI. I am using the MATLAB helpers from GitHub on a Windows 10 machine with version 1.21.5 of Capture.

papr 03 February, 2020, 12:30:12

@user-66a3ee hey 👋 Which topic do you use?

user-66a3ee 03 February, 2020, 12:30:55

annotation is the subject, right?

user-66a3ee 03 February, 2020, 12:31:21

topic would then be notify.annotation?

papr 03 February, 2020, 12:31:39

in this case, you send annotations as "notifications" which are a special case of messages

papr 03 February, 2020, 12:32:12

Sending annotations as notifications has been removed. Annotations have their own topic now

user-66a3ee 03 February, 2020, 12:33:47

I just checked the matlab pupil helpers and there is now an annotations function. I will try this out. Thanks for your help!
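
For reference, the current approach is to publish the annotation as a message on the "annotation" topic rather than as a notification. Below is a minimal Python sketch closely following the pupil-helpers examples; the label is a placeholder and Capture's default Pupil Remote port 50020 is assumed:

```python
import time

import msgpack
import zmq

ctx = zmq.Context()

# Pupil Remote answers simple string requests on port 50020
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# ask for the PUB port and the current pupil time
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

# annotations are published like any other message, on the "annotation" topic
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB socket time to connect (zmq slow-joiner)

annotation = {
    "topic": "annotation",
    "label": "my trigger",  # placeholder label
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.dumps(annotation, use_bin_type=True))
```

Note that Capture only records incoming annotations while the Annotation plugin is enabled and a recording is running.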

user-ddc8dd 04 February, 2020, 21:18:05

Hi, I'm working on offline calibration (based on calibrate_and_map in gaze_producers of the old API). I saw the 3 steps in a previous message and was wondering where I can find the functions that call the controllers in gaze_from_offline_calibration?

user-ddc8dd 04 February, 2020, 22:18:58

Also, is there a way to get the gaze positions without calibrating?

user-220849 04 February, 2020, 23:55:31

Hi! (not sure if this should be in here or 🥽 core-xr). I'm trying to update the Unity plugin with the latest version of hmd-eyes. The data format in the Pupil developer guide (https://docs.pupil-labs.com/developer/core/overview/#pupil-datum-format) and the actual data class in hmd-eyes (https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/PupilData.cs) appear to differ in some property names (for instance, the guide calls one variable "norm_pos", while the class calls it "NormPos"). I'm not sure if this is a known discrepancy or something I'm misunderstanding, but I just wanted to see if anyone knew about this!

user-220849 05 February, 2020, 00:10:26

Never mind -- I dug a little deeper and found that the dictionary field for the data class is 'norm_pos', but it is accessed through the variable NormPos. Thanks!

user-2be752 05 February, 2020, 03:28:07

Hi there! I am trying to load gaze.pldata using your API. Could you please explain what each of the things in data means? It's not clear to me where exactly the mapped x and y gaze coordinates are.

papr 05 February, 2020, 08:46:40

@user-ddc8dd You might be interested in these functions:
- Setup: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_producer/gaze_from_offline_calibration.py#L37-L38
- Reference marker detection + calibration + mapping: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_producer/controller/calculate_all_controller.py#L30

Regarding gaze without calibration, I would recommend having a look at https://pupil-labs.com/products/invisible/

user-c5fb8b 05 February, 2020, 08:59:29

Hi @user-2be752 by "each of the things in data" I assume you are using the file_methods.load_pldata_file() and/or file_methods.PLData, correct?

The data attribute is a deque (double-ended queue) of Serialized_Dict objects. The Serialized_Dict class handles deserialization of the underlying msgpack data. This is useful because it only deserializes on demand. You can access it like any other dictionary. The keys normally are: 'topic', 'eye_centers_3d', 'gaze_normals_3d', 'gaze_point_3d', 'confidence', 'timestamp', 'base_data', 'norm_pos'.

By "mapped x and y gaze coordinates" I assume you want to access the norm_pos key, which stores the gaze position in coordinates relative to the world image (from 0.0 to 1.0 in x and y). Assuming that data is the deque of Serialized_Dict instances you get from load_pldata_file, you can get e.g. all gaze coordinates with:

```python
for datum in data:
    time = datum["timestamp"]
    norm_pos = datum["norm_pos"]
    print(f"{time} -> {norm_pos}")
```

user-2be752 05 February, 2020, 16:45:08

Hi @user-c5fb8b, yes, exactly what I needed to know, thank you very much!! I assume this gaze is already computed from the calibration I did during the recording, so no need to recalibrate, right?

papr 05 February, 2020, 16:45:27

@user-2be752 Correct!

user-06ae45 06 February, 2020, 00:51:17

Hi there - I'm developing a plugin to help with handheld target calibration by simply plotting a circle in each of the 9 locations for 9-point calibration. I'm using draw_points_norm to make the circles, but am running into the problem that the circles' sizes are constant regardless of window size. I found a related GitHub issue on cygl (https://github.com/pupil-labs/cygl/issues/9) from @papr, but it is now closed. Any thoughts? Thanks v. much!

user-83773f 06 February, 2020, 11:21:09

Hi guys, I'm currently trying to analyse VOR (vestibular-induced nystagmus of the eye) using the PL system. While doing recordings I noticed that my eye model kept 'shrinking' (the pupil fit remained accurate, though) from a normal eye fit to a fit of just the iris, resulting in an amplitude scaling of the supposed eye movements. What I would like to do is an offline check of my eye model, to visually verify when it is accurately tracking the eye versus when it is not (so I can discard false data). However, in Pupil Player I can only find a window for visualizing the eye cam + pupil fit; no eye model is depicted there... Does anyone know if this is possible? Many thanks! Jesse

user-d98d40 07 February, 2020, 07:47:32

Hi there - I want to check some info. I always use the eye tracking glasses in closed spaces like supermarkets, with a mobile device. Now I want to test banners in the streets, viewed from a car. Is this possible, and what would the calibration process look like? Many thanks.

papr 07 February, 2020, 12:57:52

@user-83773f Visualizing the 3d model outline in the eye overlay view is a small, simple feature which we can add to our next release. Currently, there is no way to display it with this plugin. @user-c5fb8b @user-764f72 Could one of you please take on this task?

user-c5fb8b 07 February, 2020, 13:36:26

@papr I can take care of this.

user-a6cc45 09 February, 2020, 10:49:06

Hello everyone, I'm trying to run Pupil from source on Windows 10 and I am following these steps: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md but when it comes to building the optimization_calibration module I get an error:

```
python setup.py build
running build
running build_ext
building 'calibration_routines.optimization_calibration.calibration_methods' extension
error: Unable to find vcvarsall.bat
```

I've added the directory containing vcvarsall.bat to PATH and still nothing...

user-a6cc45 09 February, 2020, 23:09:46

In general I'm looking for an easy way to run Pupil Player (with my custom plugin that has to be run from source) on any computer. I thought that maybe running the Pupil software in Docker would be the easiest way, but I don't have much experience with Docker, so I got stuck with this problem:

I created a Dockerfile based on https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-ubuntu18.md but when I try to docker run myimage I get:

```
  File "/bin/sh", line 1
SyntaxError: Non-UTF-8 code starting with '\xc0' in file /bin/sh on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
```

user-a6cc45 09 February, 2020, 23:11:41

Here is my Dockerfile. I want to make a container with an environment ready to run the Pupil apps.

Dockerfile

wrp 10 February, 2020, 03:15:47

@user-a6cc45 this repo is deprecated (i.e. no longer maintained and not up to date regarding dependencies), but if you want a reference Dockerfile, here you go: https://github.com/pupil-labs/pupil-docker-ubuntu/blob/master/Dockerfile

Note - I really do not know if Docker will enable you to launch a GUI. The Docker image linked above was intended to be used as a build/bundling system and not for actually running the software.

If you have access to a linux/mac system (edit: saw that you were using win 10 above) and are developing source code, then it might actually be easier to install dependencies and run from source.

user-c5fb8b 10 February, 2020, 09:20:07

@user-a6cc45 vcvarsall is a tool from visual studio, which seems to be missing. Did you install visual studio with the required options as documented here: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md#install-visual-studio ? Side note: Are you aware that you can run custom plugins also with the bundled versions of Pupil? You only need to run from source if you actually modify Pupil source code.

user-bc8123 12 February, 2020, 01:05:28

Hi, I have a question about the 2D detection. I would like to know the position of the two pupil ellipse centers with respect to the world camera image, or in the same coordinates. I understand that this is obtained using a polynomial regression (am I right?), and I was able to read the parameters of the polynomial regression in the notifications. If I am not wrong, a regression with 13 parameters is used, and two with 7 (for the two eyes). Do the two sets of parameters refer to the coordinates? Can I apply them to the data in pupil_positions.csv? Thanks, and sorry if I was not so clear.

Chat image

papr 12 February, 2020, 04:38:26

@user-bc8123 the position of the pupil ellipse centers in world camera coordinates is called gaze and is exported to gaze_positions.csv; check out the norm_pos fields specifically.

The polynomial that Capture uses to map pupil positions to gaze has a specific structure. I can look it up later today.
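
For anyone reading along: a minimal sketch of pulling those values out of a Player export, assuming the Raw Data Exporter's standard column names norm_pos_x/norm_pos_y and an export folder named 000:

```python
import pandas as pd

# gaze_positions.csv lives in <recording>/exports/<export number>/
gaze = pd.read_csv("exports/000/gaze_positions.csv")

# norm_pos_* are relative to the world image:
# (0, 0) is the bottom-left, (1, 1) the top-right corner
print(gaze[["gaze_timestamp", "norm_pos_x", "norm_pos_y"]].head())
```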

user-bc8123 12 February, 2020, 09:34:32

ah, ok. Thank you for your reply

user-dfa378 13 February, 2020, 10:37:24

Hey, I'm new and trying to use Pupil Core for some gaze mapping. I wanted to know if it's possible to connect to the device and collect data without using the Pupil software (Capture and Player). I'm using Windows 10 and don't have administrator rights on it, hence I can't install the software. My programming and open source knowledge is quite good. Thanks

papr 13 February, 2020, 10:38:49

@user-dfa378 Unfortunately, Pupil Capture is required. The administrator rights are only necessary for the driver installation.

user-dfa378 13 February, 2020, 10:41:36

So there is no way around it? E.g. using a Python library - since a lot of the code is open source, I expect there would be something

user-dfa378 13 February, 2020, 10:42:21

@papr The last resort would be to get pupil capture

papr 13 February, 2020, 10:45:42

@user-dfa378 You can of course try to extract the parts of the pipeline from the Pupil project, but not everything is as nicely separated as one would like. 😕

papr 13 February, 2020, 10:46:49

But even if you do all that, we use pyuvc (https://github.com/pupil-labs/pyuvc) to detect the cameras. And on Windows, we require the libusbK drivers for that.
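
A minimal sketch of raw camera access with pyuvc, following its README; this only grabs frames and does none of the pupil detection pipeline:

```python
import uvc

# list connected UVC cameras; Pupil Core cameras show up as "Pupil Cam ..." devices
devices = uvc.device_list()
for device in devices:
    print(device["name"], device["uid"])

# open the first listed camera and grab one frame
cap = uvc.Capture(devices[0]["uid"])
cap.frame_mode = (640, 480, 30)  # width, height, fps
frame = cap.get_frame_robust()
print(frame.img.shape)  # BGR image as a numpy array
cap.close()
```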

user-dfa378 13 February, 2020, 10:53:35

Ok, then I guess I go with the installation path. Thanks for your help @papr

user-fd4a45 13 February, 2020, 16:09:44

Hello guys! I would like to know if it's possible to easily create a dynamic heat map with Pupil Capture/Player - I mean a heat map for dynamic content (videos, video games...) and not only a heat map for static images. Thank you in advance.

user-06ae45 13 February, 2020, 21:35:45

Hi there- just checking in again about my earlier question re: draw_points_norm and circle sizes. Thanks! 🙂

wrp 14 February, 2020, 05:41:13

@user-fd4a45 Are you asking about creating a heatmap for each frame of a video?

Example: 10 participants watch the same film. You want to aggregate gaze on frame [0, 1, 2, ... n] for each frame of the film the participants watched. Does this example capture your use case?

wrp 14 February, 2020, 08:21:53

@user-c5fb8b or @papr please see @user-06ae45's question above (and from before).

user-c5fb8b 14 February, 2020, 10:10:05

@papr should have a look at that as he seems to have been involved in the matter already. I wouldn't know what to do here without spending some time on this.

papr 14 February, 2020, 10:10:43

I can respond to the question on Monday.

user-fd4a45 14 February, 2020, 13:08:59

@wrp Yes, that's exactly what I'm asking. The use case is not exactly what I had in mind, but it's still useful to be able to do that.

As making a single heat map over dynamic content is useless, I would like to be able to create heat maps from one specific frame to another for a single participant. The idea is to visualise zones of interest for each section of a video (from frame x to frame y).

papr 14 February, 2020, 13:11:29

@user-fd4a45 Unfortunately, this is not possible to generate in an automated fashion in Pupil Player. Instead, you will have to set the trim marks to the time range over which you want to aggregate the surface gaze to a heatmap and hit export. Afterwards, adjust the trim marks to the next time range and repeat.

user-fd4a45 14 February, 2020, 13:15:48

Ok thank you @papr and to create heat maps I need to use the marker tracking right?

papr 14 February, 2020, 14:02:05

@user-fd4a45 correct.

user-c9d205 16 February, 2020, 10:49:16

I recorded some videos using Pupil Remote through Python code and it seems that in all of my recordings, the eye cameras only start recording 4 seconds after the front-facing camera starts recording. Is this a known issue and are there fixes? @papr
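
For context, starting and stopping a recording over Pupil Remote looks roughly like this; a sketch assuming the default port 50020 and a placeholder session name:

```python
import time

import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# "R <session name>" starts a recording, "r" stops it
pupil_remote.send_string("R my_session")  # placeholder session name
print(pupil_remote.recv_string())

time.sleep(10.0)  # record for ten seconds

pupil_remote.send_string("r")
print(pupil_remote.recv_string())
```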

user-dfa378 17 February, 2020, 14:03:14

Hey!! I need to use the Pupil Core data along with data I collected from another camera. Since the starting timestamp of the pupil data is arbitrary, what is the most robust/error-free way to convert the pupil timestamps to epoch time? I'll be using already-collected data, so real-time conversion is not a necessity. Thanks

papr 17 February, 2020, 14:31:47

@user-c9d205 Hey 👋 on what OS are you running the script?

papr 17 February, 2020, 14:33:07

@user-dfa378 Check out the info.player.json file. It contains the recording start time in both system time (unix epoch) and pupil time. You can use these two timestamps to calculate the offset between unix epoch and pupil time and apply it to your other data.
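
A minimal sketch of that offset calculation, assuming the info.player.json keys start_time_system_s and start_time_synced_s from the current recording format:

```python
import json
from pathlib import Path

recording = Path("path/to/recording")  # placeholder path
info = json.loads((recording / "info.player.json").read_text())

# offset between unix epoch and pupil time at recording start
offset = info["start_time_system_s"] - info["start_time_synced_s"]

def pupil_to_epoch(pupil_timestamp):
    """Convert a pupil timestamp to unix epoch seconds."""
    return pupil_timestamp + offset
```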

user-bc8123 17 February, 2020, 14:55:48

Hi @papr, did you have time to check the polynomial that maps the pupil positions to gaze?

papr 17 February, 2020, 15:20:25

@user-bc8123 All relevant functions can be found here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/calibration_routines/calibrate.py

user-bc8123 17 February, 2020, 20:38:00

thank you! @papr

user-c9d205 18 February, 2020, 10:40:51

> Hey 👋 on what OS are you running the script?

@papr Ubuntu 16.04

user-bc8123 18 February, 2020, 15:03:49

Hi, is there a way to obtain a depth (z) value from the 2d detection?

papr 18 February, 2020, 15:04:24

@user-bc8123 no, the 2d detection operates solely on the image plane.

user-bc8123 18 February, 2020, 15:16:24

@papr do you believe there is an easy way to obtain at least the eye angles from the pupil images?

user-2be752 18 February, 2020, 23:57:37

Hello, I've noticed that inside gaze.pldata there are 3 norm_pos entries: 2 inside 'base_data' and one as its own key. Which one should we use for the x y coordinates of gaze?

user-c5fb8b 19 February, 2020, 08:06:45

Hi @user-2be752 when your gaze was mapped binocularly, a datum from the left and a datum from the right eye are combined to produce a good "average" result. The two entries in base_data are the left and right pupil data that resulted in this single gaze datum. The norm_pos key of the gaze datum resembles the best matching of those two. If you have monocular gaze there will be only one entry in base_data, which will have the same norm_pos as the gaze datum. Did that answer your question?

user-2be752 19 February, 2020, 18:38:06

Hi @user-c5fb8b yes! so I have binocular data, that means that I should use the norm_pos key of the gaze datum. Thank you!

user-06ae45 19 February, 2020, 19:46:50

Hi again - I have a question regarding accessing the recording directory from a plugin, so that I can save additional files in the same recording directory. I am able to access objects like the sync time through 'g_pool.get_timestamp()', but 'g_pool.rec_dir' throws the error 'AttributeError: 'types.SimpleNamespace' object has no attribute 'rec_dir''. Am I missing something here about how the g_pool object works? Thank you!

user-2be752 19 February, 2020, 20:21:20

Hello again 🙂 I'm trying to use your fixation detector code but I'm missing some ipc_pub parameter in gpool. Any idea how I can get it? Thank you!

papr 19 February, 2020, 20:59:36

@user-2be752 I fear @user-c5fb8b wasn't entirely correct. Gaze is not equal to Pupil data. Gaze is mapped pupil data. Gaze can be mapped based on one or two pupil data points. You are looking for the top-level norm_pos key of the gaze datum.
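
A minimal sketch of that access pattern, assuming data is the deque of gaze Serialized_Dicts from load_pldata_file as discussed earlier:

```python
for gaze_datum in data:
    x, y = gaze_datum["norm_pos"]         # the mapped gaze position: use this one
    pupil_data = gaze_datum["base_data"]  # 1 entry = monocular, 2 = binocular
    print(f"gaze ({x:.3f}, {y:.3f}) mapped from {len(pupil_data)} pupil datum(s)")
```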

papr 19 February, 2020, 21:00:46

@user-2be752 the IPC pub can be instantiated with zmq.Context.instance().socket(zmq.PUB) if I am not mistaken.

papr 19 February, 2020, 21:01:04

You might need to bind it to a random port.

papr 19 February, 2020, 21:02:01

@user-06ae45 that should work. Could you share the plugin code with [email removed]? We will have a look at it and come back to you tomorrow.

user-2be752 19 February, 2020, 21:42:52

@papr thank you! Is there any way I can bypass this, since I only want to use the fixation detector...

user-2be752 19 February, 2020, 22:26:14

Re: if I assign zmq.Context.instance().socket(zmq.PUB) to ipc_pub, then it says that the socket has no such option NOTIFY
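
That error comes from the plugin calling helpers such as notify() on g_pool.ipc_pub; a raw zmq PUB socket does not have them (inside Pupil, the socket is wrapped in zmq_tools.Msg_Dispatcher, which adds those helpers). When running the detector standalone, a hypothetical no-op stand-in is one way to bypass this:

```python
from types import SimpleNamespace

class NullIPCPub:
    """Hypothetical stand-in for g_pool.ipc_pub outside Capture/Player."""

    def notify(self, notification):
        pass  # swallow notifications instead of publishing them

    def send(self, payload):
        pass

g_pool = SimpleNamespace()  # or extend your existing g_pool namespace
g_pool.ipc_pub = NullIPCPub()
```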

user-06ae45 19 February, 2020, 22:46:59

@papr Sure, I'll send it over now. Thank you!

user-c9d205 20 February, 2020, 09:55:51

> I recorded some videos using Pupil Remote through Python code and it seems that in all of my recordings, the eye cameras only start recording 4 seconds after the front-facing camera starts recording. Is this a known issue and are there fixes?

@papr Anything new on this matter?

user-c5fb8b 20 February, 2020, 11:11:20

Hi @user-c9d205 until @papr responds: the recordings will never start at the exact same time (although 4 seconds seems uncommon to me). We match the timestamps of every video in order to match frames and pupil/gaze data afterwards. So you should not rely on any workflow that requires the world and eye videos to start simultaneously.

user-dfa378 20 February, 2020, 13:17:07

Hey!! I want to transform from the Pupil Core world camera coordinates to actual world coordinates (I'm wearing the glasses while driving, so the car's coordinate frame). So basically I want to account for head position as well. It's probably not possible, but I just want to confirm. Is there some procedure, say during calibration, that can achieve this? @papr

papr 20 February, 2020, 13:21:12

@user-dfa378 Check out our head pose tracking plugin!

user-dfa378 20 February, 2020, 13:32:41

Thanks a lot @papr Could you please send a link, I can't seem to find it. I found Asistiva but that is not what I'm looking for

papr 20 February, 2020, 13:37:37

@user-dfa378 https://docs.pupil-labs.com/core/software/pupil-player/#analysis-plugins

user-2be752 20 February, 2020, 17:38:16

To use the fixation detector code, do you recommend the offline or the online detector? It seems that the difference is that with the online one the fixations can overlap?

user-c9d205 20 February, 2020, 20:21:03

> Hi @user-c9d205 until @papr responds: the recordings will never start at the exact same time (although 4 seconds seems uncommon to me). […]

@user-c5fb8b Thanks for the response. I was expecting a one-second mismatch at most, but I consistently get a 4-second mismatch. Is there anything that can be done?

user-220849 20 February, 2020, 23:04:24

(Not sure if this should go here or in 🥽 core-xr) Hi, I'm trying, in Unity, to put markers for the left and right eye independently where the 2D gaze is directed on the display (independent of where gaze is directed in 3D)... basically just putting a red circle for the left eye and a blue one for the right, with constant depth and size. In prior code from a few years ago, this was accomplished by using the "norm_pos" variable for each eye and setting it as the localPosition of each rectTransform marker. However, I'm having a lot of difficulty doing the same thing with the newer hmd-eyes plugin for Unity. I tried to adapt the GazeVisualizer demo to use the norm_pos from the monocular GazeData (one per eye), but it is very intermittent (it switches between updating the position of each marker every few seconds rather than updating both simultaneously), and it seemed like the variable wasn't correctly capturing the eye position. I also tried using the eye direction as a 3D vector and obtaining the position from that, as GazeVisualizer does; that worked very consistently, but only when I rendered one eye at a time. Any ideas of which variable to use or how to approach this? Thanks!

user-c5fb8b 21 February, 2020, 12:53:21

@user-2be752 You can read about the differences in the terminology section: https://docs.pupil-labs.com/core/terminology/#fixations As you said, the online detector can produce overlapping fixations. There is also no way to specify a maximum duration (as in the offline mode). Since normally you would want mutually exclusive fixations, I would recommend the offline mode for any post-hoc analysis. The online fixation detector is required when you need real-time information while running the experiment, e.g. to provide feedback or control the flow of the experiment.

user-c5fb8b 21 February, 2020, 12:59:27

@user-c9d205 We haven't experienced a case with such a big delay before. Are you using any special headset version, or a very old version? What specs does your computer have? Also, can you send me the log file after starting and stopping a recording? You can find it in your home folder > pupil_capture_settings > capture.log. Please note that this file gets overwritten every time you restart capture.

user-c5fb8b 21 February, 2020, 13:02:03

@user-220849 Please repost the question in 🥽 core-xr

user-6b818e 21 February, 2020, 14:44:04

Hi, I'm a newbie. I need a CSV file to review export data. Can anyone help me with that?

user-c5fb8b 21 February, 2020, 16:06:26

Hi @user-6b818e you can export the data from a recording in Pupil Player into a csv file. Make sure the Raw Data Exporter plugin is enabled and press "e" (or click the "export" button with the arrow down on the left). The export will now be processed, you will find all exported data in the folder of your recording > exports > "export_number". E.g. there is a file gaze_positions.csv which contains all gaze data. Did that answer your question?

user-2be752 21 February, 2020, 18:18:17

@user-c5fb8b, that makes sense. So I should input the gaze data into recent_events (from the Offline_Fixation_Detector class)?

user-220849 21 February, 2020, 22:13:36

@user-c5fb8b thank you. I realized some things yesterday that may change my question so I'll repost an edited version if I'm still having issues.

user-6b818e 22 February, 2020, 03:21:38

> Hi @user-6b818e you can export the data from a recording in Pupil Player into a csv file. […] Did that answer your question?

@user-c5fb8b I don't have any devices to test :D. I'll order one ASAP, but do you have any sample export data? I need one file to test my back end 😄

papr 23 February, 2020, 09:49:44

@user-6b818e https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing This is the example recording from our website. In the export folder you can find the gaze_positions.csv file.

user-6b818e 23 February, 2020, 13:38:20

Thanks @papr

user-c9d205 25 February, 2020, 08:46:52

> @user-c9d205 We haven't experienced a case with such a big delay before. […] Please note that this file gets overwritten every time you restart capture.

@user-c5fb8b I just restarted the program and there is no major delay. Very weird.

user-c9d205 25 February, 2020, 12:07:59

I now face a different problem: I upgraded the version of the software and whenever I try to record from Python code, the front camera just freezes. The log is pretty much filled with lines in this format:

```
world - [DEBUG] video_capture.uvc_backend: Received non-monotonic timestamps from UVC! Dropping frame
```

Removing this line from my script solved it: socket.send_string('T 0.0')

papr 25 February, 2020, 12:39:30

@user-c9d205 Thank you very much for letting us know! This is a regression of a change that we made. We will try to fix it as soon as possible. (for reference: https://github.com/pupil-labs/pupil/pull/1812)

papr 25 February, 2020, 13:45:52

@user-c9d205 We have merged a fix to master. Could you give it a try please?

papr 25 February, 2020, 15:23:13

We have also updated the bundle releases to include this fix.

user-220849 25 February, 2020, 17:49:51

If anyone was following my earlier question about the position of the eye markers not being updated often and NormPos -- just wanted to update that the issue was that I set the confidence threshold too high (0.8) and it was basically throwing out most of the data!

user-2be752 25 February, 2020, 18:57:53

Hi there again! I'm still trying to figure out the fixation_detector code. I'm using the offline detector, and I wanted to know if the correct function to pass the gaze data into is recent_events. Do I need to pass all of the data at once or one by one? Thank you!

papr 27 February, 2020, 14:11:37

@user-2be752 I would recommend simply calling this function: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L157-L244

End of February archive