💻 software-dev


user-7e60fc 05 July, 2018, 05:52:20

Has anyone tried to run two separate instances of Pupil on the same system? I am trying to get two Pupil headsets (binocular) to run on Linux, but the bandwidth of the USB port seems unable to handle streaming from 6 cameras (2 world + 4 eye) at the same time; at most I was able to catch frames from 5 cameras (2 world + 3 eye). I am using two USB 2.0 ports to connect the headsets to my system. Do you think changing them (with adapters) to USB 3.0 or USB-C would help?

mpk 05 July, 2018, 05:54:12

@user-7e60fc you will need a laptop that has two USB controllers. Using USB 3 or a hub will not help.

mpk 05 July, 2018, 05:55:09

Use lsusb in a terminal to see which devices are attached to which bus.

user-7e60fc 05 July, 2018, 05:56:47

Also, my understanding is that all the frames (world + eye(s)) are grabbed from the cameras and transmitted through the IPC. The frames are then processed (converted to grayscale for the eyes, gaze superimposed for the world) and displayed on screen. If I want to intercept the stream, do something to the frames (say, superimpose bounding boxes for object recognition), and have the result shown on screen, should I grab the frames with the frame publisher --> process the frames --> send them back to the IPC so they can be shown on screen?

user-7e60fc 05 July, 2018, 05:57:34

@mpk So you mean stream the two headsets using two separate ports that are connected to the system?

mpk 05 July, 2018, 05:57:53

Yes

mpk 05 July, 2018, 05:59:13

@user-7e60fc I would make a Pupil plugin and use the recent_events as well as gl_display callbacks to look at the world image and then draw onto the canvas
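
A minimal sketch of such a plugin (assuming the Capture plugin API of that era; the class name, the plugin directory ~/pupil_capture_settings/plugins/, and the hard-coded rectangle are illustrative placeholders, not part of mpk's answer):

import cv2
from plugin import Plugin

class Overlay_Example(Plugin):
    """Draws a (hard-coded) bounding box onto the world image."""
    uniqueness = 'by_class'

    def __init__(self, g_pool):
        super().__init__(g_pool)
        self.order = .9  # run late, after pupil/gaze data is available

    def recent_events(self, events):
        frame = events.get('frame')
        if frame is None:
            return
        # frame.img is the BGR world image; drawing on it here makes the
        # overlay visible in the world window (and in recordings).
        cv2.rectangle(frame.img, (100, 100), (300, 300), (0, 255, 0), 2)

    def gl_display(self):
        # Alternatively, draw GL primitives here (e.g. via pyglui.cygl)
        # if the overlay should not be baked into the image itself.
        pass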

user-7e60fc 05 July, 2018, 06:01:19

I'll try that out. Thank you! @mpk

mpk 05 July, 2018, 06:01:53

You are welcome!

user-bde2d3 09 July, 2018, 18:51:16

Hi everyone. I tried using the audio plugin but it did not register any audio source, even though I confirmed that the laptop itself registered the audio device. I am using the latest version of Ubuntu. According to the logs it seems to be an ALSA / jackd PCI card problem. Has anyone encountered this?

user-b91aa6 09 July, 2018, 19:44:44

Question 1: Should I get fixations by doing the same thing as for obtaining gaze?

user-b91aa6 09 July, 2018, 19:44:45

sub.setsockopt_string(zmq.SUBSCRIBE, 'pupil.')
sub.setsockopt_string(zmq.SUBSCRIBE, 'gaze')
sub.setsockopt_string(zmq.SUBSCRIBE, 'notify.')
sub.setsockopt_string(zmq.SUBSCRIBE, 'logging.')

user-b91aa6 09 July, 2018, 19:45:21

Question 2: Can we get fixations from Pupil Service?

user-b91aa6 10 July, 2018, 07:55:46

Is there example code for how to obtain fixations?

papr 10 July, 2018, 07:56:27

@user-b91aa6 you need to subscribe to fixations

papr 10 July, 2018, 07:56:36

everything else should be the same

user-b91aa6 10 July, 2018, 07:57:04

sub.setsockopt_string(zmq.SUBSCRIBE, 'pupil.')
sub.setsockopt_string(zmq.SUBSCRIBE, 'gaze')
sub.setsockopt_string(zmq.SUBSCRIBE, 'notify.')
sub.setsockopt_string(zmq.SUBSCRIBE, 'logging.')

Question 2: Can we get fixations from Pupil Service?

user-b91aa6 10 July, 2018, 07:57:07

like this?

papr 10 July, 2018, 07:57:32

No, because you only subscribe to pupil. in this code.

papr 10 July, 2018, 07:57:40

You need to subscribe to fixations.

user-b91aa6 10 July, 2018, 07:58:00

sub.setsockopt_string(zmq.SUBSCRIBE, 'fixations')

papr 10 July, 2018, 07:58:02

sub.setsockopt_string(zmq.SUBSCRIBE, 'fixations')

papr 10 July, 2018, 07:58:05

correct
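
For reference, a complete minimal subscriber might look like this (a sketch; it assumes Pupil Remote is running on its default port 50020 and queries it for the SUB port first):

import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the publisher port.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect('tcp://127.0.0.1:{}'.format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, 'fixations')

while True:
    topic, payload = sub.recv_multipart()
    fixation = msgpack.loads(payload, raw=False)  # raw=False needs msgpack >= 0.5
    print(topic.decode(), fixation)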

user-b91aa6 10 July, 2018, 07:58:19

Understand. Thanks

user-b91aa6 10 July, 2018, 07:58:45

But will we receive fixations like gaze, at 120 Hz?

papr 10 July, 2018, 07:58:53

And Service does not enable the fixation detector. You need to run Capture if you want to have fixations.

user-b91aa6 10 July, 2018, 07:59:01

Or do we only receive fixations when they are detected?

papr 10 July, 2018, 07:59:45

https://docs.pupil-labs.com/#fd-usage163

user-b91aa6 10 July, 2018, 08:01:54

This page doesn't say whether they are published when they are detected or not

user-b91aa6 10 July, 2018, 08:02:30

or whether they are published like gaze

papr 10 July, 2018, 08:05:49

"It publishes the fixation as soon as it complies with the constraints (dispersion and duration). This might result in a series of overlapping fixations."

user-b91aa6 10 July, 2018, 08:06:17

All right. Thank you very much.

user-b91aa6 10 July, 2018, 08:43:11

May I ask: what if I am not using the world camera? Then I don't have a world camera coordinate system.

Chat image

user-b91aa6 10 July, 2018, 08:43:32

So fixations can't be used in an HMD with 2D detection, is this correct?

user-b91aa6 10 July, 2018, 08:43:37

@papr

papr 10 July, 2018, 08:45:00

Correct. The solution here is to use 3d pupil detection in combination with 2d HMD calibration.

papr 10 July, 2018, 08:45:58

The 3d pupil data is a superset of the 2d pupil data. This means that the fixation detector can make use of the 3d model while the calibration makes use of the 2d part of the data

user-b91aa6 10 July, 2018, 08:47:06

How do I use 3D pupil detection in combination with 2D HMD calibration?

Chat image

user-b91aa6 10 July, 2018, 08:47:25

In the current system, when we choose the detection method, the mapping mode is also chosen

user-b91aa6 10 July, 2018, 08:47:42

So we have to develop this by ourselves?

user-b91aa6 10 July, 2018, 08:47:45

@papr

papr 10 July, 2018, 08:49:17

You are right, and usually this is the case. But the HMD calibration is an exception. The HMD Calibration uses explicitly only the 2d data while the HMD Calibration 3d explicitly uses 3d data.

papr 10 July, 2018, 08:49:56

So you won't have to develop anything yourself

user-b91aa6 10 July, 2018, 08:50:45

If I want to detect fixation in 2D mode, then, what should I do?

papr 10 July, 2018, 08:51:29

You will need to find out what the intrinsics of your HMD are.

user-b91aa6 10 July, 2018, 08:51:48

what if I find

user-b91aa6 10 July, 2018, 08:51:52

And then?

papr 10 July, 2018, 08:53:37

Actually, I need to correct myself. I do not know if that is possible in the hmd case.

user-b91aa6 10 July, 2018, 08:53:59

OK. But still, If I want to detect fixation in 2D mode, then, what should I do?

user-b91aa6 10 July, 2018, 08:54:08

in VIVE HMD

papr 10 July, 2018, 08:55:11

Currently, that is not possible. And as I said, I don't know if that would be possible without major changes to the fixation plugin.

papr 10 July, 2018, 08:56:55

I described the best work around above.

user-58d5ae 10 July, 2018, 09:00:03

I'm not sure I understand, what should I implement to have 2d calibration + 3d fixation ? 2d calibration gives me good results but slippage is a big problem in hololens (which could be solved by 3d fixation, right ?) @papr

user-b91aa6 10 July, 2018, 09:02:35

All right. Thanks

papr 10 July, 2018, 09:02:39

1. Set the detection mode to 3d, either through the menu or the set_detection_mapping_mode notification (see the sketch below)
2. Start the HMD_Calibration plugin, either through the menu or the start_plugin notification
3. Start the Fixation_Detector plugin, either through the menu or the start_plugin notification
4. Start the calibration procedure through the unity client
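
Sent over the Pupil Remote REQ socket, steps 1-3 might look like this (a sketch following the pupil-helpers notification format; the default port 50020 is assumed):

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')

def notify(notification):
    # Notifications are two-frame messages: the 'notify.<subject>' topic,
    # then the msgpack-encoded notification dict itself.
    topic = 'notify.' + notification['subject']
    payload = msgpack.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()

notify({'subject': 'set_detection_mapping_mode', 'mode': '3d'})   # step 1
notify({'subject': 'start_plugin', 'name': 'HMD_Calibration'})    # step 2
notify({'subject': 'start_plugin', 'name': 'Fixation_Detector'})  # step 3
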
papr 10 July, 2018, 09:02:55

@user-58d5ae 👆

user-58d5ae 10 July, 2018, 09:03:21

thanks !

papr 10 July, 2018, 09:04:49

Be aware that you do not have slippage compensation through this method!

user-58d5ae 10 July, 2018, 09:06:35

Oh alright, then what are the advantages over 2d calibration and detection? And is there a way to compensate for the slippage?

papr 10 July, 2018, 09:08:34

Currently, the advantage is that 2d calibration works more reliably than the 3d hmd calibration 😬 We are working on improving 3d calibration for the hmd use-cases.

papr 10 July, 2018, 09:09:03

And no, we don't have a way to compensate for 2d slippage at the moment

user-58d5ae 10 July, 2018, 09:10:18

Ok, thanks a lot, somehow I believed fixation meant slippage compensation

user-b91aa6 10 July, 2018, 10:23:25

In the fixation detection docs, this part is not clear to me: "If we do not set a maximum duration, we will also detect smooth pursuit (which is acceptable since we compensate for VOR)"

user-b91aa6 10 July, 2018, 10:23:50

Question 1: Why can you detect smooth pursuit if you don't set a maximum duration?

user-b91aa6 10 July, 2018, 10:26:03

Smooth pursuit is a moving trajectory, and a dispersion-based method detects whether a list of samples lies within a fixed region. For smooth pursuit, the eye follows a moving object, so the list of samples can't always lie within one region.

user-b91aa6 10 July, 2018, 10:26:28

Question 2: What does VOR mean?

user-b91aa6 12 July, 2018, 08:46:37

@papr

papr 12 July, 2018, 08:57:16

VOR refers to https://en.wikipedia.org/wiki/Vestibulo%E2%80%93ocular_reflex

user-b91aa6 12 July, 2018, 13:38:57

Thank you very much. What do you think about question 1?

user-b91aa6 12 July, 2018, 13:39:04

Question 1: Why can you detect smooth pursuit if you don't set a maximum duration? Smooth pursuit is a moving trajectory, and a dispersion-based method detects whether a list of samples lies within a fixed region. For smooth pursuit, the eye follows a moving object, so the list of samples can't always lie within one region.

user-b91aa6 12 July, 2018, 13:39:58

A question about getting the eye image: should I do it like this? sub.setsockopt_string(zmq.SUBSCRIBE, 'eyeImage')

user-b91aa6 12 July, 2018, 13:40:03

@papr

user-ed6bcd 12 July, 2018, 17:17:18

Hi Team,

I'm having issues with the Pupil Capture app:
1. Drivers updated and installed.
2. Most recent Windows update acquired.
3. Most recent Pupil Labs apps downloaded.
4. Was able to record this morning no problem, then the computer got hot (it has since cooled) but it has not allowed me to record since.

HALP. 😬

MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
world - [INFO] launchables.world: Application Version: 1.7.42
world - [INFO] launchables.world: System Info: User: Ginny, Platform: Windows, Machine: DESKTOP-EP8KVHA, Release: 10, Version: 10.0.17134
Running PupilDrvInst.exe --vid 1443 --pid 37424
OPT: VID number 1443
OPT: PID number 37424
Running PupilDrvInst.exe --vid 1443 --pid 37425
OPT: VID number 1443
OPT: PID number 37425
Running PupilDrvInst.exe --vid 1443 --pid 37426
OPT: VID number 1443
OPT: PID number 37426
Running PupilDrvInst.exe --vid 1133 --pid 2115
OPT: VID number 1133
OPT: PID number 2115
Running PupilDrvInst.exe --vid 6127 --pid 18447
OPT: VID number 6127
OPT: PID number 18447
Running PupilDrvInst.exe --vid 3141 --pid 25771
OPT: VID number 3141
OPT: PID number 25771
world - [ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied.
world - [INFO] camera_models: No user calibration found for camera Ghost capture at resolution [1280, 720]
world - [INFO] camera_models: No pre-recorded calibration available
world - [WARNING] camera_models: Loading dummy calibration
world - [WARNING] launchables.world: Process started.
Running PupilDrvInst.exe --vid 1443 --pid 37424

user-ed6bcd 12 July, 2018, 17:27:35

The whole enchilada

Chat image

user-82e7ab 13 July, 2018, 09:12:11

Hi, I'm trying to run the 'HMD Calibration' from a C++ application (zmq/msgpack), following the hmd-eyes unity pupil plugin example.

I got this far:
- request socket connected
- 'T 0.0' (initialize timestamp)
- 'SUB_PORT'
- subscribe to topic 'notify.calibration'
- 'notify.start_plugin' - { name: 'HMD_Calibration' }
- 'notify.calibration.should_start' - { hmd_video_frame_size, outlier_threshold }
- 'notify.calibration.add_ref_data' - { ref_data: [ { norm_pos, timestamp, id }, {... ] }
- 'notify.calibration.should_stop' - {}

Up until here my application received the following notifications (subscription 'notify.calibration'):
- 'notify.calibration.should_start'
- 'notify.calibration.started'
- 'notify.calibration.add_ref_data' (x5)
- 'notify.calibration.should_stop'

then Pupil Service/Capture crashes with the following output:

Starting Calibration
world - [INFO] calibration_routines.hmd_calibration: Starting Calibration

Stopping Calibration
world - [INFO] calibration_routines.hmd_calibration: Stopping Calibration
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "launchables\world.py", line 445, in world
  File "shared_modules\calibration_routines\hmd_calibration.py", line 75, in on_notify
  File "shared_modules\calibration_routines\hmd_calibration.py", line 104, in stop
  File "shared_modules\calibration_routines\hmd_calibration.py", line 121, in finish_calibration
  File "shared_modules\calibration_routines\hmd_calibration.py", line 121, in <listcomp>
TypeError: list indices must be integers or slices, not str

world - [INFO] launchables.world: Process shutting down.
eye1 - [INFO] launchables.eye: Process shutting down.
eye0 - [INFO] launchables.eye: Process shutting down.


Any idea what I'm doing wrong?

mpk 13 July, 2018, 10:15:35

@user-82e7ab looks like you are sending data in the wrong format?

mpk 13 July, 2018, 10:15:41

{ norm_pos, timestamp, id }

mpk 13 July, 2018, 10:15:53

is a set or list, but we need to use a mapping/dict.

mpk 13 July, 2018, 10:16:37

{ "norm_pos":norm_pos, "timestamp":timestamp, "id":id}

user-82e7ab 13 July, 2018, 10:23:35

hi @mpk, thanks for taking the time. This is exactly how I send it. Maybe I shortened the description too much to fit in the message. The data looks like this before packing via msgpack:

struct AddCalibrationReferenceElementStruct {
    norm_pos norm_pos;
    float timestamp;
    int id; // eye id
    MSGPACK_DEFINE_ARRAY(norm_pos, timestamp, id);
};
struct AddCalibrationReferenceDataStruct {
    std::string subject;
    AddCalibrationReferenceElementStruct ref_data[100];
    MSGPACK_DEFINE_MAP(subject, ref_data);
};
user-82e7ab 13 July, 2018, 10:24:35

oh

user-82e7ab 13 July, 2018, 10:25:07

it has to be MSGPACK_DEFINE_MAP(norm_pos, timestamp, id);

user-82e7ab 13 July, 2018, 10:32:46

anyway - thank you

user-82e7ab 13 July, 2018, 11:54:48

is there any detailed documentation of the remote protocol, including required fields and data types of messages?

user-ba85a3 13 July, 2018, 14:00:26

Hi everyone! I have a headset of eye tracking glasses, model pupil w120 e200b. I saw on the official website that I could run the Pupil Capture software on Linux too, so I was wondering how to do that. I'm a novice in Linux and so far I have just performed some acquisitions by running Pupil Capture and then Pupil Player on Windows. Can anyone help me?

papr 13 July, 2018, 14:07:18

@user-ba85a3 Download the linux bundle from the github release page, install and run it. There is no big setup required.

user-ba85a3 13 July, 2018, 14:23:26

@papr Thank you a lot. Can you point me to it with a direct link? Furthermore, could the software run on a Raspberry Pi 3 B+?

papr 13 July, 2018, 14:39:57

@user-ba85a3 http://pupil-labs.com/software -- I own a Raspberry Pi 3 B+ but I have never run Capture on it. From my experience with other software, I highly doubt that it would work well.

user-ba85a3 13 July, 2018, 14:42:25

@papr very much appreciated!!

user-b91aa6 16 July, 2018, 07:48:57

Question 1: Why can you detect smooth pursuit if you don't set a maximum duration? Smooth pursuit is a moving trajectory, and a dispersion-based method detects whether a list of samples lies within a fixed region. For smooth pursuit, the eye follows a moving object, so the list of samples can't always lie within one region. Question 2: About getting the eye image, should I do it like this? sub.setsockopt_string(zmq.SUBSCRIBE, 'eyeImage')

user-b91aa6 16 July, 2018, 12:01:16

@mpk

papr 16 July, 2018, 14:49:23

@user-b91aa6 in regard to Q2: you need to subscribe to frame.eye to receive both eyes, or to frame.eye.0 for id0 images, or frame.eye.1 for id1 images
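
Building on the subscription pattern above, receiving eye images could look roughly like this (a sketch; it assumes the Frame Publisher plugin is enabled in Capture with format 'bgr', and that the sub socket is set up as in the earlier fixation example; the raw pixels arrive as extra message frames):

import zmq, msgpack
import numpy as np

# sub: the SUB socket from the fixation example above
sub.setsockopt_string(zmq.SUBSCRIBE, 'frame.eye.')  # both eyes

while True:
    topic = sub.recv_string()
    payload = msgpack.loads(sub.recv(), raw=False)
    extra_frames = []
    while sub.get(zmq.RCVMORE):  # collect the raw image bytes
        extra_frames.append(sub.recv())
    if payload.get('format') == 'bgr' and extra_frames:
        img = np.frombuffer(extra_frames[0], dtype=np.uint8)
        img = img.reshape(payload['height'], payload['width'], 3)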

user-c1220d 17 July, 2018, 09:57:15

Hi guys! I would also like to use the glasses with a Raspberry Pi in order to make them wireless. As you said, Capture probably won't work well on it. I wonder if, at least, I can read and identify the 2 eye cameras and the world camera with the RPi. If this is possible I would look for a section of the code implementing acquisition and data streaming from the cameras and use it on my RPi with the required libraries. Thank you

papr 17 July, 2018, 09:59:55

@user-c1220d That should work. We use https://github.com/pupil-labs/libuvc to access the cameras.

papr 17 July, 2018, 10:04:13

Additionally, you could use pyuvc+pyndsi to send the data to Capture. See this example: https://github.com/pupil-labs/pyndsi/blob/master/examples/uvc-ndsi-bridge-host.py

mpk 17 July, 2018, 10:08:56

@user-c1220d but in this case why not use an android phone and pupil mobile app?

user-c1220d 17 July, 2018, 10:21:57

@mpk yes, but I need to synchronize more devices with them, so a trigger signal is needed in my application. I would write part of the software on my RPi, just for my needs, but obviously I first need camera identification and the right libraries

mpk 17 July, 2018, 14:03:03

@user-c1220d then you could use Pupil Time Sync. This allows you to control the clock of Pupil Mobile.

user-b91aa6 18 July, 2018, 08:14:21

May I ask why the pupil detection accuracy decreases when switching to 3D mode, compared with 2D pupil detection?

user-c1220d 18 July, 2018, 12:54:56

thank you all, guys. I'm going to try some solutions and I'll let you know if I need anything more

user-82e7ab 20 July, 2018, 08:54:22

hi again, I'm trying to integrate Pupil into a custom OpenGL application. I'm able to connect, calibrate 2d (norm_pos) or 3d (mm_pos), and receive tracking data via a subscription to topic "gaze". But the gaze data contains 2d tracked gaze norm_pos only, regardless of whether I'm using HMD_Calibration or HMD_Calibration_3D. What I'm looking for is something like gaze_normal0_x, which can be found in this doc on data format. Somewhere else I read about something like a 3d vector gaze mapper and then found this

public static void StartBinocularVectorGazeMapper ()
{
    Send (new Dictionary<string,object> { { "subject","" }, { "name", "Binocular_Vector_Gaze_Mapper" } });
}

in the hmd-eyes unity plugin here. But when trying to start this gaze mapper pupil plugin (remotely via zmq+msgpack), Pupil Capture crashes with the following output:

user-82e7ab 20 July, 2018, 08:55:51

Starting Calibration
world - [INFO] calibration_routines.hmd_calibration: Starting Calibration

Stopping Calibration
world - [INFO] calibration_routines.hmd_calibration: Stopping Calibration
world - [INFO] calibration_routines.calibrate: Reflection detected
world - [INFO] calibration_routines.calibrate: Reflection detected
Ceres Solver Report: Iterations: 33, Initial cost: 1.078242e+03, Final cost: 1.485864e+01, Termination: CONVERGENCE
(multiple times the same lines from ceres solver)
world - [ERROR] accuracy_visualizer: Accuracy visualization is disabled for 3d hmd calibration
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "launchables\world.py", line 443, in world
  File "launchables\world.py", line 292, in handle_notifications
  File "shared_modules\plugin.py", line 321, in add
TypeError: __init__() missing 2 required positional arguments: 'eye_camera_to_world_matrix0' and 'eye_camera_to_world_matrix1'

world - [INFO] launchables.world: Process shutting down.
eye1 - [INFO] launchables.eye: Process shutting down.
eye0 - [INFO] launchables.eye: Process shutting down.

any idea what I'm missing?

papr 20 July, 2018, 09:03:16

Hey @user-82e7ab This plugin is not supposed to be initialized manually but is started by Capture after the calibration. Please be aware that you need to enable 3d pupil detection for 3d gaze mapping.

I think that the linked code location is not correct or incomplete.

papr 20 July, 2018, 09:04:18

The missing arguments 'eye_camera_to_world_matrix0' and 'eye_camera_to_world_matrix1' are the result of the 3d calibration

user-82e7ab 20 July, 2018, 09:07:58

hi @papr, thanks for your time. On startup, I ensure 3d mode via set_detection_mapping_mode: "3d". The output after HMD_Calibration_3D without this plugin looks like this:

{"topic":"pupil","circle_3d":{"center":[-1.87313,1.64452,34.0346],"normal":[-0.650676,0.0690923,-0.756205],"radius":1.99072},"confidence":0.795505,"timestamp":1.39489,"diameter_3d":3.98144,"ellipse":{"center":[166.873,229.891],"axes":[52.0427,72.6392],"angle":-8.39662},"norm_pos":[0.417182,0.425273],"diameter":72.6392,"sphere":{"center":[5.93498,0.81541,43.1091],"radius":12},"projected_sphere":{"center":[285.358,211.727],"axes":[345.171,345.171],"angle":90},"model_confidence":0.0916953,"model_id":1,"model_birth_timestamp":120.496,"theta":1.63994,"phi":-2.28133,"method":"3d c++","id":0}
{"topic":"gaze.2d.0.","norm_pos":[0.837798,0.93075],"confidence":0.879012,"id":0,"timestamp":0.336607,"base_data":[{"topic":"pupil","circle_3d":{"center":[-0.47434,1.93127,33.0256],"normal":[-0.53411,0.0929881,-0.840285],"radius":1.66878},"confidence":0.879012,"timestamp":0.336607,"diameter_3d":3.33757,"ellipse":{"center":[191.799,236.16],"axes":[51.7945,62.769],"angle":-14.3252},"norm_pos":[0.479498,0.409601],"diameter":62.769,"sphere":{"center":[5.93498,0.81541,43.1091],"radius":12},"projected_sphere":{"center":[285.358,211.727],"axes":[345.171,345.171],"angle":90},"model_confidence":0.190211,"model_id":1,"model_birth_timestamp":120.496,"theta":1.66392,"phi":-2.137,"method":"3d c++","id":0}]}

papr 20 July, 2018, 09:13:17

@user-82e7ab ~~That's a pupil datum, not a gaze datum.~~ Edit: Scratch that.

papr 20 July, 2018, 09:17:30

Definitively a gaze.2d.0. datum

papr 20 July, 2018, 09:19:04

Please try the following:
1. Start Capture
2. Start this script https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/notification_monitor.py
3. Run the hmd calibration procedure

I would like to see the output of the script in 2.

user-82e7ab 20 July, 2018, 10:38:06

sorry for the delay - lunch break :)

It seems like I had too little patience ... now I receive data for gaze.3d.0., gaze.3d.1. and gaze.3d.01. No idea why they didn't show up before this gaze mapper plugin attempt. Maybe finishing the calibration takes a few seconds and I only checked the very first data messages that arrived directly after calibration.

anyway - thank you @papr

user-82e7ab 20 July, 2018, 10:39:56

subscription to topic "notify." while running HMD_Calibration_3D

log.txt

user-82e7ab 20 July, 2018, 10:40:04

(for the sake of completeness)

user-82e7ab 23 July, 2018, 10:15:58

hi again, I'm stuck with unpacking the received 3d data:

gaze: {"topic":"gaze.3d.0.","eye_centers_3d":{0:[-181.835,-7.05468,133.635]},"gaze_normals_3d":{0:[0.383765,-0.297437,-0.874217]},"gaze_point_3d":[-149.779,-31.8997,60.6118],"confidence":0.101774,"timestamp":132.468,"base_data": ...

Until now, I always used a struct with msgpack::unpack<>() matching the message structure, e.g.

struct GazeStruct {
    std::string topic;
    struct {...} norm_pos;
    float confidence;
    float id;
    float timestamp;
    struct {...} base_data;
    MSGPACK_DEFINE_MAP(topic, norm_pos, confidence, id, timestamp, base_data);
};

But for 3d gaze data, I don't know how to create a MAP-struct with numbers as keys (like for gaze_normals_3d) I'd like to avoid using a msgpack::visitor to go through received messages.

EDIT: In C/C++ I cannot create a struct with an integer as a variable name - which would be required for the included maps eye_centers_3d and gaze_normals_3d.

How do you typically solve this? Thank you all

user-82e7ab 23 July, 2018, 11:06:11

msgpack seems to only support string as the type for map keys (in C/C++)

struct define_map<...> {
...
void msgpack_unpack(msgpack::object const& o) const
    {
        if(o.type != msgpack::type::MAP) { throw msgpack::type_error(); }
        std::map<std::string, msgpack::object const*> kvmap;
        for (uint32_t i = 0; i < o.via.map.size; ++i) {
            if (o.via.map.ptr[i].key.type != msgpack::type::STR) { throw msgpack::type_error(); }
            kvmap.insert(
                std::map<std::string, msgpack::object const*>::value_type(
                    std::string(
                        o.via.map.ptr[i].key.via.str.ptr,
                        o.via.map.ptr[i].key.via.str.size),
                    &o.via.map.ptr[i].val
                )
            );
        }

        {
            std::map<std::string, msgpack::object const*>::const_iterator it = kvmap.find(a0);
            if (it != kvmap.end()) {
                it->second->convert(a1);
            }
        }

        {
            std::map<std::string, msgpack::object const*>::const_iterator it = kvmap.find(a2);
            if (it != kvmap.end()) {
                it->second->convert(a3);
            }
        }

    }

user-82e7ab 24 July, 2018, 06:12:36

any suggestions?

papr 24 July, 2018, 06:33:32

This might actually be the reason why our Matlab example fails when receiving gaze...

papr 24 July, 2018, 06:34:20

I wonder why the python msgpack implementation does not fail. 🤔

user-82e7ab 24 July, 2018, 06:48:53

I could imagine that there is no problem with non-string keys in Python. In general there is also no problem with non-string-key maps in C++; it's the msgpack implementation that forces string keys

papr 24 July, 2018, 06:51:52

I will look up the msgpack definition to see if it is valid to use anything but strings as keys. If it is, the C++ implementation is incomplete and needs to be fixed.
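
For what it's worth, the MessagePack spec does not restrict map key types, and the Python implementation round-trips integer keys without complaint. A quick check:

import msgpack

packed = msgpack.dumps({0: [1.0, 2.0, 3.0]})   # integer map key, as in gaze_normals_3d
print(msgpack.loads(packed, raw=False))        # {0: [1.0, 2.0, 3.0]}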

user-82e7ab 24 July, 2018, 06:54:09

thx ... for now I'll try to extract the data using a msgpack visitor as a workaround

papr 24 July, 2018, 06:55:41

@user-82e7ab What speaks against using this generally?

user-82e7ab 24 July, 2018, 07:01:56

it's rather inconvenient .. or maybe I didn't find a good enough example .. my current approach requires some kind of state machine to parse through the data and find the correct values. (I'm using a derived msgpack::null_visitor ... the only thing I came across until now)

user-82e7ab 24 July, 2018, 07:45:37

I'm sorry ... msgpack correctly parses the following struct:

struct mm_pos {
  float x;
  float y;
  float z;
  MSGPACK_DEFINE_ARRAY(x, y, z);
};
struct Gaze3dStruct {
  std::string topic;
  std::map<int, mm_pos> eye_centers_3d;
  std::map<int, mm_pos> gaze_normals_3d;
  mm_pos gaze_point_3d;
  float confidence;
  float timestamp;
  MSGPACK_DEFINE_MAP(topic, eye_centers_3d, gaze_normals_3d, gaze_point_3d, confidence, timestamp);
};

I was thinking too complicated and tried to create another struct (like mm_pos) for eye_centers_3d and gaze_normals_3d to be able to declare them as a msgpack map via MSGPACK_DEFINE_MAP(...). But that doesn't seem necessary. It's working with a simple std::map<int, mm_pos>.

Thanks anyway @papr

papr 24 July, 2018, 08:22:18

Great! Pleased to hear that it could be resolved.

user-82e7ab 24 July, 2018, 14:52:47

one more question - sorry :) It's about the relation between target positions during calibration and gaze directions during actual tracking.

My current calibration looks like this:
- generate n viewing directions
- for each of these directions:
  - show a marker at headPosition + direction
  - watch the marker
  - send a calibration sample (ref data) with mm_pos being the direction (without headPosition)
- finish calibration

Now when I receive 3d gaze data from Pupil (Gaze3dStruct from the post above) my question is:

What is the coordinate system of gaze_normals_3d? Or in other words: if I'm looking at one of the calibration positions, would gaze_normals_3d contain (approximately) the corresponding direction?

My calibration positions are not straight in front of the viewer. Instead, the whole calibration pattern is rotated upwards. During calibration my head is rotated upwards as well, such that the pattern fills my visual field. The problem is that during the actual tracking I need to look up this far (with my eyes, not my head) to get gaze_normals_3d pointing straight ahead.

user-82e7ab 24 July, 2018, 14:54:23

(i hope it's not too confusing)

papr 24 July, 2018, 14:54:35

The idea is that you calibrate relative to the field of view of the user and not relative to the environment

user-82e7ab 24 July, 2018, 14:55:07

yes - makes sense 😃

papr 24 July, 2018, 14:56:36

The resulting gaze_normals_3d are relative to the field of view as well. Therefore you will need a second transformation step where you translate the normals to the environment based on the headPosition+direction
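
As a sketch of that second transformation step (assuming the head pose is tracked externally as a 3x3 rotation matrix plus a position vector; the function and parameter names are illustrative):

import numpy as np

def gaze_ray_in_world(gaze_normal_3d, eye_center_3d, head_rotation, head_position):
    # gaze_normal_3d / eye_center_3d come from the gaze.3d datum and are
    # relative to the user's field of view; rotate them into world space
    # and offset the ray origin by the head position.
    origin = head_rotation @ np.asarray(eye_center_3d) + head_position
    direction = head_rotation @ np.asarray(gaze_normal_3d)
    return origin, direction / np.linalg.norm(direction)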

user-82e7ab 24 July, 2018, 15:01:58

ok, so gaze_normals_3d are not interpolated from the mm_pos samples sent in the ref_data during calibration? Or are mm_pos vectors interpreted as relative to the FOV and I need to create a symmetrical calibration pattern in front (-Z) of the user?

papr 24 July, 2018, 15:03:55

Eeh, I do not know that, sorry.

papr 24 July, 2018, 15:04:19

I am not sure what this mm_pos is

user-82e7ab 24 July, 2018, 15:24:44

oh yes, I missed that. During calibration, I send one AddCalibration3dReferenceDataStruct per calibration direction:

struct AddCalibration3dReferenceElementStruct
{
  float mm_pos[3];
  float timestamp;
  int id;
};
struct AddCalibration3dReferenceDataStruct
{
  std::string subject;
  AddCalibration3dReferenceElementStruct ref_data[100];
};

Here, I store the position of the current calibration position/direction in mm_pos

papr 24 July, 2018, 15:27:43

Ok, in this case the mm_pos is interpreted relative to the FOV

papr 24 July, 2018, 15:28:31

And yes, you need to create a calibration pattern in front (-Z) of the user. It does not really have to be symmetrical though

user-82e7ab 24 July, 2018, 15:29:57

ok, thank you!

user-b91aa6 25 July, 2018, 20:05:00

What could be the problem with one of my eye tracking cameras? No matter how I adjust the focus, the image is always blurred.

user-b91aa6 25 July, 2018, 20:05:38

I am wondering if the camera is broken

wrp 26 July, 2018, 00:42:15

@user-b91aa6 can you remind us which eye cameras you are using? Note that 200hz eye cameras are not focusable and may appear a little blurry in the preview, but pupil detection will remain robust.

user-82e7ab 26 July, 2018, 10:47:25

@wrp Can you say why it is not necessary to have focused images when using 200Hz cameras? (they can run at lower frame rates, right?)

wrp 26 July, 2018, 10:49:18

@user-82e7ab the images may appear unfocused, but the camera's optics are designed for a specific range (close to the eye - specific to our hardware). It is just that the image might not look like it is in focus

user-82e7ab 26 July, 2018, 10:58:20

ah ok .. thx! maybe you should write this somewhere in the dev-docs .. I bet I'm not the only one asking for this ; )

wrp 26 July, 2018, 10:59:01

@user-82e7ab thank you for the feedback. We have made a note in the dev docs that the 200hz eye cameras are not focusable - but can make this more clear

wrp 26 July, 2018, 10:59:21

if you (or anyone) would like to contribute to docs, you can make PRs to https://github.com/pupil-labs/pupil-docs

wrp 26 July, 2018, 11:00:47

https://docs.pupil-labs.com/#focus-cameras

user-82e7ab 26 July, 2018, 11:11:51

yes, I know this section .. already searched for it before asking here. currently I don't have much time, but I'll think about sharing the insights I gained while getting to know pupil once I'm done with the integration to our framework (and then hopefully I'm sure enough what I'm talking about 😅 )

wrp 26 July, 2018, 11:16:46

PR's are welcome and so is feedback 😸

user-b91aa6 26 July, 2018, 11:22:52

@wrp I am using the 120 Hz eye tracking camera for the Vive HMD, have you seen this problem before?

user-c1220d 26 July, 2018, 13:43:47

Hi guys, we are developing a ROS application which includes the Pupil glasses. We performed some tests and have been able to acquire images from the 3 cameras, one camera at a time. Now the problem is to acquire the 3 streams at the same time; currently we can't do it. I wonder if there is any library or package used in Pupil Capture that might allow us to solve this issue, since in Capture the 3 streams are acquired and displayed at the same time.

papr 26 July, 2018, 13:56:57

@user-c1220d you need 3 separate processes accessing the cameras and one that gathers the images
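
Not ROS-specific, but as a sketch of that architecture using pyuvc (mentioned above); device selection and the frame handling are illustrative, not a definitive implementation:

import multiprocessing as mp
import uvc  # pupil-labs pyuvc

def capture(uid, out_q):
    cap = uvc.Capture(uid)  # each camera is opened by exactly one process
    while True:
        frame = cap.get_frame_robust()
        out_q.put((uid, frame.timestamp, frame.gray))

if __name__ == '__main__':
    q = mp.Queue()
    workers = [mp.Process(target=capture, args=(dev['uid'], q), daemon=True)
               for dev in uvc.device_list()]
    for w in workers:
        w.start()
    while True:  # the gathering process: consume frames from all cameras
        uid, timestamp, img = q.get()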

user-b91aa6 26 July, 2018, 14:01:10

Question 1: Have you seen this before: no matter how you adjust the focus, the eye tracking image is always blurred?

user-b91aa6 26 July, 2018, 14:01:41

Question 2: How do you clean the eye tracking camera? It's so small, it's really hard to clean the surface of the camera

user-b91aa6 26 July, 2018, 14:02:04

Question 3: I am wondering whether the camera is broken; can we send it back to be repaired?

user-b91aa6 26 July, 2018, 14:02:12

Thank you very much.

user-b91aa6 26 July, 2018, 14:03:25

I am using 120 HZ vive eye tracking, one of the eye tracking camera is always blured.

wrp 26 July, 2018, 14:04:16

@user-b91aa6 please send a short sample eye video to info@pupil-labs.com along with your order id so that we can provide you with concrete feedback

user-b91aa6 26 July, 2018, 14:04:31

Thank you very much.

user-c1220d 26 July, 2018, 14:11:05

@papr ok, we tried different instances to acquire the cameras but we can't run more than one at a time, otherwise they crash. What do you mean by separate processes?

papr 26 July, 2018, 14:15:44

@user-c1220d what package do you use to access the cameras? I mean starting multiple nodes of that package. Do not put them into a single launch file, since this would run them as nodelets, which makes them behave differently

user-c1220d 26 July, 2018, 14:34:10

we are using the usb_cam package, would you suggest anything else?

papr 26 July, 2018, 15:07:32

@user-c1220d No, I think that should be fine. Just make sure to run them in three different nodes.

papr 26 July, 2018, 15:08:20

I do not know if the topic name overlap is an issue though. Maybe you need to remap the topic

user-29e10a 31 July, 2018, 11:36:46

@mpk is there a reason that you don't save the 2D detected pupil size and center in 3D detection mode? The 3D detection is not deterministic (detecting on one recording over and over yields slightly different results, I think because of the fitting of the pupil on the sphere), but the 2D algorithm is, so it would be great to have the information which 2D pupil led to a given phi or theta.

mpk 31 July, 2018, 11:43:18

@user-29e10a you are right about that. I think it would be nice to keep the 2d result. We are working on a complete refactor of the 3d model and will keep this in mind!

End of July archive