🕶 invisible


user-11dbde 04 June, 2020, 12:02:03

Hi, Is there an api/way to get eye data from invisible in realtime to be consumed in my own app (desktop and web)?

papr 04 June, 2020, 12:09:25

@user-11dbde Yes, there is. We use it for our Pupil Invisible Monitor app, too. https://docs.pupil-labs.com/developer/invisible/#network-api

user-bab6ad 04 June, 2020, 12:14:20

@user-11dbde @papr just in case you need it @user-11dbde: you will (currently) get less data (so no 3D eye model or pupil diameter or so), mainly the gaze point in space (which is still cool), because the angle of the cameras is very challenging at the moment. (Just because I know @user-11dbde in real life and I thought he might be aiming for that.)

papr 04 June, 2020, 12:15:37

Correct, Pupil Invisible does not produce pupillometry data yet. Also, the Pupil Invisible network API differs from the network API of Pupil Core.

user-11dbde 04 June, 2020, 12:15:40

Thank you @papr and @user-bab6ad 🙂

user-11dbde 04 June, 2020, 12:17:45

@user-bab6ad I see that we can also get IMU data..!?

papr 04 June, 2020, 12:18:36

This is correct.

user-11dbde 04 June, 2020, 12:18:47

Thanks

user-11dbde 04 June, 2020, 12:22:05

Just one more thing...any SW included for eye-gaze data analysis?

papr 04 June, 2020, 12:26:57

@user-11dbde The recommended workflow is to upload and process your Pupil Invisible recordings in Pupil Cloud https://cloud.pupil-labs.com/

user-11dbde 04 June, 2020, 12:27:37

I have seen this option, however I could not find specific info about the analysis features...

wrp 04 June, 2020, 13:06:40

Hi @user-11dbde we are very close to releasing updates to Pupil Cloud (within Q2) that will enable you to enrich data from multiple recordings to power your multi-participant/multi-recording data analysis. The first set of enrichments for Pupil Cloud will include marker-based AOI tracking (aka surface tracking in Pupil Player); "standard" gaze data classification algorithms; and gaze, metadata, and enrichment data export --> csv. We have a number of exciting data enrichment features in the pipeline that we plan on adding to Cloud on a rolling basis after the initial release.

wrp 04 June, 2020, 13:11:26

I would also like to add that Pupil Cloud has an API, and we also have a small Python client library that you can use: https://github.com/pupil-labs/pupil-cloud-client-python

user-df9629 04 June, 2020, 13:20:23

Pupil Cloud API is going to make data processing very smooth. Thank you guys!

user-11dbde 04 June, 2020, 13:44:43

@wrp Thank you

user-11dbde 04 June, 2020, 15:42:25

Sorry, however I have an additional question: is there any plan to have a pupillabs invisible device that could be used by a baby (maybe with a head cap)?

wrp 05 June, 2020, 01:33:18

@user-11dbde The geometry of Pupil Invisible will not be able to accommodate babies (infants). We will consider your suggestion for future hardware iterations though 😸

user-11dbde 05 June, 2020, 07:17:10

@wrp Thank you. This would be very valuable for the research we are conducting at the medical university

user-627fd4 05 June, 2020, 07:41:30

Hey, may I know the focal length (mm) of Pupil Invisible?

user-627fd4 05 June, 2020, 07:43:55

Is it 766?

marc 05 June, 2020, 07:45:20

Hey @user-627fd4! Pupil Player contains a hard-coded set of default camera intrinsics for Pupil Invisible that you can find here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L145

The value noted in there is indeed 766! There is a little bit of variance between different instances of scene cameras, but this value is roughly correct.
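As a rough sanity check of that number, the pinhole model gives the horizontal field of view directly from the focal length. The 1088 px image width below is an assumption about the scene camera resolution, not a value from this thread:

```python
import math

f_px = 766.0     # default scene-camera focal length from camera_models.py
width_px = 1088  # assumed horizontal resolution (check your own recordings)

# Horizontal FOV of an ideal pinhole camera: 2 * atan(W / (2 * f))
fov_deg = math.degrees(2.0 * math.atan(width_px / (2.0 * f_px)))
# roughly 70 degrees for these values
```

If the resulting FOV looks far off from what you see in the scene video, the intrinsics probably don't match your camera.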

user-627fd4 05 June, 2020, 07:46:06

thank you!

user-599b15 05 June, 2020, 17:34:26

Hi, I would like to ask if the eight distortion coefficients for Pupil Invisible are k1, k2, p1, p2, k3~k6 as in OpenCV (https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html). I used the values to undistort the image, but it did not work well, so I'm wondering if I misunderstood something.

I would like to use it to estimate the gaze angle, or is it better to just use: angle = 2 * arctan(0.5 * distance_to_center / focal_length)?

papr 05 June, 2020, 17:48:37

@user-599b15 This is how we undistort images in Pupil Player https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L520

This is how we unproject 2d gaze into cyclopean 3d gaze directions https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L140-L148 (where capture.intrinsics is an instance of the Camera_Model class, first link)
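A minimal NumPy sketch of that unprojection, ignoring lens distortion (which the linked Pupil code does handle, via the Camera_Model class) and using illustrative intrinsics rather than real calibration values:

```python
import numpy as np

# Assumed intrinsics for illustration only: focal length 766 px,
# principal point at the image center of a 1088x1080 frame.
fx = fy = 766.0
cx, cy = 544.0, 540.0

def pixel_to_angle_deg(x, y):
    # Back-project the pixel to a viewing ray in camera coordinates...
    v = np.array([(x - cx) / fx, (y - cy) / fy, 1.0])
    # ...and measure its angle to the optical axis (0, 0, 1).
    return np.degrees(np.arccos(v[2] / np.linalg.norm(v)))
```

For undistorted points near the image center this agrees well with the arctan formula above; the difference grows toward the periphery, where distortion also matters most.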

user-599b15 05 June, 2020, 18:14:53

Thank you!

user-8f829f 08 June, 2020, 14:56:00

Hi again, I asked a few questions on the Core board, ha, but here come the same questions again for Invisible. How do I improve gaze accuracy at longer distances over the horizon for a driving recording (plz see image)? Is the silicone nose-support patch on the glasses' nose bridge causing this issue?

Chat image

marc 08 June, 2020, 14:58:15

Hey @user-8f829f! Could you clarify what you mean with "nose support silicone patch on the glasses nose bridge"? Have you put an additional silicone patch onto your device?

marc 08 June, 2020, 14:59:15

If you notice a constant offset in your predictions (which happens noticeably for about 10-20% of the population), you can add an "Offset Calibration" in the wearer profile.

user-8f829f 08 June, 2020, 15:00:10

Sure, my nose is not big, so the original glasses slip. The nose patch is shown below.

Chat image

marc 08 June, 2020, 15:02:41

Ok, thanks for the clarification. This may indeed influence the predictions. Have you checked whether this error persists without those nose pads? As an alternative to the pads, you could potentially use a sports strap to fixate the glasses better. See here: https://pupil-labs.com/products/invisible/accessories/

user-8f829f 08 June, 2020, 15:02:56

The offset is required for the horizontal view only. In-cabin, central, and side mirror gazes are fine, so I can't apply a universal offset. Does sun glare cause this issue? I was driving right under sunlight for the video.

marc 08 June, 2020, 15:07:43

Sunlight should not influence the predictions. However, the predictions suffer a little from what is called parallax error, which in your case essentially means that targets close to you have no error, while targets far away do have an offset. Due to the limited ability of the gaze estimation pipeline to estimate depth, we currently cannot correct for this source of error.

marc 08 June, 2020, 15:08:35

This parallax error should lead to an error that is mostly horizontal, though. In the image you shared it looks mostly like a vertical error, which should, in theory, be possible to compensate for using the offset correction.

marc 08 June, 2020, 15:09:45

Either way, the offset correction is currently the only way to correct for errors.
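For intuition, the size of a parallax offset can be sketched with basic trigonometry. The 3 cm baseline below is an illustrative assumption, not the actual offset between the scene camera and the eyes:

```python
import math

baseline_m = 0.03  # assumed eye-to-scene-camera offset (illustrative only)

def parallax_deg(distance_m):
    # Angular disagreement between two parallel observers separated by
    # baseline_m, looking at a target straight ahead at distance_m.
    return math.degrees(math.atan(baseline_m / distance_m))
```

With these numbers a target at 0.5 m is off by about 3.4 degrees, while one at 20 m is off by under 0.1 degrees, which is why a single offset correction cannot fit both the cabin and the horizon.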

user-8f829f 08 June, 2020, 15:29:20

thanks for help @marc

user-aed714 09 June, 2020, 09:20:00

Hi all, I would like to measure head movements of a volleyball player tracking a ball. Is there a way I can pinpoint a fixed location in the gym (e.g. a point on the net), and then make an export or analysis which tells me how much and in what direction the head was moving? Thanks in advance!

marc 09 June, 2020, 10:14:32

Hey @user-aed714! There are a couple of things you could do. There is an IMU integrated into the Pupil Invisible frame, which allows you to measure head movement. This would, however, measure relative movement and not immediately give you a head pose in a reference coordinate system that is somehow related to the net.

Alternatively we have a head-pose tracking plugin available in Pupil Player, see here: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking This plugin is based on markers that need to be placed in the environment. In your case you might place markers on the net and around the rest of the field. As long as markers are visible in the scene camera view, the plugin will give you a head-pose measurement in relation to the field. If however the players look straight upwards towards a high ball and no markers are visible in the scene video, you would stop getting head-pose estimates.

I hope this gives you an idea!

user-aed714 09 June, 2020, 10:27:25

Hi Marc, thanks! Since I have already done the measurements, the second option does not work for me. Regarding the first option, how can I see/export/analyze the data of the IMU?

marc 09 June, 2020, 11:10:11

@user-aed714 You can find the documentation of our recording format here: https://docs.pupil-labs.com/developer/invisible/#recording-format This also describes the format of the IMU data that should already be present in your recordings. The IMU coordinate system is visualized here: https://docs.pupil-labs.com/developer/invisible/#imu-coordinate-system

user-aed714 09 June, 2020, 11:51:34

Hi Marc, thank you, I am almost there. I can see the file, but I cannot find a way to open this extimu ps1.raw file. Do you know what kind of application I need?

marc 09 June, 2020, 15:10:28

@user-aed714 Our support for the IMU data is still limited. These are binary files that can't easily be opened in 3rd-party software. As of now, you can only read the binary files in accordance with the recording format, for example in a Python script, from which you could export them to whatever format you desire.

To load the data in Python you can use the following code:

import numpy as np

DTYPE = np.dtype(
    [
        ("gyro_x", "<f4"),
        ("gyro_y", "<f4"),
        ("gyro_z", "<f4"),
        ("accel_x", "<f4"),
        ("accel_y", "<f4"),
        ("accel_z", "<f4"),
    ]
)

path_raw = "extimu ps1.raw"
raw = np.fromfile(path_raw, dtype=DTYPE).view(np.recarray)
path_ts = "extimu ps1.time"
ts = np.fromfile(path_ts, dtype="<u8")
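To turn those raw samples into a head-movement estimate (as in the volleyball question above), one simple approach is to integrate the gyroscope readings over time. This sketch uses synthetic data in place of a real recording, assumes the gyroscope reports degrees per second, and plain integration like this will drift over longer recordings:

```python
import numpy as np

# Synthetic stand-ins for the `raw` and `ts` arrays loaded above:
# 5 samples at 100 Hz, constant 10 deg/s rotation around one axis.
n = 5
ts = np.arange(n, dtype="<u8") * 10_000_000  # nanosecond timestamps
gyro_z = np.full(n, 10.0)                    # assumed unit: deg/s

# Integrate angular velocity to get cumulative rotation in degrees.
dt = np.diff(ts) * 1e-9  # seconds between consecutive samples
yaw = np.concatenate(([0.0], np.cumsum(gyro_z[1:] * dt)))
# 10 deg/s sustained over 0.04 s -> about 0.4 deg total
```

From there, exporting `yaw` (or all three axes) alongside the timestamps to CSV is a one-liner with `np.savetxt`.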

user-627fd4 10 June, 2020, 09:10:31

Thanks!... Is it Pupil Cam1 ID2?

papr 10 June, 2020, 09:10:40

Correct

user-627fd4 10 June, 2020, 09:15:09

I used these calibration parameters to determine the camera pose. Unfortunately the z axis is a bit off. My image dimension is 1280x720, hence I'm using https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L47

papr 10 June, 2020, 09:21:20

@user-627fd4 These are pre-recorded camera matrices and are not guaranteed to be accurate for all cameras. I would suggest running the camera intrinsics estimation in Capture to get more accurate camera matrices for your specific hardware.

papr 10 June, 2020, 09:23:21

If you have further software development related questions please post them in the 💻 software-dev channel.

user-627fd4 10 June, 2020, 09:28:35

Thank you!

user-5543ca 18 June, 2020, 11:47:27

Hello! I have a couple of quick questions about Invisible. How many eye cameras does it have?

user-5543ca 18 June, 2020, 11:48:37

(2) How does it compare to the newly released Tobii Glasses 3 in terms of accuracy? (we are considering both these devices for a project and wondering what you could say about them :-))

marc 18 June, 2020, 13:54:05

Hey @user-5543ca! The Pupil Invisible glasses have 2 eye cameras, one for each eye embedded into the temporal side of the frame.

We have not yet had a chance to try out the Tobii Glasses 3 ourselves, and afaik there is no performance report out yet either, so we can't really comment on how they compare. Maybe some other member of the community has already had a chance to try them out?

We will publish a detailed performance report on Pupil Invisible within the next 2-3 weeks, which could be the basis for a comparison with the Tobii Glasses 3, once similar information becomes available for them as well.

user-5543ca 18 June, 2020, 13:55:36

Thanks Marc! Looking forward to the report

marc 18 June, 2020, 13:55:53

You're welcome @user-5543ca!

user-a9ff0d 22 June, 2020, 12:44:50

@papr Hi, I've got a problem when I try to load recording data obtained from one participant into Player. Recordings from the other participants all loaded successfully except for this particular set of files, so I don't think it's related to the PC environment or software versions. The list of files in the unsuccessful set is given in the attached pic; it seems to me that no file is missing. Any help?

Thanks,

Env: Mac OS 10.15.5, Player v2.0.175 The Player says "This recording cannot be opened in Player. Please reach out to info@pupil-labs.com for support!".

Info.json file: $recording_id [1] "fa76ae20-317d-4f03-bbf8-c65c00998016"

$data_format_version [1] "1.2"

$app_version [1] "0.8.20-prod"

$start_time [1] 1.592702e+18

$duration [1] 4.6904e+11

$android_device_id [1] "305dec7f146428dd"

$android_device_name [1] "OnePlus 6"

$android_device_model [1] "OnePlus6"

$glasses_serial_number [1] "8s7um"

$scene_camera_serial_number [1] "j7brl"

$wearer_id [1] "d2db5427-0c00-4061-800d-aedd5514c8b9"

$template_data $template_data$id [1] "57516b3b-245f-431c-aef1-c2b125b31b82"

$template_data$version [1] "2019-10-23T14:18:44.584025Z"

$template_data$recording_name [1] "2020-06-21_10:17:35"

$template_data$data $template_data$data$e969d7ea-9465-45a2-a986-f781b305782d [1] ""

$gaze_offset [1] 24.28204 69.45239

$calib_version [1] 1

$pipeline_version [1] "1.3.0"

Chat image

papr 22 June, 2020, 13:39:19

@user-a9ff0d Thank you for letting us know. I have forwarded the issue to our Android development team. We will come back to you in this regard.

user-eb13bc 25 June, 2020, 09:16:32

Hi, good morning, is it possible to export recordings and send them via mail for example?

marc 25 June, 2020, 09:50:28

@user-eb13bc You can download Pupil Invisible recordings from Pupil Cloud (or from the Companion device directly) to your computer and send them around from there. A Pupil Invisible recording can be opened on any computer using the Pupil Player software. Sharing a recording directly from Pupil Cloud, similar to how you could share a file on e.g. Google Drive with any 3rd party, is not yet available.

End of June archive