Hi, Is there an api/way to get eye data from invisible in realtime to be consumed in my own app (desktop and web)?
@user-11dbde Yes, there is. We use it for our Pupil Invisible Monitor app, too. https://docs.pupil-labs.com/developer/invisible/#network-api
@user-11dbde @papr just in case you need it @user-11dbde : You will (currently) get less data than from Pupil Core (no 3D eye model or pupil diameter, for example), but you do get the gaze point in space (which is still cool). The camera angles are very challenging atm. (just because I know @user-11dbde in real life and I thought he might be aiming for that)
Correct, Pupil Invisible does not produce pupillometry data yet. Also, the Pupil Invisible network API differs from the network API of Pupil Core.
Thank you @papr and @user-bab6ad 🙂
@user-bab6ad I see that we can also get IMU data..!?
This is correct.
Thanks
Just one more thing...any SW included for eye-gaze data analysis?
@user-11dbde The recommended workflow is to upload and process your Pupil Invisible recordings in Pupil Cloud https://cloud.pupil-labs.com/
I have seen this option, however I could not find specific info about the analysis features...
Hi @user-11dbde we are very close to releasing updates to Pupil Cloud (within Q2) that will enable you to enrich data from multiple recordings to power your multi-participant/multi-recording data analysis. The first set of enrichments for Pupil Cloud will include marker-based AOI tracking (aka surface tracking in Pupil Player); "standard" gaze data classification algorithms; gaze, metadata, and enrichment data export --> csv. We have a number of exciting data enrichment features in the pipeline, that we plan on adding to Cloud on a rolling basis after the initial release.
I would also like to add that Pupil Cloud has an API, and we have a small Python client library you can use: https://github.com/pupil-labs/pupil-cloud-client-python
Pupil Cloud API is going to make data processing very smooth. Thank you guys!
@wrp Thank you
Sorry, however I have an additional question: is there any plan to have a pupillabs invisible device that could be used by a baby (maybe with a head cap)?
@user-11dbde The geometry of Pupil Invisible will not be able to accommodate babies (infants). We will consider your suggestion for future hardware iterations though 😸
@wrp Thank you. This would be very valuable for the research we are conducting at the medical university
Hey..may I know the focal length(mm) of pupil invisible?
Is it 766?
Hey @user-627fd4! Pupil Player contains a hard-coded set of default camera intrinsics for Pupil Invisible that you can find here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L145
The value noted in there is indeed 766! There is a little bit of variance between different instances of scene cameras, but this value is roughly correct.
thank you!
Hi, I would like to ask whether the eight distortion coefficients for Pupil Invisible are k1, k2, p1, p2, k3-k6 as in OpenCV: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html I used the values to undistort the image, but it did not work well, so I'm wondering if I misunderstood something.
I would like to use it to estimate gaze angle, or is it better to just use: angle = 2arctan(0.5*distance to center/focal length)
@user-599b15 This is how we undistort images in Pupil Player: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L520
This is how we unproject 2D gaze into cyclopean 3D gaze directions: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L140-L148 (where capture.intrinsics is an instance of the Camera_Model class from the first link)
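As a rough back-of-the-envelope check of the simple formula mentioned above (ignoring lens distortion and using the nominal focal length of 766 px from the Pupil Player defaults), the visual angle between the optical axis and a gaze point can be sketched for an ideal pinhole camera like this. The frame size and principal-point values here are illustrative assumptions, not calibrated intrinsics:

```python
import numpy as np

def gaze_angle_deg(gaze_px, center_px, focal_px):
    """Angle (degrees) between the optical axis and the ray through an
    (already undistorted) pixel, for an ideal pinhole camera."""
    dx = gaze_px[0] - center_px[0]
    dy = gaze_px[1] - center_px[1]
    r = np.hypot(dx, dy)  # pixel distance from the principal point
    return np.degrees(np.arctan2(r, focal_px))

# Hypothetical example: principal point assumed at the image center of a
# 1088x1080 frame, nominal focal length of 766 px.
angle = gaze_angle_deg((800.0, 540.0), (544.0, 540.0), 766.0)
```

For real data you would first undistort the pixel coordinates (e.g. with OpenCV's cv2.undistortPoints and the camera matrix/distortion coefficients linked above) before applying this pinhole relation.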
Thank you!
Hi again, I asked a few questions on the Core board ha, but here come the same questions again for Invisible. How do I improve gaze accuracy at longer distances over the horizon in a driving recording (plz see image)? Is the silicone nose-support patch on the glasses' nose bridge causing this issue?
Hey @user-8f829f! Could you clarify what you mean with "nose support silicone patch on the glasses nose bridge"? Have you put an additional silicone patch onto your device?
If you notice a constant offset in your predictions (which happens noticeably for about 10-20% of the population), you can add an "Offset Calibration" in the wearer profile.
sure, my nose is not big, so the original glasses slip. the nose patch is shown below
Ok, thanks for the clarification. This may indeed influence the predictions. Have you checked if this error persists without those nose pads? Alternatively to those pads, you could potentially use a sports strap to fixate the glasses better. See here: https://pupil-labs.com/products/invisible/accessories/
The offset is required for the horizontal view only. For in-cabin, central, and side-mirror gazes the predictions are fine, so I can't apply a universal offset. Does sun glare cause this issue? I was driving directly under sunlight for this video.
Sunlight should not influence the predictions. However, the predictions suffer a little from what is called parallax error, which in your case essentially means that targets close to you show no error, while targets far away do show an offset. Due to the limited ability of the gaze estimation pipeline to estimate depth, we currently cannot correct for this source of error.
This parallax error should lead to a mostly horizontal offset though. In the image you shared it looks mostly like a vertical error, which should, in theory, be correctable using the offset correction.
Either way, the offset correction is currently the only way to correct for errors.
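To get a feel for the magnitude of parallax error, here is a small back-of-the-envelope sketch. The ~3 cm eye-to-scene-camera offset used here is a made-up illustrative figure, not a device specification:

```python
import math

def parallax_error_deg(baseline_m, target_distance_m):
    """Angular offset (degrees) between the ray from the eye and the ray
    from a viewpoint displaced by `baseline_m`, both aimed at the same target."""
    return math.degrees(math.atan(baseline_m / target_distance_m))

near = parallax_error_deg(0.03, 0.5)   # e.g. a dashboard-distance target
far = parallax_error_deg(0.03, 20.0)   # e.g. a target far down the road
```

With these assumed numbers the near target yields an offset of a few degrees while the far target's offset is under a tenth of a degree, which is why a mapping that looks perfect in-cabin can drift for targets near the horizon.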
thanks for help @marc
Hi all, I would like to measure head movements of a volleyball player tracking a ball. Is there a way I can pinpoint a fixed location in the gym (e.g. a point on the net), and then make an export or analysis which tells me how much and in what direction the head was moving? Thanks in advance!
Hey @user-aed714! There are a couple things you could do. There is an IMU integrated into the Pupil Invisible frame, which allows you to measure head-movement. This would however measure relative movement and not immediately give you a head-pose in a reference coordinate system that is related to the net somehow.
Alternatively we have a head-pose tracking plugin available in Pupil Player, see here: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking This plugin is based on markers that need to be placed in the environment. In your case you might place markers on the net and around the rest of the field. As long as markers are visible in the scene camera view, the plugin will give you a head-pose measurement in relation to the field. If however the players look straight upwards towards a high ball and no markers are visible in the scene video, you would stop getting head-pose estimates.
I hope this gives you an idea!
Hi Marc, thanks! Since I have already done the measurements, the second option does not work for me. Regarding the first option, how can I see/export/analyze the data of the IMU?
@user-aed714 You can find the documentation of our recording format here: https://docs.pupil-labs.com/developer/invisible/#recording-format This also describes the format of the IMU data that should already be present in your recordings. The IMU coordinate system is visualized here: https://docs.pupil-labs.com/developer/invisible/#imu-coordinate-system
Hi Marc, thank you, I am almost there. I can see the file, but I cannot find a way to open this extimu ps1.raw file. Do you know what kind of application I need?
@user-aed714 Our support for the IMU data is still limited. These are binary files that can't easily be opened in third-party software. As of now, you can only read the binary files according to the recording format, for example in a Python script, from which you could export them to whatever format you desire.
To load the data in Python you can use the following code:
import numpy as np

# Layout of one IMU sample: six little-endian 32-bit floats
DTYPE = np.dtype(
    [
        ("gyro_x", "<f4"),
        ("gyro_y", "<f4"),
        ("gyro_z", "<f4"),
        ("accel_x", "<f4"),
        ("accel_y", "<f4"),
        ("accel_z", "<f4"),
    ]
)

# Read the IMU samples
path_raw = "extimu ps1.raw"
raw = np.fromfile(path_raw, dtype=DTYPE).view(np.recarray)

# Read the matching timestamps (little-endian unsigned 64-bit integers)
path_ts = "extimu ps1.time"
ts = np.fromfile(path_ts, dtype="<u8")
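To illustrate the file format without a real recording, the sketch below writes a tiny synthetic sample pair and reads it back the same way as above. It assumes (per the recording-format docs) that the .time file holds timestamps in nanoseconds; treat the unit conversion as an assumption:

```python
import os
import tempfile
import numpy as np

# Same sample layout as in the snippet above
DTYPE = np.dtype(
    [
        ("gyro_x", "<f4"), ("gyro_y", "<f4"), ("gyro_z", "<f4"),
        ("accel_x", "<f4"), ("accel_y", "<f4"), ("accel_z", "<f4"),
    ]
)

tmpdir = tempfile.mkdtemp()
path_raw = os.path.join(tmpdir, "extimu ps1.raw")
path_ts = os.path.join(tmpdir, "extimu ps1.time")

# Write two synthetic IMU samples plus matching nanosecond timestamps
samples = np.array(
    [(0.1, 0.2, 0.3, 0.0, 0.0, 9.81),
     (0.1, 0.2, 0.3, 0.0, 0.0, 9.81)],
    dtype=DTYPE,
)
samples.tofile(path_raw)
np.array([1_592_702_000_000_000_000, 1_592_702_000_005_000_000],
         dtype="<u8").tofile(path_ts)

# Read them back exactly as in the snippet above
raw = np.fromfile(path_raw, dtype=DTYPE).view(np.recarray)
ts = np.fromfile(path_ts, dtype="<u8")
ts_seconds = ts / 1e9  # nanoseconds -> seconds (assumed unit)
```

From `raw` and `ts_seconds` you can then export to CSV or any other format, e.g. with `np.savetxt` or pandas.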
@user-627fd4 https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L27-L76
Thanks!... Is it Pupil Cam1 ID2?
Correct
I used these calibration parameters to determine the camera pose. Unfortunately the z-axis is a bit off. My image dimensions are 1280x720, hence I'm using https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L47
@user-627fd4 These are pre-recorded camera matrices and are not guaranteed to be accurate for all cameras. I would suggest running the camera intrinsics estimation in Capture to get more accurate camera matrices for your specific hardware.
If you have further software development related questions please post them in the 💻 software-dev channel.
Thank you!
Hello! I have a couple of quick questions about Invisible. How many eye cameras does it have?
(2) How does it compare to the newly released Tobii Glasses 3 in terms of accuracy? (we are considering both these devices for a project and wondering what you could say about them :-))
Hey @user-5543ca! The Pupil Invisible glasses have 2 eye cameras, one for each eye embedded into the temporal side of the frame.
We have not yet had a chance to try out the Tobii Glasses 3 ourselves, and afaik there is no performance report out yet either, so we can't really comment on how they compare. Maybe some other member of the community has already had a chance to try them out?
We will publish a detailed performance report on Pupil Invisible within the next 2-3 weeks, which could be the basis for a comparison with the Tobii Glasses 3, once similar information becomes available for them as well.
Thanks Marc! Looking forward to the report
You're welcome @user-5543ca!
@papr Hi, I've got a problem when I try to load recording data obtained from one participant into Player. Recordings from all the other participants loaded successfully, except for this particular set of files, so I don't think it's related to the PC environment or software versions. The list of files in the unsuccessful set is given in the attached pic, which seems to me to show no missing file. Any help?
Thanks,
Env: Mac OS 10.15.5, Player v2.0.175 The Player says "This recording cannot be opened in Player. Please reach out to info@pupil-labs.com for support!".
Info.json file (values as printed, some numbers truncated):
{
  "recording_id": "fa76ae20-317d-4f03-bbf8-c65c00998016",
  "data_format_version": "1.2",
  "app_version": "0.8.20-prod",
  "start_time": 1.592702e+18,
  "duration": 4.6904e+11,
  "android_device_id": "305dec7f146428dd",
  "android_device_name": "OnePlus 6",
  "android_device_model": "OnePlus6",
  "glasses_serial_number": "8s7um",
  "scene_camera_serial_number": "j7brl",
  "wearer_id": "d2db5427-0c00-4061-800d-aedd5514c8b9",
  "template_data": {
    "id": "57516b3b-245f-431c-aef1-c2b125b31b82",
    "version": "2019-10-23T14:18:44.584025Z",
    "recording_name": "2020-06-21_10:17:35",
    "data": {
      "e969d7ea-9465-45a2-a986-f781b305782d": [""]
    }
  },
  "gaze_offset": [24.28204, 69.45239],
  "calib_version": 1,
  "pipeline_version": "1.3.0"
}
@user-a9ff0d Thank you for letting us know. I have forwarded the issue to our Android development team. We will come back to you in this regard.
Hi, good morning. Is it possible to export recordings and send them via mail, for example?
@user-eb13bc You can download Pupil Invisible recordings from Pupil Cloud (or from the Companion device directly) to your computer and send them around from there. A Pupil Invisible recording can be opened on any computer using the Pupil Player software. Sharing a recording directly from Pupil Cloud, similar to how you could share a file on e.g. Google Drive with a third party, is not yet available.