Does Pupil Service send gaze data? I can only seem to subscribe to 'pupil.' from it. Trying to subscribe to 'gaze.3d' doesn't seem to do anything
Ohh is it because it needs calibration first? Hmm
Ok I don't think this is quite what I'm looking for but it's always good to have more to reference. Thank you!
Speaking of other third party projects, do you know if anyone has made any plugins or other ways of tracking eyelids through pupil labs?
None that I am aware of. You can find a list of community projects here https://github.com/pupil-labs/pupil-community
Hello, I am trying to better understand the timestamp data from Pupil Invisible/Cloud. I understand that it is nanosecond data counted from the start of the UTC epoch. However, when I convert this data to a more readable date and time for reference, I end up, regardless of the calendar standard used, with what I assume is a rounding error of about 8 to 11 minutes. I know I can fix this by calculating with both a known starting timestamp and a date-time reference value, but I was wondering if this has come up before or if you have a preferred conversion method, as I have some other equipment that records in date & time that I was hoping to sync up with the Pupil Invisible data.
The easiest way to convert the recorded timestamps to datetime is the pandas to_datetime() function: https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html It takes nanoseconds as input, so no conversion is necessary.
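For example, a minimal sketch (the timestamp value here is just a made-up example in nanoseconds since the Unix epoch):

import pandas as pd

ts_ns = 1650000000000000000  # example nanosecond timestamp from a recording
dt = pd.to_datetime(ts_ns, unit="ns", utc=True)
print(dt)  # 2022-04-15 05:20:00+00:00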
The simplest answer to why your data is out of sync by that much is that the clock on the phone was not synced at the time of the recording. If you want you can share your conversion code here for me to review.
Thank you @papr the pandas to_datetime() and Timestamp() worked perfectly and I was able to confirm that the phone was synced correctly with it.
Hi! I'm wondering, in the pye3d Python package, what unit (cm, mm, etc.) is the focal length supposed to be in?
pye3d expects the focal length in pixels.
At the moment we are having issues with pye3d giving an incorrect pupil center location:
Could you specify which of the pye3d output fields you are drawing?
So on the camera specifications sheet that we are using it says the Focal length would be 1.8mm, how would we go about translating this to pixels?
Technically, you can convert that by looking at the pixel density of your sensor. Practically, it might be easier to run your own camera calibration using OpenCV.
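If you do want to compute it from the datasheet, a rough sketch (all numbers below are placeholders, take the real ones from your sensor's specifications):

focal_length_mm = 1.8
sensor_width_mm = 3.6    # hypothetical active sensor width in mm
image_width_px = 1280    # hypothetical horizontal resolution the camera runs at
pixel_size_mm = sensor_width_mm / image_width_px
focal_length_px = focal_length_mm / pixel_size_mm  # 640 px with these numbers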
Ok thank you!
@papr Hi, when I calibrate in Ubuntu, the red dot in the center of the calibration marker will not turn into a green dot, and calibration is unsuccessful; there is no such problem when calibrating under Windows. Why does the calibration fail under Ubuntu? Thank you for your help!
Hi, that usually means that the marker is not visible in the scene video. Please make sure that the scene camera is adjusted correctly.
I keep the same posture, but calibration under Ubuntu is unsuccessful.
Can you please share a Pupil Capture recording of you attempting the calibration with [email removed]? Then we will be able to give more concrete feedback.
Hi, I found errors by looking at the log. How should I solve them? Thanks for your help!
These warnings can happen if frames are not fully transferred, e.g. due to a loose connection. The warning is only an issue if it happens very frequently.
Thanks! I tried to reconnect, but there is a new error.
This is just a debug message. You can ignore it.
OK, thanks!
Hi! I was wondering if there was a way I could get the distance between the projected sphere and the pupil ellipse as a float from pye3d?
Both fields are exposed as ellipses (center, angle, minor + major axis length). You should be able to calculate this distance based on that.
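For example, something along these lines should work (assuming result is the dict returned by pye3d's Detector3D and that the fields are named projected_sphere and ellipse, with both centers in eye-image pixels):

import numpy as np

sphere_center = np.asarray(result["projected_sphere"]["center"], dtype=float)
pupil_center = np.asarray(result["ellipse"]["center"], dtype=float)
distance_px = float(np.linalg.norm(sphere_center - pupil_center))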
I'm sorry to bother you guys so much, but I was wondering if Pupil Labs has an eyelid tracking implementation?
No, we don't have that. But if you find an open source implementation I might be able to help you build a plugin for Pupil Capture
Getting this error in Pupil Player 3.5.1 when trying to activate the blink detection plugin (which had previously seemed to work well), which obviously crashes the program.
Deleted my user settings which allows me to load recordings and activate other plugins without issue
Any direction on this would be great as blink rate is one of our core metrics for our project
Hi, please use this plugin as a replacement https://gist.github.com/papr/c02bf229ac9a94e9fbee633cd53113db
This is how you can install it: https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin
Working correctly with the fixed plugin, thank you!
Hi, I need to know if I can use the Pupil Labs open source software with Gaze Point GP3 hardware. Thank you so much!
Please see my response here https://discord.com/channels/285728493612957698/633564003846717444/975652666610974770
Good morning folks, I have a question regarding my coding flow/workflow in my project. I have subjects driving with the Pupil Labs Core during day and night. Additionally, I have a static scene camera (Raspberry Pi HQ). I now want to match the view/gaze of the Pupil Labs Core to the static camera.
For this, I calculate the following for each frame pair:
Is the sequence correct? Would you leave out the undistortion, because norm_pos_x/y refers to the distorted frame? What do I have to pay attention to?
Kind regards, Cobe
Good morning!
I think the sequence makes sense. You should undistort norm_pos_x/y before applying the homography. See this tutorial on point undistortion https://github.com/pupil-labs/pupil-tutorials/blob/b3d8e404bbce84baf1f84d5576e635344f43cb20/11_undistortion_and_unprojection.ipynb
Specifically, the undistort_points_2d(points_2d, camera_matrix, dist_coefs) function (it requires pixel location input).
To denormalize from norm to pixel values use:
pixel_x = norm_pos_x * width
pixel_y = (1.0 - norm_pos_y) * height
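Putting denormalization, undistortion, and the homography together, a rough sketch with OpenCV (variable names are placeholders; camera_matrix and dist_coefs are your scene camera intrinsics, homography is the per-frame matrix you estimate):

import numpy as np
import cv2

def map_gaze_to_static_camera(norm_pos_x, norm_pos_y, width, height,
                              camera_matrix, dist_coefs, homography):
    # 1. denormalize (image origin is top-left, so the y axis is flipped)
    pixel = np.array([[[norm_pos_x * width, (1.0 - norm_pos_y) * height]]],
                     dtype=np.float32)
    # 2. undistort in pixel space (P=camera_matrix keeps pixel coordinates)
    undistorted = cv2.undistortPoints(pixel, camera_matrix, dist_coefs,
                                      P=camera_matrix)
    # 3. map the point into the static camera image
    mapped = cv2.perspectiveTransform(undistorted, homography)
    return mapped.reshape(2)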
Thank you!
Here's my current result for a single frame during a night time drive. The left image is the undistorted pupil frame with the undistorted gaze point in blue. The middle image is the warped perspective after homography calculation and the right image shows the mapped gaze point on the static raspberry pi camera.
I took the annotated world video to compare. I think the result looks quite good. Thanks again!
Nice!
@papr Hi papr! So I've been working on narrowing the differences in gaze quality between the realtime gaze estimation process and the post-hoc gaze estimation process for VR headsets (specifically with the PosthocGazerHMD3D gazer class that's in the custom branch of Pupil Labs Core you made for VR headsets). If you remember, we've been having an issue where the results of the realtime gaze estimation for in-VR-headset projects that use HMD-Eyes are far more accurate than the same data processed post-hoc.
We tried adjusting the translations of the eye cameras, changing the hardcoded ref_depth value, and verifying that the intrinsics from Unity are accurate. We also ended up modifying Pupil Capture to export the reference locations it receives from HMD-Eyes and then added some code so that the post-hoc sequence uses the exact same reference point locations as the realtime sequence. None of these offered any improvements to gaze accuracy that would bring the results of post-hoc gaze estimation closer to the results of the realtime gaze estimation.
I ended up trying to replicate the realtime calibration sequence code as closely as I could for post-hoc calibration, but it didn't change anything in regards to quality.
Do you have any other ideas of differences that may exist between the realtime calibration/gaze estimation sequence and the post-hoc calibration/gaze estimation sequence? Or, ideas of what might cause the VR headset data to have this decreased post-hoc accuracy when the Pupil Core headset does not?
> I ended up trying to replicate the realtime calibration sequence code as closely as I could for post-hoc calibration
If you have the 3d realtime data, you should be able to run the realtime gazer post-hoc, without any or with a minimum of modifications.
Here's an example of the decreased accuracy I'm talking about, where the first image shows realtime gaze data clouds relative to the fixation targets the wearer was fixating on (the big blue dots connected to the point clouds via the grey line), and the second image shows the results of post-hoc analysis of the same data recorded to the eye0, eye1, and world.mp4 files. The shapes of the gaze clouds are about the same, indicating minimal difference in precision from realtime to post-hoc, but the gaze clouds are much further away from their fixation targets, showing that decrease in accuracy.
Hi, is it possible to get viewing distances from the 3d gaze data? Thanks, Cobe
Technically, yes, but the depth estimate is very noisy/inaccurate.
How would you approach this? Gather gaze on targets at specific distances -> calculate a depth calibration?
I suggest having a look at the Headpose Tracker Plugin, if your testing allows for the use of AprilTag markers: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
@user-b91cdf That approach can be very reliable. In my comments above, I was referring to the depth estimate of the gaze pipeline (which uses vergence to estimate depth). You can access this data via the z-component of the gaze_point_3d field.
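For example, if you are working with a Pupil Player export, a quick sketch (assuming the standard gaze_positions.csv export with gaze_point_3d_* columns; values are in mm in scene camera coordinates, and the estimate is noisy):

import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # placeholder path
depth_mm = gaze["gaze_point_3d_z"]  # vergence-based viewing distance estimate
print(depth_mm.describe())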