πŸ’» software-dev


user-1b6057 02 May, 2022, 00:22:03

Does Pupil Service send gaze data? I can only seem to subscribe to 'pupil.' from it. Trying to subscribe to 'gaze.3d' doesn't seem to do anything
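For context, this is roughly how I'm subscribing (a minimal pyzmq sketch; 50020 is the default Pupil Remote port):

```python
import zmq

# Ask Pupil Remote for the subscription port, then subscribe to a topic.
ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("gaze.")  # 'pupil.' yields data; 'gaze.' stays silent for me
while True:
    topic = sub.recv_string()
    payload = sub.recv()  # msgpack-encoded datum
    print(topic)
```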

user-1b6057 02 May, 2022, 00:43:01

Ohh is it because it needs calibration first? Hmm

user-1b6057 02 May, 2022, 17:14:28

Ok I don't think this is quite what I'm looking for but it's always good to have more to reference. Thank you!

Speaking of other third-party projects, do you know if anyone has made any plugins or other ways of tracking eyelids through Pupil Labs?

papr 04 May, 2022, 12:49:26

None that I am aware of πŸ™‚ You can find a list of community projects here https://github.com/pupil-labs/pupil-community

user-5882af 03 May, 2022, 22:40:31

Hello, I am trying to better understand the timestamp data from Pupil Invisible/Cloud. I understand that it is nanoseconds since the start of the UTC epoch. However, when I convert this data to a more understandable date and time for reference, I end up, regardless of the calendar standard used, with what I assume is a rounding error of about 8 to 11 minutes. I know I can work around this by calculating from a known starting timestamp and a date-time reference value, but I was wondering if this has come up before, or if you have a preferred conversion method. I have some other equipment that records in date & time that I was hoping to sync up with the Pupil Invisible data.

papr 04 May, 2022, 12:51:59

The easiest way to convert the recorded timestamps to datetimes is pandas' to_datetime() function: https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html It takes nanoseconds as input, so no further conversion is necessary.
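For example (a minimal sketch; the timestamp values here are made up):

```python
import pandas as pd

# Nanosecond UTC timestamps, e.g. from a Pupil Cloud export.
timestamps_ns = [1651536123000000000, 1651536124500000000]

# pandas interprets the integers as nanoseconds since the Unix epoch.
datetimes = pd.to_datetime(timestamps_ns, unit="ns", utc=True)
print(datetimes)
```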

The simplest explanation for why your data is out of sync by that much is that the clock on the phone was not synced at the time of the recording. If you want, you can share your conversion code here for me to review.

user-5882af 04 May, 2022, 15:38:35

Thank you @papr, pandas' to_datetime() and Timestamp() worked perfectly, and with them I was able to confirm that the phone was synced correctly.

user-2e522e 04 May, 2022, 23:03:41

Hi! I'm wondering: in the pye3d Python package, what unit (cm, mm, etc.) is the focal length supposed to be in?

papr 05 May, 2022, 09:14:46

pye3d expects the focal length in pixels.

user-2e522e 04 May, 2022, 23:12:43

At the moment we are having issues with pye3d giving an incorrect pupil center location:

papr 05 May, 2022, 09:14:33

Could you specify which of the pye3d output fields you are drawing?

user-2e522e 05 May, 2022, 13:35:12

So the specification sheet of the camera we are using says the focal length is 1.8 mm. How would we go about translating this to pixels?

papr 05 May, 2022, 13:40:19

Technically, you can convert it using the pixel size (pixel pitch) of your sensor. Practically, it might be easier to run your own camera calibration using OpenCV.
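For illustration, a minimal sketch of the conversion, assuming a hypothetical 3 Β΅m pixel pitch (take the real value from your sensor's datasheet):

```python
# 1.8 mm focal length from the spec sheet; the 3.0 um pixel pitch is assumed.
focal_length_mm = 1.8
pixel_pitch_mm = 0.003

focal_length_px = focal_length_mm / pixel_pitch_mm  # = 600.0
print(f"focal length ~ {focal_length_px:.0f} px")
```

If you go the calibration route instead, the camera matrix returned by OpenCV's calibrateCamera() contains the focal length in pixels at indices [0, 0] and [1, 1].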

user-2e522e 05 May, 2022, 16:36:02

Ok thank you!

user-5fbe70 07 May, 2022, 07:14:33

@papr Hi, when I calibrate in Ubuntu, the red dot in the center of the calibration marker will not turn into a green dot and the calibration is unsuccessful; there is no such problem when calibrating under Windows. Why does the calibration fail under Ubuntu? Thank you for your help!

Chat image

papr 07 May, 2022, 07:46:52

Hi, that usually means that the marker is not visible in the scene video. Please make sure that the scene camera is adjusted correctly.

user-5fbe70 07 May, 2022, 08:17:36

I keep the same posture, but calibration under Ubuntu is unsuccessful.

papr 07 May, 2022, 08:32:31

Can you please share a Pupil Capture recording of you attempting the calibration with [email removed]? Then we will be able to give more concrete feedback.

user-5fbe70 07 May, 2022, 09:26:48

Hi, I found errors by looking at the log. How should I solve them? Thanks for your help!

Chat image

papr 07 May, 2022, 10:05:56

These warnings can happen if frames are not fully transferred, e.g. due to a loose connection. This warning is only an issue if it happens very frequently.

user-5fbe70 07 May, 2022, 13:59:52

Thanks! I tried to reconnect, but there is a new error.

Chat image

papr 07 May, 2022, 14:22:20

This is just a debug message. You can ignore it.

user-5fbe70 07 May, 2022, 14:25:34

OK, thanks!

user-2e522e 08 May, 2022, 04:23:48

Hi! I was wondering if there was a way I could get the distance between the projected sphere and the pupil ellipse as a float from pye3d?

papr 09 May, 2022, 08:34:21

Both fields are exposed as ellipses (center, angle, minor + major axis length). You should be able to calculate this distance based on that.
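For example, a minimal sketch of that calculation (assuming the detector's result dict, where both "ellipse" and "projected_sphere" carry a pixel-space "center"):

```python
import math

# Sketch: distance between the projected eyeball outline and the pupil
# ellipse, taken here as the distance between their centers (in pixels).
def center_distance(result: dict) -> float:
    ex, ey = result["ellipse"]["center"]
    sx, sy = result["projected_sphere"]["center"]
    return math.hypot(ex - sx, ey - sy)
```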

user-2e522e 09 May, 2022, 15:57:33

I’m sorry to bother you guys so much but I was wondering if pupil labs has an eyelid tracking implementation?

papr 10 May, 2022, 07:47:27

No, we don't have that. But if you find an open source implementation I might be able to help you build a plugin for Pupil Capture

user-2530a3 12 May, 2022, 22:50:10

Getting this error in Pupil Player 3.5.1 when trying to activate the blink detection plugin (which had previously seemed to work well); it crashes the program, obviously.

Deleting my user settings allows me to load recordings and activate other plugins without issue.

Any direction on this would be great, as blink rate is one of our core metrics for our project.

Chat image

papr 13 May, 2022, 07:10:21

Hi, please use this plugin as a replacement https://gist.github.com/papr/c02bf229ac9a94e9fbee633cd53113db

This is how you can install it: https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin

user-2530a3 13 May, 2022, 13:18:32

Working correctly with the fixed plugin, thank you!

user-328d3b 13 May, 2022, 18:43:02

Hi, I need to know if I can use the Pupil Labs open source software with Gaze Point GP3 hardware? Thank you so much!

papr 16 May, 2022, 09:00:32

Please see my response here https://discord.com/channels/285728493612957698/633564003846717444/975652666610974770

user-b91cdf 18 May, 2022, 05:49:18

Good morning folks, I have a question regarding my coding flow/workflow in my project. I have subjects driving with the Pupil Labs Core during day and night. Additionally, I have a static scene camera (Raspberry Pi HQ). I now want to match the view/gaze of the Pupil Labs Core to the static camera.

For this I calculate the following for each frame pair :

  1. undistort Pupil Frame
  2. undistort Raspi Frame
  3. interest point matching
  4. find Homography
  5. perspective transform of (norm_pos_x,norm_pos_y,1) to the Raspberrypi camera
  6. store data

Is the sequence correct? Would you leave out the undistortion, because norm_pos_x/y refers to the distorted frame? What do I have to pay attention to?

Kind regards, Cobe

papr 18 May, 2022, 07:16:52

Good morning πŸ™‚

I think the sequence makes sense. You should undistort norm_pos_x/y before applying the homography. See this tutorial on point undistortion https://github.com/pupil-labs/pupil-tutorials/blob/b3d8e404bbce84baf1f84d5576e635344f43cb20/11_undistortion_and_unprojection.ipynb

Specifically, the undistort_points_2d(points_2d, camera_matrix, dist_coefs) function (requires pixel location input).

To denormalize from norm to pixel values use:

pixel_x = norm_pos_x * width
pixel_y = (1.0 - norm_pos_y) * height
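Putting it together, a sketch of the denormalize β†’ undistort β†’ homography chain with OpenCV (names are illustrative; cv2.undistortPoints is the OpenCV counterpart of the tutorial's undistort_points_2d):

```python
import cv2
import numpy as np

# Assumes scene-camera intrinsics (camera_matrix, dist_coefs) and a
# homography H already estimated from matched interest points.
def map_gaze_to_static(norm_pos, width, height, camera_matrix, dist_coefs, H):
    x, y = norm_pos
    # Denormalize; norm_pos has its origin in the bottom-left corner.
    pixel = np.array([[[x * width, (1.0 - y) * height]]], dtype=np.float64)
    # Undistort, re-projecting into pixel space via P=camera_matrix.
    undistorted = cv2.undistortPoints(pixel, camera_matrix, dist_coefs, P=camera_matrix)
    # Map into the static camera's image plane.
    return tuple(cv2.perspectiveTransform(undistorted, H).reshape(2))
```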

user-b91cdf 18 May, 2022, 09:12:41

Thank you !

user-b91cdf 20 May, 2022, 09:46:11

Here's my current result for a single frame during a night-time drive. The left image is the undistorted Pupil frame with the undistorted gaze point in blue. The middle image is the warped perspective after the homography calculation, and the right image shows the mapped gaze point on the static Raspberry Pi camera.

I took the annotated world video to compare. I think the result looks quite good. Thanks again!

Chat image

papr 23 May, 2022, 13:51:52

Nice!

user-3cff0d 25 May, 2022, 20:47:16

@papr Hi papr! So I've been working on narrowing the differences in gaze quality between the realtime gaze estimation process and the post-hoc gaze estimation process for VR headsets (specifically with the PosthocGazerHMD3D gazer class that's in the custom branch of Pupil Labs Core you made for VR headsets). If you remember, we've been having an issue where the results of the realtime gaze estimation for in-VR-headset projects that use HMD-Eyes are far more accurate than the same data processed post-hoc.

We tried adjusting the translations of the eye cameras, changing the hardcoded ref_depth value, and verifying that the intrinsics from Unity are accurate. We also ended up modifying Pupil Capture to export the reference locations it receives from HMD-Eyes and then added some code so that the post-hoc sequence uses the exact same reference point locations as the realtime sequence. None of these offered any improvements to gaze accuracy that would bring the results of post-hoc gaze estimation closer to the results of the realtime gaze estimation.

I ended up trying to replicate the realtime calibration sequence code as closely as I could for post-hoc calibration, but it didn't change anything in regards to quality.

Do you have any other ideas of differences that may exist between the realtime calibration/gaze estimation sequence and the post-hoc calibration/gaze estimation sequence? Or, ideas of what might cause the VR headset data to have this decreased post-hoc accuracy when the Pupil Core headset does not?

papr 27 May, 2022, 08:32:34

> I ended up trying to replicate the realtime calibration sequence code as closely as I could for post-hoc calibration

If you have the 3d realtime data, you should be able to run the realtime gazer post-hoc, without any/with a minimum of modifications.

user-3cff0d 25 May, 2022, 20:52:38

Here's an example of the decreased accuracy I'm talking about. The first image shows realtime gaze data clouds relative to the fixation targets the wearer was fixating on (the big blue dots, connected to the point clouds via the grey lines). The second image shows the results of post-hoc analysis of the same data recorded to the eye0, eye1, and world.mp4 files. The shapes of the gaze clouds are about the same, indicating minimal difference in precision from realtime to post-hoc, but the gaze clouds are much further away from their fixation targets, showing the decrease in accuracy.

Chat image Chat image

user-b91cdf 27 May, 2022, 12:17:36

Hi, is it possible to get viewing distances from 3d gaze? Thanks, Cobe

papr 27 May, 2022, 12:36:22

Technically, yes, but the depth estimate is very noisy/inaccurate.

user-b91cdf 27 May, 2022, 14:47:43

How would you approach this? Gather gaze on targets at specific distances, then calculate a depth calibration?

nmt 30 May, 2022, 07:09:39

I suggest having a look at the Headpose Tracker Plugin, if your testing allows for the use of AprilTag markers: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking

papr 30 May, 2022, 07:20:44

@user-b91cdf ☝️ That approach can be very reliable. In my comments above, I was referring to the depth estimate of the gaze pipeline (which uses vergence to estimate depth). You can access this data via the z-component of the gaze_point_3d field.
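For illustration, a minimal sketch (assuming a gaze datum from the 3d pipeline; gaze_point_3d is given in scene-camera coordinates, in millimeters):

```python
# Sketch: read the vergence-based depth estimate from a 3d gaze datum.
def viewing_distance_mm(gaze_datum: dict) -> float:
    # gaze_point_3d is (x, y, z) in scene-camera coordinates, in mm;
    # the z-component is the (noisy) viewing-distance estimate.
    return gaze_datum["gaze_point_3d"][2]
```

Keep the noise caveat above in mind; averaging over a fixation can reduce it somewhat.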

End of May archive