πŸ’» software-dev


user-78c370 10 November, 2022, 16:11:28

Hello. I'm currently on a Mac trying to run Pupil from source, and I get the error "Could not build wheels for av". Here is the log file. May I ask what is causing this issue? Thanks!

Error.txt

user-e3f20f 10 November, 2022, 16:18:14

Please use the develop branch. It is also recommended to build a virtual environment with an Intel (x86_64) Python version.

user-e3f20f 10 November, 2022, 16:18:31

Then there are wheels for all dependencies

user-78c370 10 November, 2022, 16:19:00

Thanks for the pointer! Currently using Conda for the virtual environment.

user-e3f20f 11 November, 2022, 08:13:40

Check out this article on how to set up x86 environments with conda on macOS: https://towardsdatascience.com/python-conda-environments-for-both-arm64-and-x86-64-on-m1-apple-silicon-147b943ffa55
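For reference, the approach in that article boils down to forcing conda's package subdirectory to the Intel architecture. A minimal sketch, where the environment name and Python version are placeholders to adjust:

```shell
# Create an x86_64 (Rosetta) conda environment on Apple Silicon.
# "pupil-x86" and the Python version are placeholders -- adjust as needed.
CONDA_SUBDIR=osx-64 conda create -n pupil-x86 python=3.11
conda activate pupil-x86
# Pin the subdir so future installs in this env also fetch x86_64 packages.
conda config --env --set subdir osx-64
```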

user-78c370 11 November, 2022, 08:14:08

Thank you so much!!

user-78c370 11 November, 2022, 08:14:36

I made it work yesterday! The frame rate seems locked at 30fps

user-e3f20f 11 November, 2022, 08:14:51

For which camera?

user-78c370 11 November, 2022, 08:15:03

HD-6000

user-e3f20f 11 November, 2022, 08:15:29

Yes, that is an older model with a maximum framerate of 30 fps

user-78c370 11 November, 2022, 08:15:49

Haven’t removed the case yet. Just an initial test.

user-e3f20f 11 November, 2022, 08:16:37

Just out of interest, are you building the DIY version?

user-78c370 11 November, 2022, 08:16:07

Oki

user-78c370 11 November, 2022, 08:18:41

Yeah, still looking for another HD-6000 haha. Btw, I also saw some OV9281 camera modules online. Maybe they're perfect for this use case too.

user-e91538 20 November, 2022, 12:32:37

Hello, I'm using Pupil Invisible to detect blinks, and I would like to export the frames of the "world" video when the eyes are closed. So I installed the Blink Detector (https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/blink_detection.py) in Pupil Player, but it seems that the plugin doesn't work with the blinks.csv file. I've also found the Post hoc Blink Detection plugin (https://gist.github.com/papr/40ba7d99f572bc0fe388a81aa2f87424), but I don't know if it allows me to export the images of the world video during blinks.

user-e3f20f 21 November, 2022, 08:44:32

Hi! Pupil Player does not support blink detection for Pupil Invisible recordings. It is based on pupil confidence, which cannot be calculated for this type of recording. You can download blink data from Pupil Cloud, though!
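Once the blink intervals are available (e.g. downloaded from Pupil Cloud), selecting the world-video frames that fall inside a blink is a simple interval test. A minimal sketch; the exact column names and timestamp units of the Cloud export are not assumed here, only start/end pairs per blink:

```python
def frames_during_blinks(frame_timestamps, blink_intervals):
    """Return indices of frames whose timestamp falls inside any blink.

    frame_timestamps: sorted sequence of timestamps, one per world-video frame.
    blink_intervals:  sequence of (start_ts, end_ts) pairs, one per blink.
    """
    indices = []
    for i, ts in enumerate(frame_timestamps):
        if any(start <= ts <= end for start, end in blink_intervals):
            indices.append(i)
    return indices

# The selected indices could then be used to grab frames, e.g. with OpenCV:
# cap = cv2.VideoCapture("world.mp4")
# cap.set(cv2.CAP_PROP_POS_FRAMES, i); ok, frame = cap.read()
```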

user-e91538 20 November, 2022, 12:35:17

Nevertheless, even if the Post hoc Blink Detection would allow me to do this, I'm having trouble making it work correctly. I get these error messages:

`player - [WARNING] plugin: Failed to load 'batch_exporter'. Reason: 'No module named 'exporter''`
`player - [WARNING] plugin: Failed to load 'extract_blinks'. Reason: 'No module named 'pandas''`

But I've already installed exporter and pandas on my computer...

user-e91538 20 November, 2022, 12:39:22

Does anyone have an idea how to solve these problems? I'm not a developer and I don't know Python, so that's part of the problem. But if any of you has a teacher's soul and the patience to help me, that would be nice.

user-b14f98 21 November, 2022, 20:22:33

Hey @user-e3f20f , I believe that pye3d uses optimization (ceres solver) to fit the 3D eye model. Can you point me to where this happens in the pye3d package?

user-b14f98 21 November, 2022, 20:22:48

...and does the optimization minimize reprojection error?

user-b14f98 21 November, 2022, 20:23:57

For some context, we've now played enough with the entire pipeline to develop the strong opinion that this is the step that's contributing the majority of error. Not the binocular model fit.

user-b14f98 21 November, 2022, 20:24:08

Not pupil detection.

user-e3f20f 21 November, 2022, 22:17:05

By that you are referring to the 2d detection?

user-b14f98 21 November, 2022, 20:24:28

(although that may be a related issue, our models aren't improving things)

user-e3f20f 21 November, 2022, 22:15:55

That was only the case for the old 3d detector. Pye3d performs a closed-form optimization.

user-e3f20f 21 November, 2022, 22:16:37

By binocular model fit you are referring to the gaze calibration?

user-b14f98 21 November, 2022, 22:19:57

Yes, binocular model fit = gaze calibration.
Pupil detection = yes, 2D pupil detection, not 3D pupil estimation.

user-e3f20f 21 November, 2022, 22:28:37

That's the closed-form optimization.

user-b14f98 21 November, 2022, 22:28:48

Thank you, Pablo!

user-b14f98 21 November, 2022, 22:28:54

One more idea to drop ...

user-b14f98 21 November, 2022, 22:30:22

We've found that the precision of the final gaze estimate is relatively poor when the model is not frozen. The high-frequency sample-to-sample updating seems to be the issue. We could perhaps play with the parameters and calm it, but my first inclination is that the updates should not be applied directly to the 3D eye pose center.

user-e3f20f 21 November, 2022, 22:33:55

The short term model only affects the gaze direction, not the sphere center

user-b14f98 21 November, 2022, 22:30:44

Instead, I would consider applying them to an attractor towards which the eyeball center accelerates.

user-b14f98 21 November, 2022, 22:32:09

perhaps in a PD-controller-like fashion. This is just one of many possible ways to low-pass filter those updates.
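A minimal sketch of that attractor idea, assuming a stream of per-sample 3D sphere-center estimates; the gains `kp` and `kd` are made-up tuning values, not anything from pye3d:

```python
import numpy as np

class AttractorFilter:
    """Low-pass a stream of 3D point estimates by treating each new
    estimate as an attractor that the filtered point accelerates toward,
    in a PD-controller-like fashion."""

    def __init__(self, kp=0.1, kd=0.4):
        self.kp = kp            # proportional gain: pull toward the target
        self.kd = kd            # derivative gain: damp the velocity
        self.pos = None         # current filtered position
        self.vel = np.zeros(3)  # current velocity of the filtered point

    def update(self, target):
        target = np.asarray(target, dtype=float)
        if self.pos is None:          # initialize on the first sample
            self.pos = target.copy()
            return self.pos
        # Accelerate toward the new estimate, damping existing motion.
        accel = self.kp * (target - self.pos) - self.kd * self.vel
        self.vel += accel
        self.pos += self.vel
        return self.pos
```

With gains like these the filtered center converges to a steady target over a few dozen samples while suppressing high-frequency sample-to-sample jitter, which is the low-pass behavior described above.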

user-b14f98 21 November, 2022, 22:34:34

Ah, that's interesting. I'll have to look into that a bit more closely.

user-50567f 29 November, 2022, 00:50:53

Hello, is it possible to plot x,y coordinate on the screen from norm_pos_x and norm_pos_y?

user-e3f20f 29 November, 2022, 07:27:54

No, not directly, since norm_pos_x/y are in scene camera coordinates by default, not in screen coordinates. What you need is https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
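Once surface tracking is set up, the surface export gives gaze in surface-normalized coordinates with, in Pupil's convention, the origin at the bottom-left. Mapping those to image pixels (origin top-left) is then a scale plus a vertical flip. A small sketch; the resolution values in the test are placeholders:

```python
def surface_norm_to_pixels(x_norm, y_norm, width, height):
    """Map surface-normalized gaze (origin bottom-left, as in Pupil's
    surface export) to image pixel coordinates (origin top-left)."""
    x_px = x_norm * width
    y_px = (1.0 - y_norm) * height  # flip the vertical axis
    return x_px, y_px
```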

user-f8c051 29 November, 2022, 09:20:53

hey @user-e3f20f thanks for your quick response on GitHub. I just installed CMake version 3.25 (via snap) and am still unable to build pyuvc 😦

user-f8c051 29 November, 2022, 09:22:39

I think the current error is:

```
CMake Error at CMakeLists.txt:103 (target_link_libraries):
  Target "uvc_bindings" links to:

    PLLibUVC::uvc

  but the target was not found.  Possible reasons include:
```

user-e3f20f 29 November, 2022, 09:22:54

Let's move this discussion into #1039477832440631366

user-50567f 29 November, 2022, 20:01:54

Thanks!

So out of 'norm_pos_x', 'norm_pos_y', 'gaze_point_3d_x', 'gaze_point_3d_y', 'gaze_point_3d_z', 'eye_center0_3d_x', 'eye_center0_3d_y', 'eye_center0_3d_z', 'eye_center1_3d_x', 'eye_center1_3d_y', 'eye_center1_3d_z', 'gaze_normal0_x', 'gaze_normal0_y', 'gaze_normal0_z', 'gaze_normal1_x', 'gaze_normal1_y', 'gaze_normal1_z', 'diameter0_2d', 'diameter1_2d', 'diameter0_3d', 'diameter1_3d', can I use anything to plot x,y coordinates on the screen? Any help would be appreciated.

user-e3f20f 29 November, 2022, 20:59:10

None of these contain the information that you are looking for

user-e3f20f 29 November, 2022, 20:56:51

By screen, do you mean computer screen coordinates that the subject was looking at?

user-50567f 29 November, 2022, 20:57:53

Well not really! I have screenshots and I wanted to overlay dots showing where the subject was looking at!

user-e3f20f 29 November, 2022, 20:58:49

OK, you will need to set up surface tracking before you can do that.

user-50567f 29 November, 2022, 21:04:36

I have the POV view as well from the main camera of Pupil Capture. Can I use that to overlay dots on the screenshot showing where the subject was looking?

user-e3f20f 29 November, 2022, 21:11:51

The issue is that pov/main camera are not static in relation to the screenshot. But gaze is estimated in the former, not the latter. You need an additional mapping step. And this is what surface tracking does for you.

user-e3f20f 29 November, 2022, 21:09:54

What is the difference between POV and main camera for you?

user-50567f 29 November, 2022, 21:10:51

By POV I mean this!

Chat image

user-50567f 29 November, 2022, 21:12:30

I see. So technically speaking there is no workaround for what I want to do?

user-e3f20f 29 November, 2022, 21:12:52

There is, but you need to use the surface tracking.

user-50567f 29 November, 2022, 23:27:28

So I just noticed I have files called surfaces.pldata, pupil.pldata, gaze.pldata, do any of them contain information that I can use for plotting Screenshot coordinates?

user-50567f 29 November, 2022, 21:13:47

I see. I mean, I have a whole bunch of data without surface tracking, so I don't want that data to go to waste.

user-e3f20f 29 November, 2022, 21:15:46

In this case, if I understood your current situation correctly, there is no easy way to map the gaze from POV into screenshot coordinates, sorry. πŸ˜•

user-50567f 29 November, 2022, 21:16:38

Gotcha. Thanks a lot!

user-e3f20f 30 November, 2022, 07:01:24

If one of them does, it would be the surfaces.pldata file. And that sounds as if your recordings do include surface tracking data. Have you reviewed the recording in Player to check?

user-50567f 30 November, 2022, 19:16:00

I am streaming Pupil capture through LSL and these are columns found in the XDF file: 'norm_pos_x', 'norm_pos_y', 'gaze_point_3d_x', 'gaze_point_3d_y', 'gaze_point_3d_z', 'eye_center0_3d_x', 'eye_center0_3d_y', 'eye_center0_3d_z', 'eye_center1_3d_x', 'eye_center1_3d_y', 'eye_center1_3d_z', 'gaze_normal0_x', 'gaze_normal0_y', 'gaze_normal0_z', 'gaze_normal1_x', 'gaze_normal1_y', 'gaze_normal1_z', 'diameter0_2d', 'diameter1_2d', 'diameter0_3d', 'diameter1_3d'

The folder that pupil creates locally are: blinks.pldata eye0.mp4 eye1.mp4 fixations.pldata gaze.pldata notify.pldata pupil.pldata surfaces.pldata world.mp4
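As a sketch of getting at the LSL data: pyxdf's `load_xdf` returns a list of stream dicts, and picking out the gaze stream by name is a small helper. The stream name "pupil_capture" here is an assumption about the LSL relay setup, not a guaranteed value:

```python
def find_stream(streams, name):
    """Pick the stream whose info/name matches `name` from pyxdf output.

    Each stream dict (as returned by pyxdf.load_xdf) holds its metadata in
    stream["info"], its samples in stream["time_series"], and the matching
    timestamps in stream["time_stamps"].
    """
    for stream in streams:
        if stream["info"]["name"][0] == name:
            return stream["time_stamps"], stream["time_series"]
    raise KeyError(f"No stream named {name!r}")

# Typical use with pyxdf (pip install pyxdf):
# import pyxdf
# streams, _header = pyxdf.load_xdf("recording.xdf")
# timestamps, samples = find_stream(streams, "pupil_capture")
# Columns of `samples` then follow the channel order listed above
# (norm_pos_x, norm_pos_y, gaze_point_3d_x, ...).
```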

End of November archive