💻 software-dev


papr 01 December, 2021, 11:01:35

You can receive the image data via the network API https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py and pass it to YOLO.
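
For reference, a minimal sketch of that approach (adapted from the linked helper script; it assumes Capture runs locally with the Frame Publisher plugin active in BGR mode, and that the payload fields match the helper):

```python
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()

# ask Pupil Remote (default port 50020) for the SUB port of the IPC backbone
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# subscribe to world camera frames
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

while True:
    topic, payload, *extra = sub.recv_multipart()
    meta = msgpack.unpackb(payload, raw=False)
    # with BGR format the raw pixels arrive as an extra message frame
    img = np.frombuffer(extra[0], dtype=np.uint8).reshape(meta["height"], meta["width"], 3)
    # img is a regular BGR image -> hand it to your YOLO detector here
```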

user-b7a87f 02 December, 2021, 10:36:44

I'm not really able to understand where these images are being stored

user-b7a87f 02 December, 2021, 06:03:21

Thanks for the reply.

user-b91cdf 02 December, 2021, 09:17:28

Hi folks,

is there a way to get the gaze calibration/validation data and calculate the median and the nMAD, and do an outlier removal, instead of the simple average, RMS error, and hard outlier removal provided by the software?

thanks 🙂

papr 02 December, 2021, 09:31:19

The software publishes the collected pupil and reference data, and if successful, the gaze estimation model parameters. Together, you should be able to map the collected pupil data and measure the difference to the collected reference data yourself.

Subscribe to notify.calibration to receive the related notifications.

user-b91cdf 02 December, 2021, 09:42:59

Does this work via Pupil Remote then?

papr 02 December, 2021, 09:44:55

Correct. See this example https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py#L23 and change the highlighted line to use the correct subscription topic.
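
For reference, a minimal sketch of that adaptation (same structure as filter_messages.py, just with the subscription topic changed; the exact notification subjects depend on your Pupil version):

```python
import zmq
import msgpack

ctx = zmq.Context()

# Pupil Remote tells us the SUB port of the IPC backbone
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# subscribe to all calibration-related notifications
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "notify.calibration")

while True:
    topic, payload = sub.recv_multipart()
    notification = msgpack.unpackb(payload, raw=False)
    # e.g. notify.calibration.result carries the fitted gazer parameters
    print(topic.decode(), list(notification.keys()))
```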

user-b91cdf 02 December, 2021, 09:45:22

cool thank you 🙂

user-9eeaf8 02 December, 2021, 15:52:35

Hello, everyone! When loading the surface definitions in Python I see the 4 corners of each surface, but in which coordinate space are they, please? Here is a comparison of the definition in pupil_player vs a plot of the numeric values I see in Python. In the right pic, every 4 points close together are from one surface.

Chat image

papr 02 December, 2021, 16:03:28

Surfaces are defined relative to their markers. The surface definitions file should have a reg_markers field with a list of dicts. Their vertices are recorded in normalized surface coordinates. If you are interested in the surface boundaries for each frame, I recommend having a look at this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/04_visualize_surface_positions_in_world_camera_pixel_space.ipynb
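
For reference, a minimal sketch in the spirit of that tutorial: projecting the surface corners (normalized surface coordinates) into world-camera pixel space with the surf_to_img_trans homography from a surface tracker export. The placeholder matrix below is made up; take the real one per frame from surf_positions_<name>.csv as shown in the tutorial:

```python
import numpy as np
import cv2

# placeholder homography; in practice use the surf_to_img_trans value for one
# frame from the surf_positions_<name>.csv export (parsing shown in the tutorial)
surf_to_img_trans = np.array(
    [[640.0, 0.0, 320.0],
     [0.0, 480.0, 240.0],
     [0.0, 0.0, 1.0]]
)

# surface corners in normalized surface coordinates
corners = np.array([[[0, 0], [1, 0], [1, 1], [0, 1]]], dtype=np.float32)

# project the corners into world-camera pixel coordinates
corners_px = cv2.perspectiveTransform(corners, surf_to_img_trans).reshape(-1, 2)
print(corners_px)  # 4 x 2 array of pixel coordinates, one row per corner
```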

user-9eeaf8 02 December, 2021, 16:19:07

Thank you I will do that!

user-55fd9d 11 December, 2021, 10:58:36

Hey everybody! I am collecting gaze position on a defined surface via the Network API. I have recently noticed that, with respect to timestamps, the update rate of the gaze position is ca. twice as high as the frame rate of the eye camera. I get a gaze position update rate of around 240 Hz, but the eye camera is set to 120 fps. What is the reason for that?

papr 11 December, 2021, 16:45:23

That is due to how the matching of the Pupil data from both eyes works. If you search the channel you will find multiple discussions/responses to the topic.

papr 11 December, 2021, 16:46:34

See also this draft https://github.com/N-M-T/pupil-docs/commit/1dafe298565720a4bb7500a245abab7a6a2cd92f

user-55fd9d 11 December, 2021, 23:02:05

Thanks @papr !

user-005292 13 December, 2021, 14:29:50

Hi everybody! I am currently working on an AR system running on a Jetson TX2 device. I need a custom gaze estimation process running in real time without the Pupil Capture software. I have managed to obtain the pupil position for each eye independently, using the following libraries: pyuvc together with pupil-detectors. How can I compute 2D gaze from the data obtained for each eye? Thanks in advance!!

papr 13 December, 2021, 14:40:06

Hi, happy to hear you have been able to progress with your project! The only thing you are missing is the calibration and mapping part. Check out this project: https://github.com/papr/pupil-core-pipeline/ It implements the post-hoc calibration pipeline from Player but can be adapted for a realtime use case, too.

Check out specifically these two functions: https://github.com/papr/pupil-core-pipeline/blob/main/src/core/pipeline.py#L196-L199

Instead of loading the reference data from disk, you would need to collect it yourself (this happens in Capture during the calibration choreography). With the result of the calibration, the so-called gazer, you can map the pupil data to gaze.
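
A rough, hypothetical sketch of the "collect the reference data yourself" part (the field names are modeled on what Capture's choreography records; double-check them against the pipeline project's reference-data loader):

```python
ref_data = []

def on_target_sample(norm_x, norm_y, pupil_timestamp):
    """Call this once per sample while the participant fixates the shown target."""
    ref_data.append({
        "norm_pos": (norm_x, norm_y),  # target position in normalized scene coordinates
        "timestamp": pupil_timestamp,  # must be on the Pupil clock, not time.time()
    })

# ref_data, together with the pupil data detected over the same period, replaces
# the on-disk reference file in the pipeline's calibration step; the resulting
# gazer is then used to map new pupil data to gaze.
```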

user-005292 13 December, 2021, 14:42:54

I really appreciate your fast help!! I will keep you informed of my progress.

user-005292 14 December, 2021, 14:05:30

Hi, I have a little question regarding how the linear regression model is fed. The class gazer_2d has the function "_polynomial_features(norm_xy)"; which data does norm_xy represent? From detector_2d I get a dictionary with (confidence, diameter, ellipse, location and timestamp) keys. I guess location is the value required, which is then normalized with respect to the camera resolution. Is this correct?

papr 14 December, 2021, 15:08:34

Both the input (pupil norm_pos data) and the output (during fitting: reference norm_pos data) are in normalized coordinates.
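
Illustrative example of that convention (assuming Pupil's usual normalized coordinates: origin bottom-left, (1, 1) top-right, whereas pixel coordinates have their origin top-left):

```python
def to_norm_pos(pixel_x, pixel_y, frame_width, frame_height):
    # pixel origin is top-left, normalized origin is bottom-left, so y is flipped
    return pixel_x / frame_width, 1.0 - pixel_y / frame_height

# e.g. a 192 x 192 px eye frame with the detected pupil center at (96, 48)
print(to_norm_pos(96, 48, 192, 192))  # -> (0.5, 0.75)
```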

user-005292 14 December, 2021, 15:09:18

thanks!

user-005292 15 December, 2021, 09:54:29

Hi, regarding the function "polynomial_features": the norm_xy variable is 2-dimensional, so I guess that from both pupil positions you compute the mean to get this 2-dimensional matrix, then normalize it and feed it to the function.

papr 15 December, 2021, 10:00:19

The 2 dimensions are n samples x k features of one eye. k equals 2 in the case of the 2D gazer (x and y coordinates). The binocular gazer calculates polynomial features for each eye separately and concatenates them afterward: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_2d.py#L145-L146

It should not be necessary for you to think about this too much if you are using the gazer class as described in the pipeline project.
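
A toy NumPy illustration of the shapes described above (not the actual gazer code; the real polynomial terms live in gazer_2d.py):

```python
import numpy as np

n = 5
right_xy = np.random.rand(n, 2)  # n samples x (x, y) for eye 0
left_xy = np.random.rand(n, 2)   # n samples x (x, y) for eye 1

def polynomial_features(norm_xy):
    # stand-in feature expansion; the exact terms used by the 2D gazer are
    # defined in gazer_2d.py, only the shape logic matters here
    x, y = norm_xy[:, 0], norm_xy[:, 1]
    return np.column_stack([x, y, x * y, x ** 2, y ** 2])

# the binocular gazer computes features per eye and concatenates them
X_binocular = np.hstack([polynomial_features(right_xy),
                         polynomial_features(left_xy)])
print(X_binocular.shape)  # (n, 2 * number_of_feature_terms)
```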

user-005292 15 December, 2021, 10:08:40

so the linear regression model is fed with fractures from both eyes as X, and reference positions as Y, which is a 2D array of (x, y) coordinates. Is this right?

papr 15 December, 2021, 10:16:14

Not sure what you are referring to by fractures, but yes, the description is correct: X is the input data from both eyes and Y the target data in normalized coordinates.

user-005292 15 December, 2021, 10:22:10

features sorry

user-005292 15 December, 2021, 10:22:49

ok, thanks!! I wanted to be sure that I understand the process.

papr 15 December, 2021, 10:23:09

Let us know if you have more questions 🙂

user-005292 15 December, 2021, 10:26:09

Chat image Chat image

user-005292 15 December, 2021, 10:27:56

thanks for your help!! Now I can continue building my prototype!! 😁

user-d1efa8 16 December, 2021, 18:01:59

Is there any way to get calibration results (i.e. not just whether or not it succeeded, but also, for example, values for scaling and offset) using the Network API?

If so, would I be able to set these values via the Network API?

papr 16 December, 2021, 18:04:09

What do you mean by scaling and offset? The parameters for the calibrated gazers are being published as part of the calibration.result notification

user-d1efa8 16 December, 2021, 18:11:04

ah…I meant parameters, ignore the part about scaling and offset

would I be able to alter those parameters using the API?

papr 16 December, 2021, 18:11:47

You can send a new start_plugin notification with different parameters, yes. That will init a new gazer and replace the old one
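
For reference, a hedged sketch of sending such a notification via Pupil Remote (the notify pattern follows the pupil-helpers examples; the gazer name and the structure of "params" should be copied from a calibration.result notification you received):

```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# placeholder: copy the params dict from a calibration.result notification
# and edit the values you want to change
my_modified_params = {}

notification = {
    "subject": "start_plugin",
    "name": "Gazer2D",  # use the gazer class named in your calibration.result
    "args": {"params": my_modified_params},
}

# notifications are sent as two frames: topic string + msgpack payload
pupil_remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(notification, use_bin_type=True))
print(pupil_remote.recv_string())
```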

user-d1efa8 16 December, 2021, 18:12:35

cool cool, also: the calibration parameters are different per eye, right?

papr 16 December, 2021, 18:13:17

the notification contains params for a binocular and two monocular models.

user-d1efa8 16 December, 2021, 18:13:43

gotcha, thank you

user-709f65 18 December, 2021, 23:51:11

hi! Is there a way to change the monitor used for the calibration? I'm using two different monitors, one connected via HDMI and another connected via DVI, but it only recognizes the one connected via HDMI. thanks!

papr 21 December, 2021, 06:49:01

Hi, you can change the monitor in the calibration settings.

user-63150b 21 December, 2021, 14:35:19

@papr , Hello again! I asked a question about C++ support for the Pupil Core eye tracker. Let me tell you a bit about what I am currently doing. I am working on graphics rendering, and to render objects more efficiently, I need the user's gaze position. My whole render engine is built with C++, and I actually do not know Python. You previously mentioned "zmq and msgpack can be used to connect to the network API" - I actually did not understand this statement. What are these? What I am normally looking for is a static/dynamic link library, or even a simple header-only file (C++), that I can use to integrate the eye tracker in real time into my project. Can you please give me any suggestions on that?

papr 21 December, 2021, 14:42:47

For msgpack, check out https://msgpack.org/ and scroll down a bit. It is an encoding standard for which there are libraries in a variety of programming languages, including C++. You will need it to decode the incoming data.

It is similar with ZeroMQ: a networking framework for which there are multiple implementations. Check out the C++ implementations here: https://zeromq.org/languages/cplusplus/

user-b91cdf 21 December, 2021, 15:34:28

Hi, I am currently trying to start a recording via Pupil Remote. I run the code on a Raspberry Pi connected to my computer via Ethernet. Unfortunately it doesn't work. If I run the script on my computer it works perfectly fine.

```python
import zmq

ctx = zmq.Context()

# open a req port to talk to pupil
addr = "192.168.0.241"  # remote ip or localhost
req_port = "50020"  # same as in the pupil remote gui
pupil_remote = zmq.Socket(ctx, zmq.REQ)
pupil_remote.connect("tcp://{}:{}".format(addr, req_port))

pupil_remote.send_string("R test_recording")
print(pupil_remote.recv_string())
```

What am I doing wrong?

papr 21 December, 2021, 15:36:07

If it is working fine on your computer, then it is likely a network setup issue. Have you checked if you can ping the computer running Capture from your RPI?

user-b91cdf 21 December, 2021, 15:37:12

yes, I am actually connected via SSH (VS Code) to the Raspberry Pi and executing the script

user-b91cdf 21 December, 2021, 15:42:16

oh sorry, I think I misunderstood. I can ping the computer from my Raspberry Pi

papr 21 December, 2021, 15:44:52

@user-b91cdf Just to be sure, you are not getting any error messages, correct? The script just hangs on the recv_string line?

user-b91cdf 21 December, 2021, 15:45:04

Yes

papr 21 December, 2021, 15:46:22

Mmh, then it seems like the connection is somehow blocked. On which OS does Capture run?

user-b91cdf 21 December, 2021, 15:47:01

macOS Big Sur 11.4

user-b91cdf 21 December, 2021, 15:47:48

But I think it is emulated with Rosetta

papr 21 December, 2021, 15:48:35

If you run the script from your mac, do you make any changes, e.g. ip address or port?

user-b91cdf 21 December, 2021, 15:49:58

I tried it with localhost and the remote IP and both work

user-b91cdf 21 December, 2021, 15:50:09

so no changes - works

user-b91cdf 21 December, 2021, 15:51:33

port stays the same

papr 21 December, 2021, 15:52:45

Could you go to your system settings -> Security & privacy -> Firewall and check if it is off or adjusted accordingly?

papr 21 December, 2021, 15:53:54

If that is already the case, I do not know what else might be blocking the connection 😦
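
As a general diagnostic aid, a small sketch that puts a receive timeout on the REQ socket, so the script errors out instead of hanging when the connection is blocked:

```python
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.setsockopt(zmq.RCVTIMEO, 3000)  # milliseconds
pupil_remote.setsockopt(zmq.LINGER, 0)
pupil_remote.connect("tcp://192.168.0.241:50020")

pupil_remote.send_string("t")  # ask Pupil Remote for Capture's current time
try:
    print(pupil_remote.recv_string())
except zmq.error.Again:
    print("No reply within 3 s - connection blocked or wrong IP/port")
```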

user-b91cdf 21 December, 2021, 15:58:33

Do you mean the Raspberry Pi's settings?

papr 21 December, 2021, 15:59:16

No, on your mac

user-b91cdf 21 December, 2021, 16:00:11

I turned it off, still does not work :(

user-b91cdf 21 December, 2021, 16:06:02

nevertheless, thanks!! :)

user-b91cdf 23 December, 2021, 09:29:20

@papr It seems like my Ethernet adapter didn't work out of the box. I had to install a driver and change the security settings for my Mac (with Apple silicon). It works now 🙂

For those who are interested:

Ethernet adapter driver: https://www.globalnerdy.com/2021/01/09/what-to-do-when-the-usb-c-ethernet-adapter-for-your-mac-doesnt-work-out-of-the-box/
Security settings: https://support.apple.com/de-de/HT210999 and https://support.apple.com/de-de/guide/mac-help/mchl768f7291/mac

papr 23 December, 2021, 09:30:25

Thank you for following up with these details!

End of December archive