You can receive the image data via the network API https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py and pass it to YOLO.
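If it helps, here is a minimal sketch of that pattern, assuming Capture runs locally on the default port 50020 and the Frame Publisher plugin is active with format "bgr" (the linked script shows how to start it via a notification):
```python
# Minimal sketch: subscribe to world frames via the Network API and decode them
# into numpy arrays that can be handed to YOLO. IP/port are assumptions for a
# local Capture instance with the Frame Publisher set to BGR.
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# ask Pupil Remote for the SUB port and subscribe to world frames
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

while True:
    topic, payload, *extra = sub.recv_multipart()  # raw image bytes arrive as extra frames
    meta = msgpack.loads(payload)
    img = np.frombuffer(extra[0], dtype=np.uint8).reshape(meta["height"], meta["width"], 3)
    # img is a BGR array - hand it to your YOLO detector here
```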
I'm not really able to understand where these images are being stored
Thanks for the reply.
Hi folks,
is there a way to get the gaze calibration/validation data and calculate the median and the nMAD and do outlier removal, instead of the simple average, RMS error, and hard outlier removal provided by the software?
thanks 🙂
The software publishes the collected pupil and reference data, and if successful, the gaze estimation model parameters. Together, you should be able to map the collected pupil data and measure the difference to the collected reference data yourself.
Subscribe to `notify.calibration` to receive the related notifications.
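As a toy illustration of the statistics you mention (this is not part of the software, just numpy applied to per-sample errors that you would compute yourself by comparing the mapped gaze against the collected reference data):
```python
# Toy sketch: median error, normalized MAD, and an nMAD-based outlier cutoff.
import numpy as np

errors = np.array([0.4, 0.6, 0.5, 0.7, 5.0, 0.55])  # example angular errors in degrees

median = np.median(errors)
nmad = 1.4826 * np.median(np.abs(errors - median))  # normalized MAD

inliers = errors[np.abs(errors - median) <= 3 * nmad]  # e.g. 3-nMAD outlier removal
print(median, nmad, inliers)
```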
Does this work via Pupil Remote then?
Correct. See this example: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py#L23 and change the highlighted line to use the correct subscription topic.
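A compact sketch of what that looks like end to end (IP/port are assumptions for a local Capture instance):
```python
# Sketch based on filter_messages.py: receive calibration-related notifications.
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("notify.calibration")  # the changed subscription topic

while True:
    topic, payload, *_ = subscriber.recv_multipart()
    notification = msgpack.loads(payload)
    # e.g. notify.calibration.started, .successful, .failed, .result, ...
    print(topic.decode(), notification)
```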
cool thank you 🙂
Hello, everyone! When loading the surface definitions in Python, I see the 4 corners of each surface, but which coordinate space are they in, please? Here is a comparison of the definition in Pupil Player vs a plot of the numeric values I see in Python. In the right pic, every 4 points close together are from one surface.
Surfaces are defined relative to their markers. The surface definitions file should have a `reg_markers` field with a list of dicts. Their vertices are recorded in normalized surface coordinates. If you are interested in the surface boundaries for each frame, I recommend having a look at this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/04_visualize_surface_positions_in_world_camera_pixel_space.ipynb
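For reference, a rough sketch of inspecting that file, assuming a recent Pupil version where it is msgpack-encoded; treat the key names as assumptions and print the dicts to see the actual layout of your file:
```python
# Rough sketch: load the surface_definitions file and print the marker dicts.
import msgpack

with open("surface_definitions", "rb") as f:
    definitions = msgpack.unpack(f, raw=False)

for surface in definitions.get("surfaces", []):
    print(surface.get("name"))
    for marker in surface.get("reg_markers", []):
        # each dict holds the marker id and its vertices in normalized surface coordinates
        print(marker)
```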
Thank you I will do that!
Hey everybody! I am collecting gaze position on a defined surface via the Network API. I have recently noticed that, with respect to timestamps, the update rate of the gaze position is about twice as high as the frame rate of the eye camera. I get a gaze position update rate of around 240 Hz, but the eye camera is set to 120 fps. What is the reason for that?
That is due to how the matching of the pupil data from both eyes works. If you search the channel, you will find multiple discussions/responses on the topic.
See also this draft https://github.com/N-M-T/pupil-docs/commit/1dafe298565720a4bb7500a245abab7a6a2cd92f
Thanks @papr !
Hi everybody! I am currently working on an AR system running on a Jetson TX2 device. I need a custom gaze estimation process in real time, without the use of the Pupil Capture software. I have managed to get the position of each pupil independently using the following libraries: pyuvc together with pupil-detectors. How can I compute 2d gaze from the data obtained for each eye? Thanks in advance!!
Hi, happy to hear you have been able to progress with your project! The only thing you are missing is the calibration and mapping part. Check out this project: https://github.com/papr/pupil-core-pipeline/ It implements the post-hoc calibration pipeline from Player but can be adapted for a realtime use case, too.
Check out specifically these two functions: https://github.com/papr/pupil-core-pipeline/blob/main/src/core/pipeline.py#L196-L199
Instead of loading the reference data from disk, you would need to collect it yourself (this happens in Capture during the calibration choreography). With the result of the calibration, the so-called gazer, you can map the pupil data to gaze.
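Conceptually, the realtime adaptation could look like the sketch below. The helper functions are hypothetical placeholders for your own code, and the gazer calls are left as comments because the exact signatures should be taken from the linked pipeline project:
```python
# Conceptual sketch only - not Pupil Labs code. Data shapes are assumptions
# for illustration.

def collect_reference_data():
    # placeholder: show calibration targets yourself and record where they were,
    # in normalized world-camera coordinates, together with a Pupil timestamp
    return [{"norm_pos": (0.5, 0.5), "timestamp": 1000.0}]

def collect_pupil_data():
    # placeholder: pupil datums produced by pupil-detectors for both eyes while
    # the reference targets were shown
    return [{"norm_pos": (0.48, 0.52), "confidence": 0.99,
             "timestamp": 1000.01, "id": 0, "topic": "pupil.0.2d"}]

ref_data = collect_reference_data()  # collected live instead of loaded from disk
pupil_data = collect_pupil_data()

# fitting (what Capture does at the end of the choreography), then mapping:
# gazer = <fit function from pipeline.py#L196-L199>(ref_data, pupil_data)
# gaze = gazer.map_pupil_to_gaze(new_pupil_data)  # realtime mapping step
```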
I really appreciate your fast help!! I will keep you informed of my progress.
Hi, I have a little question regarding how the linear regression model is fed. The class gazer_2d has the function "_polynomial_features(norm_xy)"; which data does norm_xy represent? From detector_2d I get a dictionary with (confidence, diameter, ellipse, location and timestamp) keys. I guess location is the value required, which is then normalized with respect to the camera resolution. Is this correct?
Both the input (pupil norm_pos data) and the output (during fitting: reference norm_pos data) are in normalized coordinates.
thanks!
Hi, regarding the function "polynomial_features": the norm_xy variable is 2-dimensional, so I guess that from both pupil positions you compute the mean to get this 2-dimensional matrix, then normalize it and feed it to the function.
The 2 dimensions are n samples x k features of one eye. k equals 2 in the case of the 2d gazer (x and y coordinates). The binocular gazer calculates polynomial features for each eye separately and concatenates them afterward: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_2d.py#L145-L146
It should not be necessary for you to think about this too much if you are using the gazer class as described in the pipeline project.
so the linear regression model is fed with fractures from both eyes as X, and reference positions as Y, which is a 2D array of (x, y) coordinates. Is this right?
Not sure what you are referring to by fractures, but yes, the description is correct: X is the input data from both eyes and Y the target data in normalized coordinates.
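To make the shapes concrete, here is a toy sketch of the idea using scikit-learn. This is not Pupil's actual implementation, and the exact polynomial terms may differ from gazer_2d.py:
```python
# Toy sketch: per-eye polynomial features are concatenated and linearly
# regressed onto reference positions in normalized coordinates.
import numpy as np
from sklearn.linear_model import LinearRegression

def polynomial_features(norm_xy):
    # norm_xy: (n_samples, 2) pupil norm_pos of ONE eye, in normalized coordinates
    x, y = norm_xy[:, 0], norm_xy[:, 1]
    return np.column_stack([x, y, x * y, x ** 2, y ** 2])

n = 50
pupil_left = np.random.rand(n, 2)   # stand-ins for matched pupil norm_pos data
pupil_right = np.random.rand(n, 2)
ref = np.random.rand(n, 2)          # reference norm_pos targets: n samples x 2

X = np.hstack([polynomial_features(pupil_left),
               polynomial_features(pupil_right)])  # n samples x (2 eyes * k features)
model = LinearRegression().fit(X, ref)

# mapping a new matched pupil pair to a gaze estimate in normalized coordinates
new_X = np.hstack([polynomial_features(np.array([[0.4, 0.6]])),
                   polynomial_features(np.array([[0.5, 0.6]]))])
print(model.predict(new_X))
```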
features sorry
ok, thanks!! I wanted to be sure that I understand the process.
Let us know if you have more questions 🙂
thanks for your help!! Now I can continue building my prototype!!😁
Is there any way to get calibration results (i.e. not just whether or not it succeeded, but also, for example, values for scaling and offset) using the Network API?
If so, would I be able to set these values via the Network API?
What do you mean by scaling and offset? The parameters for the calibrated gazers are being published as part of the `calibration.result` notification.
ah…I meant parameters, ignore the part about scaling and offset
would I be able to alter those parameters using the API?
You can send a new start_plugin notification with different parameters, yes. That will init a new gazer and replace the old one
cool cool, also: the calibration parameters are different per eye, right?
the notification contains params for a binocular and two monocular models.
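A sketch of how restarting a gazer with modified parameters could look via Pupil Remote. The gazer class name and the layout of the params dict are assumptions; take both from the calibration.result notification you actually receive:
```python
# Sketch: send a start_plugin notification with edited gazer params.
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

def notify(notification):
    # standard Pupil Remote notification pattern: topic frame + msgpack payload
    topic = "notify." + notification["subject"]
    payload = msgpack.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()

modified_params = {}  # placeholder: the edited params from calibration.result
notify({
    "subject": "start_plugin",
    "name": "Gazer2D",  # assumed class name - use the one from calibration.result
    "args": {"params": modified_params},
})
```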
gotcha, thank you
hi! is there a way to change the monitor used for the calibration? I'm using two different monitors, one connected via HDMI and another connected via DVI, but it only recognizes the one via HDMI. thanks!
Hi, you can change the monitor in the calibration settings.
@papr, Hello again! I asked a question about C++ support for the Pupil Core eye tracker. Let me tell you a bit about what I am currently doing. I am working in graphics rendering, and to render objects more efficiently, I need the user's gaze position. My whole render engine is built with C++, and I actually do not know Python. You previously mentioned that "zmq and msgpack can be used to connect to the network API" - I did not understand this statement. What are these? Normally what I am looking for is a static/dynamic link library, or even a simple header-only (C++) file, that I can use to integrate the eye tracker into my project in real time. Can you please give me any suggestion on that?
For msgpack, check out https://msgpack.org/ and scroll down a bit. It is an encoding standard for which there are libraries in a variety of programming languages, including C++. You will need it to decode the incoming data.
Similarly with ZeroMQ: it is a networking framework for which there are multiple implementations. Check out the C++ implementations here: https://zeromq.org/languages/cplusplus/
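The wire protocol itself is language-agnostic; as a rough sketch, this is what the subscription pattern looks like in Python (the language of the existing helper scripts), which you would port to C++ using cppzmq plus a C++ msgpack library such as msgpack-c:
```python
# Sketch: request the SUB port, subscribe to gaze data, decode with msgpack.
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # Capture's Pupil Remote port
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")  # all gaze topics

while True:
    topic, payload, *_ = sub.recv_multipart()
    gaze = msgpack.loads(payload)
    # norm_pos is the gaze point in normalized world-camera coordinates
    print(topic.decode(), gaze["norm_pos"], gaze["confidence"])
```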
Hi, I am currently trying to start a recording via Pupil Remote. I run the code on a Raspberry Pi connected to my computer via Ethernet. Unfortunately it doesn't work. If I run the script on my computer, it works perfectly fine.
```python
import zmq

ctx = zmq.Context()

# open a req port to talk to pupil
addr = "192.168.0.241"  # remote ip or localhost
req_port = "50020"  # same as in the pupil remote gui
pupil_remote = zmq.Socket(ctx, zmq.REQ)
pupil_remote.connect("tcp://{}:{}".format(addr, req_port))

pupil_remote.send_string("R test_recording")
print(pupil_remote.recv_string())
```
What am I doing wrong?
If it is working fine on your computer, then it is likely a network setup issue. Have you checked if you can ping the computer running Capture from your RPI?
Yes, I am actually connected via SSH (via VS Code) to the Raspberry Pi and executing the script there.
oh sorry, I think I misunderstood. I can ping the computer from my Raspberry Pi.
@user-b91cdf Just to be sure, you are not getting any error messages, correct? The script just hangs on the `recv_string` line?
Yes
Mmh, then it seems like the connection is somehow blocked. On which OS does Capture run?
macOS Big Sur 11.4
But I think it is emulated with Rosetta.
If you run the script from your mac, do you make any changes, e.g. ip address or port?
I tried it with both localhost and the remote IP, and both work
so no changes - works
port stays the same
Could you go to your System Preferences -> Security & Privacy -> Firewall and check whether it is off or adjusted accordingly?
If that is already the case, I do not know what else might be blocking the connection 😦
Do you mean the raspberry's settings ?
No, on your mac
I turned it off, still does not work :(
Nevertheless, thanks!! :)
@papr It seems like my Ethernet adapter didn't work out of the box. I had to install a driver and change the security settings on my Mac (with Apple Silicon). It works now 🙂
For those who are interested:
Ethernet adapter driver: https://www.globalnerdy.com/2021/01/09/what-to-do-when-the-usb-c-ethernet-adapter-for-your-mac-doesnt-work-out-of-the-box/ Security settings: https://support.apple.com/de-de/HT210999 https://support.apple.com/de-de/guide/mac-help/mchl768f7291/mac
Thank you for following up with these details!