Hi! I'm currently using the pupil labs fork of PyUVC with a pupil core. I notice that some of the camera modes do not seem to be functioning, and I was hoping to figure out a solution for them. For instance, if I use the 120 FPS mode, I receive valid images from the camera. If I use the 200 FPS mode, I receive black images. Monitoring the pixel averages, there is some movement of the pixel average (so not entirely 0) in the 200 FPS mode, but if I put it directly up to my hand it maxes out at 1.0 or 2.0, whereas the 120 FPS mode correctly maxes out at 255. Any thoughts on fixing this?
Hi @user-ffc425 , may I ask if you are referring to the world camera or the eye cameras?
I am using one of the eye camera arms
Ok, and do you roughly know when your Pupil Core was made or purchased? Does it potentially have the original 120 Hz eye cameras (known as Pupil Cam1)?
May I also ask what resolution setting is used?
It was purchased mid to late last year (2024), and it appears to be Pupil Cam2 or Pupil Cam3. The resolution options I have are 400x400 and 192x192. When using 200 FPS, I was using the 192x192 mode, and for 120 FPS I was using 400x400.
Thanks for the clarification. I will confer with my colleagues in the morning about this.
In the meantime, may I ask out of curiosity, why not use Pupil Capture?
Sure, thanks for your help! I can't use pupil capture as I'm running the camera from custom software on a raspberry pi, so I'd like to minimize the computation required. I also need to package the frames with other sensors' information (also controlled by the same program)
Hi @user-ffc425 , I spoke with my colleagues.
Would you be able to provide a video with the black frames when the device is set to 200Hz via pyuvc? If not, then even just sending 2 or 3 frame images could be helpful. Even a .npy file is okay.
You can send it via private DM or to data@pupil-labs.com
Sure! I'll record that today for you
Hi @user-ffc425 , thanks for providing the file and the code snippet.
Note that some UVC settings, such as Auto Focus, are not compatible with the 180 and 200 Hz framerates of those cameras. Just in case, I'll point out that the focus on those Pupil Core eye cameras, as provided by us, is glued in place and cannot be changed. See here for more details.
You may want to simply copy the UVC settings from the Pupil Capture code. Also, take note of this commit that added Auto Exposure for those cameras.
Lastly, Pupil Capture uses this form of get_frame, rather than get_frame_robust, although there is nothing wrong with get_frame_robust. It's just a question of what balance you want to strike.
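In case a concrete starting point helps, here is a rough sketch of that flow in pyuvc. Treat the details as assumptions rather than a drop-in script: the mode objects and the control display names ("Auto Exposure Mode", exposure values, etc.) vary between pyuvc versions and cameras, so check them against cap.available_modes, cap.controls, and the Pupil Capture video backend.

import uvc

# Open the first eye camera (device names are hardware dependent)
dev = next(d for d in uvc.device_list() if "Pupil Cam" in d["name"])
cap = uvc.Capture(dev["uid"])

# Select the 192x192 @ 200 Hz mode. Recent pyuvc releases expose CameraMode
# objects with width/height/fps attributes; older ones use plain tuples,
# so adjust the comparison to your version.
for mode in cap.available_modes:
    if (mode.width, mode.height, mode.fps) == (192, 192, 200):
        cap.frame_mode = mode
        break

# Mirror the UVC controls that Pupil Capture applies for the high framerates,
# e.g. manual exposure mode with a short exposure time. The names and values
# below are assumptions - verify them against cap.controls.
controls = {c.display_name: c for c in cap.controls}
if "Auto Exposure Mode" in controls:
    controls["Auto Exposure Mode"].value = 1  # 1 = manual mode on these cameras

while True:
    frame = cap.get_frame(0.5)  # Pupil Capture style; get_frame_robust() also works
    print(frame.gray.mean())    # should rise well above 1-2 once exposure is sane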
Hi, I'm curious about gaze mapping algorithms. Could you provide detailed information on the calculation method and the code for it?
Hi @user-287b79 , may I ask if you mean a specific algorithm? For example, do you mean the algorithm used for Pupil Core?
Yeah, that's what I'm looking for
Hi @user-287b79 , then you may want to check out this part of the Documentation, including this reference at the end for the 3D eye model used in Pupil Core's algorithm.
The relevant code can be found in the Pupil Capture repository and the pye3d repository.
Thanks, I'll check it out
I mean how can I calculate the sphere (or plane) equation of the front camera using the gaze_vector, sphere_center, and reference gaze points (after calibration) on a 640x480, 120-degree FOV camera. The goal is to later determine the gaze point by finding the intersection between the gaze_vector line and this equation.
Hi @user-287b79 , may I ask for more clarification about the "sphere/plane equation of the front camera"? Do you mean that you want to project the gaze vector onto the image plane of Pupil Core's world camera, when the camera is in 480p mode?
Yes, sorry for my lack of clarity.
No problem! I just wanted to be sure I understood correctly.
Do you want to do this for monocular or binocular gaze estimation?
As the question is currently posed, it sounds rather like you want to project a ray from the eyeball center to the image plane of the world camera, as a way to estimate a single gaze point?
If it helps, that is a bit different from finding the intersection of gaze with the image plane. If you want the intersection of gaze with the image plane, then that value is already provided by Pupil Player in the gaze_positions.csv file. It is in the norm_pos_x/y columns.
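If you do use those columns, note that norm_pos_x/y are normalized to the scene image with the origin in the bottom-left corner, so converting to pixels is just a scale plus a vertical flip. A minimal sketch, assuming your 640x480 scene resolution:

def norm_pos_to_pixels(norm_pos_x, norm_pos_y, width=640, height=480):
    # norm_pos has its origin at the bottom-left of the scene image,
    # while pixel coordinates have their origin at the top-left.
    return norm_pos_x * width, (1.0 - norm_pos_y) * height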
Also, to clarify, sphere_center and gaze_point are in different camera coordinate systems.
Yes, I'm working on monocular gaze estimation using my DIY glasses (with one eye camera and one scene camera), but I'm struggling a bit with calculating the gaze point. 🥲
My glasses are running pye3d, but I'm not sure how to obtain the gaze point. I have an idea to improve slippage prevention for longer usage, and I plan to develop it once I can calculate the gaze point.
So, is finding the intersection of the gaze vector with the image plane not a viable solution? Could you provide guidance on the best approach for this?
I see!
Finding the intersection of the gaze vector with the image plane is a very viable & important solution. It just depends on how you want to conceptualize it. As mentioned, it is one of the standard data outputs from Pupil Player.
In the Pupil software, gaze is a vector with its origin at the center of the scene camera coordinate system. You can see how it is computed for the monocular case here. That code builds on pye3d.
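If you would rather compute that intersection yourself from a 3D gaze point, the last step is just a pinhole projection with the scene camera intrinsics. A minimal sketch, where camera_matrix and dist_coeffs stand in for your own scene camera calibration:

import numpy as np
import cv2

def project_gaze_point(gaze_point_3d, camera_matrix, dist_coeffs):
    # gaze_point_3d is expressed in the scene camera coordinate system, whose
    # origin is the camera center, so projecting the point yields the pixel
    # where the gaze ray intersects the image plane.
    pts, _ = cv2.projectPoints(
        np.asarray(gaze_point_3d, dtype=np.float64).reshape(1, 1, 3),
        np.zeros(3), np.zeros(3),  # no extra rotation/translation
        camera_matrix, dist_coeffs,
    )
    return pts.reshape(2)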
I have trouble with camera intrinsic calibration. I think it's either because the program didn't use my usb camera, or it couldn't recognize the pattern recorded from the usb camera. I'm trying to calibrate on a third party camera. https://streamable.com/bre78p
I don't think you can have two apps use the same camera simultaneously. I'd close everything, then start Pupil Capture by itself and select your camera within the settings. The view from your camera should then be visible in the Pupil Capture window, so you should not need an extra app to see what the camera sees.
thank you, it can use my camera now. By the way, is it normal for it to freeze after pressing I 4 times?
No, that's unexpected. Can you find and share the log file from your pupil_capture_settings folder?
Here's my capture.log file
Thanks for sharing the log, @user-ca59ee. It turns out that there is a bug in Pupil Capture on Windows that causes it to crash when the circle grid is not detected. It's fixed in the code already, but that fix happened after v3.5 was made. You have two options:
1. Be very careful to only capture an intrinsics calibration image when all the circles are clearly visible and not too skewed
2. Run Pupil Capture from source code
Thank you for the instructions
I stumbled across this example code and I am still confused how to export custom data gathered in pupil capture. I filled in my custom fields and created my events[CUSTOM_TOPIC] but how do I export this data as a csv after recording? Is that automatically done? Sorry I am not well versed with programming.
Hi @user-db42fc , can you clarify what the custom data is? There might be an alternative, simpler method that already exists.
Hey Rob! Uhm, I just used gaze data to calculate my own vergence angle and I wanted to output timestamp + the angle + maybe the other variables used for the vergence calculation in a csv for further processing.
Hi @user-db42fc , I see. Do you need that data in real-time? It sounds like you want to analyze it after data collection, so after the experiment has completed? If so, then you might be interested in the gaze_normal and eye_center columns of the gaze_positions.csv file that is exported by Pupil Player.
Thanks a lot Rob for helping me out so quickly. Suppose I would love to have it in real-time. These 2 columns are actually already used in my vergence calculations, which are then used for other calculations in my "recent_events" method. Currently I only output my values via a logger in there, which works in real-time. However, what would you suggest I add or write to have it exported to a csv or some other kind of output? Would it make sense to recalculate everything after runtime in a separate script, or is there a cleaner way to have all my calculation data stored after recording?
I see. May I ask though if the vergence data is used in real-time for certain interactive purposes? For example, are certain user interface elements triggered when a user reaches a given vergence angle or is a separate device triggered when they "break vergence"?
I ask because if the main aim is to have the data in a CSV, then it might be easiest to calculate everything post-hoc in a separate script, using the gaze_positions.csv file after all data has been recorded by Pupil Capture.
If it is indeed a real-time contingent design, then have you considered working with Pupil Core's Network API? It might be overall easier than passing custom events around within Pupil Capture.
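For reference, subscribing to gaze over the Network API only takes a few lines with pyzmq and msgpack. A minimal sketch, assuming Pupil Capture runs on the same machine with the default Pupil Remote port 50020:

import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the port of the data publisher
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to all gaze topics
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.unpackb(payload, raw=False)
    # For binocular 3D gaze, gaze["gaze_normals_3d"] and gaze["eye_centers_3d"]
    # carry the per-eye vectors you can feed into your vergence calculation.
    print(topic, gaze["norm_pos"])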
I'm thinking of streaming the gaze data from Pupil Labs Neon to Unreal. To confirm: Am I right that I only need a WebSockets connection for this and not rtsp? Also what will the incoming data look like, is it JSON?
Also, is Pupil Labs working on an Unreal Plugin? At the Max Planck Institute for Psycholinguistics we use multiple of your eye trackers heavily and are super interested in an Unreal Plugin. Thanks for all your great work, great stuff! I'm looking forward to your reply
Hi @user-e26971 , you are welcome! Thank you, as well!
I cannot give an estimate on an Unreal plugin just yet, but you can use either RTSP-over-WebSockets or RTSP directly, whichever works well in your case. The details of the Real-time API can be found here.
In either case, the incoming data is in RTP packets and the data format for gaze is described here.
As reference, you can use either the Unity implementation or the implementation in the Python Real-time API.
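If you want to prototype the data flow before building the Unreal side, the simple mode of the Python Real-time API shows what a gaze sample contains. A minimal sketch using the pupil-labs-realtime-api package; the exact field names are best double-checked against the docs linked above:

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # finds the Neon Companion device on the local network
while True:
    gaze = device.receive_gaze_datum()
    # x/y are pixel coordinates in the scene camera image
    print(gaze.x, gaze.y, gaze.timestamp_unix_seconds)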
Hi, I have the calibration data (prerecorded_calibration_result) of my monocular system
{
"gazer_class_name": "Gazer3D",
"timestamp": -861452.059011,
"params": {
"left_model": {},
"right_model": {
"eye_camera_to_world_matrix": [
[-0.3484695425289404, -0.724353620096769, -0.5948788204183916, -130.66384023297988],
[-0.7581322961235649, 0.5910203060031228, -0.27555474858254025, 203.71910832784144],
[0.5511845421490723, 0.3549744088588379, -0.7551084488676022, 435.419631967043],
[0.0, 0.0, 0.0, 1.0]
],
"gaze_distance": 500
},
"binocular_model": {}
},
"record": true,
"version": 2,
"subject": "calibration.result.v2"
}
I'm curious about the input and output of eye_camera_to_world_matrix. Is this how gaze mapping works? If so, since I'm using 1280x720 resolution for the front camera, will it calculate the specific gaze point within this resolution?
Hi, @user-287b79 - the eye_camera_to_world_matrix represents the position and rotation of that eye in relation to the world camera. Could you tell us a little bit more about what you're trying to achieve?
Thank you a lot for your reply. I'm trying to run gaze-mapping algorithms with the pye3d-detector example to point out the gaze point in the front camera window in real-time. It's part of my project, and I need a minimal software setup for this (I'm using my DIY monocular glasses). I figured out that I need calibration data to accurately map the gaze point, so I did a calibration in Pupil Capture and got 'prerecorded_calibration_result'. I think the 'eye_camera_to_world_matrix' plays the role of transforming some inputs calculated by pye3d into the 3D gaze point. Am I correct in this approach? If not, how can I create a simple gaze-pointer algorithm in Python based on pye3d without having to install the entire Pupil software suite?
The data in eye_camera_to_world_matrix does play a role - it's the basis for the gaze ray. Conceptually, the gaze point in 3D space exists along this ray. In a monocular pipeline, one might assume that the view target exists at a fixed distance from the viewer. With the gaze ray and an assumed or known distance, you can calculate the gaze point in 3D, and then project that onto the image plane of the scene camera.
The _predict_single method in the Model3D_Monocular class is probably the most relevant piece of code for your question.
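To make that concrete, here is roughly what that code does with the values from the calibration file you posted, written out with numpy (assuming calib holds that JSON as a dict). This is only a sketch of the idea; the actual implementation in the Pupil repositories handles more cases, such as refraction-corrected pye3d output and binocular intersection:

import numpy as np

# Values taken from your prerecorded_calibration_result
eye_camera_to_world = np.array(calib["params"]["right_model"]["eye_camera_to_world_matrix"])
gaze_distance = calib["params"]["right_model"]["gaze_distance"]  # assumed viewing distance in mm

def gaze_point_in_world(sphere_center, gaze_normal):
    # sphere_center (mm) and gaze_normal come from pye3d, in eye camera coordinates.
    # Points are transformed with the full 4x4 matrix; directions only with its rotation part.
    origin = (eye_camera_to_world @ np.append(sphere_center, 1.0))[:3]
    direction = eye_camera_to_world[:3, :3] @ np.asarray(gaze_normal)
    direction /= np.linalg.norm(direction)
    # Assume the viewed target sits at a fixed distance along the gaze ray, then
    # project the resulting 3D point onto the scene image as discussed earlier.
    return origin + gaze_distance * direction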
Thank you so much! I'll check it out.
Hello team, is there any guidance or examples for mapping real-world gaze data into a 3D immersive virtual world? I want to use motion capture to track the position of the glasses, and we also know the position of our projectors. Any advice on how I should proceed? Thank you in advance
Hi @user-6e3b6d , can you provide some more details on your setup, perhaps even share a photo? This is possible, and with that info, I am in a better position to assist you.
https://github.com/MinhHixn/pye3d-detector/blob/master/examples/newMapping.py
Here is my basic code for gaze mapping of my monocular gaze tracking system, it worked but the accuracy was bad, can you take a look at this and tell me what's wrong about the approach please?
Hi, @user-287b79 - reviewing and troubleshooting a project like this would probably require a custom consultancy package. However, I can provide some basic advice: if the mapping at least appears to go in the correct direction (e.g., when you gaze upwards, the mapped point moves upwards even if it's not exactly where you're looking), then my first guess would be poor calibration.
Thank you for all the invaluable advice and support! I appreciate this incredible technology and would love the opportunity to meet the team behind it. While there are some limitations at the moment, I will continue to innovate to bring this technology closer to even more users. I look forward to the day I can finally meet the Pupil Labs team and share this passion together!
Thanks for the help! I succeeded in writing an actor that receives gaze data in Unreal.
Great to hear it! You are welcome!
Hello Pupil Labs
Problem: Can't run "main.py" in Neon Player. I'm working on a project utilizing the Neon glasses, and I wanted to view the recordings in the offline Neon Player. But since I'm working on a "work" computer, I can't install the program. This led me to download the source code and try to run it based on the instructions at https://github.com/pupil-labs/neon-player (download source code -> install requirements -> run main.py).
Running it on Python 3.12.3 gives an error while installing the requirements, on the pupil_labs_uvc package.
So instead I tried to run it via Python 3.11.5, which could successfully install the requirements but fails on python main.py. I keep getting this git error, and I don't understand it.
Error message
(venv) PS C:\NeonPlayer\neon-player-4.6.1\pupil_src> python main.py
Error calling git: "Command '['git', 'describe', '--tags', '--long']' returned non-zero exit status 128."
output: "b'fatal: not a git repository (or any of the parent directories): .git\n'"
Traceback (most recent call last):
File "C:\NeonPlayer\neon-player-4.6.1\pupil_src\main.py", line 38, in <module>
app_version = get_version()
^^^^^^^^^^^^^
File "C:\NeonPlayer\neon-player-4.6.1\pupil_src\shared_modules\version_utils.py", line 84, in get_version
version_string = pupil_version_string()
^^^^^^^^^^^^^^^^^^^^^^
File "C:\NeonPlayer\neon-player-4.6.1\pupil_src\shared_modules\version_utils.py", line 58, in pupil_version_string
raise ValueError("Version Error")
ValueError: Version Error
So I wanted to ask: what is up with that? How can I run it without the default installer, just running the project - if that is even possible?
Hi @user-1acc0f , did you clone the repository with git or did you click the "Download code as ZIP" button in Github?
Downloaded source code from latest release. (Source code zip -> neon-player-4.6.1.zip) https://github.com/pupil-labs/neon-player/releases/tag/v4.6.1
I see. To run it from source, you want to clone it via git. That is what the first error in the message refers to. When run that way, it searches for a git tag to determine the version number.
You can remove or backup that directory and clone a fresh copy with:
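Something along these lines should do it (assuming you want the same v4.6.1 release you downloaded):

git clone https://github.com/pupil-labs/neon-player.git
cd neon-player
git checkout v4.6.1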
Then, I think it should work out for you.
do you think I can get away with copying my venv to avoid having to install requirements again 👀
I think so. It's worth a try
Hi Pupil Labs support, could you please re-open my ticket " ticket-332969685706473472"? It contained crucial information and I am still in the process of taking steps to solve our issue. Unfortunately I did not save the content of the chat locally.
Hi @user-4b18ca , please open a new ticket and I’ll follow up with you there.