Is it possible yet to use the Neon glasses as if they were a Pupil Core headset with a MOS, rather than with macOS and Linux? There was mention a while back of developing this facility. Our eye-tracking analysis software is developed for MOS, so it would save us the hassle and expense of requiring one device to capture the data and then connecting to another device to analyse the data with our software.
We want to do eye tracking for patients with ALS, MS, or similar diseases, as cheaply as possible or for free. We want to map the gaze to the screen. Do you know an easier way than the following:
I'm asking because I was not able to resize the surface in Capture, unfortunately, and I heard your software for the Invisible works with fewer calibration steps.
Any help is appreciated.
Also, as suggested by your documentation page https://docs.pupil-labs.com/core/, section Chat, I hereby say "hi".
Also, any news on which conferences in Germany you will be attending this year?
Yes! So far, we will be at TeaP2025 in Frankfurt.
Hi @user-84387e ! Great project!
That is indeed the process with Pupil Capture. However, can you provide more detail with respect to "not able to resize the surface in Capture"?
Since you mentioned Pupil Invisible, have you seen our latest eyetracker, Neon? It is also calibration-free and an improvement over Pupil Invisible in every way. If you want to map gaze to the screen in real-time with it, then AprilTags are still an option, and the setup for that is also simpler.
Hey Rob, thank you for the insight.
Regarding "not able to resize the surface in Capture": I saw in one of your videos that one can define a surface and then readjust its size. I was not able to, even when freezing the frame. I will provide more info either tomorrow or the day after.
Regarding Neon: not really. I'm assuming I cannot just use my custom "Core" headset with the Neon software? We're working with the smallest budget possible; that's why we liked Core so far. If, however, this works with what is basically a Core, then I'm in. The mouse-click video looked awesome.
Regarding TeaP2025 in Frankfurt: oh cool. But hm, it looks like registration has already closed. Sad, that's just around the corner. Well, have fun in Frankfurt! Consider checking out the Senckenberg Naturmuseum: https://museumfrankfurt.senckenberg.de/de/
My Logitech C615 is UVC compatible, yet Pupil Capture is not showing it in the world feed.
Hi @user-84387e , you are welcome.
If it's alright, I'll respond point-by-point:
There is an edit surface button, as shown in the attached image. If you click that pink dot, then you can drag the four transparent dots that correspond to the corners of the Surface.
Rob, you are the best! And I prefer answers point-by-point
about the Surface adjustment: ... you are kidding me. It was really that easy? Damn. Sorry, I didn't see the obvious. Thank you!
about Pupil Core & Neon: That's sad. But well, Core it is.
about Pupil Core based mouse cursor: okay, I think I need to check out the Alpha Labs more. And thanks for showing me the community repo.
about Conference: Likewise. And well, if any of you goes to a Chaos Computer Club event, shoot me a message. See you!
We are doing an experiment that requires participants to stare at stimuli on a monitor 300 cm away from them. Is it possible to do calibration at that distance? If so, should we still be concerned about the accuracy of the data collected? We are collecting gaze position and pupil diameter.
Hi @user-131620! If the calibration marker is clearly visible and well-detected during the calibration routine, then the calibration can be successful. However, keep in mind that at that distance, the marker will likely not cover a large portion of the visual field. The calibration will be most accurate in the area of the visual field that is covered. This may or may not be appropriate depending on your test.
Are you referring to dual-monocular gaze estimates, one for each eye? This is possible with the use of a custom plugin.
In addition, I notice the gaze norm position is generated from the pupil norm positions. Is it possible to get a mapped gaze position from each pupil's norm position separately, i.e., to enforce monocular gaze estimation rather than converting them into a single binocular estimate?
Thank you in advance!
Yes, it would be nice to get gaze data samples from monocular base data; thanks for recommending the plugin. Does the plugin affect the IPC backbone? Essentially, after installation, whenever we subscribe to gaze via ZMQ, will it return gaze data based on the monocular data of each eye?
Off the top of my head, I do not remember. Best to try it and see!
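For what it's worth, a quick way to inspect what actually arrives is to subscribe to the gaze topics and look at the topic names. This is only a minimal sketch; it assumes Pupil Remote on localhost:50020 and the default topic naming (binocular samples on gaze.3d.01., monocular ones on gaze.3d.0. / gaze.3d.1.), which a custom plugin may change.

```python
# Minimal sketch, assuming Pupil Remote on localhost:50020 and the
# default gaze topic naming; a custom plugin may publish differently.
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port of the IPC backbone.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to all gaze topics.
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload, raw=False)
    # Binocular samples normally arrive as 'gaze.3d.01.', monocular ones
    # as 'gaze.3d.0.' (eye 0) or 'gaze.3d.1.' (eye 1).
    print(topic.decode(), gaze["norm_pos"], gaze["confidence"])
```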
Hey, my partner and I are writing our bachelor thesis using the Pupil Core. In our project, our final goal is to be able to play a game like chess, running in Python, using only eye tracking.
At the moment we are having trouble with the accuracy of the gaze; it is very jumpy and a bit laggy.
We have set up a surface on a monitor inside the Pupil Capture software. Then, using a Python script (better_eyetracking.py) that uses the API, we get the surface coordinates and send them over a socket to our simple test Python script (simple_test.py), where we map the "norm_pos" from the surface to our window by multiplying each coordinate by the width and height of the test script's window.
We have looked a bit into smoothing the eye tracking using the 1€ (One Euro) filter. But we wanted to make sure that we are not missing a smarter way of mapping the surface coordinates to our test script (simple_test.py), which is running in pygame.
My questions are therefore:
Is there a smarter way to map the coordinates from the surface?
Is it a bottleneck when the better_eyetracking.py sends the gaze data over a socket to simple_test.py?
This is our first time working with eye tracking; I hope the question makes sense. Thanks in advance!
(I know I'm not the one you wanted to address, but) since we're trying to do the same thing in our project, I think what you're doing sounds about right: having the Surface Tracker plugin, defining the surface on your display, and then either grabbing the position data via UDP or installing an addon called Cursor Control to directly set your mouse position: https://github.com/emendir/PupilCore-CursorControl We go the UDP way, so we have more control and, like you, apply some smoothing.
Can Pupil Core handle multiple Cameras with the same name? If so, how? For us, it gets confused and always picks the first one.
Can you provide a guide? During pupil detection there are many lines (light blue, blue, red) near the pupils.
Thanks, nice to know that we are on the right track! But what do you mean by UDP?
According to the one person on the internet who isn't part of the staff: yes, you are on the right track ^^ By UDP I mean the Network API. Pupil Core sends all the tracking data via UDP. It's documented here: https://docs.pupil-labs.com/core/developer/network-api/ (I think they send the messages over a message broker, but for our case we just grab the UDP messages)
Hello
I want to ask why only one eye window opens in Capture even though both are enabled?
Hi @user-7daa32 , could you try restarting Pupil Capture with default settings? You can find this button in the General settings tab in the menu on the right.
Hi @user-d5bcb0 , would you be willing to share an example recording with [email removed]? Then, we can provide direct feedback.
You can also send a screenshot of the Pupil Capture interface here, if you prefer. If AprilTag placement could be improved, for example, then the current setup could be resulting in inconsistent gaze mapping during Surface Tracking.
Otherwise, if the two scripts are running on the same computer and there are no computationally intensive routines that block the gaze sampling, then the data transmission should be consistent.
And, always great to see users helping users, @user-84387e. Just to clarify, and if it helps: while the Pupil Time Sync plugin uses UDP, the Network API uses ZMQ in TCP mode, and ZMQ is brokerless by design. While you can build a message broker system on top of ZMQ, the Network API uses the PUB-SUB pattern.
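As a concrete illustration of that PUB-SUB route, mapping surface gaze onto window pixels could look roughly like the sketch below. It assumes Pupil Remote on localhost:50020 and a surface named "screen" in the Surface Tracker; the window size is just an example.

```python
# Minimal sketch of the PUB-SUB route: assumes Pupil Remote on
# localhost:50020 and a surface named "screen" in the Surface Tracker.
import zmq
import msgpack

WINDOW_W, WINDOW_H = 1280, 720  # example pygame window size

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("surfaces.screen")  # must match the surface name in Pupil Capture

while True:
    topic, payload = sub.recv_multipart()
    msg = msgpack.loads(payload, raw=False)
    for g in msg.get("gaze_on_surfaces", []):
        if g["confidence"] < 0.8 or not g["on_surf"]:
            continue
        x_norm, y_norm = g["norm_pos"]
        # Surface norm space has its origin at the bottom-left; pygame
        # puts (0, 0) at the top-left, so flip y.
        px = int(x_norm * WINDOW_W)
        py = int((1.0 - y_norm) * WINDOW_H)
        print(px, py)
```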
Yes, I can send you a recording we have made! Thanks!
Yep, nice when users give false-ish information to others
@user-84387e Ah, just to be clear, apologies if my message at all came across the wrong way! No worries. You did nothing wrong!
To the DIYers: What world camera would you suggest?
@user-84387e , do you mean they register with the same name to UVC libraries? That is, do you see them listed as separate devices when running this example from pyuvc?
You might just need to change this line in Pupil Capture for your purposes, and it might require an additional change to the downstream code that depends on devices_by_name.
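To check whether both cameras really do register under the same name, you could list the UVC devices directly. A small sketch, assuming the pyuvc package that Pupil Capture uses is installed:

```python
# Quick check with pyuvc (the same library Pupil Capture uses): list the
# connected UVC devices and their names/UIDs to see whether the two
# cameras really register under the same name.
import uvc

for dev in uvc.device_list():
    print(dev["name"], dev["uid"])
```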
Is there any way to set the size of the calibration window? When I use the screen marker mode I am able to disable full screen calibration, but then the window is very tiny. Is there a way to control the size of it?
Hi, I'm currently working on PsychoPy with the Pupil Core eye tracker and was wondering if any of you have been successful in connecting the two and gathering data. I have followed the documentation provided, downloaded the necessary plugins, and calibrated the eye tracker, but I haven't been able to successfully use the eye tracking during experiments.
Hi @user-d95f48, aside from installing the plugin, it needs to be configured. Make sure you have "Pupil Labs" selected in your experiment settings, and configure the surface name to match the name you have configured in Pupil Capture. Also, make sure you have the Network API plugin enabled in Pupil Capture.
Thanks, what do you mean by surface name ?
Currently I have psychopy_iohub_surface which is the default
Yes, that value needs to match in your Experiment settings in PsychoPy builder as well as in the Surface Tracker plugin in Pupil Capture
Hi @user-d5bcb0 , thanks, we received it and I have taken a look. Although the calibration phase was not included in the recording, the gaze signal that you have looks qualitatively quite good. The 3D eye model has been fit well, the pupil is well centered, and pupil detection confidence is high throughout the entire recording. The Surface Tracking also looks to be working well.
So, to clarify, is this a recording that exhibits the "very jumpy" data that you mentioned in your original post?
Some other questions as a follow-up:
And, a tip:
Thanks for the feedback! The recording I sent was one of the best recordings we have, to show the best case. But the "jumpy" behavior comes when we try to use a dot inside our test game to show the eye's location on screen. That dot is jumpy, so I removed it, because I felt like it pulled my eyes towards it and made it worse. I can send you a recording of that, if necessary.
To answer your questions: 1. We only use one eye camera because that is the setup our university gave us. 2. I have not tried using those scripts, but I think that the lag is produced when our surface coordinates are mapped to our Python test game, because we take the normalized coordinates from the surface and multiply them by the screen resolution.
Again, thanks for taking the time to help us!
Hi, we are using the LSL Lab Recorder to combine EEG, Psychtoolbox markers, and Pupil Core. In our pilot data we observed a few things I'd like to hear your opinion on:
- In the 'time_stamps' column of the Lab Recorder output XDF for the Pupil Core stream there are negative timestamps. We were wondering if that's because of the UDP package exchange, i.e., whether they are usually correct and we can just re-sort the timestamps and use them.
- Second, we noticed that what the Lab Recorder saves as e.g. 'gaze_normal1_y' differs greatly from what is saved under that name in gaze.csv. How can you explain that, and is there a way to make sure the data is equivalent? After all, synchronization is useless if the data recorded via LSL is not usable.
Thanks a lot for your support!
Hi @user-17e555 , may I first briefly ask why you want to change the calibration window size, in particular to make it smaller?
I'm working with a 49-inch ultrawide monitor, and during calibration the width makes it difficult to accurately fixate on the peripheral calibration points without significant head movement. This head movement impacts the calibration quite a lot. The application I am developing for the Pupil Core does not occupy the full screen, so my idea was to have the calibration screen match the same screen space as my application.
Hi @user-138bf5 , may I ask:
Hi Rob. I am using the pyxdf.load_xdf function that is recommended by LSL. For the second question: do you mean the recording from LSL vs. the one from Pupil Core directly, i.e., gaze.csv? I have not aligned them; I looked at the descriptives, and scanned and plotted the data, and they are very different.
Hi, I have a MacBook, and when opening Pupil Capture the cameras do not show any image and an error appears. I then tried the Pupil Core on Ubuntu and the cameras work. How could I solve this issue on the Mac? Is it due to the USB?
Hi @user-6a6d64. Please see this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/1195382650089779231
Hello, does anyone know if there are software or plugins that could put AprilTags on the PC screen on top of any other software? Thanks!
Probably any modern windowing/widget toolkit. I use Qt, but prefer coding in Python, so I use the official Python bindings for Qt called PySide.
Although the gaze-controlled-cursor-demo project is made to work with Neon, you can see an example of how to use PySide to overlay tags on top of other windows in the UI code.
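For a rough idea of the approach, a frameless, always-on-top window that shows a tag in each screen corner could be sketched with PySide6 as below. The tag image file names are placeholders; use printable AprilTag images (e.g. from the tag36h11 family) saved with a white border.

```python
# Rough sketch with PySide6. The tag PNG paths below are placeholders;
# any AprilTag images (e.g. tag36h11 family) with a white border will do.
import sys
from PySide6.QtCore import Qt
from PySide6.QtGui import QGuiApplication, QPixmap
from PySide6.QtWidgets import QApplication, QLabel, QWidget

TAG_FILES = ["tag_0.png", "tag_1.png", "tag_2.png", "tag_3.png"]  # placeholders
TAG_SIZE = 150  # on-screen tag size in pixels


class TagOverlay(QWidget):
    def __init__(self):
        super().__init__()
        # Frameless, always on top, kept out of the taskbar, and
        # transparent to mouse input so clicks reach the app underneath.
        self.setWindowFlags(
            Qt.FramelessWindowHint
            | Qt.WindowStaysOnTopHint
            | Qt.Tool
            | Qt.WindowTransparentForInput
        )
        self.setAttribute(Qt.WA_TranslucentBackground)
        geo = QGuiApplication.primaryScreen().geometry()
        self.setGeometry(geo)

        corners = [
            (0, 0),
            (geo.width() - TAG_SIZE, 0),
            (0, geo.height() - TAG_SIZE),
            (geo.width() - TAG_SIZE, geo.height() - TAG_SIZE),
        ]
        for path, (x, y) in zip(TAG_FILES, corners):
            label = QLabel(self)
            label.setPixmap(QPixmap(path).scaled(TAG_SIZE, TAG_SIZE))
            label.move(x, y)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    overlay = TagOverlay()
    overlay.show()
    sys.exit(app.exec())
```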
Thank you @user-cdcab0 !
Hi @user-d5bcb0 , it would actually be helpful to see one of the worse recordings, in particular one that includes a recording of the 3D eye model fitting procedure, as well as the calibration routine, if you have that.
With respect to the lag issue, multiplying norm_pos_x/y by the screen resolution is a fast operation, especially on modern systems. If you are regularly seeing lag, then the issue is probably somewhere else.
And, you are welcome!
Okay, thanks, but I don't think we have any particularly bad recordings, and for the moment I think we have made the accuracy work.
At the moment we are working on trying to implement the vestibulo-ocular reflex, but we are having a hard time measuring/tracking the head turn, because we are unsure whether there is somewhere in the API we could subscribe to that would help us calculate the head turn while at the same time tracking a non-moving object on screen.
We have tried using the pupil center and the pupil norm_pos to calculate the head turn. The problem we have is that when we fixate on some point on screen and turn our head, the difference in the pupil data is not noticeable to us.
I have linked a short recording where I track a non-moving square on screen while turning my head, followed by a flick of my eyes. We want to use this movement as a way of selecting an object. Link to recording: https://we.tl/t-Eo7rpuaSZ6
A second question we have is about working with recordings. At the moment we use a Python script to read the data from the recordings and send it over a socket to our test game. But we were wondering if there is a way of making the playback of a recording act like real time using the glasses, meaning that when playing back a recording it would send out the same API data as when the recording was made?
Best regards, Holger and Mikkel
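On the playback question: as far as I know there is no built-in way to make Pupil Capture re-publish an old recording over the Network API, but you can approximate it by replaying an export with its original timing. A minimal sketch, assuming a Pupil Player export (gaze_positions.csv column names) and a placeholder send_to_game function standing in for your existing socket code; for surface-mapped data you would read the surface export's x_norm/y_norm columns instead.

```python
# Rough sketch: replay an exported gaze_positions.csv with its original
# timing. send_to_game is a placeholder for however you already push
# samples to simple_test.py over your socket.
import csv
import time


def replay(csv_path, send_to_game, speed=1.0):
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return
    start_ts = float(rows[0]["gaze_timestamp"])
    start_wall = time.monotonic()
    for row in rows:
        # Wait until this sample's offset from the start of the recording
        # has elapsed in wall-clock time.
        target = (float(row["gaze_timestamp"]) - start_ts) / speed
        delay = target - (time.monotonic() - start_wall)
        if delay > 0:
            time.sleep(delay)
        send_to_game({
            "timestamp": float(row["gaze_timestamp"]),
            "norm_pos": (float(row["norm_pos_x"]), float(row["norm_pos_y"])),
            "confidence": float(row["confidence"]),
        })
```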
Hi, I am currently working on an academic publication in which I use Pupil Core eye-tracking glasses. The reviewer is asking for an image of the glasses to be added as a Figure. Do you have such an image and can I get permission to use it for publication? I will include the source and include appropriate credit of course. Please let me know how to best proceed.
Hi @user-e7f2e2 ! Could you email info@pupil-labs.com with the journal name, article title, and your name? This will help us provide the necessary written permission.
Hello, I've always been a satisfied user of your Pupil Labs Core product. However, I have a few questions.
It is stated that the 3D gaze mapping in the Core product exhibits an accuracy of 1.5-2.5 degrees. I'm wondering how this accuracy (or error) was calculated. I'd also like to measure how the accuracy changes as the viewing distance gradually increases.
Is the claimed 1.5-2.5 degrees of accuracy derived by taking into account the sensor size, resolution, focal length, and field of view (FoV) for both the eye camera and the world camera? If so, could you please share the formula or any related academic references/papers?
I look forward to your kind response. Thank you.
So, with regard to the accuracy of gaze mapping, I'm wondering by what theoretical or quantitative assessment this 1.5-2.5 degree figure is determined. I'm curious about how exactly it's evaluated or measured.
Hi @user-fce73e , the Pupil Capture software reports the accuracy whenever you complete a calibration. The reported 1.5-2.5 values can be replicated & confirmed by running a calibration across a few subjects and tabulating those values, after first confirming good pupil detection and 3D eye model fits. Similarly, you can tabulate these values after running a separate calibration for each viewing distance.
The calibration process is contained in the Pupil Core source code. The academic reference with a high-level overview is the Pupil paper. Citation details are here.
Hi! I have a problem opening Pupil Capture and Player on my Ubuntu machine. I was able to use them some months ago, but now the programs do not open, and if I try to open them from the terminal I get the following error: "Unhandled SIGSEGV: A segmentation fault occurred. This probably occurred because a compiled module has a bug in it and is not properly wrapped with sig_on(), sig_off()." Please find attached the whole error. Any idea how to solve it?
Hi @user-6a6d64 , are you on a laptop with an Nvidia GPU?
Try running it from the Pupil Capture GUI icon in the Ubuntu App Launcher. Make sure to do it by right-clicking and choosing "Launch with dedicated GPU" if that option is available.
That option does not appear, @user-f43a29; it used to appear some months ago, but it doesn't anymore.
You may have to ensure that the Nvidia drivers are installed. Sometimes, they do not carry over between major upgrades of Ubuntu and you have to install them fresh.
Also, be sure that you are logging into the more "classic" X11 Ubuntu Desktop Environment, and not the new default of Wayland.
Hi Pupil Labs,
Please help: some world indices are missing in the gaze_positions_on_surface export. How can I get all of them in the export?
Look at the jump from 15898 to 16035 and then to 17242.
Hi @user-7daa32 , was the participant looking away from the surface during those time windows?
Nope. Here is what I did: I kept exporting the same data over and over again and those missing samples kept coming up. The participant looked at the surface and never looked away.
Hi, I currently own, in my lab, the old version of Pupil Core which only has one eye camera. I am wondering what the advantages of getting the new version of Pupil Core would be. Is the data quality better? Thanks in advance for your advice!
Hi @user-ee70bf , when using two eye cameras on Pupil Core, you will get better gaze estimation accuracy. Also, if your Pupil Core has the original eye cameras, then the more up-to-date model has cameras that can be used to track gaze at up to 200Hz.
Having said that, if you are looking to upgrade, then we would also recommend checking out our latest deep-learning powered eyetracker, Neon. It can provide a consistent 200Hz sampling rate, is calibration-free, provides consistent gaze estimation accuracy across observers, and is also slippage resistant. You simply put it on and you are eyetracking, whilst obtaining more datastreams overall; a growing set, in fact.
Neon looks and feels like a normal pair of glasses, while being modular, fully mobile, and functional in direct sunlight. And, developing for Neon is simpler.
If you'd like to learn more, feel free to schedule a Demo and Q&A call.
@user-f43a29 @user-d407c1 , one eye camera of Pupil Core is switching on and off... what could be the reason?
How to resolve this ?
Hi all, my lab is using the Pupil Labs Core and we were looking at the gaze positions file exported from our recordings. We were hoping to understand how the gaze normal values are calibrated with the device. Does it utilize an eye model using the visual axis or the optical axis? Likewise, is it different for the gaze point and gaze normal data? Thanks!
Hi @user-46e23b , the definitions of those values are provided here. The 3D eye model is used to calculate gaze_normal0/1_x/y/z and gaze_point_3d_x/y/z.
The 3D eye model is fit during the initial stages of the Pupil Core setup process, after putting on the headset, but before gaze calibration. Once the 3D eye model is fit, it can be used to estimate the eyeball center and the eye's rotation, i.e., the visual axis, also known as gaze_normal.
The gaze_point_3d is the estimated point in 3D world camera space that the wearer is looking at and is derived from the nearest intersection point of the gaze_normals, emanating from the eyeball centers. Note that this value is not considered reliable for distances over 1m.
May I ask what your research goal is?
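If it helps to see how those exported columns relate to each other, here is a small sketch that loads a Pupil Player export and computes the per-sample angle between the two gaze normals. It assumes a binocular recording and the standard gaze_positions.csv column names; the export path is just an example.

```python
# Small sketch: load an exported gaze_positions.csv and compute the angle
# between the two gaze normals per sample. Assumes a binocular recording,
# so both gaze_normal0_* and gaze_normal1_* are present; path is an example.
import numpy as np
import pandas as pd

df = pd.read_csv("exports/000/gaze_positions.csv")

n0 = df[["gaze_normal0_x", "gaze_normal0_y", "gaze_normal0_z"]].to_numpy()
n1 = df[["gaze_normal1_x", "gaze_normal1_y", "gaze_normal1_z"]].to_numpy()

# Normalize, then take the angle between the two unit vectors.
n0 = n0 / np.linalg.norm(n0, axis=1, keepdims=True)
n1 = n1 / np.linalg.norm(n1, axis=1, keepdims=True)
angles_deg = np.degrees(np.arccos(np.clip((n0 * n1).sum(axis=1), -1.0, 1.0)))

print(angles_deg[:10])
```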
Hi, I am using Pupil Core on two different Windows devices. I notice that on one device the gaze data received from the Network API has a b'' prefix (which I think means byte string), while the other one doesn't. Is there a setting I can toggle or a flag I can set to make them align? Thanks
In addition, I receive half as many samples within the same amount of time when attempting to read using the Network API.
Hi @user-131620 , are both machines similar, in that both are running similar Windows, Pupil Capture, and Python versions?
Regarding the "halved sampling rate", I am not sure I understand. Do you mean the sampling rate is halved on one computer, but not the other computer?
I know that Pupil Mobile has been deprecated for the Core system, but I wonder what people are doing if they need that capability for this system? I have a couple of Cores that I'd like to use in a student lab for mobile eyetracking, but I'm not sure that capability still exists. Any thoughts and pointers welcome, although I don't have much in the way of funding for this project.
Hi @user-5054b6! Pupil Mobile is deprecated. For the most robust data in outdoor and dynamic environments, we recommend checking out Neon, as it's designed with this kind of use in mind.
That said, since you already have Pupil Core and are working with budget constraints, you can explore different approaches to make it more portable:
Hi all, I'm trying to calculate the distance between the calculated gaze positions and my reference targets, but I'm noticing that the normalized gaze position is on a [0,1] coordinate plane whereas the reference targets seem to follow a different coordinate plane (seen in reference_locations.msgpack). Anyone know how to reconcile this difference, or if there's a better way to calculate this distance?
Hi @user-9447a9 , when you say "reference targets", are you referring to Pupil Capture's calibration targets or custom stimuli that you display on the screen? Are you looking to calculate distance in terms of pixels on the screen surface?
Hi, I'm using the Pupil Core in my project to gather users' gaze positions and fixation data while they navigate a Unity virtual gallery. The data collection is based on a monitor rather than a head-mounted XR device. Currently, I can export the data from Pupil Player, but I still can't correlate the Pupil data with the Unity world-space coordinate system to find the exact object the user looked at, even though I used Surface Tracking by adding markers to the corners of my monitor. How can I resolve this? BTW, how should I start if I want to enable real-time Pupil data transmission using the Network API? Thank you
Hi @user-11b0a4 , you might want to reference this thread: https://discord.com/channels/285728493612957698/1248580630430875661. Let us know if that clears things up.
Hello, has anyone run Pupil Core on aarch64 devices? I cloned the pupil repo on my aarch64 device (a Jetson Nano or Raspberry Pi), and it is very difficult to build the wheels needed to install it.
Hi @user-da1455 ! We do not have native support for ARM processors. You could try to run it from source and compile the wheels, but to my knowledge there are some additional dependencies which do not support ARM processors either, so you would need to compile them too. Please have a look at https://discord.com/channels/285728493612957698/1039477832440631366
May I ask why you would like to run it on an ARM processor? If all of this sounds too complicated and you simply want a single-board computer (SBC) to run it, some users have reported that the LattePanda 3 Delta works great, and without headaches, as its architecture is x86. https://discord.com/channels/285728493612957698/285728493612957698/1123484310649966602
Thanks for your reply. About your question: all I can tell you is that we have a project that requires a board with the aarch64 architecture, and there is no way to replace it at this time, so we need to try our best to run Pupil Core. We attempted to run from source but met too many problems; well, we will try again.
okay, thank you for the quick answer, I
Forgive me for asking one stupid question: does the Pupil Service/Capture software automatically publish the camera video stream over the network so that it can be subscribed to by other programs, such as OpenCV or a website?
I use the Python real-time API but cannot find any devices.
https://github.com/pupil-labs/realtime-python-api I use this repo
Hi @user-da1455 ! The API that you linked is for Neon / Invisible. For Pupil Core, please refer to https://docs.pupil-labs.com/core/developer/network-api/ which uses ZMQ and msgpack.
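As a rough illustration of that API, receiving scene video frames in your own program could look like the sketch below. It borrows the Frame Publisher plugin name and message layout from the pupil-helpers examples and assumes Pupil Remote on localhost:50020; if the start_plugin notification does not work in your version, you can enable the Frame Publisher plugin manually in the Pupil Capture GUI instead.

```python
# Rough sketch based on the pupil-helpers frame publishing examples.
# Assumes Pupil Remote on localhost:50020; the plugin name "Frame_Publisher"
# and the message layout are taken from those examples.
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

# Ask Capture/Service to start publishing scene frames in BGR format.
notification = {"subject": "start_plugin", "name": "Frame_Publisher", "args": {"format": "bgr"}}
remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
remote.send(msgpack.dumps(notification, use_bin_type=True))
remote.recv_string()

remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("frame.world")

while True:
    parts = sub.recv_multipart()  # [topic, metadata, raw pixel data]
    meta = msgpack.loads(parts[1], raw=False)
    if meta.get("format") != "bgr" or len(parts) < 3:
        continue
    # The raw BGR pixels arrive as an extra message part.
    frame = np.frombuffer(parts[2], dtype=np.uint8).reshape(meta["height"], meta["width"], 3)
    print(meta["timestamp"], frame.shape)
```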
Sincerely, thanks for your reply, I got it. Now I have another question: I see the network_api_plugin in the source code. Does it mean that if I send this message to Pupil Service, the video streaming works, and then I am able to get that stream in my own program? All I want to do is get the video stream into my own program. Sincerely, thanks again.
Hi everyone,
I'm using a Pupil Core device on a Mac. The first time I connected the device, everything worked fine: the software detected the cameras and I could record video normally.
However, when the Pupil Capture software froze, I had to force shut down and restart my computer. After that, the software still opens, but it no longer shows any video feed from the cameras.
I've tried reconnecting the device and restarting the software, but it doesn't help. It cannot connect to the device.
Interestingly, this also happened once before, and after I left the device unplugged for about two weeks and tried again, it suddenly worked again.
Has anyone experienced something similar?
Thanks in advance!
Hi @user-918d49 , could you open a Support Ticket in the troubleshooting channel? We will follow up with you there.
[email removed] thank you for your support! Now
Hi! How do I set the Pupil Core scene camera to 120Hz mode? Can it actually achieve 120Hz? Thank you
Hi @user-20d14e ! Yes, the scene camera can achieve 120Hz, but only under specific resolution settings. You can verify and adjust these settings within Pupil Capture by clicking the video source settings (camera icon) in the World view and modifying the sensor parameters accordingly.
Please note that changing these parameters will also affect the field of view, as outlined in our documentation
Hi everyone, I'm using a Pupil Core device on Windows and I have some issues with the calibration. Could you please help me?
Hi @user-999a6c , of course. Can you describe the problem you are having?
Hi Rob! I'm trying to calibrate the Pupil Labs eye-tracking glasses, but I can't properly calibrate the surface. As a result, I can't extract accurate data or generate maps.
Hi! The Core has been great so far, but I've run into an issue with the eye cameras. The right eye camera stopped connecting, so I tried plugging it into the left eye's cable to see if it was a camera or cable issue; now neither works! No wires have pulled out of the connector or anything I can see. In Device Manager (on Windows) I see 3 "Pupil Cam" devices under "libusbK USB Devices". If I disconnect one of the eye cameras, only two devices show up there. So it seems the connection is OK, but I'm not getting any eye cameras in Pupil Capture. Any ideas?
Hi @user-b6b94e , could you open a Support Ticket in the troubleshooting channel? Thanks!
Hi there,
I'm working with Pupil Core eye trackers alongside a RealSense camera, but we're experiencing a significant drop in the RealSense frame rate after a few seconds. Adjusting the resolution or frame rate of the world camera in Pupil Capture temporarily restores the RealSense frame rate (regardless of the setting), but it quickly plummets again.
The system runs on an Intel 11th-gen i7-11800H (2.3 GHz, 16 cores). Pupil Capture reports a CPU usage of 200, yet system monitoring shows all cores at only 20%. We're using Pupil Capture version 1.21.0 due to compatibility requirements with legacy code, so upgrading isn't an option.
Do you have any suggestions for resolving this issue without upgrading Pupil Capture? I realize this is a tricky constraint, but any insights would be greatly appreciated!
Hi @user-8e1492 !
It's not particularly necessary to use v.1.21 in this case. With the release of v1.22 (https://discord.com/channels/285728493612957698/1274044100752179334/1274101479875022900), the Realsense code was moved out to an example plugin.
That should run on an up-to-date copy of Pupil Capture with minimal changes. It would also allow you to use more up-to-date Realsense code/drivers, although that might require some edits to the Realsense specific parts.
If the Pupil Capture relevant parts require some changes, you would want to reference this Documentation and of course, you can ask for clarifications about Pupil Capture's Plugin system here.
Let us know if that helps!