Hi, I've started to use Pupil Labs Core and I'm wondering if there is a way to import eye recordings back into a Pupil Player compatible format after processing the video (my first attempt does not have good pupil detection), e.g. by removing glare from glasses, sharpening the image, or increasing contrast? I found this https://gist.github.com/papr/bae0910a162edfd99d8ababaf09c643a which is great, but it creates world rather than eye videos
Hi @user-6c482a , while this is possible, it will be some work. Was this only a test recording? You can also share the recording with data@pupil-labs.com and we can give direct feedback. These issues can often be resolved without needing to use post-hoc image processing.
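If you do end up going the post-hoc image processing route, here is a minimal sketch of the image-processing side, assuming you re-encode the eye video while keeping the frame count identical so the original eye0_timestamps.npy still lines up (file names are the recording defaults):
```python
# Sketch: contrast-enhance and sharpen a Pupil Core eye video with OpenCV.
# Assumption: the frame count stays identical, so eye0_timestamps.npy remains valid.
import cv2

cap = cv2.VideoCapture("recording/eye0.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("recording/eye0_processed.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, size, False)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # local contrast boost
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    enhanced = clahe.apply(gray)
    blurred = cv2.GaussianBlur(enhanced, (0, 0), 3)
    sharpened = cv2.addWeighted(enhanced, 1.5, blurred, -0.5, 0)  # unsharp mask
    out.write(sharpened)

cap.release()
out.release()
```
You could then back up the original eye0.mp4, swap in the processed file under the original name, and re-run post-hoc pupil detection in Pupil Player.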
Hi, how can I solve this problem when I open Pupil Capture?
Hi @user-bd5142 , it seems you installed Pupil Capture to the D drive. It would be better to re-install it to the C drive.
Hi, I am using Pupil Core and it was working well until yesterday. Yesterday I opened Pupil Capture, did some tests, and it worked fine. Then I closed Pupil Capture, re-opened it, and tried to check the settings. Since then I have been getting error code 1282: world - [ERROR] root: Encountered PyOpenGL error: GLError( err = 1282, description = b'invalid operation', baseOperation = glViewport, cArguments = (0, 0, 1920, 1080) ). To resolve the issue, I deleted the pupil_capture_settings folder, uninstalled the Pupil v3.5.1 software, and deleted all of the Pupil drivers. Then I restarted the machine and reinstalled everything, but I am still getting the same error. I am using the Pupil Core on an HP laptop with an x64-based processor and Windows 11 Pro. How can I solve the issue?
Hi @user-c7fe6f , thanks for having done the testing. Just following up here, so that others can benefit from the information. We have seen similar errors for other users when Intel integrated GPUs on laptops are involved. It is better to have the Pupil Core software use the dedicated GPU, when available. On Windows, this guide has proven helpful.
Hi @user-c7fe6f , thanks for also opening a Support Ticket in 🛟 troubleshooting . We can continue communication there.
Hello!
We’re running an eye-tracking study in food sensory testing and need to define AOIs for each answer box in our survey. Manually doing this for every recording isn’t feasible due to our participant numbers.
The Automate AOI Masking tool has been tricky for us; it's hard to give a prompt that works since our AOIs aren't clear objects (the prompt with the best results was: "rectangle boxes with alphabets").
Does anyone have tips for:
- Prompts or strategies to help the automated tool detect each box as a separate AOI?
- Alternative ideas for efficiently defining AOIs across multiple recordings?
Thanks
Hi @user-1baa3e! Can you confirm which product/analysis tool you're using for this?
Hello, I have a quick question regarding the Pupil Capture world camera. I have my eye tracker connected to a Windows laptop using Pupil Capture. The eye tracker was working well for a year and, all of a sudden, last week the world camera stopped working while the individual eye0 and eye1 cameras work fine. The world camera shows as 'unknown @ Local USB' under 'video source.' When I click on it, it says: "World: The selected camera is already in use or blocked." It also shows an earlier error message saying "video_capture.uvc_backend: Could not connect to device! No images will be supplied." I've tried unplugging and plugging it back in and restarting Pupil Capture and the laptop, but the issue persists. Would you mind advising what we could do to try resolving the issue? Thank you!
Hi @user-3f4037 , as a first step, could you try restarting Pupil Capture with default settings? This button is found in the General Settings tab.
Hi there, after updating my Windows drivers, the Pupil Capture app started giving these errors. No matter where I click in the app's settings, it prints an OpenGL error. Do you know how I can fix it? Should I just roll back to an older version of the drivers?
Hi @user-0b1050 , are you on a laptop with an Intel CPU?
Hi everyone, I'm encountering a persistent issue with Pupil Core on Windows and would really appreciate your help.
Out of the blue, I started getting repeated PyOpenGL errors (GLError 1282) coming from the World camera module, which completely breaks the rendering and makes the Surface Tracker unusable. The console repeatedly shows red messages such as: “WORLD: Encountered PyOpenGLError: GLError(err = 1282, …)”. The error appears whenever the World camera is active and a Surface is defined. The scene becomes unstable and the rendering buffer stops updating properly.
Troubleshooting attempts so far: reinstalled Pupil Capture/Service/Player, manually deleted all pupil_capture_settings folders, updated all system drivers + full Windows update (mandatory + optional), verified system specifications, recreated Surfaces and Markers from scratch. Despite all of this, the issue persists exactly in the same way.
System specifications: Processor: 12th Gen Intel® Core™ i5-12600; Graphics: Integrated Intel® UHD Graphics 770; Model: Lenovo ThinkStation P360 Tower (SKU: LENOVO_MT_30FM_BU_Think_FM_ThinkStation P360 Tower).
This problem appeared suddenly, without any change to our setup, and I'm unsure how to proceed. If anyone has encountered something similar or has suggestions on what else to try, I would really appreciate your help!
Hi @user-20024e , we also received your email and will respond there, too. It seems Intel recently released an update to their GPU drivers that causes this issue on Windows.
Could you first try restarting with default settings (button found in General Settings tab)?
If that does not resolve it: we are looking into a solution, but until then, a workaround may be needed.
If you can dual boot on that machine, then you could also try a version of Linux as an interim fix, but we understand that this is not always feasible.
Hello!
I’m Jamie and I’m currently conducting research using the Pupil Labs Core to integrate eyeball-center estimation with our system.
While working through the data, I ran into a few technical questions regarding coordinate definitions and marker pose extraction. It would already help a lot if you could answer just these three key questions.
1) Which exact camera coordinate system is the 3D eyeball center defined in? (Front camera? Eye camera? And what are the axis definitions?)
2) Is it possible to obtain a 3D pose (position + rotation) of an ArUco or fiducial marker using the Front Camera? If not, can we get the Front Camera intrinsics so we can compute the pose ourselves?
3) Do you provide the spatial relationship (position/orientation) between the Front Camera and the Eye Cameras? Or is there a recommended way to calibrate this?
Thanks so much, these three answers alone will really help us proceed with our integration!!
Hi @user-ac9977 👋 ! Please find the answers to your questions below.
Eyeball-center coordinate system: The 3D eyeball center from pye3d is defined in the 3D camera space coordinate system (for pupil data, this is the respective eye camera's 3D space; gaze data are mapped into the scene camera's 3D space). You can find all axis definitions and coordinate systems here: https://docs.pupil-labs.com/core/terminology/#coordinate-system
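If you consume this in real time, here is a minimal sketch that reads the pye3d eyeball center over Pupil Capture's Network API (the default Pupil Remote port 50020 is assumed; the sphere field is only present in 3D pupil datums):
```python
# Sketch: subscribe to pupil datums and print the pye3d eyeball center,
# which is given in the respective eye camera's 3D coordinate system (mm).
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote (default port)
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("pupil.")  # pupil datums from both eye processes

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.unpackb(payload)
    sphere = datum.get("sphere")
    if sphere:  # present when the 3D (pye3d) detector is active
        print(datum["topic"], "eyeball center (mm):", sphere["center"])
```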
Camera intrinsics: When a recording is started in Pupil Capture, the application saves the active camera intrinsics to the world.intrinsics, eye0.intrinsics, and eye1.intrinsics files within the recording.
Pupil Core software provides default camera intrinsics for all official Pupil Core cameras, but if you use your own or you prefer to estimate your own intrinsics, you can use the Camera Intrinsics plugin in Pupil Capture. See more here.
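On question 2: with those intrinsics you can estimate an ArUco marker pose yourself using OpenCV. A minimal sketch, assuming the intrinsics file is msgpack-encoded with per-resolution entries containing camera_matrix and dist_coefs (inspect your own file to confirm the key names; the resolution key, marker size, and file paths below are placeholders):
```python
# Sketch: 6-DoF ArUco marker pose from a Pupil Core scene camera frame.
# Requires OpenCV >= 4.7 for the ArucoDetector API.
import msgpack
import numpy as np
import cv2

with open("recording/world.intrinsics", "rb") as f:
    intrinsics = msgpack.unpack(f)

entry = intrinsics["(1280, 720)"]  # assumed key naming; inspect your file
K = np.array(entry["camera_matrix"])
dist = np.array(entry["dist_coefs"])

frame = cv2.imread("scene_frame.png")  # hypothetical exported scene frame
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.ArucoDetector(aruco_dict).detectMarkers(frame)

marker_len = 0.05  # marker side length in meters; yours may differ
half = marker_len / 2
obj_pts = np.array([[-half, half, 0], [half, half, 0],
                    [half, -half, 0], [-half, -half, 0]], dtype=np.float32)

if ids is not None:
    img_pts = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    print("rotation (Rodrigues):", rvec.ravel(), "translation (m):", tvec.ravel())
```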
Camera extrinsics: We do not ship a fixed spatial relationship between the scene and eye cameras. Instead, we use a bundle adjustment to obtain the relation of the cameras to each other, and use that to linearly map pupil vectors from pye3d into scene camera coordinates.
If you want to use fiducial markers (AprilTags) to find the position of the head/eye tracker in world coordinates, you might be interested in our Head Pose Tracking plugin.
Hello,
We are working with Pupil Core to store eye-tracking data from computer users. We connect to the Pupil glasses to extract data and write it directly to a Mongo database, which we then post-process in Python (so Pupil Player is not used for data export). We have several questions regarding different aspects of the workflow:
As far as I understand, the optimal angular accuracy range is 1.5–2.5°. Since accuracy corresponds to the angular offset, would values lower than 1.5° also be acceptable, as they would represent a smaller offset? Do these values refer to calibration or validation accuracy?
When tracking eye data of users working with a computer monitor: if calibration is performed at a specific viewing distance, we assume that this distance must be maintained throughout the recording in order to obtain accurate gaze data. Is this assumption correct?
When exporting blink data, we see “confidence” described as the mean of the absolute filter response during the blink, clamped at 1. What is the relationship between this measurement and the Offset Confidence Threshold? We have noticed that when the threshold is lowered to 0, confidence values start close to 0, whereas with the default threshold of 0.5 they start around 0.5. Which approach is more appropriate: reducing the threshold to 0 and filtering during post-processing, or maintaining a higher threshold? Should these offset values be tuned per user and lighting condition? We have observed that with a 0.5 threshold, several blinks are classified as long blinks.
When the pupil is not detected (for example, when a user looks down while typing on a keyboard), what pupil diameter value is recorded? Are these values random erroneous detections with low confidence?
Thank you very much in advance. We know there are quite a few questions, and we really appreciate your time and help :).
Hi @user-d03324 👋 ! Using Pupil Player to convert and export is our recommended workflow, but if your Python post-processing works well, that’s absolutely fine. Let’s go through your questions:
1) Accuracy values: These values reflect the expected angular offset after calibration when using the 3D gaze mapper. With the 2D pipeline you can often get sub-degree offsets under controlled conditions.
Here, lower is better. Anything below ~1.5° is perfectly fine; that's essentially better than expected. The number you're seeing typically comes from the validation step after calibration (when you press T to validate).
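For intuition, the accuracy value is just the mean angle between measured gaze directions and target directions. A quick sketch of how one such offset is computed (treating both as direction vectors in scene camera coordinates is an assumption for the example):
```python
# Sketch: angular offset in degrees between a gaze direction and a target direction.
import numpy as np

def angular_error_deg(gaze, target):
    gaze, target = np.asarray(gaze, float), np.asarray(target, float)
    cos = np.dot(gaze, target) / (np.linalg.norm(gaze) * np.linalg.norm(target))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(angular_error_deg([0, 0, 1], [0.03, 0, 1]))  # ~1.7 degrees of offset
```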
2) Viewing distance when using a monitor
Small natural movements toward or away from the screen are okay. Larger changes can introduce slippage (headset moving on the face). The 3D model compensates up to a point, but beyond that, if the relationship of cameras and eyes changes too much, a re-calibration would be needed.
When calibrating, try to present the targets across the same field of view you plan to analyse. If the accuracy visualiser is enabled, you’ll see a rectangle displaying the region of the scene camera used for calibration.
If calibration only covers a small central area, accuracy may decrease in the periphery.
Now, onto the other questions.
3) Blinks: Pupil Core actually has two different blink detectors, an online detector used by Pupil Capture and an offline one applied by Pupil Player. You can see both implementations here.
Both of them operate on the 2D pupil confidence, which is a measure of how sure the algorithm is about the pupil detection.
In simpler terms, it compares the average eye visibility from a split second ago (history_length) against the visibility right now.
The onset and offset thresholds determine how much the confidence has to fall for a blink to start, and how much it has to recover for the blink to be considered over.
I'd recommend having a look at our best practices for choosing the blink thresholds.
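For intuition only, here is a hedged, conceptual sketch of that filter idea; it mirrors the onset/offset logic described above but is not the exact Pupil implementation (see the linked source for the real code):
```python
# Conceptual sketch: blink onset/offset from 2D pupil confidence. The filter
# response is roughly (mean confidence a moment ago) - (mean confidence now):
# it spikes positive when confidence collapses and negative when it recovers.
import numpy as np

def blink_events(confidence, onset=0.5, offset=0.5, window=10):
    """Yield (start, end) sample indices of detected blinks."""
    c = np.asarray(confidence, dtype=float)
    # Convolution reverses the kernel, so the -1 half weights the newest
    # samples and the +1 half weights the older ones.
    kernel = np.concatenate([-np.ones(window), np.ones(window)]) / window
    response = np.convolve(c, kernel)[: len(c)]  # causal response per sample
    in_blink, start = False, None
    for i, r in enumerate(response):
        if not in_blink and r > onset:    # sharp confidence drop -> blink starts
            in_blink, start = True, i
        elif in_blink and r < -offset:    # confidence recovered -> blink ends
            yield start, i
            in_blink = False

# e.g. list(blink_events([1.0] * 30 + [0.1] * 15 + [1.0] * 30)) -> one blink
```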
4) Pupil diameter: If the pupil is not detected (e.g., the user looks down at the keyboard), the diameter reported is usually a spurious ellipse fit with very low confidence. These values should be filtered out based on the confidence column. When confidence drops, treat those samples as missing data. https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv
In Pupil Player, there is a setting that automatically removes data whose confidence is below 0.6 (the default), and you can tune it depending on your research paradigm. Since you bypass Player, you would be in charge of removing that data yourself.
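For reference, a minimal filtering sketch, assuming the documented pupil_positions.csv schema from a Player export (the same idea applies to the datums you write to Mongo):
```python
# Sketch: drop low-confidence pupil samples, mirroring Player's default 0.6 gate.
import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")
CONF_MIN = 0.6  # tune per participant / lighting condition
valid = df[df["confidence"] >= CONF_MIN]

print(f"kept {len(valid)}/{len(df)} samples")
# diameter_3d is the pye3d physiological pupil size in mm; treat the rest as missing
print("mean pupil diameter (mm):", valid["diameter_3d"].mean())
```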
Hello people! My apologies for the off-topic question; I did not find a channel for general questions. Could you please tell me, is there a DIY kit I could buy and build to track objects in real time in an open area? For example, I'd like glasses or a module that tracks the gaze point on a fast-moving object, like a ball.
Hi @user-919fc7 , we are in the process of transferring our DIY Pupil Core setup to a new provider. In the meantime, may I ask if you need to track gaze on the ball in real time?
Hi, I'm using the Core system to analyze people's gaze behavior when they read text on a monitor. I'm wondering if I can use just one surface to track the area of interest when the area of interest changes at different timestamps.
Hi @user-3a2026 , do you mean the Area of Interest will be moving around on the screen?
@user-f43a29 Understood. But I need the highest accuracy for the lowest cost possible and Neon is pretty expensive.
Alright, we will post here when the Pupil Core DIY kit is available again.
Well, sorry. I meant gaze tracking on an object. No need to identify the object or anything like that. I just need a dot on the object (gaze).
Hi @user-919fc7 , as mentioned, eye tracking accuracy is not evaluated in this manner. I recommend having a look through our blog post on the basics of eye tracking.
Pupil Core and Neon can be used for this task, but whether the gaze point lands on the object also depends on the person's capacity to fixate on and track the object at those speeds. You may want to check previous eye-tracking research on baseball or similar sports to get an idea of what people are capable of in those contexts.
In other words, with eyetrackers, you track the person's gaze and simply see if they look at the object.
Thank you. I'll take a look.
Can your devices be used commercially in the future under some kind of contract? For example, if your DIY Core device worked in tandem with other systems as a product?
Hi @user-919fc7 , with respect to the DIY Pupil Core, as stated on the page, it is for non-commercial purposes only.
Hi! First of all, thank you for your previous response! I have another question about the Pupil Core cameras. The left eye camera seems blurrier than the other one, showing less definition even though the exposure time, camera parameters, and lighting conditions are the same. Why could this be happening? Is it possible to clean the camera, if that is the problem? I have a video that shows the issue; could I send it to you somehow? Thanks in advance 🙂
Hi @user-d03324 ! You are welcome! Pupil Core's eye cameras are fixed in place with glue to set the focal length, but the focal length may have shifted if the glue has loosened. Can you place a piece of grid paper at eye distance from the camera and check whether the image appears blurred? You can send that recording to data@pupil-labs.com
Hello! I am comparing pupillometry devices. My main interest is pupil diameter. I am drawn to Pupil Core because it is a headset and my task involves movement. Looking at the Pupil Core specifications, I see the accuracy is given in degrees. I understand this is for gaze, which I am not interested in. Does anyone know where to find the accuracy of pye3d, preferably in units of mm? Or else, let me know why it is not easy to find.
Hi @user-44f055 👋 ! Good question, you’re right that the accuracy values reported in degrees refer to gaze, not pupil size.
For pupillometry in mm, there isn’t a single accuracy number (e.g. in mm) we can reliably report for Pupil Core. The main reason is that there’s no accepted gold standard for measuring physiological pupil diameter over time in unconstrained settings, which makes absolute accuracy hard to define and validate.
A few clarifications on how pupil size is handled in Pupil Core:
The pupil is first detected in image space (dark-pupil detection in pixels).
A 3D eye model is then built using pye3d (a bundle-adjustment-style optimization) to estimate eye parameters and project the pupil, accounting for corneal refraction and gaze angle, so you get a physiological pupil size in mm rather than an apparent pupil size.
That model is designed to adapt to headset slippage over time. For strict pupillometry, this adaptability can actually be a downside, so you’d typically want to freeze the model to keep the eyeball geometry consistent. See here.
Instead of a single accuracy metric, we provide a confidence value for pupil detection, which is usually the more meaningful quality indicator. https://docs.pupil-labs.com/core/terminology/#confidence
If your primary goal is pupil diameter in millimetres, especially during movement, you may want to also consider Neon. It’s easier to operate for pupillometry, supports long recordings (up to ~4 hours), and provides robust pupil-size measurements both indoors and outdoors. We also have a white paper showing how Neon can be used to replicate traditional pupillometry paradigms in case it is useful.
Hi, I'm a Chinese student trying to use Pupil Core for attention and behavior analysis. Sorry that my English is poor. When I start Pupil Capture with my Pupil Core connected, the left eye camera (eye1) sometimes can't connect. Sometimes restarting Pupil Capture fixes it, but now I can't connect to the left eye camera even after restarting many times. Have I run into a hardware issue?
Hi @user-7c9706 , nothing to apologize for.
Could you also try restarting Pupil Capture with default settings (button found in the General Settings tab) and then swapping the eye cameras from one side of the frame to the other? This will help determine whether the headset or the eye camera is the issue.
I checked the JST connectors and made sure they're fine, and then I connected the Pupil Core to my phone using Pupil Mobile (I know it has been deprecated), and it still can't connect to eye1.
My Windows 11 Device Manager shows that the left eye camera is not connected to my PC.
Thanks @user-f43a29
I swapped the two eye cameras using the JST connectors, but the camera on the left side still cannot be recognized regardless of which camera I use. This suggests that the left eye camera itself is not the issue; it may be a connection problem.
Hi @user-7c9706 , thanks for doing this test. This implies a hardware failure in the frame. Could you please send an email to [email removed] referencing this discussion? Then, a member of the Operations team will coordinate a return for inspection & repair. Please include the original Order ID in the email. Thanks.
This is a picture of the JST connector on the problematic side.
Hello, can you help me please. I am interested in eye-tracking research and found a pair of used glasses on eBay that might potentially fit my budget. Unfortunately, as a student, my budget is strictly limited to 1,000 Euros.
Could you please help me identify this model? The seller is not very knowledgeable about the technology and cannot provide details, and I was unable to find this specific form factor online.
Could you please tell me the model name, its year of release, and how it connects? Would you recommend this device for purchase today given its age?
You can reply here or to my email at [email removed] — whichever is more convenient for you.
Thank you in advance for your help.
Hi @user-b124b3 , while we cannot provide guarantees about devices sold second-hand, that looks like a user-made DIY Pupil Core, not the standard Pupil Core that we provide.
It looks very similar to the Pupil Labs Core in terms of the eye-tracking modules, but the frame appears to be different, so I am a bit unsure exactly what model this is.
It also seems like it does not include a world camera? That is used for calibration and recording what the person is looking at.
Thank you for your reply!
That is interesting — I initially thought this was just an older discontinued model, but it turns out to be a custom unit.
So, if I understand correctly, this is a wired version that requires a cable connection. Does it require the original proprietary cable, or would any high-quality USB-C cable work?
Also, I have a question about the camera setup (sorry for the basic questions). I found a photo where there seems to be a sensor attached to a plastic mount — isn't that the camera module?
@user-b124b3 The standard Pupil Core is also tethered (i.e., wired) to a computer/laptop.
If you are thinking of our fully mobile eyetrackers, then those would be Pupil Invisible and our latest, Neon.
The original cable is not proprietary. It is a USB-C cable. A high-quality cable is preferred when possible.
From that photo, I cannot determine if that is a type of camera or rather an attachment point for some other kind of sensor. If you intend to also use a world camera, then the Pupil Capture software requires at least a UVC camera, such as those listed in the DIY documentation.
It could be that the original builder made this for the Pupil Mobile app for Pupil Core, but that app is deprecated.
I have decided not to purchase that device, as I now have concerns regarding its reliability.
Could you please clarify something regarding wired connections? What are the most common solutions for using these glasses in mobile scenarios, such as walking through an art gallery? Are there specific small mobile recording units or other portable devices that are recommended for these conditions?
Thank you for your answers and for your help!
Hi @user-b124b3! The recommended system for mobile studies like walking through an art gallery is Neon. Neon tethers to a smartphone for operation which means the wearer can move freely in their environment. Neon's measurements are also robust to dynamic conditions. Core-based systems, on the other hand, do need to tether to a computer. That said, it is possible to use a small form-factor computer and a backpack. It's not an ideal condition, but can work. There are also more technical options which are overviewed in this message: https://discord.com/channels/285728493612957698/285728493612957698/1353291275834757202