Hello, my name is Adrián and I am working on a project with the Pupil glasses. I am developing a program using the Pupil Capture API to get samples and recordings from the eyes. I discovered that in order to use the functions of the Pupil API, it is necessary to do a calibration with the Pupil Capture app. I would like to know if it is possible to save that calibration so the glasses can be used several times without recalibrating each time.
Hi @user-85778d! I assume you are using Pupil Core, so I moved your question to this channel. The features-requests channel is reserved, as the name suggests, for feature requests.
When using Pupil Capture, the calibration is saved and automatically loaded when you reopen the app, so you'll still see the gaze preview. In your own app you could do the same; check out the code (it's open source).
However, that calibration is likely inaccurate if the glasses were removed or if there was slippage, so I would not recommend reusing it without recalibrating.
If calibrating sounds tedious to you, you may want to check out Neon, our latest eye tracker, which is calibration-free, robust to slippage, and easier to use.
Hello Miguel, thank you for your fast response. I understand, but is it necessary to have the Pupil Capture app installed and open in order to use the API?
For the Network API? Yes! Either Pupil Capture or Pupil Service needs to be open. See https://discord.com/channels/285728493612957698/446977689690177536/593405267442663434
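If you just want to verify the connection from Python, a minimal sketch like this (assuming the default Pupil Remote port 50020) should print the app's version once Capture or Service is running:

```python
import zmq

# Minimal connectivity check against Pupil Remote, which listens on
# port 50020 by default when Pupil Capture or Pupil Service is running.
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

pupil_remote.send_string("v")  # 'v' requests the running app's version
print("Connected, version:", pupil_remote.recv_string())
```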
Ok Miguel, thank you very much.
Hello, I'm designing a plugin for Pupil Capture to capture footage from five cameras. pyuvc seems able to capture video streams from unofficial cameras. However, after installing the cameras' USB driver as libusbK, I can only use the IPC to retrieve video from the Pupil Core's own cameras; I cannot retrieve the unofficial cameras' video streams via pyuvc, even though they are already using the libusbK driver. On the other hand, I can access the unofficial cameras normally in Capture. Does this indicate a problem with my backend? I've provided the connection method between the cameras and the computer, as well as the libusbK device names shown in Device Manager. Many thanks!
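For reference, this is roughly the sanity check I'm running with pyuvc (simplified; the first-device grab is just for illustration):

```python
import uvc  # pyuvc

# List every UVC device pyuvc can see; if the unofficial cameras are
# missing here, the libusbK driver binding is the likely culprit.
devices = uvc.device_list()
for device in devices:
    print(device["name"], device["uid"])

# Try to grab one frame from the first device as a sanity check.
cap = uvc.Capture(devices[0]["uid"])
cap.frame_mode = cap.available_modes[0]
frame = cap.get_frame_robust()
print("Got frame:", frame.img.shape)
cap.close()
```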
Hello, I'm designing a system that uses Pupil Capture together with other sensors, and I'd like to know if it's feasible to build a portable setup using the Pupil Core glasses.
Do the Pupil applications (Capture or Service) have a resource consumption low enough to make such a portable design practical, or is it too demanding?
Is there any documentation or benchmarks available about the resource usage of the apps and the glasses?
Thank you!
Hi @user-85778d , you could connect to a laptop in a backpack or try a LattePanda (note that a Raspberry Pi is not compatible).
While Pupil Service does use less resources, it requires that you program your own calibration routine. Also, as documented here, it does not record the world camera and is intended more for VR/AR scenarios.
To lower the resources needed by Pupil Capture, you can turn off any unneeded plugins. Some plugins can also be run later in Pupil Player, such as the Surface Tracking plugin.
We don't have hard numbers on resource usage, but so long as your machine meets the standard requirements (check the Computing Device section), you should be good to go. The computer in a backpack might just need proper ventilation.
Hi there, I am building a DIY version of Pupil Core and would like to know: is there a channel or forum where people share information and their experience with similar projects? Please let me know. Many thanks!
Hi @user-025238! You can share your experiences here in the core channel, or if you want to show off your setup/experiment, you can also do so in the show-and-tell channel.
Thanks, Miguel. I will share more with a few photos and the parts I used later. I had some initial "successes", but found myself stuck at the calibration stage... chances are I need to spend more time reading the instructions from you guys... but it would still be nice to get in touch with others who made the DIY version work for them, and learn more from their experience... Thanks again.
Hi, we are working on a project with the Pupil Core but we are stuck at calibration with the screen marker. Can anybody give us tips to get to the next stage, which is data collection? Thank you beforehand!
Hi @user-902055, could you make a Pupil Capture recording of how you are doing the calibration and share that with [email removed]? Then, we can provide better feedback.
Thank you very much, we'll send it right away.
Hi @user-f43a29, my name is Jason Chen. I am experiencing an issue with the right-hand side camera of the Pupil Core, which is unable to accurately detect the pupil. Please refer to the screen capture attached. Both the left and right cameras have the same settings, and despite my manual adjustments (indicated by the red circle with a point) to ensure the pupil is within the green circle, the confidence level still displays as 0.
I have tried resetting the Pupil Capture software and reconnecting the eye tracker, but I'm still encountering the same issue. I'm not sure if the problem lies with incorrect settings in the software or if there is an issue with the right-hand side camera sensor. Could you please help me identify the possible problems I may be missing and guide me on how to reset everything once again? Thank you for your assistance.
I have tried swapping both cameras and reconnecting the eye tracker. The original left-hand side camera was connected to the right I/O, but the same problems occurred. I am unsure where the issue with the right I/O lies.
Some participants in previous experiments reported a slight electric shock sensation near their right ear. Could this be caused by an issue with the cable at the connection port?
The right connection port
Hi @user-1a14a8! Thanks for sending the screenshot. I see that eye 0 is reporting a confidence of 0.57. Just to confirm, the ID (0 or 1) refers to which eye is detected, and it's separate from the confidence level.
That said, you'd still want to aim for higher confidence. Since pupil detection confidence is usually unrelated to hardware connection problems, I recommend a quick step: Please try restarting Pupil Capture with default settings and making a short test recording. Feel free to share the full recording directory with data@pupil-labs.com for concrete feedback.
Regarding the shock sensation, that sounds like it might be an electrostatic discharge (ESD). If you are in a lab setting where this is common, a quick search for "ESD prevention in lab settings" should provide useful steps you can take to prevent it.
Hello @user-f43a29! In the driving scene built in Unity, I am delineating dynamic areas of interest for pedestrians crossing the road. I previously attempted this but still couldn't make the areas of interest follow the pedestrians. Do I need to attach markers around the computer monitor or on the pedestrians themselves? If markers need to be attached to the pedestrians, one issue is that when the vehicle is far from the pedestrians, the pedestrians appear very small and the QR codes attached to them cannot be recognized. Can Pupil Player alone achieve the function of analyzing dynamic areas of interest?
Hello! I'm curious whether the two eye-tracking cameras in Pupil Core are the same model. If so, how do you achieve distinct camera label recognition in Pupil Capture & Service? I'm currently trying to run Pupil Capture with two identical eye-tracking cameras, but eye0 and eye1 often compete for access to the same camera. Many thanks!
Hi, it seems like the world camera cannot be rotated (i.e., it is stuck), so I cannot focus it as needed. This is probably due to a mechanical restriction. What is the recommended guideline for this kind of scenario?
Hi @user-d9be4a , while Pupil Player can track AprilTags as they move around, it will not be able to track them if they are too small. In other words, if you cannot see the AprilTags, then the algorithm will generally not be able to see them either.
Pupil Player was not explicitly designed with dynamic areas of interest in mind.
A solution in your case could be to put the AprilTags up on the corners of the screen. Then, you would have gaze in screen-mapped coordinates. You can combine that with a segmentation/tracking computer vision algorithm that could track the pedestrians and tell you when gaze is on the pedestrian.
If your system can provide you with the exact projected pixel coordinates of the pedestrian on the simulator screen, then you would not need a segmentation/tracking network and the process is easier.
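For illustration, the final containment check could be as simple as the sketch below (the function name, bounding-box format, and screen size are just placeholders; adapt them to your export):

```python
def gaze_on_pedestrian(x_norm, y_norm, bbox_px, screen_size=(1920, 1080)):
    """Return True if a surface-mapped gaze point lies inside a
    pedestrian bounding box.

    x_norm, y_norm: gaze in normalized surface coordinates
                    (origin bottom-left, as in the surface exports)
    bbox_px: (left, top, right, bottom) in pixels, origin top-left
    screen_size: simulator screen resolution in pixels
    """
    width, height = screen_size
    # Flip the y-axis to match the top-left pixel origin of the bbox.
    x_px = x_norm * width
    y_px = (1.0 - y_norm) * height
    left, top, right, bottom = bbox_px
    return left <= x_px <= right and top <= y_px <= bottom


# Example: gaze at the middle of the screen, pedestrian box on the right
print(gaze_on_pedestrian(0.5, 0.5, (1400, 400, 1550, 800)))  # False
```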
Ok, thank you very much.
Hi @user-b02f36 , even if two cameras are the same model, they should present themselves as two separate devices to the computer. Are these the custom cameras from your post here (https://discord.com/channels/285728493612957698/285728493612957698/1435544217689788546)?
Yes, @user-f43a29. As shown in this figure, these four cameras are presented as different devices in Pupil Capture. However, they don't display distinct labels in my libusbK device list, causing frequent issues when using these cameras in Pupil Capture, where multiple UI elements compete for a single camera process. Is it possible to resolve this by changing the camera labels? Many thanks!
Hi @user-f65018 , is it possible that the world camera lens is already fully tightened (i.e., "fully focused") in that direction?
Hi @user-f43a29, we attempted to "lightly" rotate the lens in both directions, yet the lens does not rotate in either direction.
I am looking to integrate PsychoPy with the Core glasses, and all the links for information are invalid or no longer found. An error pops up that the routine is looking for the Neon glasses, but I have no way of selecting Core instead of Neon. Are they no longer compatible?
Hi @user-bbffe9, PsychoPy has been re-designing their documentation.
The Pupil Core is still compatible with the latest PsychoPy and latest Pupil Labs plugin. Do you see a choice called just "Pupil Labs" in the Experiment Settings > Eyetracking tab?
Thanks @user-f43a29. I have this tab available in the settings and can select "Pupil Labs" there. I am able to add eye tracking as a component to routines. However, when running the routine, it looks for settings for the Neon. I've checked the installed Python libraries (pip show) and all the necessary Pupil Labs libraries are present. It is looking for Neon glasses, but there is nowhere in PsychoPy where I can select that I have Core glasses. Any documentation (even if old) would be helpful.
Hi @user-4c21e5, thanks for your kind response. I have reset the Pupil Capture software. One critical problem is that when I look at the screen, the right-hand side sensor cannot detect the pupil and displays a red point in the detection view. Please take a look at the video for more details. The same actions on the left-hand side sensor successfully detect and track the pupil. Please help figure out the problem.
Thanks once again for your help.
Hi @user-4c21e5, the video link has been provided for you. Please check it out in the personal chatroom. Thank you once again for your kind help.
Thanks for sending that. It looks like one of the 2d detector settings has been tuned incorrectly. Please restart the Capture software with 'default settings'. The button you need is in the main settings of the Capture window (cog icon)
Thanks for your response. I have restarted the Capture software with "default settings" and reopened and run it, but the same problem still occurs. (The right-hand side sensor cannot detect the pupil and displays a red point in the detection view.)
Can you please make a test recording and share the full recording directory? Not just a screen capture. Thanks!
Hi @user-4c21e5, the video link has been provided for you. Please check it out in the personal chatroom. Thank you once again for your kind help.
I see. Are you trying to use 4 Pupil Core eye cameras at once within Pupil Capture?
Yes, Rob. I wish to use 2 Pupil Core eye cameras for pupil detection and the others for video recording.
I see. Please note that Pupil Capture was not designed with this use case in mind.
You will probably need to change the Pupil Capture source code to enable that. I would start with this section of the code for the detection logic.
So Rob, would redesigning and modifying the source code to create a suitable Pupil Capture solution be a good approach for me?
I cannot say for sure, as I do not know your requirements, but you could also try making a Pupil Capture plugin.
Perhaps adding a new video backend, as was done for RealSense cameras, could be enough for your purposes.
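As a very rough sketch of the plugin route (this is scaffolding only, not a working video backend; the class name is hypothetical, and a real source would build on the existing backend classes):

```python
from plugin import Plugin  # available when running Capture from source


class Extra_Cameras_Sketch(Plugin):
    """Hypothetical plugin that would manage additional cameras."""

    uniqueness = "by_class"

    def __init__(self, g_pool):
        super().__init__(g_pool)
        # Open your extra cameras here, e.g. with pyuvc.

    def recent_events(self, events):
        # Called on every world frame: grab frames from the extra
        # cameras here and record or publish them as needed.
        pass
```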
Following your suggestion, Rob, maybe it is possible to achieve my requirements by writing a plugin for Pupil Capture that adds video backends for the other cameras?
Hi @user-1a14a8. I'll need the full recording to provide concrete feedback, not just a screen capture. Full details below to avoid further ambiguity:
1. Ensure Pupil Capture is closed
2. Search on your machine for pupil_capture_settings. Once you've found it, rename the directory to pupil_capture_settings_backup
3. Open Pupil Capture and start a recording
4. Set up the headset as usual, adjusting the eye cameras and making sure the pupils are centred in the eye camera field of view
5. Slowly move the eyes around (e.g. by rolling the eyes) or look around
6. Stop the recording
7. Make a zipped copy of the entire recording folder, e.g. 001.zip and share that
Hi @user-4c21e5, please check it out in the personal chatroom. Thank you once again for your kind help, and I apologize for the inconvenience caused by the problem-solving process.
Hi @user-b02f36! In principle, it's possible. But it's uncharted territory, so you may just have to try it!
Hi Neil, sincere thanks to you and Rob for your assistance! Actually, I just want to use the IPC Backbone to obtain video and corresponding timestamps from the eye cameras, and also capture video from multiple other cameras using video backends I design myself. Are these two backends compatible in principle?
Thanks for sharing the recording. For the majority of the recording, you are getting excellent pupil detection, as seen in the eye image overlay. If it helps, the debug view you're currently using to examine pupil detection is probably not so helpful. Instead, you should look at the default eye images - a robust red ellipse overlaying the pupil and a blue circle surrounding the modelled eyeball are the goals.
For the instances when eye0 confidence drops, I think this is because the image is over-exposed. You'll probably want to adjust the exposure level:
- Change the eye camera exposure settings to optimise the contrast between the pupil and the other regions of the eye
- It can also be helpful to set a Region of Interest to only include the eye region, excluding the dark corners of the image. Note that it is important not to set it too small (watch to the end)!
I hope this helps!
Hi @user-4c21e5, once I manually adjusted the eye camera exposure value, the pupil could be successfully detected. Thank you once again for your kind help.
Hi @user-b02f36, you are welcome. Although I have not tried it, I think it should work. It is certainly worth a try!
Hi, Rob. I have already implemented my features through the Plugin API! I'm now trying to create more visual stimuli based on PsychoPy and design a related plugin by running Capture from source. Thank you sincerely for your kind assistance!
Hi @user-bbffe9 , when you select "Pupil Labs", as you have done there, it searches for Pupil Core. Can you show what you mean by "it is looking for settings in the Neon"? Could you share the full output of the PsychoPy console when you encounter the issue? Thanks.
Hi @user-f43a29, here is the output after the re-install. It's been idle for 10 minutes.
Hi @user-f43a29, it's 2025.1.1. Even though I had downloaded the plugin, the traceback error was saying the module was not found. From reading more, it sounds like the folder should be in the \AppData\Roaming\psychopy3\package subfolder, so I copied the plugin package from GitHub to that folder. I am now able to select Pupil Labs Core (iohub) from the experiment settings. The module error no longer pops up, but now the routine just starts and does nothing; no script printout. I have to force-quit the program.
Thanks. Would you be able to share the full PsychoPy log that is shown in the console, even though not much is printed? Please do so after re-installing the PsychoPy plugin the standard way and restarting PsychoPy anew, to be sure the logs are properly cleared beforehand.
Make sure to also delete that psychopy3 folder in AppData\Roaming before re-installing the standard way, as this makes sure that PsychoPy starts with a fresh cache. It's also important to restart PsychoPy after deleting that folder and before trying to install the plugin.
Also, please note that it is not standard practice to install the package that way. Is it rather that it is not found at all in PsychoPy's Package Manager for you?
Which version exactly? 2025.1.1 or 2025.2.1 (Beta)?
2025.1.1
Hi all, does anyone know how to process "3d_eye_states.csv" to display it like in the Cloud?
Hi, @user-af95e6! Could you please elaborate more on the specific elements you're trying to display and the intended structure? This will help me give you more meaningful feedback.
Hi Neil, I want to display eyelid aperture and pupil diameter. How do you calculate these values?
Hi @user-af95e6! Thanks for following up. Your question, "How do you calculate these values?", could refer to a few different stages in the data workflow. To ensure I give you the correct, helpful answer, could you please clarify which of the following you are asking about?
1. Data availability: Are you reporting that these values are not present within the .csv or data file you downloaded?
2. Calculation methodology: Are you asking about the algorithmic details (formulas, definitions, and assumptions) we use to derive these values from the raw data?
3. Plotting/display: Do you need help plotting/visualising the existing data in your analysis software (e.g., Python, MATLAB, Excel)?
Once you clarify which stage you need assistance with, we can send over the definitive details!
Hi Neil, thanks for your reply. I want to know about 2 (calculation methodology) and 3 (plotting these data from the raw data with Python).
Hello there, I am reaching out to you regarding the Recalculate gaze distributions function and the mandatory calibration step. I am using the Surface Tracker plugin, but there is no Recalculate gaze distributions button in its parameters in Pupil Capture or Pupil Player ver. 3.5.1. In addition, the screen calibration marker in the bottom right corner of a TV monitor (or a laptop screen) can only be detected by turning the head to the right. Participants in my study sat around 2 meters in front of the TV monitor. Could you please help me fix these issues? Looking forward to your enlightenment.
Hi @user-4d0d35 , in newer versions of Pupil Player, there is no Recalculate gaze distributions function for the heatmaps produced by the Surface Tracking plugin.
To better understand your second question, are you open to sharing the recording data with [email removed]? Then, we can take a closer look.
Hi @user-f43a29 , thank you for your swift reply! I've just sent an email to the address you suggested.
Hi everyone, my lab is having issues with one of our eye-trackers. The world view is functioning fine; however, the eye view constantly disconnects and reconnects. Eventually, the eye view stays disconnected and does not come back. Sometimes when this happens, the eye panel closes, and we cannot reopen the panel (the "detect eye 0" button stays blue). We can swap to the eye 1 panel, but eventually that freezes and closes too. Stopping the recording after this happens tends to make us lose one or both views in the saved files. Starting a few days ago, sometimes the disconnection also results in the computer blue-screening, which also loses views.
We have another eye-tracker which works perfectly fine, no issues at all.
We are using Windows 11. Any help is much appreciated!
This is what appears in the .exe: "eye0 - [WARNING] video_capture.uvc_backend: Camera disconnected. Reconnecting..."
*The world camera also feels hot to the touch?
We have tried the following with no success:
- Rebooting the computer
- Unplugging and plugging the eye tracker back in
- Closing and re-opening the eye panel
- Trying different computers
- Reinstalling Pupil Capture
- Uninstalling all drivers
- Resetting Pupil Capture to default settings
- Using different USB ports and cables
- Holding the eye tracker and cable at different angles and heights
- Running Pupil Capture as administrator
Hi @user-e1b2da, sorry to hear your eye cameras seem to be disconnecting. Thank you for sharing the debugging steps you've already tried; that helps a lot.
Please try the following:
1. Close Pupil Capture.
2. Disconnect the eye cameras. Each camera is attached with a JST connector. Disconnect these and check that all the pins are straight.
3. Reconnect the cameras.
4. Rename the "pupil_capture_settings" directory on your computer to "pupil_capture_settings_backup".
5. Restart Pupil Capture and connect the eye tracker.
Does this make any difference? Let me know how it goes.
No difference, unfortunately. The pins appear straight to me.
Hi @user-4c21e5, could you help me with this? Thanks
Thanks for confirming! The calculation methods for these data streams are proprietary. However, you can read more about the pupillometry in this whitepaper. To plot the data, you might look at something like Matplotlib. Are you familiar at all with Python?
Yes, I'm familiar with Python. So there is no way I can plot or analyze the pupil diameter or eyelid aperture data? I can only see them in the Cloud?
Yes, you can easily download the relevant data to .csv files and then plot them yourself. This is the file you'll want: https://docs.pupil-labs.com/neon/data-collection/data-format/#_3d-eye-states-csv
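Once you have that file, a minimal Python sketch along these lines should get you started (column names follow the documented format, but double-check them against your own export):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the export downloaded from Cloud.
df = pd.read_csv("3d_eye_states.csv")

# Nanosecond timestamps -> seconds from recording start.
t = (df["timestamp [ns]"] - df["timestamp [ns]"].iloc[0]) / 1e9

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(t, df["pupil diameter left [mm]"], label="left")
ax1.plot(t, df["pupil diameter right [mm]"], label="right")
ax1.set_ylabel("Pupil diameter [mm]")
ax1.legend()

ax2.plot(t, df["eyelid aperture left [mm]"], label="left")
ax2.plot(t, df["eyelid aperture right [mm]"], label="right")
ax2.set_ylabel("Eyelid aperture [mm]")
ax2.set_xlabel("Time [s]")
ax2.legend()
plt.show()
```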
Thanks for testing. We would then need to receive the system back at our office for inspection/repair. Could you send an email to info@pupil-labs.com mentioning our conversation here? Someone from the operations team will look after you. It would be great if you could include your original order ID.
Thanks, I will reach out to the appropriate team members to get the original order ID and further information, and then reach out to that email!
Hi everyone,
I am working on a research project and trying to receive real-time gaze data from Pupil Core using Python on Windows 11.
My Goal: I want to get gaze coordinates (x, y) in real-time in my Python script to provide feedback to the user.
The Problem: No matter what method I try (ZeroMQ or LSL), I cannot receive any data from Pupil Capture. My Python script connects successfully but hangs at the data receiving step (e.g., await socket.recv_multipart()).
What I found: I suspect the issue is on the Pupil Capture side. When I check the Plugin Manager in Pupil Capture (v3.5.1), I cannot find the standard plugins like Gaze Mapper (2D), Frame Publisher, or LSL Outlet.
Since Gaze Mapper seems to be missing, I assume no gaze data is being generated/broadcasted, which explains why my Python script receives nothing.
Steps Taken: Installed Pupil Capture v3.5.1 (Windows x64) from GitHub releases. Tried reinstalling it multiple times (both .exe and .zip versions). Deleted pupil_capture_settings folder in my User directory to reset settings, but the plugins still don't appear.
My Question: Is there a specific way to enable Gaze Mapper or LSL Outlet in Pupil Capture v3.5.1? Or should I be using a different version for real-time API access with Pupil Core?
Any advice would be greatly appreciated. Thank you!
To add to @user-f43a29's notes, I would highly recommend running this example script. It will print out pupil data. You'll need to be wearing the eye tracker for it to generate any data. Finally, I note you were trying to stream 'gaze' data in your last message. This requires you to have calibrated Core. If you did not calibrate, no gaze data will have been streamed.
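In essence, that script does something like the following (a simplified sketch, assuming the default Pupil Remote port 50020):

```python
import zmq
import msgpack

# Ask Pupil Remote (default port 50020) for the SUB port, then
# subscribe to pupil data on the IPC backbone.
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")  # use "gaze." once you have calibrated

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.unpackb(payload)
    print(topic.decode(), datum)
```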
Hi @user-d31618 , can you first try with the main download here?
Also, please note that the LSL Outlet is a separate plugin that needs to be installed following the steps here. It is not included by default. You will want to use pylsl version 1.16.2 during the install process.
I am trying to use a generic webcam alongside Pupil Capture, but Pupil Capture is blocking the webcam capture. Is there any way to configure this/exclude the webcam from Pupil Capture?
Hi @user-5b0357, if you cannot make the camera selection work via the standard dropdown in the Pupil Capture GUI, then this conflict probably requires editing the Pupil Core source code. You would want to start here.
For example, you might be able to exclude the webcam by name by editing that code, which runs when Pupil Capture initializes.
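Illustratively, such a filter might look like the snippet below (the name fragment is hypothetical, and the exact integration point depends on the backend code):

```python
import uvc

# Hypothetical name fragment of the webcam you want Pupil Capture
# to ignore; adjust it to match your device's name.
EXCLUDED_FRAGMENTS = ("Generic Webcam",)

devices = [
    d
    for d in uvc.device_list()
    if not any(frag in d["name"] for frag in EXCLUDED_FRAGMENTS)
]
print([d["name"] for d in devices])
```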
The source code clarifies quite a few things, thank you. It seems that manually selecting a camera should solve the issue.
Are you also looking to synchronize the webcam feed with the Pupil Capture recording, or overlay it in some way?
Hello! I'm not sure if this is the right channel to ask this, but I'm trying to build a DIY version of the Pupil Core. I've been trying to get the glasses frame from the i.materialise store, but it looks like I can't purchase it anymore. Is the option no longer available?
If so, would it be possible to buy the STEP/STL file for the frame, or is there any alternative you would recommend for building a compatible setup?
Thank you in advance for any guidance!
Hi @user-da7106 , the i.materialise webshop closed and is transitioning to a new framework.
We plan to have the DIY Pupil Core files up on Thangs within 2 weeks.
Thanks for letting me know, Rob! I'll keep an eye out for the Thangs upload then.
Hi! Just wanted to follow up and see if there's any update on the DIY Pupil Core files on Thangs. Thanks a lot
Hello,
Quick question: is the Screen Marker Calibration Choreography only useful for scenarios where a user is looking at a screen, or can it be generalized to more complex scenes? In more dynamic/complex scenes where we mostly want to look at gaze data, and not so much AOIs, is it better to use the Single Marker Calibration Choreography or Natural Features?
Hello, @user-4a1dfb!
When using the default 3D pipeline, the calibration does extrapolate outside of the calibrated area. So, to answer your first question, the 5-point screen marker choreography is often fine for both screen-based work as well as more complex scenes.
However, it can be preferable to calibrate by covering a similar area of the visual field to the one you will record in your experiment. Using a single marker (presented on screen or physically) can be better for this because you can move the marker around whilst keeping the head still, or the participant can rotate their head in a spiral pattern, enabling more coverage of the visual field. Test it on yourself to better understand what I mean.
Natural Features depends on good communication between the wearer and the operator, since you have to tell the wearer exactly where to look and for how long, and it also depends on how accurately you click. It can work, but it's not foolproof.
When you run a calibration, you will get back an accuracy value in Pupil Capture, so you can assess which calibration method consistently gives better accuracy for your situation (preferably assessed with validations at fixed times during a test run of your experiment).
[email removed]
Hi everyone,
I'm using Pupil Core to obtain gaze coordinates while capturing eye images with another eye camera.
On Pupil Core, the infrared illumination is always on for eye tracking. Is there any way to turn it on/off?
I'd prefer not to physically block the light or cut any wires; I'd like to handle this purely in software. However, I couldn't find anything in the code that controls the IR illumination.
Hi everyone,