Hello @nmt,
When I export a processed file from the Pupil Player software, I get the following two errors: 1. av.AVError: [Errno 12] Cannot allocate memory (libav.mpeg4: Failed to allocate packet of size 11170000) 2. BrokenPipeError: [Errno 32] Broken pipe
For your reference, I have attached a screenshot as well (I am using a Pupil Core eye tracker). Please let me know the solution.
Hi @user-00cc6a! Thanks for sharing the screenshot. It's very helpful. It looks like you're trying to export to a OneDrive location. Please note that loading from or exporting directly to virtual drives is not supported and can lead to errors. Try exporting to a local drive in the first instance. Once that's done, you can copy the export folder to whatever location you need. I hope this helps!
Thanks @nmt, the issue is sorted.
Is there a new place to buy the frames if we're going the DIY route? Seems like the Shapeway store link is broken, and the previous two people to ask about this were not given answers.
Hi all, I am new to development and have a query regarding the open-source software provided by Pupil Labs. I want to use your code and make some modifications, so I would like to read detailed documentation describing the sequential flow of the code. Can the community help with this? My main concern is to find the section of the code that projects the eye gaze coordinates; that is the section I want to understand. There are 40+ files, so how should I dig into the code?
Hi @user-11d287 👋!
Welcome to the community! Just a quick note: this seems more like a general question rather than a feature request, so I have moved it to the 👁 core channel.
If I understood your query correctly, you want to know where the gaze mapping occurs in the code? There is no single script for that, but the code related to gaze mapping can be found in the following directories:
- pupil_src/shared_modules/gaze_mapper
- pupil_src/shared_modules/gaze_producer
This should help you get started. Good luck with your project, and feel free to ask more questions!
Hi, I am developing a custom script to use Pupil Core in a psychophysics experiment.
I am having trouble trying to run Pupil Capture on my MacBook Pro M1 running macOS Sequoia, specifically.
Hi @user-131620! To add on top of Neil's answer, you don't need to type the sudo command every single time. Instead, you can create a capture.sh file and add the command to it:
sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture
Then you can move that file to your dock. This way, you can simply execute the script whenever needed by clicking on that icon! 😊
Note: you might need to click on Get Info, select "open with" your terminal application, and give the file permission to be executed. You can do so with: sudo chmod 777 capture.sh
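Putting those steps together, here is a minimal sketch of creating the launcher script. The app path is the default install location (an assumption; adjust it if your installation differs):

```shell
# Sketch: create a clickable launcher script for Pupil Capture on macOS.
# The application path below is the default install location (assumption).
cat > capture.sh <<'EOF'
#!/bin/sh
sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture
EOF
chmod +x capture.sh   # +x is sufficient; 777 also works but grants broader permissions
```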
Help is very much appreciated; thanks in advance.
Hi @user-131620! Thanks for your message. Could you describe in more detail what you mean by not being able to receive images for each pupil? If you could outline the specific steps you've taken so far, including anything you think is important, it would help us assist you better.
Thank you for your help. Pupil Core's world camera has poor image quality like this. Is it broken?
@user-5c56d0, this looks out of focus. Could you try rotating the ring around the camera to change the focus?
@nmt @user-d407c1 Any info on how I can acquire the frames to go through the DIY route? This is time sensitive. I'm exploring different eye tracking companies and yours stood out because of the DIY route, but if this is not available anymore, I'll turn my attention elsewhere.
@user-d407c1 @nmt Thanks! I will try the script, I think I forgot to turn on eye 0/1 detection on the side menu
In addition, can we see the blink data in an exported file using Core? I don't see a blink column in the export files (gaze_positions.csv, pupil_positions.csv). I am guessing that we could use confidence with a threshold to infer blinking, but I'm wondering whether there is official support for this data, since there is a blink detector function in Pupil Capture.
Hi @user-131620! Yes. You'll need to turn on the blink detector in Pupil Player. Check out the docs for more: https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector
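For anyone curious what the confidence-threshold workaround mentioned above could look like (the built-in Blink Detector export is the supported route, so treat this as a rough sketch; the threshold and minimum-duration values are arbitrary assumptions):

```python
# Sketch: crude blink heuristic over a sequence of pupil confidence values.
# A "blink" is approximated as a run of consecutive low-confidence samples.
# threshold and min_len are arbitrary; tune them against your own data.
def blink_spans(confidences, threshold=0.5, min_len=3):
    spans, start = [], None
    for i, c in enumerate(confidences):
        if c < threshold:
            if start is None:
                start = i  # low-confidence run begins
        else:
            if start is not None:
                if i - start >= min_len:
                    spans.append((start, i))  # half-open index range
                start = None
    # handle a run that extends to the end of the data
    if start is not None and len(confidences) - start >= min_len:
        spans.append((start, len(confidences)))
    return spans
```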
Also, I am having trouble understanding the timestamp variable in gaze_positions.csv. Is it in seconds? It is usually in a form like 13694.37392, etc. Is it an offset from 0 seconds? So would the above mean around 1.3 seconds from the beginning?
Thanks ahead!
All is explained in this section of the docs. You'll want to look at 'Pupil Time': https://docs.pupil-labs.com/core/terminology/#timing
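To make that concrete: Pupil Time is measured in seconds on a clock with an arbitrary epoch, so a value like 13694.37 is not an offset from recording start. A minimal sketch of converting to recording-relative time, assuming you read the recording's start timestamp from its info file (e.g. "start_time_synced_s" in info.player.json; check your recording layout):

```python
# Sketch: convert Pupil Time (seconds, arbitrary epoch) into seconds since
# recording start. The start timestamp comes from the recording's info file
# (e.g. "start_time_synced_s" in info.player.json -- an assumption about
# your export layout; verify against your own recording folder).
def to_recording_relative(timestamps, recording_start):
    return [t - recording_start for t in timestamps]
```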
Hi, I wanted to know if the cables connecting the IR cameras to the USB-C hub are custom-made. I might have accidentally damaged one of the connectors that goes into the IR camera and am looking to replace it.
Hi @user-a34db0! Those connectors at the cameras are a JST type SH (1.0mm). They can be tricky to replace properly. If you get stuck and need a re-cabling, reach out to info@pupil-labs.com 🙂
Hello, I would like to ask about the calibration process on Linux. I clicked the mouse five times as required, but it always says "Gaze data error".
I found that I made a mistake: the 5 clicks were to cancel. But the screen was blank when I calibrated with ScreenMaker.
Attached is a calibration video. In addition, I used WSL2 to run from source code
Hi @user-bd106e! Is there a reason you're running from source using Windows Subsystem for Linux? I can't be sure, but there's a high chance this is the reason no calibration marker is being displayed and the calibration is failing. You're better off running Pupil Capture natively, and so I'd recommend running the pre-compiled Windows bundle to get started. You can download it from the link here: https://docs.pupil-labs.com/core/getting-started/
I want to integrate YOLOv5 object detection with the Pupil Labs eye gaze tracking software. Can anyone tell me how to do it?
Hi @user-11d287 👋! To implement YOLO with Pupil Capture or Player, you can either create a custom plugin or directly modify the source code. For creating a plugin, you can refer to the Plugin API documentation.
If you prefer modifying the source code, this user's fork, though a bit outdated, might still offer some helpful guidance.
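For orientation, a plugin of that kind might be shaped roughly like this. This is a hypothetical outline, not a drop-in implementation: the Plugin stub below stands in for Pupil's real plugin module so the snippet is self-contained, and the detector is a placeholder for whatever model you load:

```python
# Hypothetical outline of a Pupil Capture plugin that runs an object
# detector on each world frame. In Pupil's source you would instead do:
#   from plugin import Plugin
class Plugin:  # stub standing in for pupil's plugin.Plugin
    def __init__(self, g_pool):
        self.g_pool = g_pool

class YoloOverlayPlugin(Plugin):
    uniqueness = "by_class"  # only one instance of this plugin at a time

    def __init__(self, g_pool, detector=None):
        super().__init__(g_pool)
        # e.g. torch.hub.load("ultralytics/yolov5", "yolov5s") -- assumption,
        # loading is left to you; keep it out of __init__ if it is slow
        self.detector = detector
        self.last_results = None

    def recent_events(self, events):
        # Pupil delivers the current scene image under events["frame"];
        # frame.img is a BGR numpy array in the real pipeline
        frame = events.get("frame")
        if frame is None or self.detector is None:
            return
        self.last_results = self.detector(frame)
```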
Hello, I have a problem with installing the Pupil Capture plugin. I was trying to write a plugin to send the data I received from the AI back to Pupil Capture. My problem is that I can see the plugin is running, but when I check the logs, I encounter the following error. I tried running a simple contour edge detection code based on the example here: https://gist.github.com/papr/b938ddc6315525d0f03da3668568e75c. I would greatly appreciate your help!
Hi @user-becdcb 👋,
I can’t see any errors in these logs related to the plugin—only some issues with the clipboard and GLFW, which shouldn’t affect your setup.
If the snippet/plugin has been placed correctly, you should see a new option for Pupil detection alongside the normal 2D option.
Is that not the case?
Noob question: I want to have live data sent via UDP of where my participants are looking on the screen. Also, I want to have just one eye cam. Are these markers what I need?
So far I only know that Core sends UDP messages with raw tracking data. And that I can do some kind of calibration.
(image just shows the markers I saw here in chat. So you know what I'm talking about.)
Hi @user-84387e , an important question!
Those AprilTag markers are indeed what you need when using Pupil Core's Surface Tracking plugin.
To get that data in real-time over the Network API, you will want to take a look at the surfaces topic, including the gaze_on_surfaces field.
And, do you mean that you only want one eye cam running, or you want to completely remove one of the eye cameras from the headset?
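To sketch what that subscription looks like in practice: note that the Network API uses ZeroMQ over TCP rather than raw UDP. The topic name and payload fields below follow the Network API docs; the helper function names are my own, and the example assumes Pupil Capture running locally on its default port with the Surface Tracker plugin active:

```python
# Sketch: receive surface-mapped gaze from Pupil Capture's Network API.
# Requires the third-party packages pyzmq and msgpack, plus a running
# Pupil Capture with a defined surface (default request port 50020).

def extract_surface_gaze(surface_msg):
    # Each surfaces.* message carries a gaze_on_surfaces list; norm_pos is
    # the gaze position in surface coordinates, on_surf whether it is inside.
    return [(g["norm_pos"], g["on_surf"])
            for g in surface_msg.get("gaze_on_surfaces", [])]

def stream_surface_gaze(host="127.0.0.1", req_port=50020):
    import zmq, msgpack  # third-party: pyzmq, msgpack
    ctx = zmq.Context()
    req = ctx.socket(zmq.REQ)
    req.connect(f"tcp://{host}:{req_port}")
    req.send_string("SUB_PORT")      # ask Capture for its publisher port
    sub_port = req.recv_string()

    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://{host}:{sub_port}")
    sub.setsockopt_string(zmq.SUBSCRIBE, "surfaces.")  # all surface topics
    while True:
        topic, payload = sub.recv_multipart()
        yield topic.decode(), extract_surface_gaze(msgpack.unpackb(payload))
```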
cool, good to know. thank you!
since I have your attention now (😅): Are there any trade fairs that Pupil Labs attends, so one could see your products in person?
We do go to conferences! For example, in the past, we've been at VSS, HFES, ACSM, and ECVP, and here was our Announcement about our workshop at the past ECVP.
Where do you typically travel to for conferences?
that's cool. Yea, just found the announcements. Sorry, kinda new to discord.
I mostly stay in Germany. But if someone tells me that THE eye tracking event is somewhere else, I would consider it.
Hi @user-84387e , we are planning to be at some conferences in Germany this year. I will update you when we have finalized our plans.
Hi team, I have recorded a few studies, and recently I am encountering the error below when I try to replay the files. I have already tried reinstalling the software, but the same thing occurs. I hope there is no data loss, as we are currently conducting studies and can't afford to lose these recordings. Update: I tried to replay the files on another laptop and it works, so it seems to be an issue with some settings on my laptop. Is there anything that I should try to fix? Thank you.
player - [INFO] video_export.plugins.world_video_exporter: World Video Exporter has been launched.
player - [INFO] video_export.plugins.eye_video_exporter: Eye Video Exporter has been launched.
player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "launchables\player.py", line 661, in player
  File "src\pyglui\ui.pyx", line 274, in pyglui.ui.UI.configuration.set
  File "src\pyglui\ui.pyx", line 266, in pyglui.ui.UI.set_submenu_config
  File "src\pyglui\menus.pxi", line 774, in pyglui.ui.Scrolling_Menu.configuration.set
  File "src\pyglui\menus.pxi", line 91, in pyglui.ui.Base_Menu.set_submenu_config
  File "src\pyglui\menus.pxi", line 552, in pyglui.ui.Growing_Menu.configuration.set
  File "src\pyglui\menus.pxi", line 91, in pyglui.ui.Base_Menu.set_submenu_config
IndexError: pop from empty list
player - [INFO] launchables.player: Process shutting down.
Hi @user-0f7e53 👋🏻 Could you please go to "General Settings", select "Restart with default settings", and let me know if this resolves the issue with your laptop?
Hello, I currently have a problem. I want to draw something on the world window. How can I do this?
Hi @user-fb8431, you can write a Plugin to do this. The gl_display callback will be helpful here. The built-in Accuracy Visualizer's implementation of gl_display can serve as a reference point.
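A rough sketch of the shape such a plugin takes. The Plugin stub and the commented-out draw call are placeholders: in Pupil's source you would subclass plugin.Plugin and use pyglui's drawing helpers, as the Accuracy Visualizer does. The coordinate helper reflects Pupil's normalized-coordinate convention (origin bottom-left):

```python
class Plugin:  # stub for pupil's plugin.Plugin, to keep the sketch self-contained
    def __init__(self, g_pool):
        self.g_pool = g_pool

def denormalize(norm_pos, frame_size):
    # Pupil's normalized coordinates have their origin at the bottom-left;
    # pixel drawing typically wants top-left, hence the flipped y.
    w, h = frame_size
    return (norm_pos[0] * w, (1.0 - norm_pos[1]) * h)

class WorldOverlay(Plugin):
    def gl_display(self):
        # Called once per frame after the world image is drawn; issue your
        # OpenGL draw calls here, e.g. via pyglui.cygl.utils helpers.
        x, y = denormalize((0.5, 0.5), (1280, 720))
        # draw_points([(x, y)], size=20, color=RGBA(1, 0, 0, 1))  # pyglui call
```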
Hi @user-becdcb! I'm just trying to better understand your goals here. What do you mean by 'debug the plugin'?
I mean, for example, I want to add debug output to the log so I can identify the problem or issue. Without debugging, how am I supposed to solve or see the problem? The plugin seems to be running, but it’s not actually functioning as expected.
I am simply trying to perform edge detection using contours. For debugging, if I want to print something like "no edges found", how can I do that? I tried using logger = logging.getLogger(__name__) for outputs, but I see no output in the def detect(self, frame, **kwargs): method. Does this mean the plugin is unable to run this method if nothing appears there?
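In case it helps others reading along, a minimal sketch of logging from a detector-style method. One common gotcha is that DEBUG and INFO messages are filtered out at Python's default WARNING level, which could explain seeing no output; the detect signature loosely mirrors the linked gist (self dropped here so the sketch stands alone):

```python
import logging

# Module-level logger, as typically used in Pupil plugins; Capture's launcher
# configures the root handlers, so messages reach the console/log file.
logger = logging.getLogger(__name__)

def detect(frame, **kwargs):
    edges = []  # placeholder: e.g. cv2.findContours(...) on the eye image
    if not edges:
        # WARNING-level messages are emitted even with default log settings;
        # logger.debug(...) would be silently dropped unless the level is lowered
        logger.warning("no edges found")
    return edges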
AI Pupil Detector
Hi everyone, I need your help with my first project using Pupil Core. I am using the eye tracker to ensure that my participants fixate on the center of the monitor. I perform a standard calibration and validation in the Pupil Capture app, and the app shows a confidence level for both eyes between 0.9 and 1. However, when I record the eye movements and analyze the data in Pupil Player, I notice that the fixations are consistently about 10 cm shifted from the part of the display I was actually looking at. Is there anything I can do to improve the accuracy and quality of my data?
PupilDetectorPlugins can also
Hi @user-177371 👋 ! Good to hear that you have great confidence levels for both eyes. Your value indicates good detection of the pupils in each eye image. This is an important prerequisite for good eye tracking performance with Pupil Core.
Then, to understand how well the device is estimating gaze, you want to take note of the Accuracy value shown in Pupil Capture after you have finished a calibration. If calibration accuracy could be improved, then that could potentially explain the results you are seeing.
Make sure to also fit a good 3D model of the eye before proceeding to calibration.
Otherwise, if the headset has slipped significantly during the recording, then this could also explain the result.
Ultimately, if you could share a screenshot or perhaps even the recording with us, then we can provide better feedback. You can do this privately by sending an email to [email removed] if you wish.
Thanks a lot, Rob. I will send one of the recordings to the email address.
~~Greetings! I am new here to the Pupil products and I currently have the core. I am working on getting real time data using the network API, but am running into an issue. I am using the filter_messages.py template and it does not appear to be working for me. It gets hung up at requesting the sub port whether it’s local or remote. I am not really sure precisely on how to debug this, so I wanted to see if anybody had a similar issue or any advice. Thanks everybody!~~
Got it working!
Hello, I would like to ask what this abnormality is when I calibrate the Pupil Core?
[03:09:50] WARNING world - uvc: Turbojpeg jpeg2yuv: b'Premature end of JPEG file'  (uvc_backend.py:917)
           WARNING world - uvc: Turbojpeg jpeg2yuv: b'Premature end of JPEG file'  (uvc_backend.py:917)
^C[03:10:00] WARNING MainProcess - root: Launcher shutting down with active children: [<Process name='eye0' pid=147 parent=104 started>, <Process name='world' pid=115 parent=104 started>]  (main.py:342)
Hi @user-78da25 , could you please try the steps in this message --> https://discord.com/channels/285728493612957698/285728493612957698/1329036958625693767 <-- and let us know the result?
Hi! I'm trying to use Pupil Capture to track my eye using a webcam. For some reason I can't get the stream from my webcam into the app. I'm fairly sure that my webcam is working properly, as I can see footage from it in the Camera app. Can anyone please help? Adding some logs:
Hi @user-ff7150 👋 ! Would you mind clarifying which webcam you are trying to use? Is it UVC compliant?
So far I've tried: 1. falling back to v3.4, and 2. running the program with admin permissions.
Hello, I'm using your Core product and it's working well. I have a question: in the Gazer3D class, eye0_hardcoded_translation and eye1_hardcoded_translation are set to [20, 15, -20] and [-40, 15, -20] respectively, and ref_depth_hardcoded = 500. Are these values in millimeters (mm) or pixels (px)?
Also, I’d like to know the unit for self.last_gaze_distance. Thank you
Hi @user-fce73e ! 👋
These parameters are reported in millimeters (mm) in the 3D camera space. Please note that they are primarily intended for internal use within Pupil Capture. Also, the translation parameters are redefined by the bundle adjustment process, so they serve as just a starting point.
I’d recommend reading about pye3d detection to understand how it works in more detail.
The software also doesn't work with my laptop's inbuilt webcam so thought I'd say that too
Just in case, on Windows, Capture needs to be installed as administrator to be able to install the proper drivers. https://docs.pupil-labs.com/core/software/pupil-capture/#windows
Is there any off-the-shelf camera module for the eye camera, including IR emitters, that you would recommend? E.g., ones you used on the Core over the years? Or were these custom designs the whole time?
I know you have DIY instructions, but these modules are so clunky. And not having to remove any filters would be kinda nice, too.
We have this module, but the emitters are too close to the lens: https://de.aliexpress.com/item/1005004639261673.html
Hi, does anyone have tips for bulk export of recording data? Currently, I am manually dragging recording folders into Pupil Player and hitting the 'e' key to export. I tried looking for an API but had no luck. I am wondering if there is an open API to automate the process and have the gaze_positions.csv and pupil_positions.csv saved to a designated folder.
Hi @user-131620! Please see this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/1254146039515189278
Thanks !
Hello guys, I hope you are doing well. Some questions regarding the Core eye tracker:
1) I wrote Python code for my study that includes recording the calibration and the presentation of stimuli. I saw that running a validation after calibration is regarded as best practice; however, I haven't found the code anywhere to run the validation using Python. Is validation required after a good calibration? For your information, I use the 2D method for calibration, and I have 6 small experiments (4-7 minutes each) where participants are static (in a sitting position).
2) Also, sometimes even when the calibration is good according to the precision and accuracy numbers (e.g., both below 1), I still get the error "20% of pupil data was dismissed due to low confidence". I am aware that this indicates that the pupil detection can be improved. I wonder what the implications of using such data are.
3) What is the rule of thumb for the distance between the participant and the screen? I use a 27.5-inch screen, and I noticed that being too close (e.g., 60 cm) often results in calibration failure.
4) Also, if we increase the duration of the calibration, does this increase the chances of better results?
Hello, I have one of the first versions of the Core mount. Some parts of the eye tracker are 3D printed. Unfortunately, the case attached to the mount seems to be broken inside. The USB-C port is loose and cannot be connected anymore. I would like to open the case and 3D print it again. Are those STL files available somewhere?
Hey @user-0558c8! If the USB port appears damaged or cannot be connected, you may need to return the system for inspection or repair. Please contact info@pupil-labs.com with a description of the issue, and someone will assist you.
Hi @user-412dbc! Thanks for your questions. Here are the responses:
1. Validation essentially verifies the accuracy of a calibration. It's not necessary, but it can provide important information. For further reference, see this message: https://discord.com/channels/285728493612957698/285728493612957698/838705386382032957
2. The implications are that you are losing 20% of the data collected during the calibration. It's difficult to provide further feedback without seeing a recording. Could you post one here for feedback? Or share it with [email removed] We would need the full recording folder that includes all stages of setup and calibration.
3. Generally, the goal is to calibrate at a distance close to the viewing distance you'll use in your experiment. For practical reasons, this isn't always possible. However, note that the 3D pipeline extrapolates outside the calibration area/distance quite well.
4. Potentially, but it's a trade-off. After a while, your participant might become fatigued and glance away from the calibration target, introducing errors into the calibration. If they can maintain full concentration and focus on the target for the duration, a longer calibration that covers more of the visual field can technically yield better results.
Thanks for the reply Neil. Also, is there any way to inspect the quality of the data I collected so I can decide if I will use them for further analyses?
Is there any 3D print support for the pupil core glasses frame?
Hi @user-3a2026! If you want to 3D print your own frame, this repository contains geometries of the camera mounts that you will find useful! https://github.com/pupil-labs/pupil-geometry
Hello @nmt, I am facing this error. Please let me know what the issue is.
@user-00cc6a, can you please try restarting Pupil Player with default settings (you can find this option in the main settings) and trying again?
AFAIK, currently no. "Shapeways" had a webshop where you could buy the 3d models for the frame. But the shop is closed down. You can still find the link in the DIY section: https://docs.pupil-labs.com/core/diy/#getting-all-the-parts Not sure if the models will be made available again. Maybe the pupil team has some insights.
@user-3a2026 @user-84387e We will have a replacement soon™️, hopefully within a week. We are waiting for approval at i.materialise. We will update the link in the docs once it's up.
Hi, we are trying to get Pupil Capture working on Linux. We installed the application; the icons are available, but on clicking, the application does not start. Could someone help?
Hi @user-3bcb3f 👋 ! Could you share some details about which distro, computer architecture, and version of Pupil Capture you are trying to use? Does it work if you try starting it from the terminal?
Computer details are in the screenshot, and the Pupil Capture version is pupil_capture_linux_os_x64_v3.5-8-g0c019f6. When I start from the terminal, it says "cysignals failed to execute..." and the terminal closes, so I am not able to get the whole error.
May I ask if, besides the integrated GPU, you also have a dedicated GPU? If so, could you try right-clicking the application and selecting "Launch with dedicated GPU"?
Additionally, my colleague @user-f43a29 mentioned that our software supports only X11 and not Wayland. He has more expertise in Linux and might be able to provide further insights if needed.
We do not have a dedicated GPU. I have changed the windowing system to X11, still not launching. I managed to get a screenshot of the error when trying to run on terminal.
Hi @user-3bcb3f , I am briefly stepping in for @user-d407c1 . Do I understand correctly that the system does not have an Nvidia or AMD GPU? In other words, was the system previously used as a server?
Running from terminal will typically only work if you instruct the system to use your dedicated GPU when launching from the command line.
Before that, could you briefly run the following command in the terminal and send the output?
lspci | grep -i vga
00:02.0 VGA compatible controller: Intel Corporation AlderLake-S GT1 (rev 0c)
Thanks. That chip is certainly powerful enough, but we have not extensively tested using integrated GPUs with Pupil Capture on Linux, so I cannot make any guarantees.
Having said that, may I ask, are you trying to run Pupil Capture from source?
We downloaded the Core bundle for Linux and installed it. The icons are showing in the applications menu, but they are not launchable.
Ok. My experience has been that the cysignals error, when seen on Ubuntu, is usually resolved by:
Launch with dedicated GPU
With respect to point 2, you might want to check if Intel provides dedicated Linux drivers for that chip, rather than using Ubuntu's open-source default of MESA.
Point 1 is done. Point 2: will check the GPU drivers. Point 3: I don't see any option to "Launch with dedicated GPU" on right-clicking.
Since you have only an integrated GPU, that option will not be present. It probably will also not be present after installing Intel's drivers, if they provide them. I just list it for completeness, as the usual process for resolving the issue.
As mentioned, I cannot make any guarantees that it will work with an integrated GPU on Linux.
In the worst case, you could swap in a GPU from some other computer. The dedicated GPU does not need to be incredibly new or particularly powerful to run the Pupil Capture GUI.
This is the crash log created when trying to open pupil capture in Ubuntu 22.04 after installation.
Hi, we see that Pupil Capture is not working on Ubuntu 22.04 with just an integrated GPU. We haven't tested yet with a dedicated GPU. Has anyone reported this before? On Ubuntu 20.04, however, Pupil Capture works fine with both integrated and dedicated GPUs.
Hi @user-3bcb3f , you could also try running Pupil Capture from source, which should automatically pull the latest version of cysignals (1.12.3).
If that does not fix it, then you may want to reach out to the cysignals team, as the error is originating in that package. It sounds like a potential interaction between expectations in an older version of cysignals that do not match up with some change in the graphics sub-system of newer Ubuntu versions.
Hi @user-3bcb3f , looking back, it seems there was some success with that issue on Ubuntu 22.04 when running Pupil Capture from source with the develop branch, potentially with cysignals v1.11.4, although it's probably worth a test first with the latest version of cysignals.
Already using the develop branch. The error now is as in the attached document, with pyglui.
Is it possible to copy-paste the error at the end of the document into the chat here? Or upload it to Discord as a text file with a .txt file extension?
UnboundLocalError: cannot access local variable 'g_pool' where it is not associated with a value
[16:02:46] ERROR world - launchables.world: Process Capture crashed with trace:  (world.py:837)
Traceback (most recent call last):
  File "/home/profnivethida/pupil/pupil_src/launchables/world.py", line 124, in world
    from gl_utils import GLFWErrorReporting
  File "/home/profnivethida/pupil/pupil_src/shared_modules/gl_utils/__init__.py", line 11, in <module>
    from .draw import draw_circle_filled_func_builder
  File "/home/profnivethida/pupil/pupil_src/shared_modules/gl_utils/draw.py", line 16, in <module>
    from pyglui.cygl.utils import RGBA
ModuleNotFoundError: No module named 'pyglui'
Hello, after we download the data, is there any way to inspect the quality of the calibration conducted and the data collected, so we can decide whether we will use them for further analyses?
Hi @user-412dbc! Are you referring to data exported from Pupil Player, e.g the .csv files?