Hello everyone, happy new year. I have a question about an IR LED. I'm using the SFH 4050 in series with a 50 Ω resistor (one per eye), connected to 5 V USB power. The LED got extremely hot (LED forward voltage was 1.4 V, current about 65 mA). I then changed the 50 Ω resistor to 150 Ω; the heat issue was slightly better, but the LED is still quite warm. Is this normal? If not, do you have any suggestions for solving this issue? Since it's warm, I'm worried about moving it close to my eyes. The attached image shows my DIY setup.
DIY Headset Temperature
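For context on the numbers in the message above: with a series resistor, the LED current is roughly I = (V_supply - V_f) / R. A minimal sketch, assuming the quoted 5 V supply and ~1.4 V forward voltage (illustrative only; check the SFH 4050 datasheet for the real forward voltage and current limit):

```python
# Rough series-resistor current estimate for the IR LED described above.
# Assumes 5 V supply and ~1.4 V forward voltage as quoted (illustrative only).
V_SUPPLY = 5.0   # USB supply voltage in volts
V_FORWARD = 1.4  # approximate LED forward voltage in volts

def led_current_ma(resistance_ohm: float) -> float:
    """Return the approximate LED current in milliamps."""
    return (V_SUPPLY - V_FORWARD) / resistance_ohm * 1000

for r in (50, 150):
    print(f"{r} ohm -> ~{led_current_ma(r):.0f} mA")
# 50 ohm  -> ~72 mA (close to the measured 65 mA)
# 150 ohm -> ~24 mA
```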
Is it possible to save world data without gaze? (I am using a Pupil Core.) I found that if I set the minimum data confidence to 1.0 in the Pupil Player settings, gaze is not displayed in the exported data, but I don't think this is the best solution. I want to do object detection on the world video without gaze and then compute AOIs by matching bounding boxes with gaze. For that purpose, I thought the object detection accuracy would be better with a world.mp4 where gaze is not rendered. Sorry if this is a duplicate question. Thanks.
Hi! If it is only about disabling the visualizations, you can do that before the export. Check out the Plugin Manager to disable the "unique" plugins. Other plugins can be removed from their respective menu. You can open the menus via the icons on the right of the Player window.
Hi all, just thought I should ask this here to other researchers using PL with Psychtoolbox in MATLAB. I want to use gaze data to trigger changes in my experiment running in MATLAB and Psychtoolbox. The eye tracker is connected to the same system as MATLAB/PTB. Any link or pointers in the right direction would be much appreciated!
Hey @user-c828f5. Check out our MATLAB helper repository: https://github.com/pupil-labs/pupil-helpers/tree/master/matlab. You might be able to use some of the code examples in there to get started with your Psychtoolbox integration.
Hi, could you please recommend a program to get the number of blinks and each blink's start and end time from the blinks.csv export? Thank you!
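A minimal sketch of how that export could be read with pandas, assuming the standard Pupil Player blink export columns (id, start_timestamp, end_timestamp, duration); the path is a placeholder:

```python
import pandas as pd

# Placeholder path - point this at your Pupil Player export folder.
blinks = pd.read_csv("exports/000/blinks.csv")

print("number of blinks:", len(blinks))
# Timestamps are in Pupil time (seconds); duration is end - start.
print(blinks[["id", "start_timestamp", "end_timestamp", "duration"]].head())
```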
Hello, I have a question about calibration. I noticed that if I look at the center area of my world camera, the tracking is very accurate. But when I look to the edge of the world camera, there seems to be an offset. Is there something I might be doing wrong? I also noticed that the resolution seems a little off. How can I correct it under pupil_capture_settings? Thanks for your help!
Hi, could you please let us know if you are using 2d or 3d calibration?
In earlier messages, you mentioned using a DIY headset. Is this the case here as well? If so, you might need to set up custom intrinsics for your eye cameras. The 3d eye model uses the focal length to scale the model. Based on the eye ball outline color (cyan blue), I can see that the model is not well fit which would explain the decreased gaze estimation accuracy.
Hi, the left eye video is very blurry. How can I adjust this? Thank you.
Hi @user-4bc389 , could you ~~let us know which hardware you are using~~ confirm that you are using the HTC Vive add-on? Is it the 200 Hz model?
Hello, I used 2d calibration. When using 3d calibration, the offset is stronger than with 2d. Regarding the eye cameras' custom intrinsics, I calibrated them before, and I can also see USB_camera.intrinsics under pupil_capture_settings. The issue is that all three cameras are called USB_camera; will it still work in this case?
The cameras having the same name is actually an issue for Pupil Capture, yes. Do they also have the same vendor and product ids?
Hi, I have the Pupil Labs Core. I want to use it with a Raspberry Pi to offload the eye gaze tracking from another PC. The Raspberry Pi OS is Ubuntu 20.04, kernel Linux 5.4-1078-raspi, architecture arm64. I'm just getting started with programming for these glasses, so sorry if I write any nonsense. When I download the Linux zip from the Pupil Labs website, I get the Player, Capture, and Service modules, but when I run them, none of them work. My application requires streaming the user's real-time gaze data from the Raspberry Pi to another PC, so I also tried cloning https://github.com/pupil-labs/pupil following the developer's guide. However, I'm not fully aware of what is required to use the Network API. Did anyone here manage to use the glasses on a Raspberry Pi? If so, could you give me some hints on where to start, or point me to a thread with an extended Linux installation guide for developers?
Let's continue in this thread: https://discord.com/channels/285728493612957698/1039477832440631366
Hi, I used Pupil Labs Core to record eye movement data in the real world and defined two surfaces as areas of interest. I would like to ask whether Pupil Player can help me analyze the number of gazes and the gaze duration within each area of interest.
Hi! After setting up the surfaces (and ensuring that the marker detection has finished), you can export the mapped data. It includes a gaze_positions_on_surface_<surface_name>.csv file that indicates whether a given gaze location was on a surface or not, along with the corresponding timestamp. With that, you should be able to calculate your metrics.
Note that there might be situations where gaze was on the surface but the surface was not recognized, e.g. due to motion blur. Such gaze samples are not included in the exported file.
https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker
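A minimal sketch of how such metrics could be computed from that export, assuming the usual on_surf and gaze_timestamp columns (file path and surface name are placeholders):

```python
import pandas as pd

# Placeholder path and surface name - use your own export.
df = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Surface1.csv")

on_surf = df[df["on_surf"].isin([True, "True"])]
print("gaze samples on surface:", len(on_surf))

# Rough total gaze time on the surface: sum the gaps between consecutive
# on-surface samples, ignoring long gaps where gaze left the surface.
gaps = on_surf["gaze_timestamp"].diff().dropna()
print(f"approx. time on surface: {gaps[gaps < 0.1].sum():.2f} s")
```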
How do I purchase the Pupil Invisible?
hello
Hello there. I am looking for an eye tracking system suitable for two things: 1) dyadic eye tracking experiments (e.g., measuring eye contact), and 2) co-registration with EEG (BioSemi), mainly for fixation analysis (not for eye movement correction!). I assume all of your products should be suitable for the former, but I wondered whether you have any solution for co-registration with EEG for any of your products?
Hi, for calculating dwell ratio (defined as the duration of fixations in an AOI over the total time spent fixating across all AOIs), I did not use total task time. I instead summed the time spent fixating across all AOIs. This led to the calculated duration being longer than the total task time (taken from the export file). Not sure what is going on!
I do realize duration is in milliseconds and time in the export file is in seconds
Hi team, my lab and I just received a pair of the new version of the Core with two eye cameras, and we have a question. Essentially we are thinking about modifying the hardware slightly by adding a small movable ball joint near the eye camera so we can tilt the eye camera either upwards or downwards to get an optimal view of the pupil, particularly when participants are looking downward. However, we also want to be able to use the live calibration feature by using screen markers. If we were to slightly adjust the hardware so we can adjust the angle of the eye camera, would that severely affect the ability to get a good live calibration? Is that enough to mess up the camera intrinsics? Thank you!
Hi @user-ae76c9. The existing ball joint should allow you to rotate the camera to get a good image of the pupils, even during downward gaze shifts. Certainly I don't think there would be a need to add another ball joint. With that said, if you do make modifications, so long as the pupil is visible and not too far from the eye, you'll still be able to calibrate. It won't change the lens intrinsics.
Hi, team! Thanks for your pye3d contribution. After many experiments and adjustments, we found that pye3d can be used to estimate IPD when the extrinsics between the two eye cameras and the 2D ellipses are accurate. I have tested it on at least 5 participants with different IPDs, and the method gives values very close to the true ones. Although there are inevitable errors, pye3d is still a good algorithm.
This is great to hear! Thanks for sharing your findings!
Before you go ahead and modify the camera, it might be worth sharing an example recording such that we can provide feedback on eye camera positioning
Hi there! I have a question regarding payment for my repair
just wondering if I can pay via purchase order via my University system instead of credit card
Please contact info@pupil-labs.com in this regard
Hello! Is there a Pupil Core download option for Windows 11 yet?
Hey, simply download the Windows version. It should work on both Windows 10 and 11.
It must be a problem with my Python configuration, then. I've followed the steps for dependencies on Windows (https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md). I'm in a virtual environment running Python 3.6.8 and, when I run the 'pip install -r requirements.txt' command, I get the following error: "Command errored out with exit status 1: Failed to build pyaudio." I noticed there's a note about a prebuilt wheel issue for pyaudio in Python 3.7... Do I maybe need an older version of 3.6? (I've also tried downloading the dependencies manually but have run into issues there as well.)
Hey, there is no need to run from source. Just download the pre-built bundle from here: https://github.com/pupil-labs/pupil/releases/download/v3.5/pupil_v3.5-1-g1cdbe38_windows_x64.msi.rar
I've tried this as well, but I can't open the .rar (I've tried 7-Zip, PeaZip, and Zipware), which is why I resorted to running from source.
Hello there! I was wondering if there is documentation or a suggestion for connecting the Core to a CRT monitor? The CRT monitor has a refresh rate of 60 Hz (usually) and is causing an odd image shadow. I just want to know if there are settings to change, as we're using 2 Pupil Cores and CRT monitors for a study. Thank you!
Dear Developers, I would like to ask a question regarding the world camera of Pupil Core. I would like to use the world video to estimate local brightness around the center of fixation and to assess how pupil diameter changes when the subject looks at areas of the screen with different local brightness. This is possible only if the world camera does not adjust automatically to changes in luminance. My question is: does the world camera do any automatic adjustment in response to changes in luminance? I went through the docs but did not find this information. I really appreciate any help you can provide, and thank you for developing this amazing platform!
Hi, yes, the scene camera performs auto exposure by default. You can turn it off in the Video Source settings.
Hi, I would like to ask one more question about the auto exposure functionality of the scene camera. Based on earlier messages here, I would assume that the aperture of the scene camera is fixed and that, in aperture-priority mode, the exposure time is adjusted automatically. My question is whether the exposure time is logged somewhere automatically for each frame. I see that a similar question was posed in 2019 (https://discord.com/channels/285728493612957698/446977689690177536/601024309955133441), but that was 3 years ago, so maybe something has changed since then. I would really appreciate an update on this question.
Thank you for your quick response!
Hi, I'm simply trying to live-stream data from Pupil Capture into MATLAB, but I'm not finding much documentation on how to do this. Most code is Python-based. Any tips? For context, I'm using a Pupil Core headset connected via USB-C to an Android Motorola Z3 device streaming wirelessly to Pupil Capture on a Windows 10 device.
Hi, have you checked out these examples already? https://github.com/pupil-labs/pupil-helpers/tree/master/matlab
Hello, I have a problem with my Pupil Core. The cover (with the lens) of one of the eye cameras seems to have fallen off. The headset was purchased in November 2022 and we (my research team and I) haven't used it much yet. It hasn't been dropped or anything, so I don't really understand where the problem could come from. Is there any way to re-glue the cover and lens to the eye camera, or do I need to send it back to you?
Please contact info@pupil-labs.com in this regard.
Hello ! I have recently started working with a Pupil Core and I am trying to analyse some of the data that I acquired.
In my experiment, I had participants watch a screen positioned 165 cm in front of them (participants had to follow a target on this screen). I included AprilTags in the stimuli displayed on the screen to get surface data.
I am using scripts https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb and https://github.com/pupil-labs/pupil-tutorials/blob/master/03_visualize_scan_path_on_surface.ipynb as starting points for my analysis (thanks for the great scripts!). In particular, I am now looking at the gaze angular velocity.
I understand from notebook 05 that the velocity is obtained by converting the 3D gaze position (in the world camera coordinate system) to spherical coordinates, the radius being the length of the vector from the origin to the point at position gaze_point_3d.
While browsing through the server, I saw this message: https://discord.com/channels/285728493612957698/285728493612957698/1039867463225053205. I agree with it, since I see lengths of around ~40-50 cm while the participant should be gazing ~165 cm away.
Ultimately, I am wondering how much this inaccuracy impacts the computation of theta and psi (r is underestimated, but since y and z are also underestimated, does the result still make sense, with the errors "cancelling" each other out?). Also, how much does this impact the projection onto the surface? I might be wrong, but I understand that surface positions are calculated from the 3D gaze position and the camera intrinsics. I am asking because I seem to have some kind of offset at the bottom of my surface that looks like it could be caused by an error in the projection of the gaze onto the surface.
Thanks for your help !
Hi! Happy to see that you were able to make progress based on these examples! theta/psi are the software's best estimate of the gaze direction. Their accuracy depends on the calibration. By default, the accuracy visualizer calculates a mean calibration error in angular degrees. In other words, it informs the user about the accuracy of the gaze direction. The gaze depth is not taken into account.
If you see a bias in the surface mapping that is not visible in the scene-camera-mapped gaze, then there might be an issue with the surface detection or your surface definition. The surface mapping is a purely geometric mapping based on calculated homographies.
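For reference, a minimal sketch of converting gaze_point_3d to spherical coordinates from a gaze_positions.csv export, along the lines of tutorial 05 (one common convention is shown here; check the notebook for the exact axis convention it uses, and treat the path as a placeholder):

```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # placeholder path
x = gaze["gaze_point_3d_x"].to_numpy()
y = gaze["gaze_point_3d_y"].to_numpy()
z = gaze["gaze_point_3d_z"].to_numpy()

# Spherical coordinates in the scene camera coordinate system.
r = np.sqrt(x**2 + y**2 + z**2)  # estimated gaze depth (often noisy)
theta = np.arccos(z / r)         # polar angle from the camera's optical axis
psi = np.arctan2(y, x)           # azimuthal angle

# Angular velocity can then be derived from frame-to-frame changes in
# theta/psi over gaze_timestamp, as in the tutorial.
```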
Hello! Sorry to bother you, I have a few quick questions regarding the Pupil Groups plugin. I am currently using two Pupil Cores, each connected to a separate PC. The PCs are connected to a network router. I attach here some parts of my code that controls the acquisition from the eye trackers. Both eye trackers start automatically, so the Pupil Groups plugin is working well. However, I have the impression that the second eye tracker starts a little after the first one. 1) My first question is: are the recordings coming from the two eye trackers synchronized with each other, or do I need to perform some post-hoc realignment? 2) Are the start_time_synced values for both recordings in the same "clock"? Is it the same time system for both eye trackers? 3) I'm currently sending annotations, but only the first eye tracker is receiving them. Is it possible to send annotations to all eye trackers in the group? Or, if only one eye tracker receives them, can I consider them to be synchronized with all other recordings as well?
Many thanks for the help!
Syncing Pupil Core recordings with each other
Can I use an eye tracker (Pupil Core or Pupil Invisible) together with eyeglasses? If that is a problem, what is the reason for it? I couldn't find a description in the documentation. Thanks.
Hi @user-cf4282! It is sometimes possible to put on the Pupil Core headset first, then eyeglasses on top. The eye cameras need to be adjusted to capture the eye region from below the glasses frames. This is not an ideal condition but does work for some people. Ultimately, it depends on physiology and eyeglasses size/shape. Regarding Pupil Invisible, unfortunately it is not possible to use it while wearing glasses, but since the lenses are swappable, we offer a prescription lens kit that covers -3 to +3 diopters in 0.5 steps. If you need additional diopters, you can take the Invisible glasses to an optometrist to have custom lenses fitted. The frame can accommodate -8/+8 diopter lenses.
Hi Eleonora. Sorry for my late response. If the glasses were put on first, sometimes the pupils could be detected and sometimes not. Since it was more difficult to put on the headset first, we generally put on the eyeglasses first, and only put the headset on first when the pupils could not be detected through the eyeglasses. Thanks to having both options, we were able to measure the gaze of people who wear glasses. Thank you so much.
Hi! Yes I have read through the pupil-helpers documentation. Do you know if it will run on Windows? It looks like it was only tested on Ubuntu
Do you use the data in real time in your experiment? Or do you use matlab to record the data?
We have not tested that, sorry.
Hello to the Pupil Lab team, we are currently working on the development of an eye tracker for research purposes. We are using the cameras of the eye tracker for this prototype. We already have the position of the pupil, but we have difficulties to find the gaze direction. So my questions are: do you use a polynomial approach for the calibration? And, is it possible to have a general description of your approach? Thank you in advance for your answer.
Hi there, I am a researcher working with VR/AR. We are looking into your solutions to incorporate eye-tracking. Do you have a product roadmap to share especially with regard to which VR headsets you anticipate supporting going forward. Your website mentions support for various HTC Vive headsets. What's the status of support for Oculus/Meta, others?
Hi @user-b054d7! Supporting different VR systems with a current VR Add-ons is not on our roadmap. However, it might be worth having a look at our new Neon module. We have designed it such that it can be attached to AR/VR devices with a DIY mount, and we're releasing the 3D geometries of Neon such that mount prototyping is possible. More details here: https://pupil-labs.com/products/neon/
Hi, the recorded gaze data was much shorter than the pupil data. Why did this happen? I assumed they would have the same length.
Hi @user-2196e3 ! Please note that the https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv file contains rows for the 2D detector and the 3D detector. So, it is expected to contain more rows than the https://docs.pupil-labs.com/core/software/pupil-player/#gaze-positions-csv file.
Also, note that Capture receives data from free-running eye cameras whose sampling rates may not be in sync, and it employs a pupil data matching algorithm to decide which pupil samples are combined for each gaze estimate: https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching
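A minimal sketch of separating the two detectors in pupil_positions.csv before comparing row counts with the gaze export (the method strings may differ slightly between versions; paths are placeholders):

```python
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")  # placeholder paths
gaze = pd.read_csv("exports/000/gaze_positions.csv")

# Each eye image yields one 2D and one 3D pupil datum, so filter by
# detection method before comparing row counts with the gaze data.
print(pupil["method"].value_counts())
pupil_3d = pupil[pupil["method"].str.contains("3d", case=False)]
print(len(pupil_3d), "3d pupil rows vs", len(gaze), "gaze rows")
```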
Thank you! But in pupil.csv, end_timestamp - start_timestamp = 4 min, while in gaze.csv, end_timestamp - start_timestamp = 15 s. That is too large a difference. Do you think there is a problem?
Could you share the recording with [email removed] so we can have a look?
I shared the data. Please take a look. Thank you!
Hello, is there any way to detect saccades from Pupil Core eye tracking data (post hoc or in real time)?
Hey @user-6e1219. Check out this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/826064348723937300
Thanks A LOT
Hey team, hope you are well! I'm looking for a way to assess data quality (i.e. accuracy and precision) for my recordings; I have ~60 recordings. For Pupil Capture there seem to be plugins for calibration and gaze prediction accuracy. Is there a way I can get that data post hoc from the recordings? Thanks a lot!
Hi, check out https://github.com/papr/pupil-core-pipeline
Hi, I'm trying to use the Lab Streaming Layer (LSL) plugin with my Pupil Core and I noticed a problem with the recording of fixations. For some reason, it seems that the minimum duration of a fixation is also treated as its maximum duration. If I set the minimum duration to 320 milliseconds, all fixations are 322 milliseconds long (almost always the same value). If I set the minimum to 10 milliseconds, fixations are always about 10 milliseconds long. In Pupil Player, I can see the correct number of fixations as well as their correct durations. (By the way, if someone knows where to remove the maximum duration limit of a fixation, I'm also interested.) Does anyone have any idea where I could look to fix this and properly recover the duration of my fixations, please?
Realtime vs Post-hoc Fixation detection
hello! my capture is failing with the following errors
Hey @user-1a6a43. Please follow the steps outlined here: https://docs.pupil-labs.com/core/software/pupil-capture/#video-source-selection
any idea what might be causing this? I am running the software on a M1 Max mac
Hello, I want to record both EEG and eye tracking (Pupil Core) data simultaneously and correlate them based on temporal events. I have installed LSL and activated the LSL plugin in Pupil Capture. The LSL file generated by the Pupil LSL plugin only contains eye tracker data. My EEG (Emotiv) also supports LSL outlets. If I want to record both streams (eye and EEG) simultaneously through LabRecorder, will it work? Can I read both streams from the LSL-generated XDF file? If yes, can you tell me ways to read the XDF file and convert it to CSV, or maybe another format I can use in Python for analysis?
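A minimal sketch of reading a LabRecorder XDF file in Python, assuming the pyxdf package (pip install pyxdf); the file name is a placeholder:

```python
import pandas as pd
import pyxdf

# Load every stream recorded by LabRecorder (placeholder file name).
streams, header = pyxdf.load_xdf("recording.xdf")

for stream in streams:
    name = stream["info"]["name"][0]
    df = pd.DataFrame(stream["time_series"])
    df.insert(0, "lsl_timestamp", stream["time_stamps"])
    df.to_csv(f"{name}.csv", index=False)
    print(f"wrote {name}.csv ({len(df)} samples)")
```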
Hello. I am doing an analysis of surface tracking. Is the origin of the defined surface at the upper right?
Hi @user-d2d7bd , The origin will be bottom left. Check out this page (https://docs.pupil-labs.com/core/terminology/#surface-aoi-coordinate-system) to better understand how the surface mapper coordinates work in Core.
Please do not confuse it with the Marker mapper enrichment coordinates that are used with Pupil Invisible https://docs.pupil-labs.com/invisible/enrichments/marker-mapper/
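A minimal sketch of converting the normalized surface coordinates (origin at the bottom-left) into pixel coordinates of a reference image (origin at the top-left), assuming the standard x_norm/y_norm columns; the file path and image size are placeholders:

```python
import pandas as pd

df = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Surface1.csv")
img_width, img_height = 1920, 1080  # placeholder reference image size

# Surface coordinates are normalized with the origin at the bottom-left,
# while image pixel coordinates usually start at the top-left, so flip y.
df["x_px"] = df["x_norm"] * img_width
df["y_px"] = (1.0 - df["y_norm"]) * img_height
```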
Hi everyone
Does anyone know how I can create a combined heatmap from the data of different individuals (different videos or raw data)? Thanks.
Has anyone had any thoughts on this?
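One possible approach, sketched under the assumption that all recordings use the same surface definition: concatenate the normalized on-surface gaze from each participant's export and bin it into a single 2D histogram (the file pattern and bin count are placeholders):

```python
import glob

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Collect the same surface export from every participant (placeholder pattern).
paths = glob.glob("subject_*/exports/000/surfaces/gaze_positions_on_surface_Surface1.csv")
gaze = pd.concat([pd.read_csv(p) for p in paths], ignore_index=True)
gaze = gaze[gaze["on_surf"].isin([True, "True"])]

# Aggregate all participants into one heatmap over the normalized surface.
heatmap, _, _ = np.histogram2d(gaze["x_norm"], gaze["y_norm"],
                               bins=50, range=[[0, 1], [0, 1]])
plt.imshow(heatmap.T, origin="lower", extent=[0, 1, 0, 1], cmap="hot")
plt.colorbar(label="gaze sample count")
plt.show()
```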
Hi, I would like to calibrate my Pupil Labs Core glasses for eye tracking of my laptop screen and the notes that I'm taking. How should I calibrate for this?
Hi @user-83799d. Would you be able to elaborate more on your experiment task? That will help us offer more concrete advice.
Hello! I'm wondering what kind of calibration routine you would recommend for a study with a parent and a child sitting at a table working on puzzles together. They will be looking down quite a bit (at the puzzles on the table - moving them back and forth across the table) but also intermittently up at each other. What would you recommend for something like this? Thank you!
Hi @user-5a4bba! I'd recommend using the single physical marker calibration in this instance. You'll be able to cover more regions of the participant's field of view with that. Try to ensure the pupils are visible to the eye cameras even whilst the wearer is looking at the puzzles.
Hey guys, I'm doing a scan path and I'm having a problem matching the x and y coordinates, because they are given as values like 0.754. How can I match them to an image?
Are you referring to our scanpath tutorial that uses surfaces?
yes
Hi! I've been examining some pupil size data I collected and it seems that one pupil has a consistently larger size than the other! Any reason why this might be happening?
Hi @user-75df7c! It is possible that the difference is a result of small eye model fitting inaccuracies. I recommend repeating the same trial multiple times and resetting + refitting the eye models between trials. If you see the same difference across all trials, then it is likely that you are measuring a physiological effect named anisocoria, a condition of unequal pupil sizes. Some studies report the prevalence of physiological anisocoria as 10-20% of the population. Check out this link for more information: https://eyewiki.aao.org/Anisocoria#:~:text=Physiologic%20(also%20known%20as%20simple,or%20equal%20to%201%20mm.
thank you!
It would also be worth reading our best practices for doing pupillometry with Core to minimise measurement error: https://docs.pupil-labs.com/core/best-practices/#pupillometry
thank you!
hey everyone! is it possible to download the Pupil Core code base as a package through something like conda or pip?
not yet. What use case do you have in mind?
Good afternoon, when making a video with Pupil Labs, is it mandatory to use the QR codes?
The "QR" codes (AprilTags) are only required for the marker mapping/surface tracking features.
Hello,
is it possible to change the time interval for eye capture with Pupil Core? If yes, how? thank you.
Hi! Welcome to the server! You can change the frame rate in the Video Source menu of each window.
@papr ok. I will try. Thank you. Take care.
Hello, I would like to know if there is a way to adjust the LSL relay plugin (or another way) to send surface-tracking gaze data as an LSL stream? Thank you! Also, how can I access the surface topic, please?
Unfortunately, it is not 100% clear how one would implement this. See https://github.com/labstreaminglayer/App-PupilLabs/issues/18
Ok, thank you! Do you know how the LSL relay plugin accesses the gaze or fixation topics? (To get an idea of what to adjust in the plugin in order to access the surface topic.)
Check out the Pupil Capture folder in the same repository
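For reference, a minimal sketch of subscribing to the surfaces topic directly via the Network API (outside of LSL), assuming Pupil Remote is running locally on its default port 50020:

```python
import msgpack
import zmq

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port of the IPC backbone.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to surface-mapped gaze messages published by the Surface Tracker.
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("surfaces.")

while True:
    topic, payload = subscriber.recv_multipart()
    surface = msgpack.loads(payload)
    gaze_on_surf = surface.get("gaze_on_surfaces", [])
    print(topic.decode(), surface.get("name"), len(gaze_on_surf))
```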
hey! quick question, what is meant by recalculating accuracy and precision in this repo?
According to ChatGPT, the Pupil Core with 2x cameras can be bought for $599, but I can't find that on the site. Please advise.
Hi @user-601cc5. It seems ChatGPT was a little off the mark in this instance. Pricing on our website is accurate: https://pupil-labs.com/products/core/
That's a shame, out of our budget. Oh well, thanks anyway.
Also check out Pupil DIY: https://docs.pupil-labs.com/core/diy/#diy
Thanks!
can that diy headset detect pupil dilation?
Yes, it can
also does the library work for arm embedded devices and what is the latency?
Hey, we do not support ARM architectures with pre-compiled packages, but you can run from source. Latency depends on the cameras used. Check out the tech specs on our website for reference.
I need to modify the headset to add some additional camera mounts. Can I download the 3D design, and if so, what is the best CAD software for Mac to edit it with?
Hello.
I'm going to use Pupil Core to verify that I'm staring at certain points on a whiteboard.
I can get the 3D position value for each point, and I can get the gaze_point_3d value through Pupil Core, and I wonder how I can get the position value for the eye.
After checking the data, I see eye_center x, y, z values; are these coordinates in the eye camera's coordinate system? I'd like to know more about this as well.
The data in gaze_positions.csv is as follows:
Using the coordinate system (green) where the world camera is the origin, I wonder if it can be interpreted as shown in the picture below.
Hello, at our university we recently purchased the Pupil Core eye tracker. When we tried it, we had some problems with the algorithm detecting the pupils; the pupil capture is somewhat unstable. How should we proceed to avoid this? We would have more questions for you, but it might be better if we contacted someone directly by phone.
We thank you for your help and wish you a good day.
Hi, did you know that every new Pupil Core purchase includes a 30-minute onboarding video call? Contact info@pupil-labs.com to book it. Otherwise, you can share a Pupil Capture recording folder with data@pupil-labs.com for us to review.
Hello, I am very new to eye-tracking technology and was wondering how large the markers need to be if I am (a) printing them and placing them in the environment, and (b) putting them on a digital image displayed on a screen... Thanks for the help!
Hello, I am a relatively new user of Pupil Core. We are setting up a new experiment involving Pupil Core; however, the problem is that we are not able to get a proper view of the pupils. The limited movement of the camera arms does not allow for a proper setup. In a case like this, what should I do? I had to jam a piece of paper between the forehead and the headset to make it sit higher. Is there a better solution? Also, the headset seems a bit flimsy, as in it might break easily, so are there any STL files that would let us 3D print replacements in case of a disaster? Attached are the eye cam screengrabs.
Hi @user-4ba9c4! To ensure good pupil detection, we recommend checking our guide on proper camera setup (https://docs.pupil-labs.com/core/#_3-check-pupil-detection). If the eye camera ball joints are too stiff, try loosening the screws with a fine screwdriver. Hope this helps!
Hi! I'm hoping someone can help me with my queries on this thread: https://discord.com/channels/285728493612957698/1064859361719099442. Thanks!
Hey there! Robert here from TU QULabs. Can I ask you for some quick advice? I have recorded Pupil Core data via LSL and now have 22 dimensions in my stream - all without names. Does anyone know how I could recover the variable names? Thanks in advance & best regards!
Hey Robert, nice to hear from you! The names are actually part of the xdf file. Let me look up a script that extracts the data.
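A minimal sketch of reading the channel labels back with pyxdf, assuming the LSL relay wrote standard channel metadata into the stream description (the file name is a placeholder):

```python
import pyxdf

streams, _ = pyxdf.load_xdf("recording.xdf")  # placeholder file name

for stream in streams:
    info = stream["info"]
    # Channel labels live in the stream description, if the outlet provided them.
    channels = info["desc"][0]["channels"][0]["channel"]
    labels = [ch["label"][0] for ch in channels]
    print(info["name"][0], labels)
```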
Hey guys, do you know any code that uses the fixation points to assemble a grid in the image?
By grid, do you mean plotting the fixations' positions as points?
No, something like this