Hello! I'm using the Pupil Labs Core for a project that heavily depends on the surface tracker plugin. I made a test and in the gaze_on_surface.csv file I've got a row every 0.004 s (approx.), leading to a frequency of around 240 Hz. However, if I export the eye videos from the same test, I get 120 Hz. Why is there this mismatch? I would assume that the eye videos' frequency is the same as that of the cameras and of the gaze_on_surface file... What am I missing?
Hi @user-4c48eb, thanks for reaching out! May I know how you're calculating the sampling frequency from the gaze_on_surface.csv file?
Context of Pupil Core's eye cameras: The eye cameras can run up to 200 Hz, but may sometimes record at a lower rate depending on several factors (e.g., eye camera resolution, CPU resources, dropped frames).
The eye cameras are also free-running and pupil data matching is performed to give you the gaze position .csv files.
Hi! I forgot to say that I'm recording and exporting the data with the Pupil Player program. To find the frequency I'm loading the file into a matrix in MATLAB and looking at the gaze_timestamp column (assuming it is in seconds), so I do 1/mean(diff(MyColumn)) and I get a bit less than 240 Hz.
The gaze timestamp is indeed in seconds. May I know if you've filtered out rows where the surface was not detected in the data, before calculating the sampling rate?
The data is in the file gaze_on_surface_RX.csv (my surface name is indeed RX), so it should naturally be filtered. However, in my use case the same surface is displayed several times (I discussed it with your colleague Dom a couple of days ago in this channel) and I am filtering the different occurrences using the surface_events file (I have checked there's no accidental surface loss). The only rows I'm leaving out are those with low confidence, but that would lower the frequency. Even if I calculate the frequency in the way I described for the single occurrences of the surface, I get roughly the same values (ranging from 236 to 239 Hz).
Could it be that the cameras ran at 120 Hz and the pupil data matching algorithm produces monocular gaze alternating between the two eyes? That would explain a weird zigzag I'm seeing in the gaze pattern.
Thanks for the clarification. It's possible for a single data point to be matched more than once due to the pupil matching algorithm. You can read more here.
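For reference, a minimal sketch (Python/pandas, assuming the standard Pupil Player export folder and column names) that compares the per-eye pupil sampling rates with the matched gaze rate; because of the matching described above, the gaze rate is roughly the sum of both eye rates:
```python
# Minimal sketch: compare per-eye pupil rates with the matched gaze rate.
# Assumes a Pupil Player export folder with the standard column names.
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")
gaze = pd.read_csv("exports/000/gaze_positions.csv")

for eye_id, grp in pupil.groupby("eye_id"):
    ts = grp["pupil_timestamp"].sort_values()
    print(f"eye {eye_id}: ~{1.0 / ts.diff().mean():.0f} Hz")

gaze_ts = gaze["gaze_timestamp"].sort_values()
print(f"matched gaze: ~{1.0 / gaze_ts.diff().mean():.0f} Hz")
```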
As for the weird zigzag pattern, it could suggest low-quality data. It would be helpful if you could share the particular recording folder so that I can take a look at it. This helps me better understand what's going on with your data. You can send your recording folder to [email removed]
Any way to determine the accuracy of all fixations? Note that we calculated our own fixations based on gaze angular velocity.
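For reference, a velocity-based detector of the kind mentioned above could look roughly like this (Python/NumPy, assuming the standard 3D-pipeline export columns gaze_point_3d_x/y/z and gaze_timestamp; the threshold is only an illustrative value):
```python
# Rough sketch of velocity-based fixation/saccade separation from 3D gaze directions.
# Assumes the 3D pipeline export (gaze_point_3d_x/y/z in the scene camera frame).
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv").sort_values("gaze_timestamp")
v = gaze[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()
v = v / np.linalg.norm(v, axis=1, keepdims=True)      # unit gaze directions

dot = np.clip(np.einsum("ij,ij->i", v[:-1], v[1:]), -1.0, 1.0)
dtheta = np.degrees(np.arccos(dot))                   # angle between successive samples (deg)
dt = np.diff(gaze["gaze_timestamp"].to_numpy())       # sample spacing (s)
angular_velocity = dtheta / dt                        # deg/s

is_fixation_sample = angular_velocity < 100.0         # illustrative threshold only
print(f"{is_fixation_sample.mean():.1%} of samples below threshold")
```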
Hello! I am conducting an experiment on a table, and sometimes the calibration fails because the glasses obstruct the participants' view. I think it might be a good idea to re-calibrate post hoc. However, I'm not exactly sure how to do this. Ideally, I would like to click on the part where I think the participant is looking, since after calibration I will present different points for participants to focus on. Is that possible? Like in this video: https://www.google.com/search?q=pupil+lab+calibrate+post+hoc&rlz=1C1UEAD_esES1044ES1044&oq=pupil+lab+calibrate+post+hoc&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQIRigATIHCAIQIRigATIHCAMQIRigAdIBCDQ3MjlqMGo3qAIAsAIA&sourceid=chrome&ie=UTF-8#fpstate=ive&vld=cid:8c494d2e,vid:mWyDQHhm7-w,st:0 Thank you!
Hey @user-4514c3, you can enable post-hoc calibration in Pupil Player by clicking the "gaze data" icon, and changing the data source to "post-hoc gaze calibration". You'll need to enable manual edit mode, since it sounds like you want to manually click on certain positions on the scene camera frame for the post-hoc calibration.
Hi! In my study, I am using a few pupil core devices at once and would love for the data to be synchronized - it is essential to correctly match everything between participants. How do I make sure that the time sync plugin works, if I am connecting each core to the mobile device and using Pupil Mobile for data collection?
Hi @user-453f5f ! Kindly note that Pupil Mobile is deprecated and no longer supported; we recommend using a laptop/SBC unit to make Pupil Core portable. With that said, I recommend reading our best practices for synchronization and the Pupil Groups documentation, for which you will need a host/computer running Capture. Let us know if you face any specific issues.
Hello! I am working on the integration of the Pupil Core data and motion capture data. For this, I need to calibrate the two coordinate systems. Can someone guide me on what the coordinate system (origin location and coordinate axes) for the Pupil Core world camera is? Is it the same coordinate system in which the 3D gaze points are expressed? If not, what is the latter coordinate system? Thank you.
Hi @user-272aa9 ! The coordinate systems are defined in the Terminology section of the documentation. Have a look at the different coordinate systems employed and at this reference, and let us know if you have any further questions.
Can anyone share guidelines on how to run two Core eye trackers on one computer using Pupil Groups? It is not working for us. Maybe we are doing something incorrectly?
Hey @user-24010f, take a look at this guide on using Pupil Groups. It's important for the computers to be connected to the same network (local networks are the best).
As for the best practices, it depends on your set-up. But we also have this guide that gives you tips on improving data quality.
Can anyone also share the best practices for calibrating the kid and the mother simultaneously?
@user-07e923 can we use Pupil Groups on one computer? It seems to have problems when running on one computer.
All the devices using Pupil Core should have Pupil Groups turned on in Pupil Capture.
We have it on but on one computer.
May I know if you're using multiple Pupil Cores on one computer?
@user-07e923 yes, on one computer
That's not recommended.
To use two or more Pupil Cores simultaneously, you will need two or more separate computers, each running a separate instance of Pupil Capture. Each Pupil Core should be connected to a different computer and all computers should be connected to the same network. The recordings can be simultaneously stopped/started with the Pupil Groups plugin and the timestamps can be synchronized with the Time Sync plugin. It is necessary to use both plugins, otherwise you will encounter discrepancies in timestamps.
And it seems that running 2 Pupil Captures swaps the cameras between the 2 eye trackers.
@user-07e923 we would also like to present the film. Should we use a 3rd computer for the presentation? What would you recommend?
Thanks for confirming @user-5a4bba! The difference you see in the fixations' confidence values could be related to differences between the calibration accuracies in your tests. Fixations are calculated based on gaze, and gaze confidence depends on your calibration and the corresponding accuracy. Therefore, if you compare two calibration procedures (one done during the recording and another one done post hoc) with different accuracies, then the fixation confidence values will also differ. If the calibration during the recording was optimal and you have high-confidence data, you can simply use this for the fixation detection.
That is from a Dell Latitude laptop, nothing special. I was trying out OCR libraries and GPU enhancements for them. The only thing I can imagine is that, while trying to improve OCR processing times, I might have followed some instructions from Nvidia.
Hi, I wonder if it is possible to control an Arduino over serial communication when I press the circular R (record) button in Pupil Capture. Does this rely on the Network API?
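One way this could be wired up is via the Network API's notifications. A hedged sketch (it assumes pyzmq, msgpack and pyserial are installed, Pupil Capture runs on the same machine with the default request port 50020, and the Arduino shows up as /dev/ttyACM0; adjust these to your setup):
```python
# Hedged sketch: forward a serial trigger to an Arduino when a recording starts.
import zmq
import msgpack
import serial

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")                 # ask Pupil Remote for the subscription port
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("notify.recording.started")   # published when a recording starts (e.g. via the R button)

arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)  # assumed port name

while True:
    topic, payload = sub.recv_multipart()
    notification = msgpack.loads(payload)
    print("recording started:", notification)
    arduino.write(b"R\n")                   # your own trigger byte/command
```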
Good morning, I was wondering what the best method is to calibrate the Pupil Labs Core when the subject has to perform a walking task while looking at targets placed on the floor.
Hi @user-a4aa71 , you could print out a few calibration markers and then use them as the features in the Natural Features Calibration Choreography. You would want to do a test and some validation to be sure it is set up correctly.
I suspect that walking while looking down has a slightly higher chance to lead to headset slippage. In your case, you might want to experiment a bit with some validations over time to see how regularly you should re-calibrate to account for any slippage.
When performing desktop calibration, the screen flickers. Is there a solution? Thanks
Hi @user-4bc389! I have not seen this before. Can you share the specifications of the computer you're running Pupil Capture on?
Is the STL file for the Pupil Core frame/headset for sale anywhere? I would be happy to buy it if so - the Shapeways store seems to be having issues.
Hi! Has anyone ever tried to use the video backend with a Raspberry Pi?
You might want to check out this community repo: https://github.com/Lifestohack/pupil-video-backend?tab=readme-ov-file#pupil-video-backend
Hello! Does the newer Pupil Core software provide the two-sphere 3D eye models? (i.e., the work published by Kai et al., where we can predict the cornea and eyeball spheres)
Hi @user-c828f5 , are you referring to this article?
Is it possible to get the status of the calibration? We can start it, but we can't get info on whether it's finished, or all the results.
Hi @user-311764 , I am stepping in briefly for @nmt here.
Do you mean can you get the status of the calibration over the Network API?
May I ask why you need to configure the network data feed? If you want to configure what is sent over the Network API, you are free to edit the source code of the Pupil Core tools. Otherwise, there is no setting to change what data is sent over the Network API. Nearly all data is sent by default and you subscribe to the "topics" of interest. ZMQ is extremely efficient and well tested, so you should usually not run into any trouble with the default settings, provided your network is working well, especially if you are running everything on one computer.
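As a minimal sketch of the subscribe-to-topics model (Python with pyzmq and msgpack, assuming Pupil Capture runs locally on the default port 50020): subscribing only to gaze keeps everything else off the network, and video frames are only published at all if the Frame Publisher plugin is enabled.
```python
# Minimal sketch: receive only gaze data over the Network API.
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("gaze.")                      # only gaze topics are delivered to this client

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.loads(payload)
    print(topic.decode(), datum["norm_pos"], datum["confidence"])
```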
Regarding calibration inaccuracy, what is the accuracy value reported by Pupil Capture when you finish a calibration? Have you ensured good pupil detection and a properly fitted 3D eye model before starting calibration?
Also, I wanted to know if it's possible to configure the network data feed from the Pupil software - for example, keep video local and transfer only gaze data via the network?
If possible, help me with ways to make 3D mapping more accurate. All the calibration procedures with 3D mapping lead to very inaccurate gaze data. 2D is OK.
Hello. I'm using Pupil Capture to get pupil data. I wonder if it is possible to receive pupil data for a single frame and also receive that frame from the related eye camera via the Network API. What should I add to my code? Many thanks for your assistance!
Hi @user-b02f36. The Pupil Helpers Repository has examples that show how to get what you need!
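For orientation, a hedged sketch along the lines of those helpers (Python; it assumes the Frame Publisher plugin is enabled in Pupil Capture and set to BGR, and matches pupil data to the most recent eye frame by eye id and timestamp):
```python
# Hedged sketch: receive pupil data plus eye-camera frames over the Network API.
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("pupil.")                      # pupil data from both eyes
sub.subscribe("frame.eye.")                  # eye frames from the Frame Publisher

latest_frame = {}                            # eye id -> (timestamp, image)

while True:
    msg = sub.recv_multipart()
    topic, payload = msg[0].decode(), msgpack.loads(msg[1])
    if topic.startswith("frame.eye."):
        img = np.frombuffer(msg[2], dtype=np.uint8)
        img = img.reshape(payload["height"], payload["width"], 3)   # BGR layout
        latest_frame[topic[-1]] = (payload["timestamp"], img)
    else:                                    # a pupil datum
        eye_id = str(payload["id"])
        if eye_id in latest_frame:
            frame_ts, img = latest_frame[eye_id]
            print(f"pupil t={payload['timestamp']:.3f} -> eye{eye_id} frame t={frame_ts:.3f}")
```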
Hello! I'm working on my project with Pupil Core. I have 2 questions: 1. Is there any way to get the gaze position on the display without using markers? 2. If I use markers to detect the surface and get the gaze position, the gaze position on the screen is still not accurate to where I am looking; how can I solve this problem?
Hi @user-23a0f0, thanks for reaching out! Could you explain a bit more about your test setup and what you're trying to achieve?
For example, are you doing a screen-based task, what do you mean by 2.), etc.? Also, it would be helpful to tell us the confidence values, and whether the eye models are properly calibrated.
For 2.), I'm trying to get the gaze position on the display with the setup shown in the picture. I have calibrated it but still cannot achieve optimal accuracy (accuracy here is simply a visual check of whether the gaze point is exactly where I am looking). There are different deviations at different positions on the display.
Thanks for the screenshot. It'll be much better if you could provide a sample recording, so that I can give you precise feedback about the issue.
Please send a sample recording folder to [email removed] Please remember to grant permissions for access.
Oh, don't show the link here. It's still a public channel. Please remove it. We've received the link in the email.
Ok thank you
Feedback
I am a researcher and work with the Pupil Core and MATLAB. I basically want to use the Pupil Core to run MATLAB scripts/experiments, but currently it doesn't work. Specifically, I want to run the demo before incorporating it into my script, to test it as recommended in your guideline. I installed Psychtoolbox and use the MATLAB (version 2024) demo + API scripts provided online. I receive an error message when running the demo script "demo_pupil_labs.m", saying "...failed to connect to [my IP] and some port number I used". The Internet connection is there; I played with proxy IP and proxy host, but it did not change the outcome. Can you help?
Hi @user-52c68a , do you have Pupil Capture running at the same time? When you say Internet connection, do you have Pupil Capture and Matlab running on two different computers?
Hi @user-311764 , to your other question, my colleague, @user-d407c1 , has written example code showing how you can check for the calibration.successful notification.
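For reference, a minimal sketch of that check (Python with pyzmq and msgpack; it assumes the default local port 50020 and that the outcome is announced as calibration.successful / calibration.failed notifications):
```python
# Minimal sketch: wait for the calibration outcome notification over the Network API.
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("notify.calibration.successful")
sub.subscribe("notify.calibration.failed")

topic, payload = sub.recv_multipart()        # blocks until calibration finishes
print(topic.decode(), msgpack.loads(payload))
```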
Thanks - I'll look into that
Hello! I'm not sure which channel to ask this question, so I'm asking here; please let me know if this is the wrong place for it. Among Neon, Invisible, and Core, which one is best for screen-based experiments, like looking at a moving target or reading text on the screen?
Hi @user-bda2e6, this is quite a difficult question to answer. It highly depends on what you want to achieve with your data analysis.
Let's take moving targets as an example. In this case, you probably want to take into consideration the scene camera on the eye tracker. Are you looking to analyze precise timing of gaze on a target when it is at a specific location? If so, then you probably want to use Core because you can get up to 120 Hz scene camera sampling rate. This would minimize the possible visual distortion (think CRT monitors showing stimuli in a top-to-bottom way). This is also under the assumption that your monitor's refresh rate is less than the scene camera's sampling rate.
But if you're interested in, say, smooth pursuits, then I'd say the eye camera refresh rate is important, because you want to have as much gaze data as possible to compute the pursuit.
It would be more helpful to answer this question if I know precisely what you're looking for and what your set-up is.
Hello. We are currently setting up an EEG experiment that will entail passive movie watching and we are using pupil core to track eye movements and pupil size. We're having an issue while trying to set it up.
When we start the app and plug in pupil core to the computer, the pupil capture for eye 0 and eye 1 load almost immediately, however, the pupil capture for world will not connect. I have attached a picture below of what happens. We have attempted to use it 5 times, and 4/5 times this has happened, but yesterday when testing it the world view did load so we are quite confused as to what could be causing this/how to fix the problem. Thanks in advance.
@user-07e923 Thank you for the reply!! I appreciate that a lot. Is it possible to schedule an appointment with the support group to ask about some specific questions?
Sure thing! Please send an email to [email removed] and we'll continue there.
Hi! I was done with the calibration step and wanted to record fixations next. But it seems like whenever I was recording, the fixations were always outside of the calibrated area even though I was looking only within that area. It seems like the device inaccurately detects my pupils, but the calibration done before that showed otherwise. Any suggestions?
Hello, does the core have a CE Certification? Where can I download the certification?
Hi @user-4c48eb, could you contact us via email for this? Thanks!
Thanks Wee!
Hi @user-311764 , yes, you will then get uncompressed RGB images, where the color channels at each pixel are ordered in the default OpenCV layout of BGR. Considering your constraints, may I ask why you might need this format? The MJPEG images also have RGB channels, just that the data is compressed.
Each option in the Frame Publisher settings is indeed a different format. Do the available options not cover your use case?
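As a rough sketch of how the two formats differ on the receiving end (Python; it assumes the frame payload's format field reads "bgr" or "jpeg", as in the pupil-helpers examples; adjust if your version reports different strings):
```python
# Rough sketch: decode a Frame Publisher frame depending on its format field.
# `payload` is the msgpack-decoded metadata, `raw` the binary part of the multipart message.
import numpy as np
import cv2

def decode_frame(payload, raw):
    if payload["format"] == "bgr":
        # Uncompressed: reshape the flat buffer into height x width x 3 (BGR order).
        return np.frombuffer(raw, dtype=np.uint8).reshape(
            payload["height"], payload["width"], 3
        )
    if payload["format"] == "jpeg":
        # MJPEG: let OpenCV decompress; the result is a BGR image as well.
        return cv2.imdecode(np.frombuffer(raw, dtype=np.uint8), cv2.IMREAD_COLOR)
    raise ValueError(f"unhandled frame format: {payload['format']}")
```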
Thank you. I'll look into that.
Hello! I am looking to purchase the Pupil Core for my lab. We have used the Pupil Core in the past (~5 years ago), but it appears that model is outdated. Could someone please explain what has changed between the models? Looking at the specifications online, it seems to no longer use a smartphone to collect data. Additionally, if there is a better place to ask these questions, let me know. Thanks.
Hi @user-1793ff! You can still opt for Pupil Core, but indeed its smartphone/mobile version, Pupil Mobile, is deprecated and no longer supported. So, you'd have to connect Pupil Core to a laptop/PC and run its software, Pupil Capture, to collect data. Some users have used a small form factor tablet-style PC in a backpack to make Pupil Core more portable.
If you would like a fully portable system, you could also consider our latest eye tracker, Neon. Neon connects to a phone and allows for long recordings in any environment, even with fast, dynamic movement. If you want to learn more about it, please send us an email at [email removed] and we can schedule a demo and Q&A session via video call.
Hi, What should I do if I would like to have a spare set of cables for my pupil core cameras? Many thanks.
Hi @user-52e548 ! You can inquire with sales@pupil-labs.com about purchasing additional cables.
Hi, I have a problem: I recorded all my data without the Time Sync plugin! The two eye trackers were in the same group, so they start and end recording at the same time, but the timestamps are not the same. How can I analyze the data on the same time base? Is there any post-processing I can do to fix this problem? I appreciate your help.
Hi @user-813003 , we removed your messages in the other channels, since this is essentially a request for the core channel. In the future, you only need to post in one channel.
Just to clarify, the Time Sync plugin does not force the two eyetrackers to record data samples at the exact same time. Rather, it ensures that they are both synchronized to the same master clock, such that all data samples are on the same time axis. For example, when using the Time Sync plugin, then 0 sec, 1 sec, 2 secs etc would have the same meaning and different timepoints would be comparable across data streams from both eyetrackers.
As noted in the Time Sync documentation, without the synchronization, it will not be possible to reliably correlate the data, as there was no master clock reference saved at the time of recording.
Hi, is it possible to retrieve any data if the core was disconnected during a trial? I see some files with .writing extension in the folder.
Hi @user-3f7dc7, thanks for getting in touch! The .writing extension indicates that the files were still being written. If you closed Pupil Capture before the writing completed, then those files can't be recovered.
There might still be a chance to salvage some data. May I ask if you have pupil.pldata in the folder, and that it's a reasonable size (i.e., not 0 Kb or a small file size w.r.t. your recording duration)?
@user-813003 Having said that, my colleagues have clarified that with the Pupil Groups plugin, the two Pupil Cores will start recording at roughly the same time. So, you could potentially transform the timestamps to be approximately comparable. However, since the exact offset between both devices is not known, it is not possible to say how accurate this assumption is.
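A hedged sketch of that approximate transformation (Python; it assumes the newer recording format with info.player.json containing start_time_system_s and start_time_synced_s, and that both computers' system clocks were themselves reasonably accurate):
```python
# Hedged sketch: approximately align two Pupil Core recordings made without Time Sync,
# by mapping each recording's pupil time onto its computer's system (Unix) clock.
import json

def pupil_to_system_offset(recording_dir):
    with open(f"{recording_dir}/info.player.json") as f:
        info = json.load(f)
    # System-clock and pupil-time readings taken at the same moment (recording start).
    return info["start_time_system_s"] - info["start_time_synced_s"]

offset_a = pupil_to_system_offset("recording_A")
offset_b = pupil_to_system_offset("recording_B")

def b_to_a(timestamp_b):
    # Express a timestamp from recording B on recording A's time axis (approximately).
    return timestamp_b + offset_b - offset_a
```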
Good morning, I did several custom calibrations (without pupil detection and with post-hoc calibration calculated) to compare them. When I do the validation, what does "manual correction" mean? I mean, I get what it means, but I don't know where to see the effect of having set an offset in x and y. Thanks for the help.
Hi @user-a4aa71. Manual offset correction can only be applied when using the post-hoc calibration feature. This might help increase your accuracy when the gaze data is skewed in one direction (even though the pupil is detected fine).
Note that this won't change your accuracy if you're just running the offline pupil detection/calibration with default settings, which would be equivalent to your pre-recorded data. See also this relevant message: https://discord.com/channels/285728493612957698/285728493612957698/704259111637876757 You can also refer to our docs: https://docs.pupil-labs.com/core/software/pupil-player/#post-hoc-calibration
You can also check this plugin for manual offset correction for Pupil Core recordings: https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433
Hi, for my study, which involves participants walking while bimanually carrying an object, I am using a single-marker calibration method (participants hold the paper target at the same distance from them as the object), with the goal of hitting an angular accuracy between 1.5 and 2.5 degrees, as per the recommendation. However, I have two subjects whose angular accuracies were 2.504 and ~2.7, which were honestly the best calibrations I could get from them. For this study, we are classifying gaze via post-hoc video analysis as being allocated to three broad locations: 1) the carried object, 2) the wall (~10 m from the participant before they start walking), or 3) the floor. These locations are not marked with targets. Since my subjects are in motion and not looking at small, precise locations (besides the object, which is right in front of them), can/should I still use angular accuracies slightly higher than the ideal range?
Hi @user-d90133, thanks for getting in touch! We can't evaluate why these two subjects had worse accuracies without seeing the recordings.
However, there are settings that can help with the accuracy. Are you using 2D or 3D gaze estimation pipeline? While 2D gives you the best accuracy, it is more susceptible to slippage, especially when walking is involved. Therefore, while you might get worse accuracy with 3D, it will be more robust. You can check our best practices for more information.
We can't generalize the accuracy you need for your use case. This would depend on your goal and how you justify which values you used in your experiment.
Hello, I have some problems when I use Pupil Capture. First, is it possible to change the sensor settings and image post-processing in the eye camera windows of Pupil Capture using Python? Second, I recently tried to receive scene and eye camera images in real time from Pupil Capture based on the example named recv_world_video_frames_with_visualization.py in the Pupil Helpers repository, but I cannot control my Arduino UNO R3 at the same time using the pyserial library. Both my Arduino and my eye camera are connected to my laptop through USB-A ports. Is this problem related to the signaling rates of my USB ports?
Hello, just wondering: is there an easy way to save the processing parameters of Pupil Player in an .xml file or something? So that a similar configuration can be used with another file? By a similar configuration I mean the same activated plugins with their respective thresholds and settings. Thanks in advance.
Hey @user-b3b1d3, may I ask if you're trying to batch process Pupil Core recordings?
Yes, that is exactly what I am doing
Great! Check out https://discord.com/channels/285728493612957698/285728493612957698/1106182244869099520 -- Someone from the Pupil Core community made a batch-export script.
You might need to contact the creator if you have more specific questions about how it works or if you need to tinker with it.
Ok, thanks for the info. I still want to check the data from each recording with the activated plugins. Once I have the data preprocessed and checked, I start my batch processing, looping through all the files exported by Player. I will look at the scripts in more detail to see if they can help. Thanks!
Hi, I have two recordings of the same task from different subjects. In the first recording I had to define 16 surfaces, and I would like to export those definitions to the second recording. I have tried copying and pasting the surface_definitions_v01 file, but that makes it crash. Is there a way to do it besides defining them again in Pupil Capture?
Hey @user-4c48eb, may I ask how much RAM do you have on your computer? Your hardware might limit this process or the crash could be caused by not enough computing power/resource. See https://discord.com/channels/285728493612957698/285728493612957698/1248074502517161984
Hi! Can Pupil Core be connected to an Android device?
Hi @user-b69caf , at one time this was possible. It was called Pupil Mobile. That project has since been deprecated, and connecting Pupil Core to an Android device is no longer supported.
These days, you can instead try the following:
May I ask if you already have a Pupil Core?
Yes, I have.
Ok. Then, if you want to use it in a mobile fashion, let us know how those instructions (https://discord.com/channels/285728493612957698/285728493612957698/1265954467279274005) work out for you. You could technically also connect Pupil Core to your laptop and put the laptop in a backpack, but then of course, you have to deal with the heat and weight of the laptop.
I would like to ask, can Pupil Core be used directly with an Android device without connecting to other devices, such as this Android device
@user-b69caf , as mentioned here (https://discord.com/channels/285728493612957698/285728493612957698/1265954467279274005), no, Android is not supported. Please let us know if you have other questions.
@user-b69caf - out of pure curiosity, what is that?
This microcontroller is equipped with the Android system
@user-b69caf , I should mention, though, that you can also control Pupil Core over a network connection using the Network API. It is based on ZMQ.
So, you cannot use the Pupil Core when directly connected to an Android device, but if you use a ZMQ library for Android, then you can receive data from the Pupil Core over the network. If the intention is to be mobile, then you would still need to connect Pupil Core to an SBC or laptop.
Perhaps that info can be helpful here.
@user-f43a29 Thank you for your suggestion
@user-cdcab0 Qualcomm Robotics RB5 Development Kit: https://www.thundercomm.com/product/qualcomm-robotics-rb5-development-kit/
Ah, cool - thanks for sharing! That's an interesting platform. I'd love to hear what your plans are if you're able to share.
Also, it looks like the RB5 has some Linux support, which may expand your software options more than Android
I'm just testing to see if we can integrate eye tracking, gesture recognition, and SLAM into one device
Any particular reason for Android then?
Some of the features I previously worked on were tested on Android
Hello, I have been looking into how to make some visualizations of the eye tracking videos I have recorded for a website usability study. I have come across Pupil Player, but I can't seem to find how to access it. Can anyone help, please?
Hey @user-3e3f5a, are you using Pupil Core or Pupil Invisible? Also, what kinds of visualization are you looking to make?
Hello, I would like to create a plugin that allows me to display the calibration marker on the monitor and change in real-time either from slider or with keyboard the size and contrast/color of the marker so that then these parameters are set as marker parameters in the next calibration. Can anyone help me with this? Thank you
Hi @user-a4aa71 ! As a starting point, I would recommend having a look at the custom calibration plugins that have already been built by the community (https://github.com/pupil-labs/pupil-community?tab=readme-ov-file#calibration-choreographies-and-gazers) and at the Plugin API documentation.
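As a very rough starting sketch of a user plugin with a slider (assuming the documented Plugin API: subclass Plugin and build the menu in init_ui with pyglui elements; the file goes into pupil_capture_settings/plugins/; wiring the value into the calibration choreography's marker rendering is not shown here):
```python
# Very rough sketch of a Pupil Capture user plugin exposing a slider in its menu.
from plugin import Plugin
from pyglui import ui


class MarkerTuner(Plugin):
    uniqueness = "by_class"

    def __init__(self, g_pool, marker_scale=1.0):
        super().__init__(g_pool)
        self.marker_scale = marker_scale

    def init_ui(self):
        self.add_menu()
        self.menu.label = "Marker Tuner"
        self.menu.append(
            ui.Slider("marker_scale", self, min=0.5, max=3.0, label="Marker scale")
        )

    def deinit_ui(self):
        self.remove_menu()

    def get_init_dict(self):
        # Persist the chosen value across sessions.
        return {"marker_scale": self.marker_scale}
```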