This is a pretty unique configuration that I don't think many people, if any, have tried. Perhaps it's as simple as a missing system dependency? Can you try sudo apt-get install libglfw3?
It's already installed too
(Sorry, printscreen doesn't work)
Hmm. You said this is in a VM, right? Perhaps you need to install some guest drivers?
Yes, it is. Which ones would I need to install?
What are you using to run your VM? VirtIO? VirtualBox? etc.?
I am using the UBports PDK, which is preconfigured with QEMU
Looking that up, it appears to be preconfigured with OpenGL support, although they do note some known issues with NVIDIA GPUs if that applies to you.
Are you able to run any other GLFW apps? Is there an Ubuntu Touch or ubports community?
I get the same error when I try to use the GLFW library to create a window. Yes, there is a Telegram community for UBports.
They suggested making the Pupil app a native UT app, or packaging it as a classic or strict Snap.
That first option sounds like it would be a pretty massive undertaking, and I don't know enough about snaps to know how that would actually change anything
I was thinking the same..
This is the player.log file from just a moment ago. I opened a recording in Player, then quit the Player, which caused it to crash.
It looks like you may be hitting the limit for file path lengths (this is an OS / filesystem limitation). Can you try moving, renaming, or otherwise restructuring your folder organization such that the file paths will be shorter?
In addition to the problem I am having with Player crashing that I messaged about above, I have now encountered a different issue when using Player on a different computer. I open Player, switch to 'post-hoc Pupil Detection' and both eye videos load up and the process appears to run as normal, except the message "There is no pupil data to be mapped!" appears in the world view console in red. When the detection is complete, both videos disappear as normal; however, in the plugin within the console eye0 still says detecting and only eye1 says completed. If I try to run the redetection again, only the recording for eye1 loads up this time and I still get the "There is no pupil data to be mapped!" message. Finally, if I switch to 'Pupil Data From Recording' and then back to 'post-hoc Pupil Detection' I also get the following message on the console in yellow, "Aborting redundant eye process startup", in addition to the previously mentioned message in red.

Oddly, the player.log file is too large for Discord to allow me to attach it; however, 99.999% of the log file says the same thing on repeat, so I have attached a copy of the first part of the log file. The rest of it just repeats the lines "Create gaze mapper Default Gaze Mapper - [DEBUG] gaze_mapping.gazer_3d.gazer_headset: Prediction failed because left model is not fitted" and "Create gaze mapper Default Gaze Mapper - [DEBUG] gaze_mapping.gazer_3d.gazer_headset: Prediction failed because binocular model is not fitted" a LOT, until the end where the last bit of the log file reads "2024-02-01 13:40:18,100 - player - [DEBUG] glfw: (65544) b'Win32: Failed to open clipboard: Access is denied." several times.
Thanks for your response. I tried renaming one of the folders to reduce the length of the file path and that seemed to have solved the issue of it crashing when I try to close player. Not sure about the other crashes yet but hopefully this has fixed those too. Thanks a lot!
Do you have any thoughts on the other issue I posted about above that I was having with a single recording on a different computer?
Please try restarting Player with default settings. That option is in the app settings
Hi, is it possible to get the same output data without having to export it from the video file? Since we're looking to use a rather large data set it would be great not to have to save video for every participant.
Hi @user-f76a69! Just to be on the same page, you want to record data without recording the video, right? It's not that you have the recordings already and want to export the data?
In that case you could use the Network API to access the data.
Nevertheless, we recommend recording everything, so that you can also do post-hoc evaluations, and that you split your recordings into chunks. Check out our Best Practices.
Thanks for the quick response! Yes, we would like to just have the data without the recording. I'm connected to the Network API. This is probably a silly question, but how do I access the data from there? I have so far only used it for Pupil Remote and then had to use Pupil Player to export gaze positions and dilations from the recording.
Hello, I'm using Pupil Mobile on Android to collect the recordings. When I do post-hoc pupil detection in Pupil Player afterwards, it shows on the screen: "Eye0: No camera intrinsics available for camera eye0 at resolution (192,192)! Loading dummy", and the same for Eye1, and also in red: "PLAYER: There is no pupil data to be mapped!" The output video and data from the export seem to be okay so far, but does it affect anything?
Hi @user-f76a69! You will need to subscribe to the specific topics you are interested in. Here you have an example of subscribing to gaze 3d.
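For reference, here's a minimal sketch of subscribing to gaze.3d over the Network API (this assumes Pupil Capture is running with Pupil Remote on its default port 50020, and that you have the pyzmq and msgpack packages installed):

import zmq
import msgpack

# connect to Pupil Remote and ask for the subscription port
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()

# subscribe to 3d gaze data (use 'pupil.' instead for pupil/diameter data)
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://127.0.0.1:{sub_port}')
subscriber.subscribe('gaze.3d.')

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload)
    print(topic.decode(), gaze['norm_pos'], gaze['confidence'])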
Hi @user-c075ce! Pupil Mobile is deprecated; we recommend using a laptop/SBC unit to make Pupil Core portable. That said, if you have recordings, the software still supports them, as you can see.
The camera intrinsics are not corrected when using Pupil Mobile. Unless you use non-standard cameras, this should not affect your data much, and if you obtain good accuracy I would not worry about it.
Thank you for your answer. Also, the videos from the eye cameras are 7-8 seconds longer than the world camera video; I guess this is again due to Pupil Mobile. It shouldn't affect the exported data either, should it?
There are some quirks with Pupil Mobile. Not sure about the eye video's length, but it should not be an issue. I guess it loads fine in Pupil Player, is that right?
Yes, it does. But overall, world video obtained from pupil capture seems to have better quality than the one obtained from pupil mobile, although they have the same resolution.
The processing on the phone might be a limiting factor; as mentioned, the current recommended workflow for Pupil Core in more dynamic environments is to use it with a laptop or an SBC unit.
I am currently using Pupil Core, and I want to create a plot of the gaze points for a specified fixation ID. Is there a good way to do this?
Hi @user-181b57! Could you elaborate on which kind of plot you are looking for? By the way, have you seen these tutorials?
"I want to plot the position of the gaze point on the surface onto a photo of the same size
Then I would recommend you take a look at https://github.com/pupil-labs/pupil-tutorials/blob/master/03_visualize_scan_path_on_surface.ipynb
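If it helps, here's a minimal sketch of the idea (assuming a Pupil Player surface export named gaze_positions_on_surface_Screen.csv and a photo of the surface named screen.png; adjust file and column names to your setup):

import matplotlib.pyplot as plt
import pandas as pd

gaze = pd.read_csv('gaze_positions_on_surface_Screen.csv')
# keep only on-surface, high-confidence samples
gaze = gaze[(gaze.on_surf == True) & (gaze.confidence > 0.8)]

img = plt.imread('screen.png')
h, w = img.shape[0], img.shape[1]

plt.imshow(img)
# surface coords are normalised with origin bottom-left; image pixels have origin top-left, hence the y flip
plt.scatter(gaze.x_norm * w, (1 - gaze.y_norm) * h, s=5, c='r', alpha=0.5)
plt.axis('off')
plt.show()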
Thank you! I'll try it!
Hi Miguel, this is Enze from Pace University. If my coworker didn't put any AprilTag markers on her screen when recording with Pupil Capture, but we still want a heatmap of the surface of her laptop (an app UI) post-hoc with Pupil Player, is it possible?
Hi @user-2bf7f8, I saw you also asked by email; we have answered you there. Unfortunately, we don't offer a markerless solution to detect surfaces in Player. I would recommend looking at feature matching algorithms online.
@user-d407c1
tks
Hi, I need help with a project using Pupil Invisible. I would like to know with what uncertainty the position of the AprilTags is found. Thanks!
Hi @user-c07f5f! There are no uncertainty metrics for the location of the AprilTag markers.
If you use Cloud, what you would see is a grey/purple bar on the timeline: purple where the surface is located and grey where it is not found, in a sort of boolean fashion.
If you want to see how the surfaces are located relative to the scene camera, you can do something like this. Note that this tutorial was made for Core; if you want to follow it step by step, you may want to use the Surface Tracker in Pupil Player rather than the Marker Mapper.
Hi! I am running into an issue where, after performing post-hoc gaze calibration, I am not getting any fixations within the calibrated timeframe. I was reading through older messages and saw that in order for the fixation detector to work, post-hoc calibration must be "disabled." Can I have more clarity on this please?
Thanks
Hi @user-904376 ! Do you have the fixation detector enabled in Pupil Player?
Hi! Yes, I have the fixation detector enabled. It gathers fixations with the original gaze data, but after I do post-hoc calibration the detector fails to capture any fixations.
Hello, I'm using Pupil Mobile to collect the recordings (I know it's not supported anymore), and process them in Pupil Player. When loading the data, it says "No camera intrinsics available, Loading dummy" in Pupil Player. By loading the dummy, does it load the real camera intrinsics? (Since I guess the default intrinsics are known for Core glasses?) Or can I obtain the camera intrinsics by running the Capture app and then load these intrinsics into Pupil Player when processing recordings from the mobile app?
Hi @user-c075ce! The default intrinsics are defined here; Pupil Mobile does not define any, so the dummy is used. If you have another recording made on the computer, you can find .intrinsics files and copy them to your recording (assuming the same parameters are used as in Pupil Mobile).
Thank you for your answer. Also, "dummy instrinsics" are just random intrinsics? And by "Assuming same parameters are used" you mean using the same glasses?
The dummy camera defined here is not random, but rather assumes no intrinsic distortions. And yes! The same glasses and resolution configuration.
Ok, thank you very much.
Hey @user-904376! Can you just confirm that both the post-hoc calibration and gaze mapping was successful, and that you have high-confidence gaze data? Without it, the fixation detector won't find any fixations. Feel free to share a screencapture showing the recording after you've run post-hoc calibration/gaze mapping!
Hi! Yes, both the post-hoc calibration and gaze mapping status is "successful." Where can I see the confidence in the gaze data? The validation accuracy and precision results are 2 and 0.1 degrees, respectively.
Hello. I have a question regarding the recording formats of the Pupil Core. I'm using the LSL plugin to stream Pupil Core data from different computers to a single CSV file. The LSL plugin documentation says that the values passed in the LSL stream are a flattened version of the original Pupil gaze data stream. Where can I find the information on the original Pupil gaze data stream so that I can label the flattened Pupil Core data in my CSV file?
Thanks
Hi, @user-2c86b2 - gaze data is streamed to LSL using the format described here: https://github.com/sccn/xdf/wiki/Gaze-Meta-Data
@user-cdcab0 Thank you
Hi, I'm working with some data acquired by the Pupil Core, which I process through the Pupil Capture software. However, I would like to use some functions like fixation detection directly in Python. Is there any easy way to achieve that?
Hello, team. The videos I shot have not been uploaded to Cloud, even though I set recordings to be automatically and privately uploaded. Do you know how to fix it?
Hi @user-5d2994! Double-check that your phone has internet connection and that Cloud uploads are enabled, then please try logging out and back into the Companion App. Does that trigger the uploads process?
Hey @user-0b5cc5. We don't have a Python script that would compute fixations using Pupil Core recordings, although there is a community-contributed export tool that extracts some of the raw data programmatically. That might be of interest!
thanks for your reply, I will take a look.
Thank you, Neil, the upload has started.
Hello team. I have a question regarding the data processing. I've been using Pupil Core to collect pupillometry data and am currently in the preprocessing stage. The two eye cameras return pupil diameter values along with two independent timestamps for the right and left eyes, respectively. In my long-format dataset, I have one column for the right pupil values and another for the left pupil values. Due to the non-synchronization of timestamps, often a single row (corresponding to a specific timestamp) has values for one eye but not the other. I'm conducting the preprocessing in R, using the pupillometryR package. Unfortunately, all these NA values are interpreted as missing data due to poor data quality, rather than the lack of data synchronization. I'm reaching out to ask if there's a method to address this issue, or if you know of any R packages for pupillometry data processing that take this aspect into account. Any advice or guidance would be greatly appreciated. Thank you in advance!
I have a hardware-related question: When the Core's eye cameras are operating at 192x192 px, I've noticed that the contrast of the images is visibly higher than when the eye cameras are operating at 400x400 px. Is this a known phenomenon? And is there any insight to be had as to why this might be?
I'm afraid I don't have any insights into why this is. However, what I can say is that the 2D detector is optimised for the lower resolution. So, I'd generally recommend that, especially as it runs at a higher sampling rate, if that's important!
Hi @user-b407ae. Pupil Core's eye cameras are free-running, which means they aren't in sync and the timestamps might not match exactly for each pupil datum. This is expected. How you ultimately work with this data really depends on your research. I'm not familiar with the pupillometryR package. However, typical approaches to the pre-processing of pupillometry data include things like interpolation of missing samples.
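As a starting point, here's a minimal sketch of one way to put both eyes on a common time base by interpolation (assuming a Pupil Player export named pupil_positions.csv with pupil_timestamp, eye_id, and diameter_3d columns; adapt the column names and sampling rate to your data, and handle low-confidence samples before interpolating):

import numpy as np
import pandas as pd

df = pd.read_csv('pupil_positions.csv')

# a common, regular time base (here 120 Hz) spanning the recording
t = np.arange(df.pupil_timestamp.min(), df.pupil_timestamp.max(), 1 / 120)

resampled = {'timestamp': t}
for eye in (0, 1):
    eye_df = df[df.eye_id == eye].sort_values('pupil_timestamp')
    # interpolate each eye's diameter onto the common time base
    resampled[f'eye{eye}_diameter_3d'] = np.interp(t, eye_df.pupil_timestamp, eye_df.diameter_3d)

merged = pd.DataFrame(resampled)  # no more half-empty rows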
You might want to check out this third-party Python tool for evaluating pupil size changes (which should be compatible with Core recordings), as it already has some steps implemented: https://pyplr.github.io/cvd_pupillometry/index.html
This paper has also proven useful in the past: 'Preprocessing pupil size data: Guidelines and code'. Behavior Research Methods, 51(3), 1336-1342. https://doi.org/10.3758/s13428-018-1075-y
Lastly, I'd highly recommend reading our best practices for pupillometry with Core: 'Best practices for doing pupillometry with Pupil Core': https://docs.pupil-labs.com/core/best-practices/#pupillometry
I hope these resources help!
Many thanks, @nmt I will follow the advice you've provided!
Hey Pupil Labs, I have a couple of recordings that are properly segmented for each task in an activity. However, one of the recordings is the entire activity recorded non-stop. So when comparing all the data from all the recordings, this full recording has more data, which is not a good comparison since there was more data. So is there any way I can precisely trim the video and get a separate output value for each trimmed clip?
Hi @user-870276! Generally you can trim using the trim marks in Pupil Player; if you need something more precise, perhaps you can set some events and use their timestamps to crop your data.
If I set trim marks in Pupil Player, will it crop the data in the CSV file when exporting, or will it not? And should I do it manually?
Yes, setting the trim marks will exclude data from the exports that are outside of the trim marks
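For the more precise option mentioned above, here's a minimal sketch of cropping an exported CSV between two annotation timestamps (assuming an export folder containing gaze_positions.csv and annotations.csv; the labels 'task_start' and 'task_end' are just placeholders for your own event names):

import pandas as pd

gaze = pd.read_csv('gaze_positions.csv')
events = pd.read_csv('annotations.csv')

start = events.loc[events.label == 'task_start', 'timestamp'].iloc[0]
end = events.loc[events.label == 'task_end', 'timestamp'].iloc[0]

# keep only the gaze samples recorded between the two events
clip = gaze[(gaze.gaze_timestamp >= start) & (gaze.gaze_timestamp <= end)]
print(len(clip), 'samples in this clip')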
Good morning! I have a question, how do I know I am doing a good calibration? Sometimes those orange marks appear and other times they don't even though I calibrate the same way, what could be the cause? Thank you!
Those orange lines visualise what we refer to as residual training errors. They represent degrees of visual angle between the calibration target locations and the associated pupil positions. If they're not present, it means little pupil data, or limited gaze angles, were recorded during the calibration. So that we can provide concrete feedback, it would be helpful if you could record a calibration choreography and share it with us. Enable the eye overlay plugin first so we can see the pupil detection.
Hello, I have a question about how to read and interpret pldata files. I noticed on your website that file_methods.load_pldata_file() should be used to read the files, but when I run the method, the output is uninterpretable to me. Could someone explain how I can translate pldata files into a human-readable format?
Hi @user-3515b1 ๐. Just to provide a bit of context, these are binary files that store data in an intermediate format, not designed to be directly consumed by users. You can already access the data in a human-readable format by loading recordings into Pupil Player and exporting them into .csv files. Further instructions can be found in this section of the docs. That said, if you did want to access these files programmatically, have a look at this message: https://discord.com/channels/285728493612957698/285728493612957698/1097858677207212144
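For completeness, here's a minimal sketch of the programmatic route (assuming you run it from the pupil source tree, e.g. pupil/pupil_src/shared_modules, so that file_methods is importable):

from file_methods import load_pldata_file

# e.g. reads gaze.pldata (and its timestamps) from the recording folder
pldata = load_pldata_file('/path/to/recording', 'gaze')
for datum, ts, topic in zip(pldata.data, pldata.timestamps, pldata.topics):
    # each datum behaves like a dict (lazily deserialised msgpack)
    print(topic, ts, datum['confidence'])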
Hello, my world cam has started displaying upside down. Does anyone know how to resolve this?
Hi @user-338a8c ! Can you click on general settings and "Restart with default settings"?
Thanks Miguel. That didn't work, unfortunately.
Could you share some details as to when that happened? What OS are you running Pupil Capture on, which version of Pupil Capture, etc.?
Hi! I'm wondering whether it's possible to control the timing of the frame capture, for example with a sync pulse?
Hi @user-40931a! Timing of frame captures with sync pulses is not possible I'm afraid. However, we do have several ways to synchronise Pupil Core with other systems. If you can share more details of your goal, we can try to point you in the right direction.
@nmt We're trying to record from both the Pupil Core and an Optitrack motion capture system simultaneously. Since the Optitrack works by sending pulses of IR the two are currently in conflict, but if the Core could capture frames in sync with the Optitrack IR pulses then we thought they might be able to work together.
What do you mean by the two are in conflict?
Hi,
Suddenly eye0 is not working in Pupil Capture anymore. I get the message:
eye0 - [WARNING] video_capture.uvc_backend: Camera disconnected. Reconnecting... eye0 - [INFO] video_capture.uvc_backend: Found device. Pupil Cam1 ID0. Estimated / selected altsetting bandwith : 200 / 256. !!!!Packets per transfer = 32 frameInterval = 83263
How can I fix this?
Please double check that the cable connecting the camera is secure/fully connected, then follow these steps: https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting. Does that solve it?
Hi Neil, reinstalling the drivers worked. Thank you so much!
We're still setting up equipment so this is just theoretical, but the optitrack system works by flashing IR lights at a given frequency. If the Core is capturing data out of sync with that flashing, that means that the level of IR reflecting from the participants eyes will differ from frame to frame for reasons unrelated to the participant's eye behavior. In your view, do you think this will present a problem in reliably tracking gaze with the Core or do you anticipate that it will be fine?
Thanks for explaining. I understand what you mean now. You should be fine. Core has been used alongside optical motion tracking systems many times in the past without issue and the extra IR illumination shouldn't influence our pupil detection pipeline. That said, always best to run a quick pilot with both systems prior to data collection!
Hello everybody. I have some questions about installing Pupil capture. We bought the glasses for our labs. I need some info about installing the software.
Hi @user-2cc535! Have you already seen our documentation for installing Pupil Capture? Also, feel free to just go ahead and ask any questions you have!
Hey everyone! I have a bunch of Pupil Core recordings. I'm wondering if there is a utility to export all of them at once, only the CSV files (pupil_position...)?
Hey @user-03c5da. Check out this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/1184385440388759562
Hello! We have a problem related to static: on one device, it has knocked out 2 world cameras and one tracker USB hub. We have replaced the chip on the hub and it started working, but the cameras do not start. Can you promptly advise how to connect another camera (which is not in the list of recommended cameras)? We took a standard laptop camera.
Unfortunately, we have to say that the glasses are not protected from static electricity and can break for this reason. How do you see the solution to this problem?
@user-f93379 Hi, I opened an issue (see https://github.com/pupil-labs/pupil/issues) with some notes for Zadig and libusbK (I am assuming you are running on Windows).
Please, confirm your OS (Linux, Mac, or Windows)
Also, unfortunately, Pupil Labs is not strong/powerful enough to be generous and support third-party cameras. I think they are not going to help you.
As an early adopter, I can say that they changed. They used to believe more in open source.
Hi @user-41f1bf! Thanks for your feedback. As I mentioned in my previous message: https://discord.com/channels/285728493612957698/285728493612957698/1201719271563202610, we're focusing on ensuring stability with the Pupil Core software for now. However, we're always pleased to hear that users are tinkering with the software and experimenting with different hardware. That's why we choose to keep the pupil repository open-source, so the community and users can build and extend it according to their own requirements. With that said, we would ask you to be mindful of how you communicate on this forum, as we want it to be a constructive space and for people to be excellent to each other, referencing the https://discord.com/channels/285728493612957698/983612525176311809
Hey @user-f93379. I'm sorry to hear about that! If the USB hub and other integrated components have been damaged by electrostatic discharge, I'd highly recommend taking some precautionary steps before handling any electronic components inside your lab when working with the device you mention.
Regarding the cameras, the Pupil Core backend is compatible with UVC compliant cameras that meet the criteria outlined in this message: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498. So, if your laptop camera meets these criteria, then in theory they should appear in the software.
If you don't manage to get things up and running, we would need to receive the hardware at our Berlin office for inspection or repair. You can reach out to [email removed] with the original order ID in this case.
Neil, would love to meet you in person and head office, but we are in another country. The trackers are really good and flexible. I wish you good luck with your developments!
Thank you for your feedback @nmt . May I kindly ask you why you think I am not being mindful or being excellent?
Hi, I am using Pupil Player on Windows to do post-hoc calibration. When I calculate a new 3D gaze mapper, the process is very slow (>10 minutes for a 13 minutes video), but in Task Manager it seems that the CPU is mostly idle (pupil player requires ~1-2% CPU) and neither RAM, storage or network are being used. Is it normal that the gaze mapping is so slow? Which step would be the limiting factor? Is there a way to speed this up?
Hello, I am conducting an experiment on a transparent screen and will have gaze data from two eye trackers from both sides of the screen. However, on one side, the entire experiment (and therefore the April tags) is mirrored. How can I flip the video/recording to define the surface and extract the gaze data?
Hello. I am trying to describe how far the gaze is from the center in terms of angle using the gaze_point_3d_x/y/z data from pupil core. For example, I calculated the angle between the values (0, 0, 1) and the collected values (-12, 2.9, 36.9) within the same timestamp to obtain an angle of 18.5 degrees. Can I explain that the subject's eyes deviated from the central line by this angle at that timestamp?
That sounds like a very interesting setup (side note: I'd love to see a picture!). To accomplish what you need, I think you'd need to write a custom plugin that extends the built-in Surface Tracker plugin so that you can flip the image prior to running detection on the AprilTag markers
@user-492619 - on a second look, it's a bit more involved than I originally indicated. A much easier approach would be to simply have two sets of markers displayed - half of them being flipped
Thank you! The camera settings have been made and everything is working. Pupil is a great tool, but it's not perfect. There should be some kind of protection against static. Perhaps some additional shielding is needed, as the kit comes with a very long wire, which apparently picks up interference. I'm not very knowledgeable, but it would be nice if this point was mentioned somewhere in the documentation. Maybe I just haven't found it.
Hello, I have a question regarding the exported video. The recorded world video has a varying fps, which can be seen in Player, but after exporting does it have a constant fps? If yes, how is the conversion done to make it constant? Does it just take the mean fps?
Hi @user-c075ce! The exported video is the same as the one recorded live, just with the addition of whatever visualisations you added in Pupil Player
Hello! I was wondering if I could please have more specifics on the accuracy and precision given during validation of post-hoc calibration? For example, what is the origin of the degree offset and what is the significance of the sample rate next to the accuracy/precision values. Thanks.
Hi @user-904376! I can recommend first reading the Core whitepaper. It covers in detail the specifics of the calibration/validation and how the accuracy and precision metrics are computed. You can download that from here: https://arxiv.org/abs/1405.0006. Look in the 'Performance Evaluation' section.
Hello!
I am working on the fixations.csv data and I want to calculate gaze transitions or jumps. I would like to understand whether the norm_pos_ columns are the correct values to use, or the gaze_point_3d_ ones. Which of these maps to the virtual world and would give more accurate coordinates?
hope this question makes sense..
Hi @user-ca5273! Just to clarify, are you interested in saccadic eye movements, i.e. rapid shifts of gaze between fixations? If so, what outcome metrics are you interested in? Are you simply wanting to count, for example, the number of saccades, or are you delving deeper into things like saccade amplitude and velocity? I ask because this would likely determine which datastreams and approach to data analysis you'd want to use.
I've been looking at this documentation.. but it's not quite clear yet...
Hi @user-0e5193! You can say that the gaze angle deviated from the optical axis of the scene camera in this case. I'm not sure what your goal is here? But be mindful that the scene camera can technically be orientated arbitrarily with respect to the wearer's head/eyes. You might find this overview of Core's coordinate systems useful when thinking about this topic: https://docs.pupil-labs.com/core/terminology/#coordinate-system
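For reference, here's a minimal sketch of that calculation (using the example values quoted above):

import numpy as np

gaze_point_3d = np.array([-12.0, 2.9, 36.9])  # from gaze_point_3d_x/y/z
optical_axis = np.array([0.0, 0.0, 1.0])      # scene camera's viewing direction

cos_angle = gaze_point_3d @ optical_axis / np.linalg.norm(gaze_point_3d)
print(np.degrees(np.arccos(cos_angle)))       # ~18.5 degrees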
Thank you so much!
@nmt We are more interested in dwell time on each fixation and not really the rapid gaze shifts. With the gaze data, we want to calculate the total distance (using Euclidean distance). Hence why knowing which x, y, or z to use is very important.
In that case, I'm not sure I fully understand your goals. Fixations, in the classical sense, are characterised by periods where gaze does not shift. Can you elaborate on what you mean by total distance? As this would imply a shift of gaze.
You also talk about calculating 'gaze transitions or jumps', which is confusing me further.
@nmt We have this file of fixations. The data is from a user moving through a VR world and interacting with molecules. Each row is the fixation they were looking at. Take the first two rows for instance: they capture two fixations that the user looked at. You can think of it as their gaze moving from row ID 6 to row ID 7. That gaze move is what we're considering a gaze transition. A gaze distance would be the distance (using the x,y coordinates) between these points. What we are not sure about is which coordinates give a true mapping of the coordinates of the VR world. Is it the norm_pos_x and norm_pos_y coordinates, or gaze_point_3d_x, gaze_point_3d_y, and gaze_point_3d_z? This is what we would like clarification on.
Thanks for elaborating. I'm getting closer to understanding your aim. However, I do have some follow-up questions. Are you looking to compute this 'gaze transition' in units of degrees of visual angle? Or something else, like the Euclidean distance between two fixation locations (perhaps molecules) in your VR space?
Hi, we have Pupil Core. We have created Heat maps with Pupil Core using the Surface Tracker plugin.
This is from one recording (user). However, we would like to create an aggregate heatmap from a group of recordings (users) watching a shelf with products, for example. Could you tell us how to do it as soon as possible, please?
Of course, we have used Pupil Player. If we can use another tool or the cloud, don't hesitate to tell me about it, please!
And also, we would like gaze metrics on Areas of Interest, like is possible in Pupil Cloud with Pupil Invisible,
from the group of recordings (users)
Hi @user-a81678! Pupil Core software does enable the generation of heatmaps for single recordings. However, it doesn't have aggregate capabilities, meaning it doesn't generate heatmaps from multiple recordings like you can do in Cloud. Unfortunately, Core recordings are not compatible with Cloud.
If you're comfortable trying out some coding, we expose all of the raw data necessary to build custom visualisations yourself. We also have a tutorial showing how to generate heatmaps from surface-mapped data. It would be feasible to modify this to build a heatmap from multiple recordings.
As for gaze metrics, these are not generated in Pupil Player. However, the exports in .csv format would enable you to compute your own metrics of interest. For some examples of how we compute metrics, you might want to look at this Alpha Lab article.
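To illustrate the multi-recording heatmap idea mentioned above, here's a minimal sketch of pooling surface-mapped gaze from several exports into one heatmap (assuming each participant's Pupil Player export contains a file named gaze_positions_on_surface_Shelf.csv; adjust paths, surface name, and bin counts to your setup):

import glob
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

files = glob.glob('exports/*/gaze_positions_on_surface_Shelf.csv')
gaze = pd.concat(pd.read_csv(f) for f in files)
gaze = gaze[(gaze.on_surf == True) & (gaze.confidence > 0.8)]

# 2D histogram over normalised surface coordinates, pooled over all users
heatmap, _, _ = np.histogram2d(gaze.y_norm, gaze.x_norm, bins=(40, 60), range=[[0, 1], [0, 1]])
plt.imshow(heatmap, origin='lower', cmap='jet', interpolation='gaussian')
plt.axis('off')
plt.show()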
Hello, I have a question regarding my Pupil Core glasses. I have been using them for a while, and today when I started the Pupil Capture exe I was only able to view one eye camera. Is there any reason why the other eye camera is not showing up? How can I check if the camera is damaged? I also noticed that while using the software, that same left side starts to heat up a bit.
Hi @user-9e2a60. Double-check that the cable attaching the camera is securely connected (sometimes it can come loose) and then restart Pupil Capture with default settings. Let me know if the eye video shows after that.
Follow up: we finally got the Core and the Optitrack powered up together and there's definitely an issue. The flickering is visible on the Core's cameras and it isn't possible to even calibrate to the eyes. Our lab has had this problem before with a different eye tracker brand in the past and were able to solve it by syncing frame capture to the optical tracking IR pulses via a TTL pulse from the Optitrack's Sync2 box. If that's not possible, do you have any other suggestions on how we may to be able to sync the two?
Thanks for sharing that! The IR strobe is certainly bright. I don't think it's optimal to have the OptiTrack camera pointing directly at the Core headset, but I suppose that's a constraint of the experiment. So, the strobe is definitely causing the eye image to be overexposed. However, even when the IR strobe is off, the image seems overexposed. The first thing I would try in this case is to set the eye cameras to 'auto-exposure' mode, or, reduce the exposure time in manual mode. You can access these settings in the eye windows. I still think there's a chance you could find a setting that would work.
I am struggling installing pupil capture and pupil player on linux. I install the .deb packages but the programs do not open. Is there any guide with all the steps to follow during installation?
Hi @user-8563d1! Are you on a laptop with an Nvidia card? If so and you are running Ubuntu, then do not directly open the app. Rather, right click the app icon and choose "Open with dedicated graphics card" and see if that works. If you are on a different flavor of Linux, then you will need to check how to open it with the dedicated GPU on that system. Could you also provide the logs? You will find them in your home directory, under "pupil_player_settings/player.log" and "pupil_capture_settings/capture.log".
Hello! We collected 5 videos (without eye-tracking) using Pupil Core cameras with Pupil Mobile v1.2.3 (all videos lasting around 15 minutes each). However, we have been having issues with exporting some of the videos. One video exports just fine using the World Video Exporter in Pupil Player v3.5. The other 4 videos are constantly stuck on "Starting video export with pid: <number>", and cannot seem to progress. We see no obvious differences between the files linked to the one video that exports and the others that do not. We tried on several different computers but we cannot figure out what the problem is. All videos work just fine (including audio) in Pupil Player. Could you please help us resolve this issue?
Would you be able to share a recording with us such that we can take a closer look? If so, please share it with data@pupil-labs.com
Hello @nmt, I am trying to do reference image mapping and did all the steps as instructed (having an Invisible), but the processing has been stuck on "processing 0%" all day. Any help, please?
Hello! Is there a way to disable the IR source on my eye camera? Thank you for your reply.
There's no way to do this. May I ask why you would want to do so?
Hello! I have been troubleshooting with different post-hoc calibration/validation techniques and I had a question regarding including the validation calibration at the end of a recording within the trim markers for calibration. I obtained higher accuracy/precision when I did this, but I am not entirely sure what is happening in the background. Can you please clarify the technique and if this would be appropriate?
Hi @user-904376. Conceptually, there's no difference between performing calibrations or validations post-hoc versus in real-time. You can read about the underlying processes in the whitepaper that I linked to in my previous message.
Hi, I am having no luck and no help with Pupil Cloud. I need to analyse this data for my dissertation and it's just not working. As I cannot create enrichments right now, for whatever reason, is there a way of exporting just the data for the number of fixations and saccades? Look forward to hearing back from you.
Hi @user-2b8918. I'll ask the Cloud team to look into why the enrichments are taking longer than expected. In the meantime, you can right-click on your recording of interest and download the timeseries data. That has basic metrics, like fixations, blinks, etc.
Hello everyone! I've been reading the Terminology section of the docs, and I'm a bit confused about when 2D pupil detection is used and when 3D detection is used. It seems like 2D pupil detection alone is enough to triangulate gaze location in the world image using a polynomial fit?
Thanks for your query. We do indeed run two pupil detectors in our Core software: 2D and 3D. Let me explain them below:
- 2D Pupil Detection: The 2D detection algorithm uses computer vision to locate the 'dark pupil' in the infrared-illuminated eye camera images. It outputs 2D contours of the pupil that feed into a 3D model (described below). You can read more about this in our whitepaper. The 2D detector is always running and forms the foundation of Core's gaze pipeline.
- 3D Pupil Detection: We also employ a mathematical 3D eye model, pye3d, that derives gaze direction and pupil size from eyeball-position estimates. The eyeball position itself is estimated using geometric properties of 2D pupil observations. The pye3d model regularly updates to account for movements of the headset on the wearer's face, thus providing slippage compensation. You can read more about how it works in our documentation.
So, how do these relate to gaze data produced after calibration? Well, it depends on which calibration pipeline you select.
- 2D Pipeline: You're correct that 2D pupil detection is enough to compute gaze in scene camera coordinates. For this, you would choose the 2D calibration pipeline, which uses polynomial regression for mapping based on the pupil's normalised 2D position. It can be very accurate under ideal conditions. However, it is very sensitive to headset slippage and does not extrapolate well outside of the calibration area.
- 3D Pipeline: This pipeline uses bundle adjustment to find the physical relation of the cameras to each other, and uses this relationship to linearly map pupil vectors that are yielded by the 3D eye model, pye3d. The 3D pipeline offers much better slippage compensation, but slightly less accuracy than the 2D mapping.
Please let me know if you have any follow-up questions!
Lu, C., Chakravarthula, P., Liu, K., Liu, X., Li, S., & Fuchs, H. (2022). "Neural 3D Gaze: 3D Pupil Localization and Gaze Tracking based on Anatomical Eye Model and Neural Refraction Correction". In 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), (IEEE), pp. 375-383
In this paper, the authors propose an improved 3D eye model and demonstrate a ~1° reduction in error; it would be great if this could be added to Pupil Core?
They tested using Pupil Core cameras and compared to the Pupil Core 3D model, as I understand it.
Hi @user-93ff01! Thanks for sharing that. We're aware of the work; it's certainly an interesting paper with a neat experimental setup! While we don't have any plans to implement their approach, our Core software is open-source, which should enable users to do so themselves if they want to experiment with it.
[email removed] Ángel Juárez! Pupil Core
Dear sir, thank you for your help. Could you please answer the following? Best regards.
Q1. Can the Pupil Core measure the angle of eye orientation (the angle of how many degrees the eyeball is tilted)?
Q2. Is the Pupil Neon the only one that can measure eye orientation? (Is it also impossible with the Pupil Invisible?)
Q3. Can the Pupil Core be used to obtain the depth direction of gaze (such as looking at something 10 cm away or 100 cm away)?
Does the following metric mean the depth direction of gaze ?
gaze_point_3d_z - z position of the 3d gaze point (https://docs.pupil-labs.com/core/software/pupil-player/)
Q4. Which indicators in the CSV can be used to determine the data that signify a saccade? Which of Pupil Core, Neon or Invisible can be used to obtain saccades?
Hi @user-5c56d0!
1. Yes. We output eye orientation for each eye in both Cartesian and spherical coordinates. You can look at this page for an overview of each data stream. I think theta and phi would be useful for you.
2. Neon does indeed output eye state/orientation. However, Invisible does not. Invisible does provide the orientation of a cyclopean gaze ray originating at the scene camera in spherical coordinates. You'll want to look at elevation and azimuth in the gaze export.
3. Not reliably. Please see this message for further explanation: https://discord.com/channels/285728493612957698/285728493612957698/1100125103754321951
4. I already responded to your question about saccades in https://discord.com/channels/285728493612957698/1047111711230009405/1212284516077404180
Hello Pupil Lab team, I hope this message finds you well. I have a question regarding the precise calculation of the confidence index. In Pupil Player videos, the confidence is displayed separately for each eye per index, whereas in the export files, it appears as the confidence for both eyes combined and given for each data row. Could you please provide more insight into this? Thank you
Hi @user-6586ca! You should be able to have one confidence value per eye. The column eye_id includes 0s or 1s for left/right eye data points, and the column confidence has the confidence value associated with each data point for a given eye. Please also have a look at this relevant tutorial, which shows how you can split your dataset into left vs. right eye data and select only high confidence values for each eye.
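A minimal sketch of that, assuming a Pupil Player export named pupil_positions.csv:

import pandas as pd

df = pd.read_csv('pupil_positions.csv')
high_conf = df[df.confidence >= 0.8]      # keep only high-confidence samples
eye0 = high_conf[high_conf.eye_id == 0]   # one eye (check the id/eye mapping for your headset)
eye1 = high_conf[high_conf.eye_id == 1]   # the other eye
print(len(eye0), 'eye0 samples,', len(eye1), 'eye1 samples')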
Is there a way I could run validation remotely, and if so, what string would I pass in? E.g. C starts calibration, R starts recording.
Hi @user-01bb59! Are you using the IPC backbone message format?
if so:
import time
import zmq
import msgpack
# ask Pupil Remote (default port 50020) for the IPC backbone's PUB port
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')
pupil_remote.send_string('PUB_PORT')
ipc_pub_url = 'tcp://127.0.0.1:' + pupil_remote.recv_string()
# create and connect PUB socket to the IPC backbone
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(ipc_pub_url)
time.sleep(1)  # give the PUB socket time to connect (ZMQ "slow joiner")
# send a notification; the payload must carry a 'subject' key
topic = 'notify.calibration.should_start'
payload = {'subject': 'calibration.should_start'}
pub_socket.send_string(topic, flags=zmq.SNDMORE)
pub_socket.send(msgpack.dumps(payload, use_bin_type=True))
where the topic 'notify.recording.should_start' starts the recording, 'notify.calibration.should_start' starts the calibration, and 'notify.validation.should_start' should work for validation. The payload's 'subject' is the topic without the 'notify.' prefix.
Hi! I have Pupil Capture installed on multiple laptops. They work great on all laptops but one, a Ryzen 5 AMD OMEN HP laptop. I attached the capture.log. The weird thing is that initially it works. After 3-4 times, when I run Pupil Capture from the Desktop icon, a small window with the message "Configuring Pupil labs.." pops up with a progress bar; once it is completed, it opens up. Later, it breaks and never opens. I check C:/ProgramFiles(x86)/Pupil-Labs/Pupil v3.5.1/Pupil Capture v3.5.1/pupil capture.exe and it has disappeared from that folder. This is after a clean reset of the laptop, with all drivers updated. The same camera set has no issue when run on another laptop, so the issue seems to be related to the computer altogether. I also repaired it, and tried uninstalling then reinstalling, all with the same outcome. Finally, I also deleted the libusbK drivers from Device Manager, then reran the system again, and still the same issue arises. It works sometimes, then stops working randomly at other times.
Hello, I was wondering if it is possible to merge different calibrations? Or is there a way to set several trim marks in pupil player? It's for the case when the video contains several single marker calibrations from different distances (for the post-hoc calibration).
Hi, I'm a beginner in the use of eye tracking devices, and I'm using Pupil Core. I am using PsychoPy 2023.2.3 and I'm trying to send triggers from PsychoPy to the Pupil Labs recorder. I am quite inexperienced in programming. I've been trying different forums, GitHub and so on, but I don't understand what I am supposed to do to make them work together. I also tried to use LabRecorder, but it is not working for me.
To send events from PsychoPy to Pupil Capture, you'll need to use remote annotations. You can find a more complete example in the pupil-helpers repository
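In the meantime, here's a minimal sketch of sending a single annotation from Python/PsychoPy, based on the pupil-helpers remote annotation approach (it assumes Pupil Capture is running with the Annotation plugin enabled and Pupil Remote on the default port 50020; the label 'trial_start' is just a placeholder):

import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')

# annotations are published on the IPC backbone, so we need the PUB port
pupil_remote.send_string('PUB_PORT')
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect('tcp://127.0.0.1:' + pupil_remote.recv_string())
time.sleep(1)  # give the PUB socket time to connect

pupil_remote.send_string('t')  # current Pupil time, used to timestamp the annotation
annotation = {
    'topic': 'annotation',
    'label': 'trial_start',
    'timestamp': float(pupil_remote.recv_string()),
    'duration': 0.0,
}
pub_socket.send_string(annotation['topic'], flags=zmq.SNDMORE)
pub_socket.send(msgpack.dumps(annotation, use_bin_type=True))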