Hello everyone! Can anyone tell me whether it is possible to set, via the network API, which monitor the calibration should be displayed on? (I have two monitors and want to start calibration on the second as soon as the one on the main screen has finished.) Thanks in advance!
Hi, all! I'm so new to the whole game. First time using any eye-tracking system. First time using Pupil Labs Core. I managed to run some test trials and export pupil/gaze position estimations (.CSV) using the GUI apps. I have no idea what each variable means besides my wild guess based on their variable names and a basic understanding (it tries to fit a sphere [eyeball] and a circle on the sphere [pupil]). I'm interested in calculating metrics preferably in physical units (e.g., saccades in visual angles). Could someone kindly point me to where I can find some technical details on what those variables mean and how they are computed? Thanks! 🤓
Hey @user-2b1780 👋. You can read an overview of the data made available here: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter - I'd also recommend reading the rest of the Core docs to learn more about how everything is computed. If you can't find specific details on something in particular, make sure to search for its name here on Discord as there's a wealth of information in the history of chats 🙂
I am using the LSL plugin with Core and am able to collect XDF files from Lab Recorder. I want to use Python and am trying to use the pyXDF function. Is there existing Python code to do such processing? The Python code I have found for LSL references an LSL CSV file, but I am unsure how to convert/access this since I am using Lab Recorder (https://github.com/pupil-labs/pupil-helpers/blob/master/LabStreamingLayer/lsl_inlet.py).
Hi @user-7d8d41. You'll definitely want to check out this: https://discord.com/channels/285728493612957698/285728493612957698/1070056032304386098
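In the meantime, here's a minimal sketch of loading an XDF file from Lab Recorder with pyxdf and pulling out the Pupil stream. The name check is an assumption - inspect `stream['info']['name']` in your own file to find the right stream:
```python
# Minimal sketch: load an XDF recorded with Lab Recorder and find the
# Pupil Capture LSL stream. Adjust the file path and the name check to
# match your own recording.
import pyxdf

streams, header = pyxdf.load_xdf("recording.xdf")

for stream in streams:
    name = stream["info"]["name"][0]
    print(name, len(stream["time_series"]), "samples")
    if "pupil" in name.lower():  # assumption: the Pupil LSL outlet has 'pupil' in its name
        samples = stream["time_series"]      # one row per sample, one column per channel
        timestamps = stream["time_stamps"]   # LSL clock timestamps in seconds
```
The channel layout is whatever the Pupil LSL relay published, so cross-reference the channel labels stored in `stream["info"]`.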
Hi @user-741d8d! This is not possible using the network api. May I ask why you want to calibrate on both screens?
Thanks for the answer! I developed a system that recognizes which monitor the user is looking at and activates the corresponding screen. Since the documentation recommends calibrating with points that resemble the experimental conditions, I figured I might get better results if I calibrate with both screens.
Hello, I have some problems when using the head pose tracker function. I have attached multiple AprilTags to the environment. These AprilTags are detected well, but during the 3D modeling process only one AprilTag is incorporated into the model; the camera poses are only computed around this one AprilTag, and poses relative to the other tags cannot be obtained. May I ask why?
Hey, @user-873c0a - are all of your AprilTag markers the same physical size?
Hi @user-cdcab0, I did use AprilTags of the same size and made sure there were no duplicate IDs among them.
Are you doing it live with Pupil Capture or on a recording with Pupil Player? Could you share a recording with us?
I recorded a video and then used Pupil Player to run the offline head pose tracker. How do I share my recordings with you?
Are there multiple markers visible at all times? As the video progresses, the model is constantly refined by re-calculating the relative positions of the AprilTag markers with each other - so marker visibilities need to overlap temporally. If you have that covered as well, you can send your recording to data@pupil-labs.com and I'll take a look
I have sent recordings
I think I am following the requirement of keeping 2-8 AprilTags visible in the scene camera at any given time. I will send a recording to this email address. Please take a look, and sorry to trouble you.
I looked at your recording and it looks like you need to capture more angles of the markers (with multiple in view simultaneously) before the markers can be successfully incorporated into the model
If more angles mean I need to move around the scene repeatedly, I'll experiment further.
Not so much repeatedly, but from a greater variety of angles. In your recording, you mostly move horizontally. In addition to that, you should also move the camera up (while pointing it downward at the markers) and down (while pointing it up at the markers).
Even better than just purely horizontal or purely vertical movement, try moving the camera in large circles as you move around the scene to capture as many unique angles as possible
Thank you very much for your guidance, I will try further
Hi! I am having some issues with the Pupil Core package on my MacBook M1 Apple silicon. It takes a good long while for any of the apps to open... I haven't found a version of the apps that is specifically compatible with Apple silicon - I've only found the one that is compatible with macOS... any help would be appreciated, thanks!
The first time loading each app can be slower than subsequent loads. Are you still experiencing slow load times when you open the app the 2nd, 3rd, etc. time?
There is no M1/M2 native version. The app has a number of lower-level dependencies that are not built for Apple silicon, so it's not really feasible. That said, Apple's Rosetta provides a pretty satisfactory solution.
Hi, I need to buy the Pupil Core for my Ph.D. project in Milan and need some help with information. I would also like to participate in your workshops. Can you give me some info about whom to contact with my questions?
Hi @user-2cc535 👋🏽 ! I've already replied to your email, providing more information on your specific questions. I hope this helps!
Is pupillometry now working in Pupil Cloud?
Yes - see the announcement here: https://discord.com/channels/285728493612957698/733230031228370956/1177542261551140896. If you have further questions about collecting and using data with your Neon, feel free to ask in the 👓 neon channel
I will download Apple's Rosetta and try again! Thank you 🙂
Ok, I see. In that case I am not too sure how to run the apps through Rosetta... do you have any guide I can refer to? Thank you for all the help.
There is nothing you need to do (other than have Rosetta installed). If an Intel-architecture app runs at all on your Apple silicon machine, it's running through Rosetta.
Ok, that still doesn't work unfortunately... I reverted to see if checking that box would help, after noticing that the apps didn't work even with Rosetta installed.
Not working at all? They were working before, right - just slow to load?
Hello. Is anyone using Pupil Cloud having issues with accessing their workspace?
I logged in this morning and it's giving me this error when I try to access a project: "Project not found. This project does not exist."
I've used incognito mode, cleared cookies etc, deleted and re-created projects
My uploaded videos etc are still available in the workspace, I just can't put them into projects
or access the projects
Hi @user-c1bd23! I'm sorry to hear about that - do you mean that even when you create a new project, you're unable to access it at all? If so, could you please reach out to info@pupil-labs.com and share a screenshot of the behaviour, along with an ID of one of the recordings that you can't add to a project.
Hi, I am back here with another question... I hope you can help me. I am using the VIVE Cosmos device with Pupil Labs and I want to set up my pupillometry protocol with Python. Is it possible to display the images I need on the VIVE Cosmos goggles using only Python, or do I need to use the Cosmos software? Note that I only have Linux.
Hey @user-c39646! We have some documentation for presenting stimuli in Unity3D (search for 'hmd-eyes' on here), but not directly with Python. Also, be sure to read our pupillometry best practices: https://docs.pupil-labs.com/core/best-practices/#pupillometry
Yes, so the apps are still slow to load, and when I try to use them (I tried to use the player) it crashed - without giving any error. I never tried to use the apps before, because they were so slow to load (they would take 15-20 min). With Rosetta, they open after a few minutes (5-6 min) - so a bit faster but then crash.
In your home folder you should find a folder named `pupil_player_settings`. Inside that folder you should see a `player.log` file. Can you share that here?
Good morning 🙂 We have a project with 57 movies and 7 enrichments. We downloaded the enrichment data, so we have 7 folders, one for each enrichment. We are working with the fixations files. The problem is that every fixations file in our 7 folders has a different number of fixations, whereas we think the number should be the same, just with True or False for that particular enrichment. We need help finding the reason for that.
Hi @user-23899a 👋. I'll need a little more information to fully grasp the situation. Could you describe in more detail which kind of enrichments you ran, on what data, and why you think the fixations should be equivalent?
Hello Neil, I will try to describe the situation in a more understandable way.
I recorded 57 participants using Pupil Invisible (short ride on the tram sim). Then, in the Pupil Cloud environment, I defined 7 enrichments based on markers. Now, I need to check how many fixations each participant had in each area designated by the markers.
The fixations file contains all the fixations of my participants. I have 7 enrichments, so I have 7 files. I believe that each file should contain the same number of fixations. The difference should only lie in the fact that each file corresponding to a specific enrichment has a column with either true or false, depending on whether the fixation was present in that area or not. But the total number of fixations should be the same.
I would be glad for an answer.
Hello, I would like to ask some questions about the Surface Tracker. When defining a monitor screen, we usually use 4 AprilTags. The documentation for this feature says that two or more AprilTags can define a surface. I would like to ask why 2 AprilTags are enough to define a plane, and how this differs from defining the plane with more AprilTags.
Sometimes image noise, weird lighting/shadows, extreme angles, or other effects can cause an AprilTag marker to not be detected. The more tags that define the surface and are detectable by the camera/algorithm, the more accurate the exact location of the surface in the scene can be determined.
So having many AprilTag markers will help ensure more robust surface tracking
Thank you very much, very clear.
is it possible to download pupil capture on windows 11?
Pupil Capture should work with Windows 11 - be sure to install it with admin rights.
Pupillometry in hmd
Hi, @user-cdcab0 sorry for the delay! Here is the player.log file.
This is the log after I have dragged a recording into the Player. It basically disappears after a few minutes and it's not bringing up the window with the video. Sometimes it does save an "export" folder, but the whole process is incredibly slow; other times the Player crashes.
Hm. Let's try a fresh start. Make sure the apps are closed, then rename the `pupil_player_settings` folder to `pupil_player_settings_backup`, launch the app again, and try to load a recording.
Hello, another question: does somebody know if the glasses are CE marked? (Neon, Core, and Invisible)
Yes! All of them are!
Hello everyone, I'm working with the Pupil Core and using Pupil Player to extract data on eye movement. I'm thinking of putting the data through a MATLAB script that generates a saliency map for me. However, I'm confused about the difference between `fixations.csv` and `fixations_on_surface.csv`. Is someone able to explain it to me? Thank you very much for the help.
To clarify my question: I see more rows in `fixations_on_surface.csv` than I do in `fixations.csv`, and there are multiple entries for the same fixation_id in `fixations_on_surface.csv`. How does this work?
What if your eyes do not move but the surface does? In that case, it makes sense to me that the fixation ID stays the same (since the eyes haven't moved), but surface gazes need multiple entries (since the gaze position on the surface has changed)
Hello everyone,
I would like to know if there is an Android application for smartphones that works with the Pupil Labs glasses (Core version)? Thanks in advance.
Best regards, Azamat
Hi @user-c075ce 👋🏽 ! Pupil Core connects to a laptop and runs the Pupil Core software (https://docs.pupil-labs.com/core/getting-started/). What prompted your interest in whether Pupil Core connects to a smartphone? Are you exploring different options or do you have specific requirements that a smartphone-connected eye tracker would better meet?
Heya people, where can I find the download for Pupil Capture? Every link leads to the source code for the software, but I want to download the application itself.
Hi @user-301f9a ! To download it, go to this link https://github.com/pupil-labs/pupil/releases/tag/v3.5#downloads, scroll down to `Assets`, and there you'll find the different bundles for macOS, Windows, and Linux.
Hi Team, I have multiple recordings that I would like to export data from, and I am wondering if there is a way to run the export script without having to drag and drop each folder into Pupil Player before exporting? Thanks.
Hi @user-6cf287 👋🏽 ! There is no batch export built-in to Player. However, there is a community contributed tool for exporting recorded data from multiple recordings without starting Player. You can find it here: https://github.com/tombullock/batchExportPupilLabs
Hi, I am currently working on experiments in our lab using Pupil Neon, specifically focusing on typical field operator tasks. While exploring the exported data, I couldn't locate information on the 3D gaze vector and its origin in any of the files.
Could you kindly provide guidance on accessing or extracting the 3D gaze vector and its origin data from the Pupil Neon export files?
Hi @user-0aefc0 ! 3D eye states, as well as pupillometry, are currently only available in Cloud. https://docs.pupil-labs.com/neon/data-collection/data-streams/#_3d-eye-states
From there, you can download the CSV files and refer to https://docs.pupil-labs.com/neon/data-collection/data-format/#_3d-eye-states-csv
May I ask, are you using the data exported from the phone?
Hi Pupil Team. I hope this message finds you well !
We are conducting a study exploring visual search behavior, specifically focusing on participants' gaze spatial distribution. We would like to work with the "gaze_positions" file for our analysis (we don't have surfaces in this study). However, we are uncertain about the specific columns related to the X and Y coordinates of gaze, and specifically the distinction between the columns norm_pos_x/_y and gaze_point_3d_x/_y/_z.
In essence, my question revolves around discerning the nuanced disparities between the terms "position in the world image frame" (relating to the "norm_pos_x/_y" columns) and "point in the world camera coordinate system" (relating to the "gaze_point_3d_x/_y/_z" columns) that you use in your user guide.
Additionally, my second question pertains to the reference point for these X and Y coordinates. For example, is it the center of the calibration zone? This question arises from our observation of negative values in the mentioned columns.
Thank you in advance for your consideration of our inquiries.
Hi, @user-6586ca - sorry for the confusion. Have you seen this page in the documentation? https://docs.pupil-labs.com/core/terminology/#coordinate-system
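In short, norm_pos_x/_y is the gaze position in the 2D world video frame (normalized to 0-1, origin at the bottom-left of the image), while gaze_point_3d_x/_y/_z is a 3D point in millimeters in the scene camera's coordinate system, with the origin at the camera itself, which is why negative values are perfectly normal. A rough sketch of converting the normalized coordinates to pixels, assuming the default 1280x720 world camera resolution:
```python
# Sketch: convert normalized gaze coordinates to pixel coordinates in the
# world video frame. norm_pos has its origin at the bottom-left with y up,
# whereas image pixels have their origin at the top-left, hence the flip.
# The frame size is an assumption; use your recording's actual resolution.
import pandas as pd

frame_width, frame_height = 1280, 720
gaze = pd.read_csv("exports/000/gaze_positions.csv")

gaze["x_px"] = gaze["norm_pos_x"] * frame_width
gaze["y_px"] = (1.0 - gaze["norm_pos_y"]) * frame_height
```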
Hi Pupil Team, I just got the Pupil Core but I can't find where to download the Pupil Capture software (https://docs.pupil-labs.com/core/software/pupil-capture/). Could you help me? (I am so sorry, but this must be the dumbest question you have ever received.)
Hi @user-e518ed ! you can download them here https://github.com/pupil-labs/pupil/releases/tag/v3.5
It's not a stupid question; in fact, it seems the download links went MIA in the docs update. We will fix this ASAP.
Hello! I have a problem with one eye camera on my Pupil Core. The image looks blurry, and pupil detection isn't working. This happened suddenly, after it worked fine for months. Deleting the settings didn't help. I am attaching an image from this eye camera. The computer is a MacBook Pro M1. Can anyone help me with this? Many thanks!
Hi @user-7ff310. That image looks quite over-exposed. Have you tried changing the eye camera exposure settings?
Hello Pupil team, after about one year of using the Pupil Core, I have encountered a problem for which I may need help. Recently, I get an error message when launching Pupil Capture:
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "launchables\world.py", line 671, in world
  File "plugin.py", line 409, in init
  File "plugin.py", line 432, in add
  File "video_capture\uvc_backend.py", line 75, in init
AssertionError
In the worst case I'd like to be able to run capture even without the world camera if it is indeed the cause of the error. Thanks in advance for any help you can give.
What operating system are you running and what version number of Pupil Capture? It would be great if you could share the log file from ~/pupil_capture_settings/capture.log
I am running Windows 10 with Pupil Capture 3.5.1. Here is the log:
2023-12-14 19:29:26,847 - MainProcess - [DEBUG] os_utils: Disabling idle sleep not supported on this OS version.
2023-12-14 19:29:27,191 - world - [DEBUG] launchables.world: Application Version: 3.5.1
2023-12-14 19:29:27,191 - world - [DEBUG] launchables.world: System Info: User: Olfaction humaine, Platform: Windows, Machine: DESKTOP-04JA75C, Release: 10, Version: 10.0.19041
2023-12-14 19:29:27,191 - world - [DEBUG] launchables.world: Debug flag: False
2023-12-14 19:29:27,521 - world - [DEBUG] video_capture.ndsi_backend: Suppressing pyre debug logs (except zbeacon)
2023-12-14 19:29:27,561 - world - [DEBUG] remote_recorder: Suppressing pyre debug logs (except zbeacon)
2023-12-14 19:29:27,571 - world - [DEBUG] pupil_apriltags: Testing possible hit: C:\Program Files (x86)\Pupil-Labs\Pupil v3.5.1\Pupil Capture v3.5.1\pupil_apriltags\lib\apriltag.dll...
2023-12-14 19:29:27,571 - world - [DEBUG] pupil_apriltags: Found working clib at C:\Program Files (x86)\Pupil-Labs\Pupil v3.5.1\Pupil Capture v3.5.1\pupil_apriltags\lib\apriltag.dll
2023-12-14 19:29:27,584 - world - [DEBUG] plugin: Scanning: fix_error_message.py
2023-12-14 19:29:28,054 - world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "launchables\world.py", line 671, in world
  File "plugin.py", line 409, in init
  File "plugin.py", line 432, in add
  File "video_capture\uvc_backend.py", line 75, in init
AssertionError
Can you try renaming your `pupil_capture_settings` folder to `pupil_capture_settings_backup`, and then launching the app? This will give it a "fresh start", so to speak.
If that loads, you can then try enabling each of the plugins you need one-by-one
Hello
Hi, @user-7daa32 - if you put on the Pupil Core headset first and then eyeglasses on top of it, it can work for some people with some adjustment of the eye cameras. It's not ideal, but depending on the person and the glasses, it does work for some.
Contact lenses definitely work with Core
Can we use eyeglasses or contact lenses with the Pupil Labs Core eye tracker?
Low contrast eye images
Hello, I managed to integrate the Gaze Tracker (Pupil Capture) into my Unity project. I receive gaze data as .npy and .pldata files. I cannot find anywhere how to read these data files.
My goal is to have data about where my participants are gazing within the Unity project. Does anybody know where I can find information on how to do this? Thanks 🙂
Hi @user-8bce45 ! Is this project using the VR Add-ons and an XR headset, or is it using Pupil Core and Unity to project data on some screen?
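In the meantime, if you just want to inspect those files: the .pldata files are sequences of msgpack-packed (topic, payload) pairs, with the matching timestamps stored in a separate _timestamps.npy file. Here's a minimal sketch for reading gaze data from a recording folder (the paths are placeholders):
```python
# Sketch: read gaze.pldata from a Pupil recording folder directly.
# This mirrors the layout used by Pupil's own file_methods module:
# each entry is a (topic, payload) pair where the payload is itself
# msgpack-encoded; timestamps live in gaze_timestamps.npy.
import msgpack
import numpy as np

recording_dir = "path/to/recording"  # placeholder

timestamps = np.load(f"{recording_dir}/gaze_timestamps.npy")

gaze_data = []
with open(f"{recording_dir}/gaze.pldata", "rb") as fh:
    for topic, payload in msgpack.Unpacker(fh, raw=False, use_list=False):
        gaze_data.append(msgpack.unpackb(payload, raw=False))

print(len(gaze_data), "gaze datums; fields of first:", list(gaze_data[0].keys()))
```
That said, the simplest route is often to open the recording in Pupil Player and use the Raw Data Exporter to get CSVs.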
Hello! I have a problem with the world camera in Pupil Core when using other cameras. The image looks like the one below, and pupil detection hasn't been working for several months. The problem keeps appearing; it only worked on the first, successful attempt. In fact, it only works when I run Pupil Service, and it stops working when I switch to Pupil Capture. The world GUI behaves abnormally with the gaze-tracking cameras and causes my system to crash. The computer is an ASUS TUF laptop running Windows 11. The cameras are an OV9281 as the world camera and two cameras called HBVCAM-12M2221 V22 with IR LEDs for gaze tracking. Can anyone help me with this? Many thanks!
Hi @user-b02f36! So the cameras you have work with Pupil Service? In that case, please try restarting Pupil Capture with default settings.
Hi @user-e3da49! Since this is a Neon discussion, let's move it to the 👓 neon channel.
Hello, is there a way to validate after calibration using the single marker? Thank you!
There is. You can adopt similar head (or marker) movements as you did during the calibration, just during the validation instead.
Hello!! Is there any method to convert data from iMotions exports back into the original Pupil format? Unfortunately, my hard drive crashed, and I lost all the data except for the iMotion exports.
Hi @user-f2b05d ! Here is the iMotions Exporter; you would need to reverse what is done there. That said, you won't be able to recover all the data, as not everything is exported/supported by iMotions.
Hello. Sorry if this is the wrong place to ask a question.
https://docs.pupil-labs.com/core/getting-started/ I downloaded Pupil Capture v3.3.0 based on the above. The application started, but when I connect the Pupil Core to my PC, the camera image does not show up. What could be the cause? I will provide any necessary information. Please help. Pupil Capture v1.14 worked fine.
Hi @user-f792d5 👋 ! Is there any reason why you opted for an old release? Please try installing version 3.5, and refer to the troubleshooting steps for your specific OS.
Hi Pupil Team. I hope this message finds you well!
As part of our study, we are working with fixation data, specifically with the gaze and fixation files. However, upon closer examination at the macro level, we have identified an inconsistency between the timestamps and indexes in these two files. To give a specific example, we have encountered a discrepancy between the start_timestamp in the fixations file and the corresponding timestamp in the gaze file, where they do not consistently share the same index. In these cases, the index indicated in the fixations file is greater than the index indicated in the gaze file for the same timestamp.
We are currently exploring potential solutions to rectify this issue and ensure the accuracy of our analyses. If there are any recommendations or specific steps you suggest we take to address this index misalignment, we would greatly appreciate your guidance.
Thank you for your help.
Hi @user-6586ca 👋. When you say index, are you referring to the fixation id, or to the fixation start and end index/world index? This is an important detail, because the two are different entities. E.g. a fixation with a given ID might span multiple world indices.
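If the goal is simply to line the two files up, matching on timestamps rather than row position is usually safer, since row indices in the two exports are not comparable. A rough sketch with pandas (column names follow the standard Pupil Player export; double-check them against your files):
```python
# Sketch: for each fixation, find the gaze sample nearest to its start time.
# Column names (gaze_timestamp, start_timestamp) are from the standard
# Pupil Player export; verify them against your own CSV headers.
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv").sort_values("gaze_timestamp")
fixations = pd.read_csv("exports/000/fixations.csv").sort_values("start_timestamp")

aligned = pd.merge_asof(
    fixations,
    gaze,
    left_on="start_timestamp",
    right_on="gaze_timestamp",
    direction="nearest",
    suffixes=("_fix", "_gaze"),
)
```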
Hi, I'm using Pupil Labs plugins (Gaze Tracker with Screen Cast Camera) within my Unity project. I managed to collect data about the gaze positions within the Unity world, which is perfect. Now I was wondering if it might be possible to collect data about the velocity around each point of gaze. For instance, whether it would be possible to see within the data whether the participants are gazing at a fast-moving environment or at a slow-moving environment.
Hi @user-8bce45 👋. We don't have the functionality built into our demo scenes. However, it would be somewhat straightforward to compute it yourself based on the velocity of your stimuli in Unity and the gaze signal.
Hi. When exporting data from Pupil Core, I found that there are duplicate fixation IDs. Does the fixation count include the duplicates, or is each ID counted only once?
Hi @user-4bc389 ! That looks like the `fixations_on_surface` file, am I right?
It is normal to have the same fixation ID multiple times in there, as the position on the surface may have varied. So it would depend also on the surface detection.
Have a look at how gaze and fixations are remapped to the surface here.
Depending on what you want to achieve, you may want to filter by unique IDs or not. If your intention is to plot over the surface, you may want to keep them all, while if you simply want to compute metrics such as duration, you can filter them.
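For example, a rough pandas sketch of both approaches (the file name and column names follow the standard surface export; adjust them to your own surface name and headers):
```python
# Sketch: handle repeated fixation IDs in a fixations_on_surface export.
# 'on_surf', 'fixation_id', and 'duration' are standard export columns;
# verify against your own file, and note that duration is given in ms.
import pandas as pd

df = pd.read_csv("exports/000/surfaces/fixations_on_surface_Screen.csv")

# Keep only fixations that actually landed on the surface
on_surf = df[df["on_surf"] == True]

# For count/duration metrics, keep one row per fixation ID...
unique_fix = on_surf.drop_duplicates(subset="fixation_id")
print("fixations on surface:", len(unique_fix))
print("total fixation time [s]:", unique_fix["duration"].sum() / 1000)

# ...but for plotting gaze over the surface, keep all rows, since the
# surface-relative position can change within a single fixation.
```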
Would you recommend including participants that wear glasses or lenses even though it is not ideal?
I have the core eye tracker, the four wires to the eye cameras have disconnected. Can you provide a wiring diagram so I can properly reinsert the correct wire into the 4 pins on each side?
Hi @user-071a54 👋. Please reach out to info@pupil-labs.com and someone will assist you with the HW issue from there
Hi again, I have three questions. One, how do I confirm the intrinsics of my camera? Two, do you suggest correcting for sampling error during pre-processing (e.g., adding 4.166 ms to gaze timestamps)? Three, pupil size appears larger when the eye is in line with the viewing angle. Is that corrected for by the software?
Hi @user-908b50!
1. We have an intrinsics estimation plugin that can be used for this purpose.
2. I'm not sure I understand what you mean by correcting for sampling error. Could you elaborate?
3. Which data stream are you examining? Pupil size is reported in pixels as observed in the eye image, and in mm as output by our 3D eye model. The latter is corrected for perspective.