Hello, why do I often encounter the following problems when using Pupil Player?
Hi @user-873c0a! Can you please try restarting Pupil Player with default settings and seeing if that resolves the issue?
@nmt This worked, thank you very much!!
Hi, I am trying to calculate the angular velocity of eye movements during VR in order to run a saccade detection algorithm. Since I am not interested in heatmaps or AOIs, it seems best to me to calculate angular velocity not from distance on the screen and distance to the screen, but directly from the positioning of the eyes. What would be the right way to do this? My best idea would be to create vectors from sphere_center (x,y,z) (i.e., the center of the eyeball in mm) to circle_3d_center (x,y,z) (i.e., the center of the pupil in mm), calculate the angle between those vectors, and divide by delta t. Is there perhaps a better way to do this, or some established code to calculate angular velocity without using information about the screen? Thanks!
Hello, I found that when I was doing experiments at night, the confidence of pupil detection was always low even though I had tested it in good light. Why is that?
Hi @user-873c0a ! It is hard to know without seeing an example. Do you have the exposure settings set to manual, and is the image quite dark? Have you tried setting an ROI? https://discord.com/channels/285728493612957698/285728493612957698/1214561339947876392
Hi @user-d7c9b4 ! To get the gaze velocity you can follow this tutorial; rather than using the data in pupil_positions, use the data in gaze_positions.
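For reference, here is a minimal sketch of that calculation, assuming a Pupil Player export of gaze_positions.csv (the column names follow the standard export format; verify them against your own file, and filter out low-confidence samples first):

```python
import numpy as np
import pandas as pd

# Load a Pupil Player gaze export (the path is an example).
df = pd.read_csv("exports/000/gaze_positions.csv")

# Unit gaze direction vectors; gaze_normal0_* is eye0's gaze normal.
v = df[["gaze_normal0_x", "gaze_normal0_y", "gaze_normal0_z"]].to_numpy()
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Angle between consecutive gaze vectors, in degrees.
cos_theta = np.clip(np.einsum("ij,ij->i", v[:-1], v[1:]), -1.0, 1.0)
theta_deg = np.degrees(np.arccos(cos_theta))

# Divide by the time between samples (timestamps are in seconds).
# In practice you may want to smooth and handle near-duplicate timestamps.
dt = np.diff(df["gaze_timestamp"].to_numpy())
angular_velocity = theta_deg / dt  # deg/s
```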
Thanks!
Hello,
I am unable to use my Pupil Core eye trackers on Mac or Windows.
In both cases, the Pupil Capture world camera feed appears and then freezes. After that, when I re-launch Pupil Capture on the PC (Windows 11), it does not connect to or recognize the tracking device. I tried various input ports, but no difference. It seems to be a driver problem, but I followed the Pupil Core Windows troubleshooting advice (uninstall the camera drivers and then run Capture as administrator) and it still does not work. On the Mac (Big Sur 11.7.10), I pasted the Pupil Core troubleshooting prompt into the Terminal, but it also does not work.
The tracking glasses/cameras are fine, but on both Mac and PC the device is recognized, the feed freezes, and then the device is no longer recognized. When I try again after some time, it again recognizes the device and shows the video feed, then freezes, and so on.
FYI: After I uninstall the drivers on Windows and run as administrator, the libUSBK Device category no longer shows up under Hidden Devices in Device Manager, so apparently the drivers did not get re-installed. I tried running PupilDrvInst.exe as administrator, restarted, and still do not see the libUSBK Device category under Hidden Devices. I disabled Windows Virus & Threat Real-Time Protection, same result - no drivers installed.
Can anyone please help?
Hi @user-fa2527 , could you send the log file in the pupil_capture_settings folder for both systems? It will be in your user/home folder. You can send this via DM, if you prefer.
Pupil Core Debugging
Hello! I am in the process of switching from PC to Mac for my pupil data collection. I previously recorded with LSL, where I put the necessary LSL folders/scripts in the "plugins" folder in pupil_capture_settings. I tried to do the same on the Mac, but it is not working. In fact, I am getting the error below. The LSL on my Mac is working with other applications.
pupil_capture_settings/plugins/liblsl-1.16.2-OSX_amd64.tar.bz2
pupil_capture_settings/plugins/pylsl/
@user-63b5b0 - please open that thread to view all of the steps.
Please, what is the difference between the x,y-norm data and the x,y-scaled data?
I got the difference already! How can I link either of these to the actual physical dimensions (in centimeters) of the screen (i.e., the whole AOI surface created with AprilTags)?
Hi @user-7daa32! May I ask what your end goal is, just so I'm sure I understand your question?
More questions: What should influence the selection of the resolution in the eye window? Does it affect the outcome of the gaze position data?
The lower resolution enables you to sample at the full 200 Hz rate (not possible with a higher resolution). So for most cases, that's what we'd recommend. The lower res doesn't have a negative impact on gaze position, either 🙂
I am trying to calculate the actual dimensions of the screen. I will have my plot overlaid on the image of the visual stimulus. I need to calculate the actual dimensions of the surface I created using the AprilTags. By doing so, I will get the actual gaze data and also the actual size of the scanpath.
I see. Then you'll need to measure the physical dimensions of the screen yourself. When you input these in Pupil Player, the screen-mapped gaze coordinates get scaled automatically and are provided under the scaled field.
Please, why is Pupil Player taking a long time to open after loading a file into it?
Are you attempting to load the recording from a remote file storage platform?
Does the (0,1) normalized dimension correspond to the actual screen dimensions (X,Y inches)?
You'll need to look at the values under the 'x_scaled, y_scaled' fields. Their unit corresponds to the unit you used when entering the dimensions in Pupil Player.
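As a concrete example of the relationship, assuming the surface size was entered in centimeters (the numbers here are placeholders):

```python
# Normalized surface coordinates lie in [0, 1]; the scaled values are
# simply the normalized values multiplied by the surface size you
# entered in Pupil Player.
surface_width_cm, surface_height_cm = 53.0, 30.0  # example dimensions

x_norm, y_norm = 0.25, 0.5
x_scaled = x_norm * surface_width_cm   # 13.25 cm
y_scaled = y_norm * surface_height_cm  # 15.0 cm
```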
From OneDrive.
This can cause issues. I'd recommend loading them from a local storage drive.
Hello, did anyone experience that the software stops recording once the laptop lid is shut on a Mac? Thanks for any advice!
Hi @user-f4d2b3 ! You may want to check in the settings > battery whether you have this setting toggled off
Thanks, I will give it a try 🙏
Hello. I have a question about Pupil Core recordings. I want to use something like the Reference Image Mapper that is available on Pupil Cloud. Can I upload my data from my computer to Pupil Cloud? Or is there any way I can use the Reference Image Mapper offline?
Hi @user-46e202 , it is not possible to upload Pupil Core recordings to Pupil Cloud and the Reference Image Mapper is not available offline.
Thank you for the fast response. Just to be sure: is there any alternative to it that can be used in Pupil Player?
Do you mean something like a Pupil Player plugin?
Yes
Ah, no, we do not offer one and I do not know of any third-party or community plugin that offers something similar.
Hi, we are planning a research paradigm for younger participants. We are wondering whether it would be possible to use custom markers (such as predefined pictures of animals, more suitable for kids) in the calibration setup? We are not sure if such an option is available with Pupil Core. We are using PsychoPy to communicate with the eye-tracker.
Hi @user-3927e5 👋. This is possible in Pupil Capture when using the 'Natural Features' calibration. Direct the wearer to look at specific points in their environment, such as the custom markers designed for kids. Then, manually click on where they were looking in the Pupil Capture software. Good communication between the operator and wearer is essential, so be sure to practise! https://docs.pupil-labs.com/core/software/pupil-capture/#natural-features-calibration-choreography
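If you are scripting the session from PsychoPy, here is a minimal sketch of starting and stopping the calibration over Pupil Capture's network API (this assumes Pupil Remote is enabled on its default port 50020 and that the 'Natural Features' choreography is selected in the Capture UI beforehand):

```python
import zmq

# Connect to Pupil Remote (enabled by default in Pupil Capture).
ctx = zmq.Context()
socket = ctx.socket(zmq.REQ)
socket.connect("tcp://127.0.0.1:50020")

# 'C' starts the currently selected calibration choreography.
socket.send_string("C")
print(socket.recv_string())  # Capture acknowledges each command

# ...present your kid-friendly markers, then stop calibration with 'c'.
socket.send_string("c")
print(socket.recv_string())
```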
The Pupil Player fixation detector uses a maximum fixation duration of 220 ms as standard. This results in what seem to be longer fixations (green in the picture) being cut into several shorter fixations (light blue bars). Is there a reason for this? As far as I know, fixations can easily be longer than 220 ms, and it seems to me much more reasonable to leave this criterion out.
Hi @user-d7c9b4 , that's the default setting, but there's no hard requirement about it. You can change the setting in Pupil Player as needed for your experimental paradigm (all the way up to 4000 ms). More details can be found in our documentation here.
Hello,
Just a question about the Fixations and the Surface Fixations. When I download the datasets (in Excel) that show all fixations, the fixation IDs follow an ascending order where each fixation number only appears a single time, e.g., 1, 2, 3, 4, 5, 6. In the Surfaces folder, in the file that shows the Surface Fixations, I noticed that the ID for each fixation tends to repeat and is thus presented more than a single time, e.g., 1, 1, 1, 1, 2, 2, 2, 3, 3. The thing that I care about is the duration of the fixations, which is the same even when a fixation ID is presented more than one time in the Surface Fixations file (as in the example above). Is this how it is supposed to be? Why are the fixation IDs not presented in the same way in these two files? And if I want to analyse the Surface Fixations regarding their duration, do I have to count only the first fixation for each independent ID, or count all of them if the fixation IDs are presented several times?
Thank you very much, Panos
Hello, I would like to use Pupil Core with one eye closed. Could you please let me know how accurate it is? Best regards, Goki
Hi @user-04fdac , using both of Pupil Core's eye cameras and closing one eye will switch to the monocular pipeline. You can also disable one of the eye cameras in the Pupil Capture settings:
Then, it does not matter whether that eye is open or closed, as that eye camera will be disabled for eyetracking.
Now, you can perform a calibration. After calibrating, Pupil Core will report the accuracy for that calibration. Generally, monocular detection will have somewhat lower accuracy. The exact value will vary on a case-by-case basis.
Hi @user-412dbc ! That's normal: fixations are computed relative to the scene camera, and then translated to the surface coordinates.
However, the surface's position within the scene camera frame can shift from frame to frame, even during the same fixation. This causes the coordinates of the fixation on that surface to vary, and reporting it this way allows you to account for that.
You can, in fact, visualise this change by following this tutorial.
If you are only interested in fixation duration, you can, as you mentioned, use unique instances of the fixations.
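For example, a minimal sketch with pandas, assuming a standard Pupil Player surface export (the file and column names follow the standard export format; the surface name in the path is an example - verify against your own files):

```python
import pandas as pd

# Surface-mapped fixations exported by Pupil Player.
fix = pd.read_csv("exports/000/surfaces/fixations_on_surface_Screen.csv")

# Each fixation id repeats once per world frame it spans; duration is
# identical across those rows, so keep one row per fixation.
unique_fix = fix.drop_duplicates(subset="fixation_id")

print(unique_fix["duration"].describe())  # durations in milliseconds
```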
Thanks a lot! I appreciate your help.
Another question that I have is whether it is generally good practice, as part of the data pre-processing, to exclude the fixations that appear near the blinks (e.g., 100-150 ms before and after the blinks). If yes, what is the easiest way to do that? The only thing that I am interested in for my analyses is the fixation duration within AOIs. Also, are there any common good practices that will guide me in preparing the datasets for the analyses?
Thank you very much for your reply. Panos
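One possible approach to the blink question, sketched with pandas under the assumption of standard Pupil Player exports (fixations.csv and blinks.csv; timestamps in seconds, durations in milliseconds - verify both against your own files):

```python
import pandas as pd

fixations = pd.read_csv("exports/000/fixations.csv")
blinks = pd.read_csv("exports/000/blinks.csv")

MARGIN = 0.150  # exclusion window around each blink, in seconds

# Derive fixation end times (duration is exported in milliseconds).
fixations["end_timestamp"] = (
    fixations["start_timestamp"] + fixations["duration"] / 1000.0
)

def overlaps_blink(row):
    # True if the fixation overlaps any blink interval padded by MARGIN.
    starts = blinks["start_timestamp"] - MARGIN
    ends = blinks["end_timestamp"] + MARGIN
    return bool(
        ((row["start_timestamp"] < ends) & (row["end_timestamp"] > starts)).any()
    )

clean = fixations[~fixations.apply(overlaps_blink, axis=1)]
```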
Thank you very much, Rob! I was at the point where I wasn't sure if I should buy it first, but I will buy it and experiment. In my experimental system, the position of the display is fixed and the user does not move, so the accuracy may not drop that much.
Hi @user-04fdac , may I ask if there is a reason that you prefer Pupil Core over Neon? No pressure, as the choice is yours and we are here to help either way 🙂
If you have other questions before you make your purchase, feel free to ask or send us an email at info@pupil-labs.com to organize a Demo call. Then, you can see how it works beforehand.
I chose this one based on a comparison of the listed accuracies, as the frame rate and accuracy seem to be necessary, to some degree, for what I am trying to do. Why did you recommend Neon?
While Pupil Core is powerful, Neon is our latest eyetracker. It is easier to use and more flexible:
Because of this, you can use Neon in the lab and outside while playing sports. With Neon, you can also analyze your data on Pupil Cloud.
What kind of experiments are you doing and what level of accuracy do you need?
Thank you for your kind attention. I have no problem with my system taking a long time to calibrate, and my users don't move that much, so accuracy and customizability are also priorities. I will give it some more thought. Thank you! (If you know of a product that places the camera close to the eye instead of on the display and is accurate, could you let me know...?)
No problem! In case it helps, if you go with Pupil Core and want to squeeze the most accuracy out of it, then consider:
When it comes to other eyetrackers, I recommend checking with the development teams of those products.
Thank you! I will use it for a display or projection, so 2D Gaze Mapping may work. Sorry, I don't understand your note: "Just note that you will not get physiological pupil estimates in millimeters with this approach".
The Pupil Core software fits a 3D physiologically based model of the eyeball to each eye. This allows it to estimate pupil diameter in millimeters.
EDIT: Apologies, you will also get pupil diameter estimates in millimeters when using the 2D calibration pipeline (https://discord.com/channels/285728493612957698/285728493612957698/1290306177514733671).
I see. Now I understand the difficulty of monocular gaze tracking.
Ah, this is independent of monocular gaze tracking. Both the 3D and 2D calibration pipelines can be used for that. If you want to discuss further, I will start a thread where you can ask questions in more detail.
Sorry to bother you. Can we?
Hi @user-04fdac , I'll make a brief post here and we can otherwise continue in the thread, as my colleague, @nmt , has clarified two points for me:
Not a bother! 🙂 Sure 🙂 See here: https://discord.com/channels/285728493612957698/1290297041641013411/1290299513797480612
Hey Rob,
I am reading your posts to get a better grasp of using Pupil Core!
So, based on what you replied to Goki, the use of either a 3D or a 2D calibration does not matter for the post hoc pupil analyses? I am asking as I am thinking of going the 2D calibration way, because my participants will be stable and the experimental blocks will be relatively short (6 blocks that are 4-7 minutes each, and each of the blocks will include a new calibration).
Thank you, Panos
Hi @user-412dbc , yes, in the pupil_positions.csv file that is exported from Pupil Player, there will be two lines for each timestamp: one from the 2D detector and one from the 3D detector. The 3D rows include a diameter_3d column that contains pupil size estimates in millimeters. You can see which pipeline is used in the method column.
This is the case whether you use the 2D or 3D calibration method.
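As a quick check, here is a sketch of separating the two pipelines in that export (standard Pupil Player column names assumed):

```python
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")

# The 'method' column distinguishes the 2D and 3D (pye3d) detector rows.
print(pupil["method"].unique())

# The 3D-model rows carry pupil size in millimeters in diameter_3d.
pupil_3d = pupil[pupil["method"].str.contains("3d", na=False)]
print(pupil_3d[["pupil_timestamp", "diameter_3d"]].head())
```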