Thank you very much!!! ♥️
Hello,
We're having a few issues with our Pupil Core glasses. When we initially received the glasses, all of the cameras were working correctly. After a few uses, the world camera stopped working. On Windows 11, we consistently get the following message in the command prompt: "world - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied." On Windows 11 and Ubuntu 22.04, the eye cameras seem to work, although they can be finicky and sometimes require unplugging and replugging the glasses a few times. However, we cannot seem to get the world camera to work. There was a recent occurrence where the world camera worked, but then one of the eye cameras stopped working. Shortly after that, the world camera stopped working again, but both eye cameras were still functional.
Here are some of the fixes that we've tried:
- Uninstalled and reinstalled the latest version of the Pupil Labs suite
- Reset the Pupil Capture settings to default
- Deleted the pupil_capture_settings, pupil_player_settings, and pupil_service_settings directories from the home directories
- Uninstalled and reinstalled the "libusbK USB Devices" drivers
- Disabled and re-enabled the cameras via Device Manager
- Used the glasses and Pupil Capture on different hardware/machines and operating systems (e.g., two desktops running Windows 11, a desktop running Ubuntu 22.04, two laptops running Windows 11).
- Tried the USB-C connection as well as the USB 3.0 adapter
- Tried a different USB cable
- Tried different USB ports
- Reconnected the eye cameras and inspected the pins.
- Searched through the troubleshooting section of the Pupil Labs documentation and some of the GitHub issues
All of our machines' drivers are up-to-date. Please let us know if there are any other ideas or suggestions that we should try.
Thanks,
Brayden
Hi, dear Pupil Labs team. I'm wondering if you sell a replacement world camera. I think our camera's CMOS sensor has failed...
Yes, we can replace world cameras on the Core headset. Please reach out to sales@pupil-labs.com mentioning that you require a new world camera!
Hi @user-cb8fad! Sounds like you've already done some thorough testing. We might need to receive the system back for inspection/repair. Can you please send your original order ID to info@pupil-labs.com and someone will assist you from there!
Hello, how can I generate heatmaps from the eye tracking recorded video?
Hi @user-757948! You'll want to check out the Surface Tracker Plugin. More in this section of the docs
Hi there, the confidence level of the right eye is really low. In the attached video, you can see that even if the camera is pointing exactly at the eye, the model keeps failing to track the pupil and the eyeball. Do you have any idea how to solve this issue?
Recommend setting the eye camera to autoexposure in the settings - the image looks somewhat over-exposed.
Hello team, is there a way to mark areas of interest without using AprilTag markers? I have a puzzle, and I want to analyze every piece. Thank you!
With Pupil Core, specifically, you would need to use AprilTag markers to generate AOIs, if using our software. Is that not feasible for you?
Hello, We are running 2 experiments using Pupil Core: Experiment 1 is performed on a screen with gray background, whereas experiment 2 has to be performed on black background, both in almost a pitch dark experiment cabin (except the light from the screen). 3d pupil detection works significantly better in Experiment 1 where the pupils are smaller due to the light coming from the screen. In experiment 2 with black background, pupil detection confidences drop below ~0.7-0.8 quite often, which leads to time windows during which gaze detection does not work (which we also use for our experiments). In this case, which parameters and settings would help me the most to improve pupil detection in a dark environment?
Are you able to share an example recording with us such that we can take a closer look? If so, please send the recording to [email removed] referencing our conversation here
Hi! I have a simple question about the right-hand Player (v 3.5.7) menu. It seems to be only partially visible, so I can't access everything I need to (e.g., the manual edit mode toggle). I've adjusted the window size, restarted Player, restarted my machine (MacBook Pro 2021, v 12.6), and toggled different menu options. Is there an easy way to fix this?
You can resize that panel by dragging the three bars to the desired location (see red arrow in screenshot)
Hi, does it affect the use of the eye tracker if one eye camera is broken and only one eye camera can function properly? Thanks
Hi @user-4bc389 👋. Pupil Core can operate with only one eye camera, but it will default to a monocular gaze estimation pipeline, which means slightly lower accuracy in practice. We also sell replacement cameras and can repair damaged frames/internals. So if you need inspection/repair, please reach out to info@pupil-labs.com
Hi! I have a question about mapping dynamic areas of interest with data from Pupil Core. I've collected data from children and mothers wearing Cores with apriltags attached at the top of the headsets. We want to see if we can measure looks to each other's faces - the apriltag marker provides a rough estimate of 'face in worldview', but this is a bit less precise than we'd ideally want. Do you have any suggestions about methods for defining areas of interest in the worldview that we can then map fixation data onto? If I'm not mistaken, Cloud is only compatible with Neon. I'm looking for something like this but usable with Core. Thanks so much!
Hi @user-5a4bba 👋! Using only 1 AprilTag to capture a dynamic area of interest will be tricky, as you would still need a way to segment the AOI to know when gaze/fixation is on it. If your goal is to detect faces, then one approach would be post-hoc processing with open-source face detection software, such as RetinaFace. And you are correct that Cloud is not compatible with Pupil Core.
Hi, I am interested in how the gaze estimation is calculated from the pupil position data. More specifically, how are the frequencies of the two related? Let's say the pupil position data is recorded at approximately 120 Hz; what will be the frequency of the calculated gaze position data?
A follow-up question: if I would like to downsample the data to 120 Hz, which is the frame rate of my current recording, what would be a recommended way to achieve this?
Hi team, I would like to know how many seconds/milliseconds of the beginning of the recorded pupil data can be used as a baseline value. Also, is it best to use a mean value? The goal is for me to then calculate the percentage of pupil diameter change compared to the baseline. I did not record a dedicated baseline, but there was at least a 10-second pause before the next scenario was played to them. Thanks
Hi @user-6cf287!
Regarding baseline correction for pupillometry, I highly recommend having a look at this paper by Mathôt et al.: https://doi.org/10.3758/s13428-017-1007-2. The recommended approach is to use subtractive baseline correction (corrected pupil size = pupil size - baseline). Please also see this relevant message for more on this topic: https://discord.com/channels/285728493612957698/285728493612957698/1084755193918406696
Now, as for how long a pre-stimulus/trial period you should include as baseline, this depends. There is research that includes long baseline periods of up to 1 s and other work that uses shorter periods of ~500 ms (you can find examples in the paper I shared). Ultimately, the best approach would be to find previously published research with paradigms similar to yours and see what they've done.
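If it helps, here is a minimal sketch of subtractive baseline correction in Python, assuming a standard pupil_positions.csv export from Pupil Player; the onset timestamp and the 500 ms window below are placeholder values you would replace with your own:

import pandas as pd

df = pd.read_csv("pupil_positions.csv")

# keep rows that carry a 3d diameter and are reasonably confident
df = df.dropna(subset=["diameter_3d"])
df = df[df["confidence"] > 0.6]

onset = 1234.56       # stimulus onset in Pupil time (seconds) - placeholder
baseline_win = 0.5    # e.g. a 500 ms pre-stimulus baseline

baseline = df[(df["pupil_timestamp"] >= onset - baseline_win) &
              (df["pupil_timestamp"] < onset)]["diameter_3d"].mean()

# subtractive baseline correction: corrected = pupil size - baseline
trial = df[df["pupil_timestamp"] >= onset].copy()
trial["diameter_corrected"] = trial["diameter_3d"] - baseline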
Thanks btw!
I encountered a strange problem while streaming data using the LSL plugin. When I look at the XDF file, the timestamps are not monotonically increasing. Has anyone encountered such a problem?
Hi, @user-6d3681 - if I recall correctly, LabRecorder doesn't make any attempt to sort data as it arrives. This task must be done later.
If you think about it, this is preferable. Because the data is sent over the network, the order of arrival is not guaranteed to match the order in which it is sent. LSL and LabRecorder are designed to work with multiple high- and low-frequency streams. Sorting data as it arrives can be computationally expensive, and it can always be done later instead. Because of this, LabRecorder instead uses its resources to ensure that all the data is recorded.
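For what it's worth, here is a minimal post-hoc sorting sketch in Python using pyxdf rather than load_xdf.m (the file name is a placeholder); the same idea applies in MATLAB:

import numpy as np
import pyxdf

# load all streams from the recording
streams, header = pyxdf.load_xdf("recording.xdf")

for stream in streams:
    # sort each stream's samples by their LSL timestamps
    order = np.argsort(stream["time_stamps"])
    stream["time_stamps"] = stream["time_stamps"][order]
    stream["time_series"] = np.asarray(stream["time_series"])[order]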
Can you share or link this load_xdf.m? If that function is meant to sort and is returning data which isn't sorted, the error must be there, right?
load_xdf.m is here: https://github.com/xdf-modules/xdf-Matlab. It has been used by many others and not just me. As I mentioned, it works well for my other streams. I think, also, that LSL does ensure samples are recorded in order and handles retransmits as needed. Does the Pupil Labs LSL plugin not transmit data in order?
Network traffic is never guaranteed to arrive in the order in which it's sent
I understand that, but, again, all the other streams seem to be fine. They also have more channels and higher data rate.
That may be a bit unexpected, but not impossible. Regardless, if load_xdf.m is supposed to perform a sort, then it wouldn't matter how the data is sent or recorded.
Exactly. That's why it's strange. I asked here in case someone had noticed that problem in the past. On our end we are experimenting with some other computers (perhaps it's a computer problem). I'll let you know if we resolve it.
Hey Pupil Labs, I just wanted to know what exactly happens when the Core eye tracker doesn't recognise the eye in the middle of a recording?
I meant: will it record it as false data, or does it just skip that particular point? Like, will it give a false fixation or blink or anything like that, or does it just skip it?
There are several instances where this could occur, such as during blinks. In the case of blinks, we have a blink detector. There may also be instances where the eye isn't detected properly due to, for example, less-than-optimal camera positioning or when the wearer looks outside of the eye camera's field of view. In these situations, you will encounter 'low-confidence' data. The raw gaze will still be recorded by our system, but it won't be used by plugins like the fixation detector. Generally, when confidence drops below 0.6, we would recommend discarding the data. However, there can be exceptions. If you haven't already, definitely have a read through our online documentation, and let us know if anything requires further clarification!
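If you want to apply that threshold yourself after exporting, here is a minimal sketch assuming the standard gaze_positions.csv export from Pupil Player:

import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")

# discard low-confidence samples, using the rule-of-thumb threshold mentioned above
gaze = gaze[gaze["confidence"] >= 0.6]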
Hi Pupil Labs,
I am not able to find documentation on Pupil Core's working distances. I would like to set up a visual task closer than 40 cm, ideally 20 cm away from the observer's eyes. Has this been tested before?
Hi @user-57290c 👋. Pupil Core can indeed operate at the viewing distances you mention. I'd recommend calibrating at about the desired distance to get the best results.
Neon
Thanks for clarifying @user-757948! Let's move this discussion to the neon channel then. I replied there: https://discord.com/channels/285728493612957698/1047111711230009405/1215222805583499264.
Hi, I would like to sync the Pupil eye tracker with another device. What is the right way to do this?
Hi @user-756d06! Can you kindly confirm whether you are using Pupil Core or another product, so that I can link you to the correct documentation? There are multiple ways to sync with other devices, the most popular being Lab Streaming Layer or iMotions. What other device are you looking to sync with?
Hey guys. Could anyone give me a little information?
Hi @user-d09ae0! How can we help you?
Hi, I was interested in the mechanism of the Pupil Core device. I wanted to know how the system decides the confidence value. Is there a computational algorithm which assesses the confidence based on the height, width, and depth of the world view and the pupil position, or does it use some form of computer-vision-based model to provide a reading for the confidence?
2D confidence, specifically, is an indicator of pupil detection confidence, measured using the number of pixels on the circumference of a fitted ellipse. Pupil detection forms the foundation of Pupil Core's gaze estimation pipeline. Highly recommend reading the whitepaper for a detailed overview!
Hi! I am interested in obtaining pupil and gaze data through Pupil Core while using my own cameras. Are there any demonstrations or examples for testing? And where can I find them? Thanks sincerely!
Hi @user-b02f36 - I'm not sure I fully understand your question. Pupil Core comes with its own eye and world cameras. Could you maybe elaborate on your use case?
Hi, I have updated my MacBook and installed the new version of Pupil Capture. Now I am facing a problem with Local USB (it seems inactive). How can I solve this problem?
Hi @user-9d6c03! Could you elaborate a bit? What version of macOS were you running before, and which are you running now? Have you already seen/tried these troubleshooting steps?
Question: How can I normalize the left/right pupil size? In my figure, each green line is a stimulus onset (this recording has multiple trials), the blue scatter points are the left-eye recording, and the red points are the right-eye recording. You can see the left and right pupil sizes are very different, but I'm guessing it's from the camera angle and other instrumentation errors. So my question is: what's the best way to match (normalize) the two eye sizes? I'm thinking of shifting the red plot's baseline up, then rescaling the plots so they have the same peak sizes as the blue plots, and doing this for every single trial. Is there a better way to do this?
Hi @user-03c5da 👋! The "diameter_3d" column of the pupil_positions.csv file gives you the pupil diameter in millimetres that is provided by the 3d eye model. Note that you need a well-fit eye model and no slippage for these values to be accurate.
Can I ask what your stimulus/setup was and the general environmental lighting conditions? For future reference, you can consult our tutorial on visualizing the pupil diameter, if you have not done so already.
Lastly, may I ask why you need to normalize values across eyes?
Hello, I'm currently doing my master's thesis, and I'm integrating the Pupil Core headset into a larger system of biosensors for monitoring data in real time from first responders (nurses, firefighters, etc.). My objective is to obtain data in real time with the Network API and send it to MongoDB in Docker. The thing is, I'm trying to run Pupil Core from source in Visual Studio Code, and when installing requirements.txt an error appears. This error is related to the installation of uvc and the wheels. I'm lost. I'm on Windows 11. I've tried a virtual environment, installing wheel, installing CMake, -m build, and the steps you provide on GitHub in the uvc Windows section, but nothing works. I can send you the error image. Could you help me? I'm a bit desperate. Also, could you recommend how I should implement the system to upload the data to MongoDB and Docker? Should I do it in Visual Studio, or should I do it using the Capture app? I'm a bit lost with that too, because I really don't know what to do after running it in Python. How could I approach it? Thank you very much in advance.
Hi @user-4ab98e , I have continued the conversation here.
Please, in what way can we record voice using Pupil Core?
Hi @user-7daa32! Pupil Core has no microphone. If you would like to capture audio, it is possible to collect it externally using the Lab Streaming Layer (LSL) framework. This provides accurate synchronisation between audio and gaze data, but takes more steps to set up.
Hello, I'm developing an application that interfaces with Pupil Capture. The application can start and stop recordings and perform other functions. Now I want to develop a new functionality: being able to close and open each eye camera. Is it possible to do that with the Network API? Thanks in advance, Nobby
Indeed you can! Here's a snippet
import time
import zmq, msgpack

# create a zmq REQ socket to talk to Pupil Remote (Pupil Capture or Pupil Service)
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://localhost:50020')

# convenience function: send a notification and wait for the confirmation
def send_recv_notification(n):
    pupil_remote.send_string(f"notify.{n['subject']}", flags=zmq.SNDMORE)
    pupil_remote.send(msgpack.dumps(n))
    return pupil_remote.recv_string()

# start eye camera 0 (use eye_id 1 for the other eye)
n = {
    "subject": "eye_process.should_start",
    "eye_id": 0,
}
print(send_recv_notification(n))

time.sleep(5)

# stop eye camera 0 again
n = {
    "subject": "eye_process.should_stop",
    "eye_id": 0,
}
print(send_recv_notification(n))
Hi @user-cdcab0, I have a short question regarding the "Pupil Core" model. Is it able to measure the average blink rate of participants?
Hi @user-afe8dd! Pupil Core does not directly output the blink rate, but computing this parameter is straightforward.
Firstly, I'd recommend learning about the blink detector employed, how it differs between Capture and Player, and the best practices when employing the blink detector.
Once you understand how blinks are detected, and know whether you would like to compute blinks in real time or post hoc, you simply need to iterate over the blinks and their timestamps.
Define the period you want to compute your blink rate over, and look up the blinks whose start and end timestamps fall within it, as in the sketch below.
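A minimal sketch, assuming the blinks.csv export from Pupil Player (the column names follow the standard export; the period boundaries are placeholder values):

import pandas as pd

blinks = pd.read_csv("blinks.csv")

# period of interest, in Pupil time (seconds) - placeholder values
t_start, t_end = 100.0, 400.0

in_period = blinks[(blinks["start_timestamp"] >= t_start) &
                   (blinks["end_timestamp"] <= t_end)]

# blinks per minute over that period
blink_rate = len(in_period) / ((t_end - t_start) / 60.0)
print(f"{blink_rate:.1f} blinks/min")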
@user-d407c1 thank you for your answer. For my research this parameter is quite important. So I guess I would have to look at the number of blinks a participant has made, then divide it by the time they spent in the experiment, and do this for all of my 200 participants. So this is quite doable, do I understand correctly?
That sounds like a good approach to get a blink rate per session. Perhaps you would like to compute blinks per minute across each session, as it might be more useful. That would give you a metric to compare not only across participants but also over the course of the experiment.
Depending on how long the sessions were, how exhausting they were, or whether screen tasks were involved, you may see a decay within a participant alone.
I'm getting the following error when running 1 (out of 21) recordings. Is this a corrupted file issue, and is there a way to fix the file in question?
player - [INFO] launchables.player: System Info: User: ander, Platform: Windows, Machine: Muttnik-3, Release: 10, Version: 10.0.19041
Fixation detection - [INFO] background_helper: Traceback (most recent call last):
  File "background_helper.py", line 73, in _wrapper
  File "fixation_detector.py", line 170, in detect_fixations
  File "fixation_detector.py", line 170, in <listcomp>
  File "file_methods.py", line 293, in __getitem__
  File "file_methods.py", line 253, in _deser
  File "msgpack_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbf in position 2: invalid start byte
player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "launchables\player.py", line 722, in player
  File "fixation_detector.py", line 534, in recent_events
  File "background_helper.py", line 121, in fetch
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbf in position 2: invalid start byte
player - [INFO] launchables.player: Process shutting down.
Hello all, I plan to use Pupil Core when the participants are in motion. I realize that the Pupil Mobile application is no longer available on the Play Store. Is there any way to use this Pupil Core on mobile? Thank you.
Hi @user-75ea17! Do you already have a Pupil Core system or are you scoping out a new purchase?
Hi… I'm going through the GitHub scripts to find the PCCR (pupil center corneal reflections) implementation… can anyone please help me? I'm trying to implement it for a two-camera system and hoping to find some help here… I would be grateful for any useful leads… (Note: I am referring to Guestrin and Eizenman's paper, but I'm not getting the expected results.)
Hi, where do I find the serial number of my pupil core device?
Hi @user-ea64b5! You can't find the Pupil Core serial number by visually inspecting the headset. May I ask why you need the serial number?
that one yes
Hello! Our lab has the eye-tracking glasses from Pupil Labs. We would like to conduct an experiment using these glasses in combination with other equipment, namely GSR (galvanic skin response) and PPG (photoplethysmography) sensors. Which brands of GSR and PPG devices are the glasses compatible with?
Hi @user-ee74ae - I just replied to your email!
Thank you very much!
Hi all, is it possible to use Pupil Player in a headless fashion to automate data exports? I have a large batch of recordings that I need to get pupil_positions.csv files for; similarly, I need to do post-hoc gaze calibration (marker detection, calibrations, mappings).
Hi @user-948402 👋! There is a community-contributed batch exporter that you can try using to get pupil_positions.csv for each recording, but note that it requires a step where you edit file paths in the code. Regarding batch post-hoc gaze calibration, that is not possible in Pupil Player.
Hello! I am working with the Surface Tracker and I was wondering if there are any recommendations regarding the tag size in relation to the eye-tracker-to-screen distance. Thanks
Hi @user-c68c98! There are no specific AprilTag size guidelines, only that the tag needs to appear big enough in the scene camera to be recognised. I would suggest trying different sizes on the screen at the distance you want. The only hard requirement concerns the white margin: it has to be at least the width of one of the smallest squares within the marker for the tag to be decoded. That said, there are other factors to account for:
- The complexity of the AprilTag (e.g., how many bits it contains) can also affect the required size. More complex tags, with more bits, might need to be larger to be decoded correctly at a given distance.
- Likewise, good lighting conditions improve the surface tracker's ability to detect and decode the tags. So make sure the lighting is even and reduces glare on the screen where the tags are displayed.
Hello! I'm not sure if this Discord will be able to help me with this issue, but I thought I would give it a try. My Computer Science Capstone class is working with our Psychology Department to use the Pupil Core glasses. My team split up some work and asked me to find information on the following:
*Documentation on the origin point of all cameras (eye0, eye1, world) and what the correlation between them is. (I think they are asking, if your pupil is at the center of the sensor, are you staring at the center of the screen?)
*The device we use creates a grid in the device itself for tracking the eye movements, we need to figure out how these X and Y values relate to the pixels on the screen the person is looking at. First, I would like to see if you could find any documentation on this, or if there is a script already available to use.
Basically, I am wondering if there is any more in-depth documentation than what is on the website?
Any help is appreciated!
Hi @user-c921bf 👋! Always feel free to ask questions here.
The three cameras can be rotated independently of each other, so their coordinate systems are independent. The headset can also be positioned up and down the nose. Because of these factors, the pupil being at the center of the image does not mean you are staring at the center of the screen. Rather, a 3D eye model is first fit and then you perform the calibration process to learn how pupil position relates to gaze. For mapping gaze to normalized surface coordinates, check out the Surface Tracker plugin. The coordinate systems for the cameras are documented here.
What is the name of the device? I am not sure I understand what the grid is.
Hello! I have a difficult problem when using the GUI of Pupil Capture. Here is the description: I'm using Pupil Capture to debug my world camera and two eye cameras. I get images from all three cameras correctly in the world window, but I cannot change the eye camera in the eye windows. Actually, neither the eye0 window nor the eye1 window works correctly, even if I switch each camera to a different USB port. The result is that I can only debug one camera. Has anyone ever encountered this problem? Please give me some suggestions! Thank you sincerely!
Hi @user-b02f36 👋! Could you open a ticket in the troubleshooting channel? Then we can assist you with debugging steps in a private chat.
P.S.: The drivers of the cameras in use have been changed to libsdk. My computer is an ASUS TUF laptop with an AMD 7940H CPU and an Nvidia RTX 4060 GPU; the version of Pupil Capture is 3.1.16.
Hello,
I'm currently using the camera from Pupil Labs along with the UVC Python library to capture video. It's working well, and I can successfully capture video streams from both the left and right cameras. However, I'm encountering an issue where the video from the right-side camera is upside down.
For context, I've created a Camera class that takes idcam as a parameter, which can be either 0 or 1. The expectation is that the video streams from both the left and right cameras should be identical when I instantiate this class.
I appreciate any assistance you can provide on this matter.
Regards, Nobby
Hi @user-80123a 👋! That is because that camera is actually mounted upside down. You can simply flip the image in software (try cv2.flip(img, 0) or np.flipud()). However, just to be clear, the video streams from the two cameras will never be identical. May I ask what you are planning to do and why you prefer pyuvc over Pupil Capture?
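For reference, a minimal sketch of how you might handle this inside your Camera class; treating idcam == 1 as the upside-down camera is an assumption, so check which id it is on your headset:

import cv2

def orient(img, idcam):
    # one eye camera is physically mounted upside down, so flip its frames;
    # cv2.flip(img, 0) flips vertically, use -1 to rotate 180 degrees instead
    if idcam == 1:
        return cv2.flip(img, 0)
    return img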
Hi there. I'm just starting to experiment with AOIs and Pupil Core. My experiment setup is quite "natural" (people sitting on a chair gazing at a white wall, whilst thinking about personal memories). I had two questions: 1) I see there are a variety of AprilTags - how does one pick a specific one? Do you need to use a different type of tag every time? 2) How many should I place on my white wall? Thank you so much for your help!
(Another quick question: does anybody know of this plugin: https://scholarworks.calstate.edu/downloads/2j62s7153 "Gaze Features Exporter", and where/how I could install it to directly export average fixation durations, saccades, etc.? Thank you so much)
Hi @user-ee70bf 👋. I recommend using the ones provided in our docs - we know they work well. As for size and number, that really depends on your setup. Check out the example here. It's in the Neon docs, but it shows how we used markers to outline a whiteboard.
I'm not familiar with the work you've linked to. What exports exactly does it provide that the stock Pupil Player doesn't?
Hi everyone, I have some data from my last study (on paintings and eye tracking). I loaded my data into Pupil Player and now I have some questions about the extracted data. I had 60 images, and in the extracted data I have fixation_on_surface 1 and fixation_on_surface 2. I can't find out what these are. Can you please help me?
Hi @user-2cc535! I assume you are using the Surface Tracker, is that correct? If so, "Surface 1" and "Surface 2" are surfaces that you defined; you get one file per surface/AOI that you defined. Kindly note that you can rename them in Pupil Player before exporting, and they will have more meaningful names.
Thank you so much @user-d407c1, just one more question: I have all 60 images in one task (approx. 8 mins). So do I only need to define Surface 1 and Surface 2 once, and they will be defined for the whole task?
The surfaces you define are there for the whole recording duration. Do you have 60 surfaces on an 8-minute task? May I ask how you define these surfaces? Do you use unique markers for each one? Or do you use one set of markers and crop the area?
Hi team. I would like to generate scan paths, however I noticed from this https://docs.pupil-labs.com/alpha-lab/scanpath-rim/ that it uses Reference Image Mapper in Pupil Cloud which is used with Neon and Invisible. I have Pupil Core. Is there a way to generate scan paths with Core?
Hi @user-3c6ee7 - have you checked this tutorial? It shows how to generate scanpaths with Pupil Core data.
Hey guys, we are currently trying to run Pupil Capture on Ubuntu 22.04, but somehow it's not working. First we tried to install the latest release (https://github.com/pupil-labs/pupil/releases/tag/v3.5), which doesn't start due to a "Segmentation Fault" in CPython. Then we tried to run the master branch from source: the program starts, but we can't open the options of the three different windows (world, eye0, eye1). If you have already answered this, sorry for asking; we tried to search this community for answers, but unsuccessfully. Help is appreciated. Best regards, Rob
Hi @user-1386c3! This is probably a different issue, but have you already tried these troubleshooting steps?
Hm, it's kinda strange. It doesn't seem to have anything to do with the cameras. It worked initially when setting up Ubuntu 22.04 (with the release package), but after a dist-upgrade it stopped working. Do you run from source on 22.04?
Hi everyone. I have one more question. I extracted my data and I have one problem: for pupil_positions, in the "method" column of the CSV file, I have two sources of data, "2d c++" and "pye3d 0.3.3 post-hoc", so in the diameter_3d column I have empty rows. Is this a problem with the recording process? Can I resolve it in Pupil Player or Capture? Thanks in advance.
Hi @user-2cc535 👋! If you look at the timestamp column, you will see that there are two entries for every timestamp, one from the 3D detection method and another from the 2D detection method, as both pipelines run in parallel. So you can just filter the "method" column for the pye3d entries (in your case, "pye3d 0.3.3 post-hoc") and use the resulting data.
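For example, with pandas (assuming the standard pupil_positions.csv export):

import pandas as pd

df = pd.read_csv("pupil_positions.csv")

# keep only the pye3d rows; these are the ones that carry diameter_3d values
df_3d = df[df["method"].str.startswith("pye3d")]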
Hi! I have a question about Pupil Capture. From the developer documentation, there are two kinds of data related to gaze tracking. Since I'm using Pupil Capture to achieve foveated rendering, which gaze data should I use: 'gaze_point_3d' in the Gaze Datum Format or 'gaze_on_surfaces' in the Surface Datum Format?
Assuming the surface you're tracking is the screen on which you're doing foveated rendering, you'll want to use the surface gaze data
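A minimal sketch of how you might subscribe to surface-mapped gaze over the Network API, assuming a surface named "screen" has been defined in the Surface Tracker plugin; the surface name and screen resolution are placeholders for your own setup:

import zmq, msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://localhost:50020")

# ask Pupil Remote for the subscription port
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://localhost:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "surfaces.screen")

SCREEN_W, SCREEN_H = 1920, 1080  # placeholder resolution

while True:
    topic, payload = sub.recv_multipart()
    msg = msgpack.loads(payload)
    for gaze in msg.get("gaze_on_surfaces", []):
        if gaze["on_surf"] and gaze["confidence"] > 0.6:
            # norm_pos is in normalized surface coordinates, origin bottom-left
            x, y = gaze["norm_pos"]
            px, py = x * SCREEN_W, (1 - y) * SCREEN_H  # pixel coords, origin top-left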
Hello, we're looking for recommendations for small microphones to attach to the eye-tracker. Does anyone use anything like this?
Hey all, I am working with the Pupil Core for a class project. I am wondering if there is a way to determine what pixel the user is looking at. I have access to pupil_positions.csv and gaze_positions.csv post-recording, and I want to take that data and determine what pixel (on a screen/monitor) the user was looking at during the recording. Or are there any plugins that would help with this? Thanks!
Hi @user-ab0a30 👋! If you want to map gaze to screen coordinates, then we recommend using AprilTags + our Surface Tracker plugin during the recording with Pupil Capture. You can also do post-hoc detection of the AprilTags with the Surface Tracker visualizer in Pupil Player. If you did not use AprilTags during the recording, then our software for Pupil Core does not handle that case; if you cannot re-do the project, you might give computer vision algorithms for feature/object tracking a try.
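If you do end up using AprilTags and the Surface Tracker, here is a minimal post-hoc sketch for converting the exported surface-mapped gaze to screen pixels; it assumes the standard gaze_positions_on_surface_<name>.csv export, a surface named "Screen", and a known monitor resolution (both are placeholders):

import pandas as pd

SCREEN_W, SCREEN_H = 1920, 1080  # your monitor resolution - placeholder

gaze = pd.read_csv("gaze_positions_on_surface_Screen.csv")

# keep confident samples that actually fall on the surface
gaze = gaze[gaze["on_surf"] & (gaze["confidence"] > 0.6)]

# surface coordinates are normalized with (0, 0) at the bottom-left,
# so flip y when converting to pixel coordinates (origin top-left)
gaze["px"] = gaze["x_norm"] * SCREEN_W
gaze["py"] = (1 - gaze["y_norm"]) * SCREEN_H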