Thank you very much. I understand about the cable. I see that a USB 2.0 or higher cable can be used.
Hi everyone,
I recently started a class project with my students, where we used the Invisible Glasses to record a one-timer in hockey (receiving a pass and immediately making a direct shot on goal). The students' task is to perform annotation coding on the video, focusing on gaze position (e.g., ball, stick, or goal). These are also my first steps in the Pupil Labs environment.
A frame-by-frame analysis would be ideal for this. Is it possible to implement this using the Pupil Labs Cloud or Player? I noticed the "Events" feature—would this be the right approach?
In previous projects, I used an older eye tracker with a custom Python script for frame-by-frame analysis and key bindings to predefined events. However, for this class project, I’d prefer to utilize the Cloud or Player for simplicity and accessibility.
Thank you for your advice!
Best regards, Peter
Hi @user-975d28. Thanks for sharing the project details! There are several ways to approach this. Events are one option, but I suggest an alternative: setting up a manual mapper enrichment. This is a slightly different way to use the manual mapper, but it could be effective.
You would upload a static image, which could be of a real scene or one with labels corresponding to a ball, stick, or goal. Then, you can go through the eye-tracking footage and manually code each fixation, creating a sort of semantic mapping.
Next, generate AOIs around your labels on the image, and Pupil Cloud will compute metrics for you, such as dwell time on the stick. Staying within Pupil Cloud simplifies the process, allowing your students to join the relevant workspace and conduct the analysis themselves.
Here's the manual mapper documentation and instructions on generating AOIs and metrics. Give it a try and see what you think!
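As a side note, once the AOI metrics are exported from Cloud they can also be summarized with a few lines of Python. This is only a rough sketch: the file name `aoi_metrics.csv` and the column names are assumptions and may not match the actual download exactly.

```python
import pandas as pd

# Hypothetical sketch: rank exported AOIs (e.g. ball / stick / goal squares)
# by total dwell time. File and column names are assumptions.
metrics = pd.read_csv("aoi_metrics.csv")

dwell = (
    metrics.groupby("aoi name")["total fixation duration [ms]"]
    .sum()
    .sort_values(ascending=False)
)
print(dwell)
```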
Hi @nmt Thank you for the quick and excellent response! Your described approach was exactly what I was hoping for—it really clicked.
I created a reference image with squares representing each predefined category and manually clicked on the square corresponding to the appropriate category for each fixation. Summarizing the dwell-time distribution afterwards was straightforward, using AOIs mapped to the squares.
This workflow is a fantastic way to introduce students to their first eye-tracking analysis.
Thanks again for the help!
Really glad to hear it's going to work for you! Let me know if you have any other questions
Hi, just to be sure: Gyro X, Y, Z are angular velocities, and yaw, pitch, roll are absolute angles?
Hi, I would like to export the video with the fixations overlaid (showing the area the participants are looking at). Is there any way I can do this? Each time I download the video, the fixation overlay is missing, and no such information is included.
Hi @user-58c1ce! Are you using Pupil Cloud? If so, you can download your scene camera video with the gaze and fixation overlay using the Video Renderer. You can find more information here: https://docs.pupil-labs.com/invisible/pupil-cloud/visualizations/video-renderer/
Hello! Is it necessary to code the fixations in my data manually, or is there also a way to do it automatically within Pupil Cloud?
Hi @user-61e6c1 👋 Could you please elaborate on what you're aiming to achieve? If you're looking for a way to automatically map fixations on a surface, I recommend checking out our Marker Mapper and Reference Image Mapper enrichments, as they may be of interest to you.
Hi Eleonora and thank you for your hints! I checked out the Reference Image Mapper but could not find out how to upload a reference image.
Sorry, I mixed things up ... I checked out the Manual Mapper. But now, regarding the RIM: we would have to take a scanning video with Neon; with Invisible it's not working, right?
No problem at all! To clarify, the Reference Image Mapper enrichment works the same way whether you're using Neon or Invisible. For this, you’ll need an eye-tracking video of your participant visiting the art exhibition, an image of the area they’re observing (such as a painting or a statue), and a scanning recording of that specific area.
May I ask if you’ve already collected the recordings in the museum?
We have videos from people visiting an art exhibition, so they are moving around, and we want to end up with an Excel file of fixations.
Hi! I can't use Pupil Cloud for my analysis and I am trying to determine the corresponding video frame numbers for each gaze data point for further analysis. I have used pl-rec-export to export my data. My plan was to number the rows in world.csv consecutively to get frame numbers, and then assign each gaze data point to a frame number using the unix timestamps provided in gaze.csv and world.csv. However, that does not seem to give me accurate frames. When I use PyAV to extract specific frames based on these numbers, the frames I get do not match the frames in Pupil Player. Is there a better way to get frame numbers and to match them with gaze data?
Hi @user-d5a41b ! 👋 Here you can find a boilerplate example that demonstrates how to use Timeseries data for this, merging world timestamps and gaze to access gaze on the scene camera frame. Let me know if you need help!
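For reference, the core of that approach is a timestamp-based merge between the scene-video timestamps and the gaze samples. Here is a minimal sketch, assuming the Timeseries export provides `world_timestamps.csv` and `gaze.csv`, each with a `timestamp [ns]` column (names may differ depending on the export version):

```python
import pandas as pd

# Assumed file and column names from the Timeseries export; adjust to your data.
world = pd.read_csv("world_timestamps.csv").sort_values("timestamp [ns]").reset_index(drop=True)
gaze = pd.read_csv("gaze.csv").sort_values("timestamp [ns]")

# Frame number = row order of the scene-video timestamps
world["frame_idx"] = world.index

# Assign each gaze sample to the nearest scene frame by timestamp
matched = pd.merge_asof(
    gaze,
    world[["timestamp [ns]", "frame_idx"]],
    on="timestamp [ns]",
    direction="nearest",
)
```

You can then seek to `frame_idx` with PyAV and overlay the matched gaze points on that frame.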
Good morning! Yes, we already collected the data. So for the scanning video we could also use Invisible? I'm a bit confused as the docs for Invisible explain a scanning video taken with Neon's scene camera.
Hi @user-61e6c1 ! Apologies, this seems to be a typo; we will correct it. And yes, you only need the eye tracker that you have, whether that's Neon or Invisible.
I have a question about fixation dispersion. As I understand it, the points during a fixation should fall within a small radius, typically 1°-2° of visual angle. But what I get from the Invisible glasses is about 30° (distance 150 cm, radius 75 cm), and its duration is 34 s because of the giant dispersion. Though it's rare among all fixations, I'd still like to know the reason for this exception (I assume the glasses' built-in algorithm stops working because it gets too warm after 15 min of use).
Another question is about fixation duration: as you can see, the minimum duration is about 17 ms, which is far below the 100 ms default threshold.
This should not happen; the minimum fixation duration for Pupil Invisible is set to 60 ms. Could you open a ticket on 🛟 troubleshooting and follow up with a recording ID and, if possible, the fixations.csv file?
Hi @user-cc9ad6 ! Please allow me to clarify a few concepts here. What you see in the Cloud player is a representation of the fixation: it is located at the (x, y) coordinates of the fixation, and its circle size is relative to the duration of the fixation, not to the dispersion.
Pupil Invisible, like Neon, employs a velocity-based algorithm to detect fixations rather than a dispersion-based one. You can find more info in these messages:
https://discord.com/channels/285728493612957698/1047111711230009405/1280892486360764416 https://discord.com/channels/285728493612957698/1047111711230009405/1281180585792114718
Or in the Pupil Labs fixation detector whitepaper or our publication in Behavior Research Methods.
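To illustrate the difference, here is a deliberately simplified sketch of the velocity-based idea. It is not the Pupil Labs implementation (see the whitepaper above), and the column names `azimuth [deg]` / `elevation [deg]` are assumptions about the gaze export:

```python
import numpy as np
import pandas as pd

# Simplified illustration of velocity-based classification, not the actual Pupil Labs detector.
gaze = pd.read_csv("gaze.csv")  # assumed columns: 'timestamp [ns]', 'azimuth [deg]', 'elevation [deg]'

t = gaze["timestamp [ns]"].to_numpy() * 1e-9      # seconds
az = gaze["azimuth [deg]"].to_numpy()
el = gaze["elevation [deg]"].to_numpy()

# Angular velocity between consecutive samples (deg/s)
velocity = np.hypot(np.diff(az), np.diff(el)) / np.diff(t)

VELOCITY_THRESHOLD = 100.0                         # deg/s, illustrative value only
is_fixation_sample = velocity < VELOCITY_THRESHOLD

# Consecutive runs of low-velocity samples above a minimum duration are merged into
# fixations, which is why a long fixation can still cover a large area if gaze drifts slowly.
```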
Thanks for your fast reply. But I thought you used a dispersion-based algorithm, according to https://docs.pupil-labs.com/core/terminology/. And I can't find the information about Invisible that you gave me on the website; maybe you can update that to avoid confusion.
It seems you’re referring to the documentation for Pupil Core. For Pupil Invisible, you can find the relevant documentation here: Pupil Invisible Documentation – Data Streams: Fixations & Saccades.
Ok, now I found it, thanks a lot.:)
Thank you for handling this.
I would like to purchase a smartphone that can use Invisible. Does it work with any common smartphone? Are there any recommended specifications?
The reason is that when experimenting with Invisible, the operating time of the Invisible (i.e., of the smartphone) is short, and I would like to increase it by switching smartphones. I am currently running experiments with 6 hours of continuous operation, but I need to intersperse charging of the Invisible's smartphone in the middle of the experiment, and a full charge takes about 90 minutes.
Hi @user-5c56d0! The supported phones and Android versions for Pupil Invisible are listed here: https://docs.pupil-labs.com/invisible/hardware/compatible-devices/#android-os. Which phone model are you currently using?
Thank you for your reply. I have a OnePlus 6. If I buy a OnePlus 8 now, is there any chance it will not come with Android 11? Is there a limit to the continuous operating time of the Invisible main unit (the eye tracker, not the smartphone)? For example, will it break if operated continuously for more than 3 hours? Also, is there a limit to the continuous operation time for Neon? (I have experienced Neon becoming very hot after about an hour and giving an error in the smartphone recording.)
It's difficult for me to determine whether a OnePlus 8 purchased on the open market will come with Android 11. You would have to ask the vendor, but in most cases it would be possible to install that Android version with fastboot commands. You can also reach out to sales@pupil-labs.com to enquire about purchasing one from us. There's technically no limit to how long one can record other than battery and phone storage capacity. Neon, in particular, can record in excess of four hours. When you say error, do you mean that the phone itself reported an overheating error?
Hi, quick question - When adding Areas of Interest to a Marker Mapper enrichment, can the AOIs overlap? Or to be more accurate - what I want is a "default" AOI for the entire surface, except for let's say two-three specific AOIs inside the main screen AOI.
Here is a diagram of what I'm asking about. Hope it makes sense : )
@user-f6ea66 Yes, that's possible 😉 !
Ok, nice! However, I tried before asking, and it seemed to me that whenever I made an AOI that overlapped another, it didn't 'register', so it seemed gone. When I tried to highlight it by choosing it in the menu on the left (when making AOIs), nothing popped out in the picture, as opposed to when I chose one I had made first that didn't overlap, which would be highlighted/previewed in the screenshot on the right. How would you go about making an AOI setup as in my picture above? Thank you! : )
Hi @user-f6ea66 ! I am not sure I fully understand the issue; here is an example of how to create these overlapping AOIs.
Very cool, thanks a lot for the video. So you just made the entire board an AOI first as normal by marking it, and then added new AOIs on top of that? When I tried to do that, it seemed to me the overlapping ones didn't appear. Probably a miscomprehension on my part. I'll try again in the next few days. Hmm, it makes me wonder: do the "face" AOIs in your example replace the board AOI in the face areas? So in your data output, would you get people either looking at the faces or at the rest of the board, and not both at once in the face areas?
No worries! When creating AOIs, the order of creation doesn't matter since they are treated as layers. If the AOIs overlap, the data output will include both.
If you're only interested in the foremost AOI, you can either exclude the background AOI's data where overlaps occur or draw the background AOI and subtract the area of the foreground AOI from that one using the eraser.
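If you go the post-processing route, a rough sketch of the first option might look like this. The file name `fixations_on_aoi.csv` and the column names are hypothetical, so check them against your actual export:

```python
import pandas as pd

# Hypothetical sketch: keep only the foreground AOI when a fixation hits both
# a foreground AOI (e.g. a face) and the background "board" AOI.
df = pd.read_csv("fixations_on_aoi.csv")  # assumed columns: 'aoi name', 'fixation id'

BACKGROUND = "board"
foreground_fixations = df.loc[df["aoi name"] != BACKGROUND, "fixation id"].unique()

# Drop background rows for fixations that also landed on a foreground AOI
overlap = (df["aoi name"] == BACKGROUND) & (df["fixation id"].isin(foreground_fixations))
df_foremost = df[~overlap]
```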
Ah, excellent. Was not aware that there is an eraser. Do you have a screenshot or so? (Or is it really easy to find and use? 🙂 )
@user-f6ea66 Super easy to find and use! Right there on the video, you’ll see:
Alternatively, you can click on the keyboard icon next to the profile image to see all the keyboard shortcuts:
- N - Create a new AOI
- V / ESC - Enter/exit selection mode
- P - Pen tool
- E - Eraser tool
😉
Excellent, thank you very much! Have a nice weekend : )
You too!
Hi @user-d407c1, the gaze pipeline failed in the last 7 recordings from yesterday! Could you or your colleagues please fix the problem as soon as possible? The student must start evaluating the gaze data for his Master's thesis. Do you need the IDs of these recordings? Please let me know what is needed.
Hi @user-3c26e4 ! I’m sorry to hear you’re experiencing issues. Could you please create a ticket in 🛟 troubleshooting and share the recording IDs? This will help us address the problem more efficiently.
Hey Pupil Labs, why aren't the videos available?
Hi @user-870276 ! I'm sorry to hear that you are experiencing issues with your Pupil Invisible. It seems like there might be an issue with your scene camera. Could you please open a ticket in 🛟 troubleshooting and provide the following details:
- The recording ID of any affected recordings.
- Whether the scene camera is visible on the Companion Device.
Once you share this information, we’ll follow up there with more specific troubleshooting steps. Thanks!
I'm using the "Pupil Labs Realtime Network API" via Python to start and stop recordings.
No worries—this shouldn’t impact the issue. However, feel free to include this detail along with the version of the Real-time API you’re using in the ticket.
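For reference, starting and stopping a recording with the simple interface of the `pupil-labs-realtime-api` Python package looks roughly like the sketch below; method names may differ slightly between versions, so double-check against the docs for your installed release.

```python
from pupil_labs.realtime_api.simple import discover_one_device

# Find the Companion device on the local network
device = discover_one_device()

recording_id = device.recording_start()   # begin a new recording
print(f"Started recording {recording_id}")

# ... run the experiment ...

device.recording_stop_and_save()          # stop and save the recording
device.close()
```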
Hello everyone,
I'm planning a football scanning study and am eager to use the Pupil Invisible glasses to track and analyze gaze behavior. Our research aims to analyze players' scanning abilities of both space and the ball, to understand how elite footballers perceive and process visual information during gameplay. Specifically, we will be conducting this study in an 11 v 11 scenario over 20 minutes, focusing on midfielders.
I would greatly appreciate any advice or insights you can offer on setting up the study, particularly on:
How to effectively integrate the Pupil Invisible glasses for optimal data collection.
Tips for analyzing the data to gain a comprehensive understanding of athletes' performance.
If you have experience with similar applications in high-performance sports, or if you know any resources or references, I'd love to hear about them. Additionally, if anyone is willing to chat and provide guidance on building the study parameters and understanding the data analysis, it would be incredibly helpful.
Thank you so much for your time and assistance!
Best regards, Jack Harwood
Hi @user-a578d9 👋!
Pupil Invisible has been sunsetted, and we believe Neon would be a much better fit for your needs. It offers several improvements, including:
- A wider field of view (FoV)
- An IMU sensor
- Longer battery life
- Enhanced accuracy
- A modular design
For additional details, you can also read about it here: https://pupil-labs.com/products/invisible.
We’d be happy to answer any questions you may have and can arrange an online demo to show you Neon’s capabilities. Since you also reached out via email, I’ll follow up there shortly.
@user-d407c1 I am a user from India, and we are having enrichment issues with the Reference Image Mapper (image attached). We have tried multiple object scans and user data, but we are still facing the same issue for all of them. Kindly help.
Hi @user-de8993 👋 ! This message typically appears when the reference image and scanning recording could not be matched to each other. Without seeing them, it's a bit hard to pinpoint the exact issue, but two common causes are a scene that lacks enough distinct visual features for the mapper to work with, and a scanning recording that doesn't cover the area well or doesn't match the reference image.
I'd recommend checking out our best practices and examples here for more potential causes and tips to prevent this. Finally, feel free to invite [email removed] to your project, and we'd be happy to review it and provide more tailored feedback on why it may have failed.
Hello, we use the Invisible in our research to determine a person's gaze direction in relation to their future walking trajectory. To obtain a 3D pose estimate of the head, we want to extract the roll, pitch, and yaw at every timestamp. The data provides only the roll and the pitch; is there an option to also obtain the yaw angles, or a recommended way to calculate them from the gyroscope and accelerometer? Thank you!
Hi @user-5270f6 👋 ! Pupil Invisible has a 6DoF IMU (no magnetometer), so it is not possible to compute yaw without drift. We have an implementation of Madgwick's algorithm for pitch and roll, but no yaw. You can read more in these previous messages: https://discord.com/channels/285728493612957698/1047111711230009405/1288781198172225536 https://discord.com/channels/285728493612957698/633564003846717444/844179767326277652
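If a relative, drifting yaw estimate is still useful for your analysis, one simple option is to integrate the gyroscope's vertical-axis rate. Below is a minimal sketch assuming an `imu.csv` export with `timestamp [ns]` and `gyro z [deg/s]` columns; the file name, column names, and which gyro axis is vertical are assumptions, so verify them against your data and the IMU coordinate system.

```python
import numpy as np
import pandas as pd

# Assumed export format; verify column names and which gyro axis corresponds
# to rotation about the vertical axis for your setup.
imu = pd.read_csv("imu.csv")

t = imu["timestamp [ns]"].to_numpy() * 1e-9       # seconds
yaw_rate = imu["gyro z [deg/s]"].to_numpy()       # deg/s about the (assumed) vertical axis

dt = np.diff(t, prepend=t[0])
yaw_relative = np.cumsum(yaw_rate * dt)           # degrees, relative to the first sample

# Note: without a magnetometer this estimate drifts over time and has no
# absolute heading reference, as mentioned above.
```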