🕶 invisible


user-5c56d0 01 December, 2024, 03:20:19

Thank you very much. I understand about the cable. I see that a USB 2.0 or higher cable can be used.

user-975d28 03 December, 2024, 10:42:54

Hi everyone,

I recently started a class project with my students, where we used the Invisible Glasses to record a one-timer in hockey (receiving a pass and immediately making a direct shot on goal). The students' task is to perform annotation coding on the video, focusing on gaze position (e.g., ball, stick, or goal). These are also my first steps in the Pupil Labs environment.

A frame-by-frame analysis would be ideal for this. Is it possible to implement this using the Pupil Labs Cloud or Player? I noticed the "Events" feature—would this be the right approach?

In previous projects, I used an older eye tracker with a custom Python script for frame-by-frame analysis and key bindings to predefined events. However, for this class project, I’d prefer to utilize the Cloud or Player for simplicity and accessibility.

Thank you for your advice!

Best regards, Peter

nmt 04 December, 2024, 10:27:44

Hi @user-975d28. Thanks for sharing the project details! There are several ways to approach this. Events are one option, but I suggest an alternative: setting up a manual mapper enrichment. This is a slightly different way to use the manual mapper, but it could be effective.

You would upload a static image, which could be of a real scene or one with labels corresponding to a ball, stick, or goal. Then, you can go through the eye-tracking footage and manually code each fixation, creating a sort of semantic mapping.

Next, generate AOIs around your labels on the image, and Pupil Cloud will compute metrics for you, such as dwell time on the stick. Staying within Pupil Cloud simplifies the process, allowing your students to join the relevant workspace and conduct the analysis themselves.

Here's the manual mapper documentation and instructions on generating AOIs and metrics. Give it a try and see what you think!

user-975d28 04 December, 2024, 12:58:58

Hi @nmt Thank you for the quick and excellent response! The approach you described was exactly what I was hoping for; it really clicked.

I created a reference image with squares representing each predefined category and manually clicked the square corresponding to the appropriate category for each actual fixation. Summarizing the dwell-time distribution afterwards was straightforward, using AOIs mapped to the squares.

This workflow is a fantastic way to introduce students to their first eye-tracking analysis.

Thanks again for the help!

nmt 05 December, 2024, 02:59:11

Really glad to hear it's going to work for you! Let me know if you have any other questions.

user-e74b04 06 December, 2024, 15:01:21

Hi, just to be sure: Gyro X, Y, Z are angular velocities, and yaw, pitch, roll are absolute angles?

user-58c1ce 07 December, 2024, 19:14:12

Hi, I would like to export the video with the fixation (the area the participants are looking at). Is there any way I can do this? Each time I download the video, the fixation area is missing, and there is no information included.

user-480f4c 09 December, 2024, 06:29:56

Hi @user-58c1ce! Are you using Pupil Cloud? If so, you can download your scene camera video with the gaze and fixation overlay using the Video Renderer. You can find more information here: https://docs.pupil-labs.com/invisible/pupil-cloud/visualizations/video-renderer/

user-61e6c1 09 December, 2024, 10:57:44

Hello! Is it necessary to code the fixations in my data manually, or is there also a way to do it automatically within Pupil Cloud?

user-c2d375 09 December, 2024, 11:11:57

Hi @user-61e6c1 👋 Could you please elaborate on what you're aiming to achieve? If you're looking for a way to automatically map fixations on a surface, I recommend checking out our Marker Mapper and Reference Image Mapper enrichments, as they may be of interest to you.

user-61e6c1 09 December, 2024, 11:55:31

Hi Eleonora and thank you for your hints! I checked out the Reference Image Mapper but could not find out how to upload a reference image.

user-61e6c1 09 December, 2024, 12:03:45

Sorry, I mixed things up ... I had checked out the manual mapper. But now, really regarding the RIM - we would have to take a scanning video with Neon - it doesn't work with Invisible, right?

user-c2d375 09 December, 2024, 13:56:09

No problem at all! To clarify, the Reference Image Mapper enrichment works the same way whether you're using Neon or Invisible. For this, you’ll need an eye-tracking video of your participant visiting the art exhibition, an image of the area they’re observing (such as a painting or a statue), and a scanning recording of that specific area.

May I ask if you’ve already collected the recordings in the museum?

user-61e6c1 09 December, 2024, 12:24:21

We have videos from people visiting an art exhibition (so they are moving around), and we want to end up with an Excel file of the fixations.

user-d5a41b 09 December, 2024, 17:08:00

Hi! I can't use Pupil Cloud for my analysis, and I am trying to determine the corresponding video frame numbers for each gaze data point for further analysis. I have used pl-rec-export to export my data. My plan was to number the rows in world.csv consecutively to get frame numbers, and then assign each gaze data point to a frame number using the unix timestamps provided in gaze.csv and world.csv. However, that does not seem to give me accurate frames. When I use PyAV to extract specific frames based on these numbers, the frames I get do not match the frames in Pupil Player. Is there a better way to get frame numbers and match them with the gaze data?

user-d407c1 10 December, 2024, 07:38:22

Hi @user-d5a41b ! 👋 Here you can find a boilerplate example that demonstrates how to use Timeseries data for this, merging world timestamps and gaze to access gaze on the scene camera frame. Let me know if you need help!
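A minimal sketch of that merging approach (this is not the linked boilerplate itself; it assumes pandas and column names like "timestamp [ns]" from a typical Timeseries export, so adjust the file and column names to your data):

```python
import pandas as pd

# Assumed file and column names from a typical export; adjust as needed.
world = pd.read_csv("world.csv")  # one row per scene camera frame
gaze = pd.read_csv("gaze.csv")    # one row per gaze sample

# Frame index = row order of the sorted scene camera timestamps
world = world.sort_values("timestamp [ns]").reset_index(drop=True)
world["frame_idx"] = world.index
gaze = gaze.sort_values("timestamp [ns]")

# Assign each gaze sample to the scene frame shown at that moment
# (direction="backward" picks the last frame timestamp <= gaze timestamp;
# use "nearest" if you prefer the closest frame in either direction).
matched = pd.merge_asof(
    gaze,
    world[["timestamp [ns]", "frame_idx"]],
    on="timestamp [ns]",
    direction="backward",
)

matched.to_csv("gaze_with_frame_idx.csv", index=False)
```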

user-61e6c1 10 December, 2024, 06:53:15

Good morning! Yes, we already collected the data. So for the scanning video we could also use Invisible? I'm a bit confused as the docs for Invisible explain a scanning video taken with Neon's scene camera.

user-d407c1 10 December, 2024, 07:40:10

Hi @user-61e6c1 ! Apologies, this seems to be a typo; we will correct it. And yes, you only need the eye tracker that you have, whether that's Neon or Invisible.

user-cc9ad6 10 December, 2024, 12:47:59

I have a question about fixation dispersion. As far as I know, the points during a fixation should fall within a small radius, typically 1°-2° of visual angle. But what I get from the Invisible glasses is about 30° (distance 150 cm, radius 75 cm), and its duration is 34 s, caused by the giant dispersion. Though it's rare among all fixations, I still want to know the reason for this exception (I assume the glasses' built-in algorithm stops working because it gets too warm after 15 minutes of operation).

user-cc9ad6 10 December, 2024, 12:50:29

Another question is about fixation duration: as you can see, the minimum duration is about 17 ms, which is a long way from the 100 ms default threshold.

Chat image

user-d407c1 10 December, 2024, 13:22:31

This should not happen; the minimum fixation duration for Pupil Invisible is set to 60 ms. Could you open a ticket in 🛟 troubleshooting and follow up with a recording ID and, if possible, the fixations.csv file?

user-d407c1 10 December, 2024, 13:20:36

Hi @user-cc9ad6 ! Please allow me to clarify a few concepts here. What you see in the Cloud Player is a representation of the fixation: it is located at the (x, y) coordinates of the fixation, and the size of its circle is relative to the duration of the fixation, not to the dispersion.

Pupil Invisible, like Neon, employs a velocity-based algorithm to detect fixations rather than a dispersion-based one. You can find more info in these messages:

https://discord.com/channels/285728493612957698/1047111711230009405/1280892486360764416 https://discord.com/channels/285728493612957698/1047111711230009405/1281180585792114718

Or in the Pupil Labs fixation detector whitepaper or our publication in Behavior Research Methods.

user-cc9ad6 10 December, 2024, 14:02:04

Thanks for your fast reply. But I thought you used a dispersion-based algorithm, according to https://docs.pupil-labs.com/core/terminology/. And I can't find the information about Invisible that you gave me anywhere on the website; maybe you can update that to avoid confusion.

user-d407c1 10 December, 2024, 14:10:50

It seems you’re referring to the documentation for Pupil Core. For Pupil Invisible, you can find the relevant documentation here: Pupil Invisible Documentation – Data Streams: Fixations & Saccades.

user-cc9ad6 11 December, 2024, 13:04:00

Ok, now I found it, thanks a lot! :)

user-5c56d0 11 December, 2024, 00:49:59

Thank you for your help.

I would like to purchase a smartphone that can use Invisible. Does it work with any common smartphone? Are there any recommended specifications?

The reason is that when experimenting with Invisible, its operating time (i.e., the operating time of the smartphone) is short, and I would like to increase it by switching smartphones. I am currently running experiments with around 6 hours of continuous operation, but I need to pause mid-experiment to charge the Invisible's smartphone, which takes about 90 minutes.

nmt 11 December, 2024, 02:33:26

Hi @user-5c56d0! The supported phones and Android versions for Pupil Invisible are listed here: https://docs.pupil-labs.com/invisible/hardware/compatible-devices/#android-os. Which phone model are you currently using?

user-5c56d0 11 December, 2024, 07:24:25

Thank you for your reply. I have a OnePlus 6. If I buy a OnePlus 8 now, is there any chance it will not come with Android 11? Is there a limit to the continuous operating time of the Invisible main unit (the eye tracker, not the smartphone)? For example, would it break if operated continuously for more than 3 hours? Also, is there a limit to the continuous operation time for Neon? (I have experienced the Neon becoming very hot after about an hour and the smartphone reporting a recording error.)

nmt 11 December, 2024, 07:58:42

It's difficult for me to determine whether a OnePlus 8 purchased on the open market will come with Android 11. You would have to ask the vendor, but in most cases it would be possible to install that Android version with fastboot commands. You can also reach out to sales@pupil-labs.com to enquire about purchasing one from us. There's technically no limit on how long one can record other than battery and phone storage capacity. Neon, in particular, can record in excess of four hours. When you say error, do you mean that the phone itself reported an overheating error?

user-f6ea66 12 December, 2024, 21:32:01

Hi, quick question - When adding Areas of Interest to a Marker Mapper enrichment, can the AOIs overlap? Or to be more accurate - what I want is a "default" AOI for the entire surface, except for let's say two-three specific AOIs inside the main screen AOI.

user-f6ea66 12 December, 2024, 21:41:37

Here is a diagram of the setup I'm asking about. Hope it makes sense : )

user-f6ea66 12 December, 2024, 21:41:39

Chat image

user-d407c1 13 December, 2024, 07:15:56

@user-f6ea66 Yes, that's possible 😉 !

user-f6ea66 13 December, 2024, 13:10:55

Ok, nice! However, I tried before asking, and it seemed to me that whenever I made an AOI that overlapped another, it didn't 'register' - it seemed gone. When I tried to highlight it by selecting it in the menu on the left (when making AOIs), nothing was highlighted in the picture, as opposed to when I chose one I made first that didn't overlap, which would be highlighted/previewed in the screenshot on the right. How would you go about making an AOI setup as in my picture above? Thank you! : )

user-d407c1 13 December, 2024, 13:52:36

Hi @user-f6ea66 ! I am not sure I fully understand the issue; here is an example of how to create these overlapping AOIs.

user-f6ea66 13 December, 2024, 13:57:27

Very cool, thanks a lot for the video. So you just made the entire board an AOI first as normal by marking it, and then added new AOIs on top of that? When I tried to do that, it seemed to me the overlapping ones didn't appear. Probably a miscomprehension on my part; I'll try again in the next few days. Hmm, it makes me wonder - do the "face" AOIs in your example replace the board AOI in the face areas? So in your data output, you would get people either looking at the faces or at the rest of the board - and not both at once in the face areas?

user-d407c1 13 December, 2024, 14:13:23

No worries! When creating AOIs, the order of creation doesn't matter since they are treated as layers. If the AOIs overlap, the data output will include both.

If you're only interested in the foremost AOI, you can either exclude the background AOI's data where overlaps occur or draw the background AOI and subtract the area of the foreground AOI from that one using the eraser.
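If you do that exclusion offline in Python, a possible sketch is shown below (the file name "aoi_fixations.csv" and the column names "fixation id" and "aoi name" are assumptions for illustration; adapt them to the actual export of your enrichment):

```python
import pandas as pd

# Hypothetical export with one row per (fixation, AOI) hit;
# file and column names are assumptions - adjust to your data.
df = pd.read_csv("aoi_fixations.csv")

BACKGROUND = "board"                       # the full-surface AOI
FOREGROUND = {"face_left", "face_right"}   # AOIs drawn on top of it

# Fixations that landed on at least one foreground AOI
on_foreground = set(df.loc[df["aoi name"].isin(FOREGROUND), "fixation id"])

# Drop the background rows for those fixations, so each fixation counts
# only toward the foremost AOI it landed on.
filtered = df[~((df["aoi name"] == BACKGROUND) & (df["fixation id"].isin(on_foreground)))]
```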

user-f6ea66 13 December, 2024, 14:16:39

Ah, excellent. Was not aware that there is an eraser. Do you have a screenshot or so? (Or is it really easy to find and use? 🙂 )

user-d407c1 13 December, 2024, 14:36:08

@user-f6ea66 Super easy to find and use! Right there on the video, you’ll see:

  • A purple button with an arrow to select the area,
  • A pen icon with a + to add to the area, and
  • A pen icon with a - to subtract from it.

Alternatively, you can click on the keyboard icon next to the profile image to see all the keyboard shortcuts:

  • N - Create a new AOI
  • V / ESC - Enter/exit selection mode
  • P - Pen tool
  • E - Eraser tool

😉

user-f6ea66 13 December, 2024, 15:40:38

Excellent, thank you very much! Have a nice weekend : )

user-d407c1 13 December, 2024, 15:40:57

You too!

user-3c26e4 13 December, 2024, 15:54:10

Hi @user-d407c1, the gaze pipeline failed in the last 7 recordings from yesterday! Could you or your colleagues please fix the problem as soon as possible, because the student must start evaluating the gaze data for his Master's thesis. Do you need the IDs of these recordings? Please let me know what is needed.

user-d407c1 13 December, 2024, 16:01:15

Hi @user-3c26e4 ! I’m sorry to hear you’re experiencing issues. Could you please create a ticket in 🛟 troubleshooting and share the recording IDs? This will help us address the problem more efficiently.

user-870276 14 December, 2024, 14:42:02

Hey Pupil Labs, why aren't the videos available?

Chat image

user-d407c1 16 December, 2024, 08:36:25

Hi @user-870276 ! I'm sorry to hear that you are experiencing issues with your Pupil Invisible. It seems like there might be an issue with your scene camera. Could you please open a ticket in 🛟 troubleshooting and provide the following details:

  • The recording ID of any affected recordings.
  • Whether the scene camera is visible on the Companion Device.

Once you share this information, we’ll follow up there with more specific troubleshooting steps. Thanks!

user-870276 16 December, 2024, 03:41:24

i'm using "Pupil Labs Realtime Network API" via python for record initiation and record end.

user-d407c1 16 December, 2024, 08:38:11

No worries—this shouldn’t impact the issue. However, feel free to include this detail along with the version of the Real-time API you’re using in the ticket.

user-a578d9 16 December, 2024, 15:38:41

Hello everyone,

I'm planning a football scanning study and am eager to use the Pupil Invisible glasses to track and analyze gaze behavior. Our research aims to analyze players' scanning abilities of both space and the ball, to understand how elite footballers perceive and process visual information during gameplay. Specifically, we will be conducting this study in an 11 v 11 scenario over 20 minutes, focusing on midfielders.

I would greatly appreciate any advice or insights you can offer on setting up the study, particularly on:

How to effectively integrate the Pupil Invisible glasses for optimal data collection.

Tips for analyzing the data to gain a comprehensive understanding of athletes' performance.

If you have experience with similar applications in high-performance sports, or if you know any resources or references, I'd love to hear about them. Additionally, if anyone is willing to chat and provide guidance on building the study parameters and understanding the data analysis, it would be incredibly helpful.

Thank you so much for your time and assistance!

Best regards, Jack Harwood

user-d407c1 16 December, 2024, 15:52:55

Hi @user-a578d9 👋!

Pupil Invisible has been sunsetted, and we believe Neon would be a much better fit for your needs. It offers several improvements, including:

  • A wider field of view (FoV),
  • An IMU sensor,
  • Longer battery life,
  • Enhanced accuracy, and
  • A modular design.

For additional details, you can also read about it here: https://pupil-labs.com/products/invisible.

We’d be happy to answer any questions you may have and can arrange an online demo to show you Neon’s capabilities. Since you also reached out via email, I’ll follow up there shortly.

user-de8993 26 December, 2024, 04:41:36

@user-d407c1 I am a user from India and we are having enrichment issues with the Reference Image Mapper. The image is attached. We have tried multiple object scans and user data, but we are still facing the same issue for all of them. Kindly help.

Chat image

user-d407c1 26 December, 2024, 08:25:14

Hi @user-de8993 👋 ! This message typically appears when the reference image and scanning recording could not be found or don’t match. Without seeing them, it’s a bit hard to pinpoint the exact issue, but here are two common reasons why this may have happened:

  • Featureless Reference Image: If the selected reference image lacks identifiable features (e.g., a plain white wall or text patterns that are too small), the system won’t be able to pick them up and match them effectively.
  • Poor Scanning Recording: If the scanning recording is too static (e.g., capturing the region of interest from a single angle instead of moving around to get multiple perspectives), the mapping won’t work.
    On the other hand, if the camera movement is too fast, motion blur can make the recording unusable.

I'd recommend checking out our best practices and examples here for more potential causes and tips to prevent this. Finally, feel free to invite [email removed] to your project, and we'd be happy to review it and provide more tailored feedback on why it may have failed.

user-5270f6 31 December, 2024, 13:04:18

Hello, we use the Invisible in our research to determine a person's gaze direction in relation to their future walking trajectory. To get a 3D pose estimate of the head, we want to extract the roll, pitch, and yaw at every timestamp. The data provides only the roll and the pitch; is there an option to also obtain the yaw angles, or a recommended way to calculate them from the gyroscope and accelerometer? Thank you!

user-d407c1 01 January, 2025, 11:12:51

Hi @user-5270f6 👋 ! Pupil Invisible has a 6DoF IMU (no magnetometer), so it is not possible to compute yaw without drift. We have an implementation of Madgwick's algorithm for pitch and roll, but not yaw. You can read more in these previous messages: https://discord.com/channels/285728493612957698/1047111711230009405/1288781198172225536 https://discord.com/channels/285728493612957698/633564003846717444/844179767326277652
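For illustration only (this is not the Pupil Labs implementation), here is a sketch of running a Madgwick filter over 6DoF gyroscope + accelerometer samples using the third-party ahrs package; without a magnetometer, the resulting yaw is only relative to the starting heading and will drift over time:

```python
import numpy as np
from ahrs.filters import Madgwick
from scipy.spatial.transform import Rotation as R

# Placeholder IMU data: one row per sample, gyro in rad/s, accel in m/s^2.
# Replace with your exported IMU samples (convert units if necessary).
n = 1000
gyro = np.zeros((n, 3))
accel = np.tile([0.0, 0.0, 9.81], (n, 1))

madgwick = Madgwick(frequency=200.0)  # assumed IMU sampling rate
q = np.array([1.0, 0.0, 0.0, 0.0])    # initial quaternion [w, x, y, z]
eulers = []

for gyr, acc in zip(gyro, accel):
    q = madgwick.updateIMU(q, gyr=gyr, acc=acc)
    # ahrs quaternions are [w, x, y, z]; scipy expects [x, y, z, w]
    yaw, pitch, roll = R.from_quat([q[1], q[2], q[3], q[0]]).as_euler("ZYX")
    eulers.append((roll, pitch, yaw))

eulers = np.degrees(np.array(eulers))  # columns: roll, pitch, yaw (deg)
# Note: the yaw column is relative and accumulates drift (no magnetometer).
```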

End of December archive