πŸ•Ά invisible


user-ebd8d5 07 October, 2025, 10:31:33

Hi, sorry, I have a question about gaze mapping onto a video frame. As the sampling rates for gaze and the scene camera are different, is the gaze mapping just performed at the world_timestamps? Is there a way to map multiple gaze points onto a single image? For example, if I calculate the camera intrinsics and extrinsics for a single scene video frame using COLMAP, I assume I can only project the gaze at that timestamp (pixel values) onto the image?

user-f43a29 07 October, 2025, 10:39:51

Hi @user-ebd8d5 , while they run at different rates, they are synchronized and timestamped with the same high-precision clock. This means you have many gaze points per scene camera image, but you can directly compare the timestamps.

To get all gaze data for a given scene camera frame, simply take the timestamp of that frame and the next one and find all gaze data with timestamps between them. Then, yes, you could map as you wish:

  • Plot the mean
  • Plot the first gaze datum for that frame
  • Plot all gaze data for said frame.

Whatever works best for your situation.
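
For example, with a Pupil Cloud Timeseries Data export, the matching could look like this (a minimal sketch - the column names follow that export format, so adjust them if yours differ):

```python
import pandas as pd

# Timeseries Data export: one row per scene frame vs. many gaze rows
world = pd.read_csv("world_timestamps.csv")
gaze = pd.read_csv("gaze.csv")

frame_idx = 100  # any scene video frame of interest
t_start = world["timestamp [ns]"].iloc[frame_idx]
t_end = world["timestamp [ns]"].iloc[frame_idx + 1]

# All gaze samples whose timestamps fall within this frame's interval
in_frame = gaze[(gaze["timestamp [ns]"] >= t_start) & (gaze["timestamp [ns]"] < t_end)]

mean_xy = in_frame[["gaze x [px]", "gaze y [px]"]].mean()  # option 1: the mean
first_xy = in_frame.iloc[0]                                # option 2: the first datum
# option 3: plot every row of in_frame
```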

user-ebd8d5 07 October, 2025, 10:42:01

@user-f43a29 : great, thank you

user-ce4d96 07 October, 2025, 13:25:25

Hello, I cannot find anywhere to use the Monitor application for Pupil Invisible. Is this still supported?

user-f43a29 07 October, 2025, 15:07:27

Hi @user-ce4d96 , yes, it is. Have you already followed the steps here?

user-f1b9bf 28 October, 2025, 16:50:03

Hello! I'm trying to use the Reference Image Mapper enrichment in my Pupil Cloud workspace, but my scanning video is in the same recording as my eye tracking data, so the mapper throws an error. Is there any way I can fix this and split the existing files? I would really appreciate any help.

user-d407c1 29 October, 2025, 07:29:27

Hi @user-f1b9bf πŸ‘‹ ! I think what you mean is that your scanning sequence is included within the same recording as your main eyetracking data, and you’d like to use a subsection as the reference image mapper scan, is that correct?

Unfortunately, it’s not possible to split or select a subsection of an existing recording for use as a scanning recording. The Reference Image Mapper expects a dedicated recording that contains only the scanning portion.

You can upvote the related feature request here: https://discord.com/channels/285728493612957698/1212053314527830026 - that helps us prioritize it for future updates.

For now, the only workaround would be to re-record the scanning video separately if possible.

user-eb72b2 29 October, 2025, 07:50:46

This new Alpha Lab looks super promising! Is it also possible to use it with Pupil Invisible data? I have some Invisible recordings I would love to analyse in a bit more detail

user-480f4c 29 October, 2025, 10:28:27

Hi @user-eb72b2! Thanks for the feedback, great to hear that the tutorial could be helpful for your analysis needs. To answer your question: currently, the tutorial is built around Neon recordings - we basically get Neon recordings from Pupil Cloud in their raw Native Recording Format and then extract the relevant data (gaze data & timestamps, scene video timestamps, etc.) using the pl-neon-recording library (which only works with Neon recordings).

It is possible to adapt the Colab notebook yourself if you'd like to work with PI recordings. Here are some tips:

  • Instead of getting the recordings in Neon's format automatically using the Cloud API, you can simply change this part and link to a Google Drive folder where you'll have a PI recording (Timeseries Data & Scene video).

  • Once this has changed, then you'll need to adapt the load_recording function in the last cell to match PI's timeseries data format, e.g., change the name of the scene camera video which currently is Neon Scene Camera v1 ps1.mp4, adapt the timestamp definition s.scene_ts0/tsNto get the values from the world_timestamps.csv file, similarly update the gaze_datum definition etc based on the gaze.csv file of your PI recording.

Alternatively, I'm happy to do the necessary updates to adapt it for PI recordings; however, this might need a few days. Let me know πŸ™‚

user-eb72b2 29 October, 2025, 16:49:10

Hi @user-480f4c. Thanks for the quick reply. I think I should be able to figure it out with your instructions, but I will not be able to work on it in the upcoming days. So if you could make the updates to adapt it for Invisible recordings, that would be easiest πŸ™‚

user-480f4c 30 October, 2025, 11:39:05

sure, although I can't share a specific timeline for that. I'll let you know though πŸ™‚

user-748db5 30 October, 2025, 01:51:26

Hi, I am a new user of the Pupil Invisible system. When attempting to log in, the app remains stuck on the β€œLogging in” screen and does not proceed further.

I am using a OnePlus 8T device running Android 14, and I am wondering if this issue may be related to compatibility between the software and my device or operating system version.

Could you please advise whether this could be a compatibility issue and share any possible solutions or recommended steps to resolve the problem?

Thanks!

user-4c21e5 30 October, 2025, 02:50:58

Hi @user-748db5 πŸ‘‹. Looks like you're on an incompatible version of Android. The version you'll need is Android 11. We'll need to coordinate via email to get this resolved, so we'll follow up there later today!

user-748db5 30 October, 2025, 02:55:09

Thank you for your response. I have sent an email to info@pupil-labs.com at 9:20 today and I look forward to hearing from you.

user-4c21e5 30 October, 2025, 02:56:33

Yes, we've received your email and will follow up there πŸ™‚

user-f1b9bf 30 October, 2025, 11:24:21

Dear colleagues, I have a question about Pupil Cloud. We have Pupil Invisible, and the extended subscription for Pupil Cloud is expiring soon. We are gradually deleting data because we cannot renew the subscription at the moment. The question is: if we free up space after the extended subscription expires, will old recordings become available for processing again?

user-f43a29 30 October, 2025, 13:00:43

Hi @user-f1b9bf , first, if necessary, please make sure to make local backups of your data. To be clear, deleting recordings from Pupil Cloud permanently deletes them and re-uploading data at a later time is not a supported workflow.

To answer your question, the oldest recordings are actually the ones that will be available for processing after an Add-on expires. The newer recordings are the ones that will be inaccessible.

user-f1b9bf 30 October, 2025, 13:32:45

And if we don't delete all the recordings immediately and the newer ones become inaccessible (due to the lack of an extended subscription) β€” will they become accessible again after we delete some of the old recordings? Or do they become permanently inaccessible?

user-f43a29 30 October, 2025, 14:00:51

They become accessible again after you delete enough recordings to fit within the 2hr quota or after you enable an Unlimited Add-on.

We do not make your data permanently inaccessible.

user-f1b9bf 30 October, 2025, 14:05:45

Thank you for getting back to me so quickly!

user-f1b9bf 30 October, 2025, 19:36:12

Dear colleagues, when creating enrichments, we have noticed that the Pupil Invisible loses calibration - meaning the fixations are clearly located outside the image that the person is viewing. Is it possible to restore the calibration post-hoc - that is, the correspondence between the fixation points and the image?

user-d407c1 31 October, 2025, 07:44:50

Hi @user-f1b9bf ! Pupil Invisible is a calibration-free system, so there’s no traditional calibration to restore post-hoc. What you can do is apply an offset correction to slightly improve the overall alignment between gaze and scene content. This is especially relevant on Pupil Invisible, as the laterally mounted scene camera can induce parallax errors on closer objects.

If the fixations look correct on the scene camera view but appear misaligned on the reference image, it’s likely an issue with how the gaze is remapped rather than the fixation detection itself.

Could you confirm whether you’re using the Marker Mapper or the Reference Image Mapper?

  • If using Marker Mapper, check that all markers are consistently recognized.
  • If detection looks fine but some points drift (e.g. due to motion blur or brief surface loss), you can manually correct them using the mapping correction tool, you can also correct points through this method for reference image mapper.

user-f1b9bf 31 October, 2025, 10:54:23

@user-d407c1 Hi! We use the Reference Image Mapper, as we have a real-world scene, and will try to use the mapping correction tool. Thanks!

user-f1b9bf 31 October, 2025, 14:00:36

Dear colleagues, we're having some kind of issue with downloading data. When downloading a long recording from the Cloud, if it's larger than 1 GB, the downloaded archive won't open and displays an "Unexpected end of archive" error. We can't figure out whether there is a download file size limit in the Cloud.

user-f43a29 31 October, 2025, 14:08:46

Hi @user-f1b9bf , have you tried opening it with 7-zip?

user-f1b9bf 31 October, 2025, 14:43:02

We tried to open it with WinRAR and ZIP

user-f43a29 31 October, 2025, 14:44:53

When you say ZIP, do you mean 7-zip?

user-f1b9bf 31 October, 2025, 14:47:21

@user-f43a29 We tried to open it with WinRAR and ZIP - both give us an error. Total Commander opens the archives, but the unpacked folders don't contain all the files

user-4de205 31 October, 2025, 14:51:16

Hello! We are currently using Lab Streaming Layer (LSL) with four streams: two EEG devices and two Pupil Invisible glasses. Previously, before switching to LSL, we used Pupil Cloud to label timestamps and events, and applied the Face Mapper enrichment to obtain the gaze_on_face data. This is our first time testing the LSL workflow. After recording via LabRecorder (using the pupil-labs-lsl-relay), we now have an .xdf file. I would like to confirm: once we record via LSL and have only the .xdf file, it’s no longer possible to apply the Face Mapper enrichment on Pupil Cloud, correct? And the correct way to align the Cloud data with the LSL data is to run the post-hoc time alignment using the lsl_relay_time_alignment tool? (In other words, there’s currently no offline version of Face Mapper available, right?) Thank you very much!

user-f1b9bf 31 October, 2025, 15:11:55

@user-f43a29 Oh, we also tried 7-Zip, but it gave an error too and unpacked only some of the files, not all of them.

user-f43a29 31 October, 2025, 15:22:15

Thanks. I left a message in a thread here (https://discord.com/channels/285728493612957698/1433829706150707404/1433829988293021778) about an additional test to try. Thanks!

EDIT: @user-f1b9bf , I fixed the link in my message here.

user-f43a29 31 October, 2025, 15:49:36

@user-4de205 , it is still possible to achieve what you are asking for, but it requires a bit of extra work when using the lsl-relay.

Basically, the lsl-relay can be configured to send Events to Pupil Invisible at regular intervals. If you run a Pupil Invisible recording at the same time, then these Events will appear in the raw data and on Pupil Cloud, as well as in the final LSL XDF file.

Then, you can run the Face Mapper on Pupil Cloud, download the resulting Enriched Data, and use the common Events in the Pupil Invisible and LSL data to do a post-hoc sync.

A general overview of this process is covered here for gaze data.
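
As a sketch of that last sync step (the stream and column names here are assumptions - adjust them to your setup), you could fit a linear clock mapping between the shared Events in the XDF file and the events.csv from the Pupil Cloud export:

```python
import numpy as np
import pandas as pd
import pyxdf

# LSL side: find the event stream in the XDF file
streams, _ = pyxdf.load_xdf("recording.xdf")
ev = next(s for s in streams if s["info"]["type"][0] == "Event")
lsl_names = [sample[0] for sample in ev["time_series"]]
lsl_times = np.asarray(ev["time_stamps"], dtype=float)

# Pupil Cloud side (Timeseries Data export)
cloud = pd.read_csv("events.csv")  # has "name" and "timestamp [ns]" columns

# Match events by name (assumes each shared event name occurs once)
common = [n for n in lsl_names if n in set(cloud["name"])]
x = np.array([lsl_times[lsl_names.index(n)] for n in common])
y = cloud.set_index("name").loc[common, "timestamp [ns]"].to_numpy(dtype=float)

# Subtract the first pair before fitting to keep the fit numerically stable
slope, intercept = np.polyfit(x - x[0], y - y[0], 1)

def lsl_to_cloud_ns(t_lsl):
    # Map an LSL timestamp (seconds) to the Pupil Cloud clock (nanoseconds)
    return slope * (t_lsl - x[0]) + intercept + y[0]
```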

user-4de205 31 October, 2025, 15:57:23

Thank you!

End of October archive