Hi, sorry, I have a question about gaze mapping onto a video frame. Since the sampling rates for gaze and scene_camera are different, is the gaze mapping just performed at the world_timestamps? Is there a way to map multiple gaze points onto a single image? For example, if I calculate the camera intrinsics and camera extrinsics for a single scene_video frame using COLMAP, I assume I can only project the gaze at that timestamp (pixel values) onto the image?
Hi @user-ebd8d5 , while they run at different rates, they are synchronized and timestamped with the same high-precision clock. This means you have many gaze points per scene camera image, but you can directly compare the timestamps.
To get all gaze data for a given scene camera frame, simply take the timestamp of that frame and the next one and find all gaze data with timestamps between them. Then, yes, you could map as you wish:
Whatever is best for your situation
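For illustration, here is a minimal sketch of that timestamp matching, assuming the column names from the Pupil Cloud Timeseries CSV export (world_timestamps.csv and gaze.csv); adjust the names if your export differs:

```python
# Minimal sketch, not official API: match gaze samples to scene frames by timestamp.
# Assumes Pupil Cloud Timeseries CSV column names ("timestamp [ns]",
# "gaze x [px]", "gaze y [px]"); rename if your files differ.
import pandas as pd

world = pd.read_csv("world_timestamps.csv")
gaze = pd.read_csv("gaze.csv")

frame_ts = world["timestamp [ns]"].to_numpy()
gaze_ts = gaze["timestamp [ns]"].to_numpy()

def gaze_for_frame(frame_idx):
    """All gaze rows whose timestamps fall between this scene frame and the next."""
    start = frame_ts[frame_idx]
    end = frame_ts[frame_idx + 1] if frame_idx + 1 < len(frame_ts) else gaze_ts.max() + 1
    mask = (gaze_ts >= start) & (gaze_ts < end)
    return gaze.loc[mask, ["gaze x [px]", "gaze y [px]"]]

# e.g. project all of these gaze points with the COLMAP pose of frame 100
print(gaze_for_frame(100))
```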
@user-f43a29 : great, thank you
Hello, I cannot find anywhere to use the Monitor application for Pupil Invisible. Is this still supported?
Hi @user-ce4d96 , yes, it is. Have you already followed the steps here?
Hello! I am trying to use the Reference Image Mapper enrichment in my Pupil workspace, but my scanning video is in the same recording as my eye tracking data. Is there any way I can fix this and split the existing files? Otherwise, my mapper throws an error. I would really appreciate any help.
Hi @user-f1b9bf! I think what you mean is that your scanning sequence is included within the same recording as your main eyetracking data, and you'd like to use a subsection as the reference image mapper scan, is that correct?
Unfortunately, it's not possible to split or select a subsection of an existing recording for use as a scanning recording. The Reference Image Mapper expects a dedicated recording that contains only the scanning portion.
You can upvote the related feature request here: https://discord.com/channels/285728493612957698/1212053314527830026 - that helps us prioritize it for future updates.
For now, the only workaround would be to re-record the scanning video separately if possible.
This new Alpha Lab looks super promising! Is it also possible to use it with Pupil Invisible data? I have some Invisible recordings I would love to analyse in a bit more detail
Hi @user-eb72b2! Thanks for the feedback, great to hear that the tutorial could be helpful for your analysis needs. To answer your question: Currently, the tutorial is built around Neon recordings - we basically get Neon recordings from Pupil Cloud in their raw Native Recording Format and then extract the relevant data (gaze data & timestamps, scene video timestamps, etc.) using the pl-neon-recording library (which only works for Neon recordings).
It is possible to adapt the Colab notebook yourself if you'd like to work with PI recordings. Here are some tips:
Instead of getting the recordings in Neon's format automatically using the Cloud API, you can simply change this part and link to a Google Drive folder where you'll have a PI recording (Timeseries Data & Scene video).
Once that's changed, you'll need to adapt the load_recording function in the last cell to match PI's timeseries data format: e.g., change the name of the scene camera video (which is currently Neon Scene Camera v1 ps1.mp4), adapt the timestamp definitions s.scene_ts0/tsN to get the values from the world_timestamps.csv file, and similarly update the gaze_datum definition etc. based on the gaze.csv file of your PI recording.
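To make that concrete, here's a rough, untested sketch of what the adapted loading could look like for a PI Timeseries Data + Scene video download; the Drive path, file names, and column names are assumptions based on the Cloud export, and the variable names are placeholders rather than the notebook's own:

```python
# Rough sketch only: load a Pupil Invisible recording from a Drive folder
# instead of using pl-neon-recording. Path and column names are assumptions.
from pathlib import Path
import pandas as pd

rec_dir = Path("/content/drive/MyDrive/my_pi_recording")  # hypothetical Drive folder

# Scene video: pick the mp4 in the folder instead of "Neon Scene Camera v1 ps1.mp4"
scene_video = next(rec_dir.glob("*.mp4"))

# Scene camera frame timestamps, replacing the notebook's s.scene_ts0/tsN values
world_ts = pd.read_csv(rec_dir / "world_timestamps.csv")["timestamp [ns]"].to_numpy()

# Gaze samples, replacing the gaze_datum definitions
gaze = pd.read_csv(rec_dir / "gaze.csv")
gaze_ts = gaze["timestamp [ns]"].to_numpy()
gaze_xy = gaze[["gaze x [px]", "gaze y [px]"]].to_numpy()
```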
Alternatively, I'm happy to do the necessary updates to adapt it for PI recordings; however, this might need a few days. Let me know!
Hi @user-480f4c. Thanks for the quick reply. I think I should be able to figure it out with your instructions, but I will not be able to work on it in the upcoming days. So if you could make the updates to adapt it for Invisible recordings, that would be easiest.
sure, although I can't share a specific timeline for that. I'll let you know though!
Hi! I am a new user of the Pupil Invisible system. When attempting to log in, the app remains stuck on the "Logging in" screen and does not proceed further.
I am using a OnePlus 8T device running Android 14, and I am wondering if this issue may be related to compatibility between the software and my device or operating system version.
Could you please advise whether this could be a compatibility issue and share any possible solutions or recommended steps to resolve the problem?
Thank you!
Hi @user-748db5. Looks like you're on an incompatible version of Android. The version you'll need is Android 11. We'll need to coordinate via email to get this resolved, so we'll follow up there later today!
Thank you for your response. I have sent an email to info@pupil-labs.com at 9:20 today and I look forward to hearing from you.
Yes, we've received your email and will follow up there.
Dear colleagues, I have a question about Pupil Cloud. We have Pupil Invisible, and the extended subscription for Pupil Cloud is expiring soon. We are gradually deleting data because we cannot renew the subscription at the moment. The question is: if we free up space after the extended subscription expires, will old recordings become available for processing again?
Hi @user-f1b9bf , first, if necessary, please make sure to make local backups of your data. To be clear, deleting recordings from Pupil Cloud permanently deletes them and re-uploading data at a later time is not a supported workflow.
To answer your question, the oldest recordings are actually the ones that will be available for processing after an Add-on expires. The newer recordings are the ones that will be inaccessible.
And if we don't delete all the recordings immediately and the newer ones become inaccessible (due to the lack of an extended subscription) β will they become accessible again after we delete some of the old recordings? Or do they become permanently inaccessible?
They become accessible again after you delete enough recordings to fit within the 2hr quota or after you enable an Unlimited Add-on.
We do not make your data permanently inaccessible.
Thank you for getting back to me so quickly!
Dear colleagues, when creating enrichments, we have noticed that the Pupil Invisible loses calibration - meaning the fixations are clearly located outside the image that the person is viewing. Is it possible to restore the calibration post-hoc - that is, the correspondence between the fixation points and the image?
Hi @user-f1b9bf! Pupil Invisible is a calibration-free system, so there's no traditional calibration to restore post-hoc. What you can do is apply an offset correction to slightly improve the overall alignment between gaze and scene content. This is especially relevant on Pupil Invisible, as the laterally placed scene camera can induce parallax errors on closer objects.
If the fixations look correct on the scene camera view but appear misaligned on the reference image, it's likely an issue with how the gaze is remapped rather than the fixation detection itself.
Could you confirm whether youβre using the Marker Mapper or the Reference Image Mapper?
@user-d407c1 Hi! We use the Reference Image Mapper, as we work with real-world scenes, and we will try the mapping correction tool. Thanks!
Dear colleagues, we're having some kind of issue with downloading data. When downloading a long recording from the cloud, if it's larger than 1 GB, the downloaded archive won't open and displays an "Unexpected end of archive" error. Is there a download file size limit in the Cloud?
Hi @user-f1b9bf , have you tried opening it with 7-zip?
We tried to open it with WinRAR and ZIP
When you say ZIP, do you mean 7-zip?
@user-f43a29 We tried to open it with WinRAR and ZIP - both give us an error. Total Commander opens the archives, but the unpacked folders don't contain all the files.
Hello! We are currently using Lab Streaming Layer (LSL) with four streams: two EEG devices and two Pupil Invisible glasses. Previously, before switching to LSL, we used Pupil Cloud to label timestamps and events, and applied the Face Mapper enrichment to obtain the gaze_on_face data. This is our first time testing the LSL workflow. After recording via LabRecorder (using the pupil-labs-lsl-relay), we now have an .xdf file. I would like to confirm: once we record via LSL and have only the .xdf file, it's no longer possible to apply the Face Mapper enrichment on Pupil Cloud, correct? And the correct way to align the Cloud data with the LSL data is to run the post-hoc time alignment using the lsl_relay_time_alignment tool? (In other words, there's currently no offline version of Face Mapper available, right?) Thank you very much!
@user-f43a29 Oh, we also tried 7-Zip, but it gave an error too and only unpacked some of the files, not all of them.
Thanks. I left a message in a thread here (https://discord.com/channels/285728493612957698/1433829706150707404/1433829988293021778) about an additional test to try. Thanks!
EDIT: @user-f1b9bf , I fixed the link in my message here.
@user-4de205 , it is still possible to achieve what you are asking for, but it requires a bit of extra work when using the lsl-relay.
Basically, the lsl-relay can be configured to send Events to Pupil Invisible at regular intervals. If you run a Pupil Invisible recording at the same time, then these Events will appear in the raw data and on Pupil Cloud, as well as in the final LSL XDF file.
Then, you can run the Face Mapper on Pupil Cloud, download the resulting Enriched Data, and use the common Events in the Pupil Invisible and LSL data to do a post-hoc sync.
A general overview of this process is covered here for gaze data.
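To illustrate the idea (not a drop-in script), here's a minimal sketch of the post-hoc sync step, assuming the relay-generated Events appear with matching, unique names in both the Cloud events.csv export and an XDF marker stream; the file names, stream type, and column names here are assumptions:

```python
# Minimal sketch: map LSL time onto Pupil Cloud time via shared Events.
# Assumes events.csv has "name" and "timestamp [ns]" columns, and that the
# XDF marker stream's event names match the Cloud Event names one-to-one.
import numpy as np
import pandas as pd
import pyxdf

cloud_events = pd.read_csv("events.csv")          # from the Pupil Cloud download
streams, _ = pyxdf.load_xdf("recording.xdf")      # from LabRecorder
markers = next(s for s in streams if s["info"]["type"][0] == "Markers")

lsl_names = [m[0] for m in markers["time_series"]]
lsl_times = np.asarray(markers["time_stamps"])    # LSL clock, in seconds

# Keep only Events that exist in both sources
common = cloud_events[cloud_events["name"].isin(lsl_names)]
cloud_s = common["timestamp [ns]"].to_numpy() / 1e9
lsl_s = np.array([lsl_times[lsl_names.index(n)] for n in common["name"]])

# Linear fit: the slope absorbs clock drift, the intercept absorbs the offset
slope, intercept = np.polyfit(lsl_s, cloud_s, 1)

def to_cloud_time(t_lsl):
    """Convert an LSL timestamp (s) to Pupil Cloud time (s since Unix epoch)."""
    return slope * t_lsl + intercept
```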
Thank you!