Hello! I was wondering if you have information on the Companion software becoming more unstable over the last months? We currently use 22 Invisibles in a remote study, and the amount of unexpected software behavior on various kits is new to us. Very roughly estimated, one out of four sessions shows software issues. Most of them can be fixed by unplugging the cables and restarting the phone, but sometimes there is some data loss too. The sheer amount of restarting and unplugging/replugging is also new to us. We remember cables wearing out being the biggest issue and we will replace them now, but I estimate the wear to be no more than maybe 50 uses.
Hi @user-0b4995! The Invisible Companion app hasn't been updated in the last months, so it might be something else. Could it be that you accidentally updated the Android version of the Companion device to an unsupported version? We would love to figure this out. Could you create a ticket in the troubleshooting channel so that we can assist you faster?
Hi! I am working on a target detection project to control a robot arm with gaze using Pupil Labs Invisible. Now I want to zoom and crop an area of the scene camera video, around the gaze point. However, I do not know how to synchronise the gaze coordinates from the .csv file with the raw video, so that the right area is cropped at the right time in the video. How can I do that? Thank you in advance!
Hi @user-215e12! The docs for gaze.csv detail the meaning of each column of data. Each gaze datum is timestamped, and the "gaze x/y" columns are in pixel units, i.e., in scene (world) camera coordinates. To synchronize the gaze stream with the scene camera stream, you can cross-reference the timestamps of each gaze datum with the timestamps of each scene frame, as found in world_timestamps.csv. One way to be sure that you are doing things right is to compare a gaze overlay rendered by your code on some frames with the videos + gaze shown in Pupil Cloud.
Thank you so much for your answer! Now I understand what to do, but not how to do it. I know the data that the .csv files provide, but I do not know how to combine those timestamps with the videos.
Hi @user-215e12, each row in world_timestamps.csv corresponds to one sequential frame of the scene camera video. So, row 10 contains the timestamp of scene frame 10, for example. Then, knowing the timestamp of each frame, you can filter the rows of gaze.csv to grab all gaze data that occurred between frame 10 and frame 11, for example. The x/y pixel coordinates of each gaze datum can then be used directly as center positions for your cropping process.
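If it helps, here is a minimal sketch of that approach in Python, assuming a Timeseries export with world_timestamps.csv and gaze.csv next to your scene video, and pandas + OpenCV installed. The file name scene_video.mp4 and the crop size are placeholders, and the column labels are the ones I'd expect from the Cloud export, so please double-check them against your own files:

```python
import cv2
import pandas as pd

# Placeholder paths - adjust to your export
frames = pd.read_csv("world_timestamps.csv")  # one row per scene camera frame
gaze = pd.read_csv("gaze.csv")                # one row per gaze datum

# Match each scene frame with the closest gaze sample in time.
matched = pd.merge_asof(
    frames.sort_values("timestamp [ns]"),
    gaze.sort_values("timestamp [ns]"),
    on="timestamp [ns]",
    direction="nearest",
)

cap = cv2.VideoCapture("scene_video.mp4")  # placeholder file name
half = 200  # half-size of the crop window in pixels

for _, row in matched.iterrows():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    x, y = int(row["gaze x [px]"]), int(row["gaze y [px]"])
    # Clamp the crop window so it stays inside the frame
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    crop = frame[y0:y1, x0:x1]
    # ... zoom/process/save the crop here ...

cap.release()
```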
Hi all, I am trying to figure out how to create new events so I can mark the recordings for image mapping. I did it in the past but just cannot remember where the option is, and I cannot find any info in the documentation. How do I add new custom events here?
Hi @user-88386c, if you push the "+ Add" button next to Events in that screenshot, that will create a new event at the point of the recording that is currently being played/viewed. So it will create a new event at the time point marked by the blue vertical bar (just to the right in your screenshot) in the playback region. You will be prompted to give it a name, and you can then use your events when creating a new enrichment, such as the Reference Image Mapper, by clicking on "advanced settings" and choosing them as the temporal selection for processing.
Hi, I have a Python script in which I successfully get the gaze and video data from my Pupil Invisible glasses. Now I would also like to get the IMU sensor data, but I can't find an example for this. How can this be done?
Hi @user-4334c3! Do you mean in real-time?
okay good to know. I guess that solves my problem or at least ends my search for a solution. thanks.
Hi, I downloaded the Face Mapper export and, unlike other times, some information is missing in fixations_on_face.csv. There are important columns missing, such as duration [ms], and I don't understand why they are suddenly gone, because I don't do anything except create the enrichment and then download it.
Hi @user-5ab4f5 - There have been a few changes in the export recently that will soon be reflected in an update to our docs.
Regarding your specific question: the fixations_on_face.csv file in the Face Mapper export does not include the duration because it can be calculated from the start/end timestamp columns that are already available.
Also note that the duration [ms] information is still available in the fixations.csv file included in the Timeseries Data export.
You can therefore easily find the duration [ms] of each fixation by correlating the fixation IDs between the fixations_on_face.csv of the enrichment export and the fixations.csv of the raw data export.
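In case it helps, here is a minimal sketch of that join in pandas. It assumes both CSVs are in the working directory and share a "fixation id" column, with "duration [ms]" present in fixations.csv; please verify the exact column labels against your own export:

```python
import pandas as pd

faces = pd.read_csv("fixations_on_face.csv")  # Face Mapper enrichment export
fixations = pd.read_csv("fixations.csv")      # Timeseries Data export

# Bring the duration of each fixation over to the face-mapped rows
# by matching on the shared fixation ID column.
merged = faces.merge(
    fixations[["fixation id", "duration [ms]"]],
    on="fixation id",
    how="left",
)
merged.to_csv("fixations_on_face_with_duration.csv", index=False)
```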
Hi, I am wondering: is it OK to upgrade my OnePlus 8 from OS version 11.0.8.8in21aa to OxygenOS 11.0.11.11.in21aa? I have an issue with connectivity between the phone and the Pupil Labs Invisible glasses, and I am not sure the problem comes from the cable (I tried 2 different cables). The app cannot see the glasses (world camera and eye cameras).
Hi @user-b37306! Can you please open a ticket in our troubleshooting channel? Then we can assist you with debugging steps in a private chat.
I'm facing a challenge and seeking assistance. My work involves processing data from the Invisible, and I'm attempting to analyze the gaze data captured by this system. To achieve this, I used the 'gover.py' code, which was provided on this platform. This code provides the gaze location for each frame of the world camera. To validate the process, I downloaded the data in Pupil Player format, exported it, and compared the gaze coordinates with the coordinates generated by the 'gover.py' code. However, I've noticed significant discrepancies between the two sets of coordinates. I'm unsure where I might be making an error in this process. Any insights or guidance would be greatly appreciated.
Hi @user-e40297! Could you clarify what gover.py is? Perhaps you could point us to where the code is?
This is the code and how I've used it
@user-e40297 Are you attempting to compare the gaze points from Pupil Player to those from the Cloud TimeSeries simply based on the index? That won't work for several reasons, namely:
Potential Different Sampling Rates: Recordings made on the phone with Pupil Invisible have a gaze sampling rate of at most 120 Hz due to limitations in processing power. Still, the eye cameras record at 200 Hz, and these recordings are reprocessed in the Cloud to give you data at 200 Hz. When downloading the Pupil Player Format (a.k.a. Native Format), just ensure that there is a 200 Hz file in there. Otherwise, you will see really large discrepancies.
Gray frames in Cloud: Due to the higher sampling rate of the eye cameras compared to that of the scene camera, coupled with a possible slight delay in the scene camera's start-up, gaze data may be captured before the first scene camera frame is recorded. To ensure no valuable gaze information is lost, Cloud inserts placeholder gray frames to fill these gaps, which allows all gaze data to be retained for analysis. Each gaze point and eye camera frame is timestamped, so the synchronization of gaze points to scene camera frames remains precise and unaffected despite the added scene frames.
Pupil Player Conversion: Pupil Player removes those grey frames, changes the timestamps to Pupil Time and does some additional conversions. Have a look here.
Could you share your end goal with me? This will help me guide you more effectively in the right direction.
Is it to compare Raw Data with Cloud TimeSeries?
Just to be sure you didn't miss my message: I forgot to reply to your message and started a new thread (see 9-4 09:03). Kind regards,
I want to place a masking image on the location of the gaze. However, the coordinates differ from what I expected
This is my code so far
Hi Miguel, I have the following questions. I defined 5 AOIs and downloaded the enrichment data. How can I recognize which AOI a fixation in fixations.csv belongs to when it is "true"? For which AOI does "true" apply? How are the fixation coordinates normalized? From 0 to ... and where is the (0,0) origin?
When is the "false" case?
Hi @user-3c26e4! Currently, Cloud does not output which AOI is being gazed at/fixated on in the enrichment output, but it can be computed.
To make everything easier, I have a gist/code snippet here that does exactly that: it will append a column with the gazed/fixated AOI label.
How to run it? You will need to know how to run a Python script, but that's it. Get Python installed on your system, if you haven't.
I'd recommend using a virtual environment, but that's up to you.
cd PATHNAME (where PATHNAME is the path to your folder)
pip install -r requirements.txt
python3 fixated_aois.py \
--enrichment_url "HERE_COPY_THE_URL_ADDRESS_FROM_YOUR_ENRICHMENT" \
--download_path "THE_PATH_WHERE_TO_SAVE_THE_NEW_CSV_FILES" \
--API_KEY "YOUR_CLOUD_TOKEN"
You can get your token from Cloud on this page
Hi @user-3c26e4! The team made me aware that, since last week, there is also a file aoi_fixations.csv that holds all AOI unique IDs, the aoi_name, and the fixations on each AOI, with their index and duration.
So there is no need to run the gist above unless you want the gazed AOI.
Hi @user-d407c1, unfortunately there is no such file when I download the enrichment data. Please take a look at the screenshot. These are the files I get when I unzip, but there is no such file as aoi_fixations.csv.
Hi @user-3c26e4! You might need to rerun the enrichment or modify the AOIs to obtain the new files.
Hi! While I am using Pupil Labs Invisible, how can I get gaze coordinates in real time to use them in a Python script?
Hi, @user-215e12 , you can use the Realtime API for this
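If a concrete starting point helps, below is a minimal sketch using the pupil-labs-realtime-api Python package (pip install pupil-labs-realtime-api). It assumes the Companion phone and your computer are on the same network; please cross-check the method names against the Realtime API docs:

```python
from pupil_labs.realtime_api.simple import discover_one_device

# Find the Companion device on the local network.
device = discover_one_device()
print(f"Connected to {device}")

try:
    while True:
        # Blocks until the next gaze datum arrives;
        # x/y are pixel coordinates in the scene camera image.
        gaze = device.receive_gaze_datum()
        print(f"gaze: x={gaze.x:.1f}px, y={gaze.y:.1f}px, ts={gaze.timestamp_unix_seconds}")
except KeyboardInterrupt:
    pass
finally:
    device.close()
```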
Hey Pupil Labs, I usually use Pupil Core for my research, and I want to try out the Invisible this time. I wanted to know what a sample gaze_positions.csv file looks like for the Invisible compared to the Core. What data won't be there for the Invisible? These are the columns in gaze_positions.csv for the Core eye tracker: gaze_timestamp, world_index, confidence, norm_pos_x, norm_pos_y, base_data, gaze_point_3d_x, gaze_point_3d_y, gaze_point_3d_z, eye_center0_3d_x, eye_center0_3d_y, eye_center0_3d_z, gaze_normal0_x, gaze_normal0_y, gaze_normal0_z, eye_center1_3d_x, eye_center1_3d_y, eye_center1_3d_z, gaze_normal1_x, gaze_normal1_y, gaze_normal1_z
Hi @user-870276! May I ask if you already have access to an Invisible, or are you considering buying a new system?
Our lab already has the Pupil Invisible system
Thanks for confirming! In that case, head over to this section of the Invisible docs. It gives an overview of all the data made available.
Thank you so much!
Hi @user-964cca! No worries, we just want to keep things clear!
Re-uploading Recordings: Currently, Pupil Cloud does not support re-uploading recordings. However, if you're interested in seeing post-hoc gaze offset correction in Cloud, feel free to suggest it in the features-requests channel.
Generating Heatmaps in Pupil Player: Yes, you can generate heatmaps in Pupil Player using the surface tracker, similar to the marker mapper tool available in Cloud.
- Is there any way to use specific sections from a video in a heatmap or is there a way to cut the video?
Are you asking about Pupil Player or Pupil Cloud? In Cloud, you can select specific sections for heatmaps by using events and adjusting the settings in Enrichment creation under Advanced Settings > Temporal Selection.
In Player, you can trim the data using trim marks.
Ok thank you that was very helpful and also thanks for the fast answer
Hey. I could not find this information in the technical specs on the website. Could you tell me the world camera sensor size?
Hi! I have been using the Pupil glasses under a helmet with thick protective glass. It looks like the glasses work very well underneath. I also did some measurements without the helmet. Is there a way of testing whether my data collected in the helmet condition is valid or is missing some input?
Hi @user-df855f! Can you please confirm which model of eye tracker you used?
Hi @user-df855f , others have already successfully used Pupil Invisible under helmets. You can also check our publications list to search for other groups that have used Pupil Invisible with helmets.
Since the neural network in Pupil Invisible estimates gaze on a frame-by-frame basis, it shouldn't be much affected by the helmet pushing the glasses upwards, if that is the case.
Have you seen anything in your data that indicates something is invalid or missing?
Are there any updates on creating a notebook for offline blink detection for Invisible? I have some videos that got corrupted in Pupil Cloud but are fine in their local folder. Since I cannot upload them again, I would need to run the algorithms offline, but I'm not sure whether that is even possible for blinks.
Hey @user-41ad85. This command-line tool, pl-rec-export (https://github.com/pupil-labs/pl-rec-export), does exactly what you need.
Hi Pupil Team, we have a recording in Pupil Player and added a heatmap to it. Is it possible to export the heatmap with a reference picture, like in Pupil Cloud? Kind regards, Clemens
Hi @user-964cca! Unfortunately not. Player doesn't have this function. A quick workaround is to take a screenshot. Or, if you don't mind some coding, this tutorial shows how you can do it programmatically.
Hi, I have a question concerning Pupil Cloud. I am working on a general account from my work and have to delete my footage after I am done. However, I would like to transfer the recordings to my own Pupil Cloud. Is there a way I can transfer the recordings to my personal cloud and delete them from the original account?
Hi @user-df855f, this is Wee stepping in for @nmt.
Currently, it is not possible to transfer recordings across workspaces. There is already a ticket for this in our feature-request channel: https://discord.com/channels/285728493612957698/1212410344400486481 - feel free to upvote it!
Hi, is there any documentation for saccades.csv?
Hi @user-1af4b6 - the documentation for the saccades will become available in the next few days. In general, saccades are defined as the fragments between fixations as defined by our fixation detector. I encourage you to read more about the detection algorithm in the Pupil Labs fixation detector whitepaper and in this publication that discusses our approach.
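Until the docs are out, and purely to illustrate that definition, here is a rough sketch (not the official detector output) that treats the gaps between consecutive fixations in fixations.csv as saccade intervals; the column labels are assumed from the Timeseries export and should be checked against your files:

```python
import pandas as pd

fixations = pd.read_csv("fixations.csv").sort_values("start timestamp [ns]")

# A rough approximation: the interval between the end of one fixation
# and the start of the next is treated as a saccade candidate.
saccades = pd.DataFrame({
    "start timestamp [ns]": fixations["end timestamp [ns]"].iloc[:-1].values,
    "end timestamp [ns]": fixations["start timestamp [ns]"].iloc[1:].values,
})
saccades["duration [ms]"] = (
    saccades["end timestamp [ns]"] - saccades["start timestamp [ns]"]
) / 1e6
print(saccades.head())
```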
I recorded a video on the phone, and it shows the video has been uploaded to the Cloud, but I can't find it there. I can find the videos recorded before and after that one. It seems I can't re-upload it; how should I deal with that?
Hi @user-cc9ad6, could you open a ticket in the troubleshooting channel so we can continue the conversation there? Thanks!
Hello! I'm getting to know Invisible headset. Is there an option to download the recordings to a laptop using a cable? Also, are there desktop alternatives to Pupil Cloud Workplace? Thank you!
Hello @user-4439bd ! It's great to hear you're exploring Pupil Invisible. To export your recordings to a laptop, you can follow the detailed guide here. Once exported, you can use Pupil Player to view and analyse the recordings.
Please also have a look at this message regarding some considerations you may want to keep in mind: https://discord.com/channels/285728493612957698/633564003846717444/1201851567071035392
Hello everyone. I have started pre-testing my experiment, which uses Pupil Invisible (the protocol is: participants stare at a white wall while recalling personal memories). When looking at the data, even though the circle around the eye is dark blue most of the time, suggesting good detection, I get very, very small pupil diameter values (around 1 mm, which seems physiologically impossible). Do you know where the problem could come from, please? I am really puzzled by this. Thanks in advance for your help.
Hi @user-ee70bf! Thanks for reaching out. Could you please clarify which product you have? From your description, it sounds like you use Pupil Core, not Pupil Invisible, can you please confirm?
Hi @user-480f4c , so sorry, yes I meant Pupil Core !
Hi folks, can I create a scanning recording to use in Pupil Cloud with a Neon and still have the reference image mapper use data collected with an Invisible?
Hi @user-7aed79 - Running a Reference Image Mapper with a scanning recording made with Neon and the eye tracking footage made with Invisible is possible. If I understand it correctly, you have both eye trackers, right? May I ask why you'd want to use eye tracking data from Invisible instead of using Neon for all your recordings? Regarding your second question, the Reference Image Mapper enrichment only lives on Pupil Cloud
Also, is there a code-based method to create reference image mapper enrichments?