πŸ•Ά invisible


user-0b4995 03 April, 2024, 08:35:40

Hello! I was wondering if you have information on the Companion software becoming more unstable over the last months? We currently use 22 Invisibles in a remote study, and the amount of unexpected software behavior across various kits is new to us. Very roughly estimated, one out of four sessions shows software issues. Most of them can be fixed by unplugging cables and restarting the phone, but sometimes there is some data loss too. The sheer amount of restarting and unplugging/replugging is also new to us. We remember cables wearing out being the biggest issue and we will replace them now, but I estimate the amount of wear at no more than maybe 50 uses.

user-d407c1 03 April, 2024, 09:07:07

Hi @user-0b4995 πŸ‘‹ ! The Invisible Companion app hasn't been updated in the last months. It might be something else. Could it be that you accidentally updated the Android version on the Companion device to a non-supported version? We would love to figure this out. Could you create a ticket in πŸ›Ÿ troubleshooting so that we can assist you faster?

user-215e12 03 April, 2024, 11:53:20

Hi! I am working on a target detection project to control a robot arm with gaze using Pupil Labs Invisible. Now I want to zoom and crop an area of the scene camera video, around the gaze point. However, I do not know how to synchronise the gaze coordinates from the .csv file with the raw video, so that the right area is cropped at the right time in the video. How can I do that? Thank you in advance!

user-f43a29 03 April, 2024, 12:37:13

Hi @user-215e12 πŸ‘‹ ! The docs for gaze.csv detail the meaning of each column of data. Each gaze datum is timestamped and the "gaze x/y" columns are in pixel units, so in scene (world) camera coordinates. To synchronize the gaze stream with the scene camera stream, you can cross-reference the timestamps of each gaze datum with the timestamps for each scene frame, as found in world_timestamps.csv. One way to be sure that you are doing things right is to compare a gaze overlay render of some frames from your code with the videos+gaze shown in Pupil Cloud.
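For illustration, here is a minimal sketch of that cross-referencing with pandas. It assumes the Timeseries CSV export and the documented column names ("timestamp [ns]", "gaze x [px]", "gaze y [px]"); adjust them to whatever your files actually contain.

import pandas as pd

# Load the Timeseries export (paths are placeholders)
gaze = pd.read_csv("gaze.csv")
frames = pd.read_csv("world_timestamps.csv")

# One row per scene frame; remember the frame index before merging
frames = frames.reset_index().rename(columns={"index": "frame_idx"})

# Assign each gaze sample to the most recent scene frame by timestamp
merged = pd.merge_asof(
    gaze.sort_values("timestamp [ns]"),
    frames[["frame_idx", "timestamp [ns]"]].sort_values("timestamp [ns]"),
    on="timestamp [ns]",
    direction="backward",
)
print(merged[["frame_idx", "gaze x [px]", "gaze y [px]"]].head())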

user-215e12 03 April, 2024, 16:21:04

Thank you so much for your answer! Now I understand what to do, but not how to do it. I know the data that the .csv files provide, but I do not know how to combine those timestamps with the videos.

user-f43a29 04 April, 2024, 12:43:33

Hi @user-215e12 , each row in world_timestamps.csv corresponds to each sequential frame in the scene camera video. So, row 10 contains the timestamp of scene frame 10, for example. Then, knowing the timestamps of each frame, you can filter the rows of the gaze.csv to grab all gaze data that occurred between frame 10 and frame 11 for example. The x/y pixel coordinates of each gaze datum can then be directly used as center positions for your cropping process.
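If it helps, here is a rough sketch of that loop, assuming OpenCV and pandas. The scene video file name and the crop size are hypothetical; it reads each scene frame, picks the gaze samples that fall between that frame's timestamp and the next one, and crops around the first of them.

import cv2
import pandas as pd

CROP = 200  # half-width of the crop window in pixels (arbitrary choice)

gaze = pd.read_csv("gaze.csv")
frames = pd.read_csv("world_timestamps.csv")
ts = frames["timestamp [ns]"].to_numpy()

video = cv2.VideoCapture("scene_video.mp4")  # hypothetical file name
for i in range(len(ts) - 1):
    ok, frame = video.read()
    if not ok:
        break
    # Gaze samples that occurred between this frame and the next
    in_frame = gaze[(gaze["timestamp [ns]"] >= ts[i]) & (gaze["timestamp [ns]"] < ts[i + 1])]
    if in_frame.empty:
        continue
    x = int(in_frame["gaze x [px]"].iloc[0])
    y = int(in_frame["gaze y [px]"].iloc[0])
    h, w = frame.shape[:2]
    crop = frame[max(0, y - CROP):min(h, y + CROP), max(0, x - CROP):min(w, x + CROP)]
    cv2.imwrite(f"crop_{i:05d}.png", crop)
video.release()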

user-88386c 05 April, 2024, 20:53:35

Hi all, I am trying to figure out how to create new events so that I can mark the recordings for image mapping. I did it in the past but just cannot remember where the option is, and I cannot find any info in the documentation on how to add new custom events here.

user-88386c 06 April, 2024, 00:54:48

Chat image

user-f43a29 08 April, 2024, 06:31:00

Hi @user-88386c , if you push the β€œ+ Add” button next to Events in that screenshot, then that will create a new event at the point of the recording that is currently being played/viewed. So it will create a new event at the time point marked by the blue vertical bar (just to the right in your screenshot) in the playback region. You will be prompted to give it a name, and you can then use your events when making a new enrichment, such as the Reference Image Mapper, by clicking on β€œadvanced settings” and choosing them as the temporal selection for processing.

user-4334c3 08 April, 2024, 06:34:50

Hi, I have a Python script in which I successfully get the gaze and video data from my Pupil Invisible glasses. Now I would also like to get the IMU sensor data, but I can't find an example for this. How can this be done?

user-d407c1 08 April, 2024, 06:42:13

Hi @user-4334c3 ! Do you mean in real-time?

user-4334c3 08 April, 2024, 07:19:27

Okay, good to know. I guess that solves my problem, or at least ends my search for a solution. Thanks.

user-5ab4f5 08 April, 2024, 11:12:15

Hi, I downloaded the Face Mapper file and, differently from other times, I'm lacking some information in the fixations_on_face.csv. There are columns missing that are very important, such as duration [ms]. I don't understand why they are suddenly missing, because I don't do anything except create the enrichment and then download it?

user-480f4c 08 April, 2024, 11:26:30

Hi @user-5ab4f5 - There have been a few changes in the export recently that will soon be reflected in an update to our docs.

Regarding your specific question, the fixations_on_face.csv file in the Face Mapper export does not include the duration information because it can be calculated from the start/end timestamp columns that are already available.

Also note that the duration [ms] information is still available in the fixations.csv file that is included in the Timeseries Data export.

You can therefore easily find the duration [ms] of each fixation by correlating the fixation IDs between the fixations_on_face.csv of the enrichment export and the fixations.csv of the raw data export.
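As a rough illustration with pandas (file and column names are assumed from the exports described above, so adjust them to your actual files):

import pandas as pd

faces = pd.read_csv("fixations_on_face.csv")   # Face Mapper enrichment export
fixations = pd.read_csv("fixations.csv")       # Timeseries Data export

# Bring the duration [ms] column over by matching fixation IDs
merged = faces.merge(fixations[["fixation id", "duration [ms]"]], on="fixation id", how="left")

# Or compute it yourself from the start/end timestamps (nanoseconds -> milliseconds)
merged["duration_calc [ms]"] = (merged["end timestamp [ns]"] - merged["start timestamp [ns]"]) / 1e6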

user-b37306 08 April, 2024, 11:33:53

Hi, I am wondering: is it OK to upgrade the OS of my OnePlus 8 phone from version 11.0.8.8in21aa to OxygenOS 11.0.11.11.in21aa? I have an issue with connectivity between the phone and the Pupil Labs Invisible glasses, and I am not sure that the problem comes from the cable (I tried to connect 2 different cables). The app cannot see the glasses (world camera and eyes).

user-480f4c 08 April, 2024, 11:38:54

Hi @user-b37306! Can you please open a ticket in our πŸ›Ÿ troubleshooting channel? Then, we can assist you with debugging steps in a private chat.

user-e40297 08 April, 2024, 19:01:46

I'm facing a challenge and seeking assistance. My work involves processing data from the Invisible. I'm attempting to analyze the gaze data captured by this system. To achieve this, I utilized the 'gover.py' code, which was provided on this platform. This code provides the gaze location for each frame of the world camera. In order to validate the process, I downloaded the data in Pupil Player format. Subsequently, I exported this data and compared the gaze coordinates with the coordinates generated by the 'gover.py' code. However, I've noticed significant discrepancies between the two sets of coordinates. I'm unsure where I might be making an error in this process. Any insights or guidance would be greatly appreciated.

user-d407c1 09 April, 2024, 06:27:58

Hi @user-e40297 ! Could you clarify what gover.py is? Perhaps you could point us to where the code is?

user-e40297 09 April, 2024, 06:30:16

This is the code how I've used it

PL_determine_xy_coords.py

user-d407c1 09 April, 2024, 06:58:19

@user-e40297 Are you attempting to compare the gaze points from Pupil Player to those from the Cloud TimeSeries simply based on the index? That won't work for several reasons, namely:

  • Potential Different Sampling Rates: Recordings from Pupil Invisible from the phone have a gaze sampling rate of max 120Hz due to limitations on the processing power. Still, the eye cameras record at 200Hz and these recordings are reprocessed in the Cloud to give you data at 200Hz. When downloading the Pupil Player Format (a.k.a Native Format), just ensure that there is a 200Hz file in there. Otherwise, you will have really large discrepancies.

  • Gray frames in Cloud:Due to the higher sampling rate of the eye cameras compared to that of the scene camera, coupled with a possible slight delay in the scene camera's initiation, there is a possibility of capturing gaze data before the first frame from the scene camera is recorded. To ensure no valuable gaze information is lost, Cloud incorporates placeholder grey frames to fill these gaps. This method allows us to retain all gaze data for comprehensive analysis. But each gaze point and eye camera frame is meticulously timestamped. This ensures that the synchronisation of gaze points to the scene camera frames remains precise and unaffected despite the addition of scene frames to bridge any timing discrepancies.

  • Pupil Player Conversion: Pupil Player removes those grey frames, changes the timestamps to Pupil Time and does some additional conversions. Have a look here.

Could you share your end goal with me? This will help me guide you more effectively in the right direction.

Is it to compare Raw Data with Cloud TimeSeries?
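If that comparison is indeed the goal, the matching has to happen on timestamps rather than on row indices. A minimal, hedged sketch of that idea (it assumes you have already converted the Player timestamps to the same clock as the Cloud data, here as a hypothetical unix_ts column, and that the coordinates have been brought into the same units):

import pandas as pd

cloud = pd.read_csv("gaze.csv")                          # Cloud Timeseries: timestamp [ns], gaze x/y [px]
player = pd.read_csv("exports/000/gaze_positions.csv")   # Player export: gaze_timestamp in Pupil Time

# Assumption: 'unix_ts' was added to the Player data by converting Pupil Time
# to Unix time with the recording's clock offset (see the conversion notes linked above).
cloud["ts_s"] = cloud["timestamp [ns]"] * 1e-9

matched = pd.merge_asof(
    cloud.sort_values("ts_s"),
    player.sort_values("unix_ts"),
    left_on="ts_s",
    right_on="unix_ts",
    direction="nearest",
    tolerance=0.005,  # only accept matches within 5 ms
)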

user-e40297 11 April, 2024, 09:20:11

To be sure you didn't miss my message: I forgot to reply to your message and started a new thread (see 9-4 09:03). Kind regards,

user-e40297 09 April, 2024, 07:03:35

I want to place a masking image on the location of the gaze. However, the coordinates differ from what I expected

user-e40297 09 April, 2024, 07:05:32

this is my code up to now

test_2.py

user-3c26e4 09 April, 2024, 10:32:58

Hi Miguel, I have the following questions. I defined 5 AOIs and downloaded the enrichment data. How can I recognize which AOI the fixations in the fixations.csv file belong to when the value is "true"? For which AOI does "true" stand? How are fixations normalized? From 0 to ... and where does the 0,0 origin lie?

Chat image

When is the "false" case?

Chat image

user-d407c1 09 April, 2024, 10:55:43

Hi @user-3c26e4 ! Currently, Cloud does not output which AOI is being gazed at/fixated on in the enrichment output, but it can be computed.

To make everything easier, I have here a gist/code snippet that does exactly that: it will append a column with the gazed/fixated AOI label.

How do you run it? You will need to know how to run a Python script, but that's it. Get Python installed on your system if you haven't already.

I'd recommend using a virtual environment, but that's up to you.
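For example, on macOS/Linux a virtual environment can be created and activated like this (on Windows, use venv\Scripts\activate instead):

python3 -m venv venv
source venv/bin/activate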

  1. Download the files and unzip them to wherever you want.
  2. Open the terminal and navigate to the folder where you have the files (generally this can be done using the command cd PATHNAME, where PATHNAME is the path to your folder).
  3. Install the requirements. Assuming you have Python and pip installed, you can do so by running pip install -r requirements.txt
  4. After successfully installing the dependencies, you can run the code like this:
python3 fixated_aois.py \
  --enrichment_url "HERE_COPY_THE_URL_ADDRESS_FROM_YOUR_ENRICHMENT" \
  --download_path "THE_PATH_WHERE_TO_SAVE_THE_NEW_CSV_FILES" \
  --API_KEY "YOUR_CLOUD_TOKEN"

You can get your token from Cloud on this page

user-d407c1 09 April, 2024, 11:27:25

Hi @user-3c26e4 ! The team made me aware that since last week there is also a file, aoi_fixations.csv, that holds all the AOI unique IDs, AOI names, and the fixations on each AOI, with their index and duration.

So, no need to run the gist above unless you also want the gazed AOI.
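As a quick illustration of working with that file, here is a small pandas sketch. The column names ("aoi name", "duration [ms]") are assumed, so check them against what you actually see in your download:

import pandas as pd

aoi_fix = pd.read_csv("aoi_fixations.csv")

# Fixation count plus total and mean fixation duration per AOI (assumed column names)
summary = aoi_fix.groupby("aoi name")["duration [ms]"].agg(["count", "sum", "mean"])
print(summary)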

user-3c26e4 10 April, 2024, 09:09:24

Hi @user-d407c1 unfortunately there is no such file when I download the enrichment data. Take a look at the screenshot please. These are the files I get when I unzip, but there is no such file as aoi_fixations.csv.

user-d407c1 10 April, 2024, 09:21:26

Hi @user-3c26e4 ! You might need to rerun the enrichment or modify the AOIs to obtain the new files.

user-3c26e4 10 April, 2024, 09:09:55

Chat image

user-215e12 11 April, 2024, 17:31:50

Hi! While I am using Pupil Labs Invisible, how can I get gaze coordinates in real time to use them in a Python script?

user-cdcab0 11 April, 2024, 17:59:24

Hi, @user-215e12 , you can use the Realtime API for this
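A minimal sketch with the pupil-labs-realtime-api package (installable via pip install pupil-labs-realtime-api); it assumes the Companion phone and your computer are on the same network:

from pupil_labs.realtime_api.simple import discover_one_device

# Find the Companion device on the local network (may take a few seconds)
device = discover_one_device()

try:
    while True:
        gaze = device.receive_gaze_datum()  # blocks until the next sample arrives
        print(gaze.x, gaze.y, gaze.timestamp_unix_seconds)
except KeyboardInterrupt:
    pass
finally:
    device.close()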

user-870276 14 April, 2024, 15:25:47

Hey Pupil Labs, I usually use Pupil Core for my research. I want to try out the Invisible this time, so I wanted to know what a sample gaze_positions.csv file looks like for the Invisible compared to the Core. What data won't be there for the Invisible? This is the list of columns in gaze_positions.csv for the Core eye tracker: gaze_timestamp, world_index, confidence, norm_pos_x, norm_pos_y, base_data, gaze_point_3d_x, gaze_point_3d_y, gaze_point_3d_z, eye_center0_3d_x, eye_center0_3d_y, eye_center0_3d_z, gaze_normal0_x, gaze_normal0_y, gaze_normal0_z, eye_center1_3d_x, eye_center1_3d_y, eye_center1_3d_z, gaze_normal1_x, gaze_normal1_y, gaze_normal1_z.

nmt 15 April, 2024, 02:31:49

Hi @user-870276! May I ask if you already have access to an Invisible, or are you considering buying a new system?

user-870276 15 April, 2024, 02:34:49

Our lab already has the Pupil Invisible system

nmt 15 April, 2024, 03:56:08

Thanks for confirming! In that case, head over to this section of the Invisible docs. It overviews all of the data made available πŸ™‚

user-870276 15 April, 2024, 10:11:27

Thank you so much!

user-d407c1 15 April, 2024, 10:43:49

Hi @user-964cca ! No worries, we just want to keep things clear! πŸ˜…

  • Re-uploading Recordings: Currently, Pupil Cloud does not support re-uploading recordings. However, if you're interested in seeing post-hoc gaze offset correction in Cloud, feel free to suggest it in the πŸ’‘ features-requests channel.

  • Generating Heatmaps in Pupil Player: Yes, you can generate heatmaps in Pupil Player using the surface tracker, similar to the marker mapper tool available in Cloud.

  • Is there any way to use specific sections from a video in a heatmap or is there a way to cut the video?

Are you asking about Pupil Player or Pupil Cloud? In Cloud, you can select specific sections for heatmaps by using events and adjusting the settings in Enrichment creation under Advanced Settings > Temporal Selection.

In Player, you can trim the data using trim marks.

user-964cca 15 April, 2024, 10:48:57

Ok, thank you, that was very helpful, and also thanks for the fast answer.

user-00729e 16 April, 2024, 10:22:32

Hey. I could not find this information in the technical specs on the website. Could you tell me the world camera sensor size?

user-df855f 19 April, 2024, 08:24:17

Hi! I have been using the Pupil under a helmet with thick protective glass. It looks like the glasses work very well underneath. I also did some measurements without the helmet. Is there a way to test whether my collected data is valid or is missing some input in the helmet condition?

nmt 23 April, 2024, 01:44:17

Hi @user-df855f! Can you please confirm which model of eye tracker you used?

user-f43a29 25 April, 2024, 09:23:47

Hi @user-df855f , others have already successfully used Pupil Invisible under helmets. You can also check our publications list to search for other groups that have used Pupil Invisible with helmets.

Since the neural network in Pupil Invisible estimates gaze on a frame-by-frame basis, it shouldn't be much affected by the helmet pushing the glasses upwards, if that is the case.

Have you seen anything in your data that indicates something is invalid or missing?

user-41ad85 19 April, 2024, 22:58:28

Are there any updates on creating a notebook for offline blink detection for Invisible? I have some videos that got corrupted in Pupil Cloud, but they are fine in their local folder. Since I cannot upload them again, I would need to run the algorithms offline, but I'm not sure if that is even possible for blinks.

nmt 23 April, 2024, 01:46:48

Hey @user-41ad85 πŸ‘‹. This command-line tool, pl-rec-export (https://github.com/pupil-labs/pl-rec-export), does exactly what you need.
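For reference, installation and usage are roughly as follows; please check the repo's README for the exact commands and options, since the path argument below is just a placeholder:

pip install pl-rec-export
pl-rec-export /path/to/your/recording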

user-964cca 22 April, 2024, 13:36:40

Hi Pupil Team, we have a recording in Pupil Player and added a heatmap to it. Is it possible to export the heatmap with a reference picture like in Pupil Cloud? Kind regards, Clemens

nmt 23 April, 2024, 01:49:57

Hi @user-964cca! Unfortunately not. Player doesn't have this function. A quick workaround is to take a screenshot. Or, if you don't mind some coding, this tutorial shows how you can do it programmatically.
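Roughly, the programmatic route looks like the sketch below, assuming the Surface Tracker export from Player, pandas, NumPy, and matplotlib. File names, the surface name, and the exact column names (x_norm, y_norm, on_surf) may differ from your export, so treat this as an illustration rather than the tutorial's exact code:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Gaze mapped onto a surface by Pupil Player's Surface Tracker (file name depends on your surface)
gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Surface1.csv")
gaze = gaze[gaze["on_surf"].astype(str) == "True"]

img = plt.imread("reference_image.png")  # your own picture of the surface

# 2D histogram of normalized surface coordinates (origin bottom-left in Player exports)
heat, _, _ = np.histogram2d(gaze["y_norm"], gaze["x_norm"], bins=50, range=[[0, 1], [0, 1]])

plt.imshow(img, extent=[0, 1, 0, 1])
plt.imshow(heat, extent=[0, 1, 0, 1], origin="lower", cmap="jet", alpha=0.5)
plt.axis("off")
plt.savefig("heatmap_on_reference.png", dpi=200)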

user-df855f 23 April, 2024, 12:12:57

Hi, I have a question concerning Pupil Cloud. I am working on a general account from my work, and I have to delete my footage after I am done. However, I would like to transfer the recordings to my own Pupil Cloud. Is there a way that I can transfer the recordings to my personal cloud and delete them from the original account?

user-07e923 25 April, 2024, 06:20:04

Hi @user-df855f, this is Wee stepping in for @nmt.

Currently, it is not possible to transfer recordings across workspaces. There is already a ticket for this in our feature-request channel: https://discord.com/channels/285728493612957698/1212410344400486481 - Feel free to upvote it!

user-1af4b6 24 April, 2024, 12:11:38

Hi, is there any documentation for saccades.csv?

user-480f4c 24 April, 2024, 15:21:40

Hi @user-1af4b6 - the documentation for the saccades will become available in the next few days. In general, saccades are defined as the segments between fixations, as identified by our fixation detector. I encourage you to read more about the detection algorithm in the Pupil Labs fixation detector whitepaper and in this publication that discusses our approach.

user-cc9ad6 24 April, 2024, 17:03:44

I recorded the video on the phone, and it shows the video has been uploaded to the Cloud, but I can't find it there. I can find the videos before and after that video. It seems I can't re-upload it; how do I deal with that?

Chat image

user-07e923 25 April, 2024, 06:05:46

Hi @user-cc9ad6, could you open a ticket in πŸ›Ÿ troubleshooting so we can continue the conversation there? Thanks!

user-4439bd 25 April, 2024, 09:52:58

Hello! I'm getting to know the Invisible headset. Is there an option to download the recordings to a laptop using a cable? Also, are there desktop alternatives to the Pupil Cloud workspace? Thank you!

user-d407c1 25 April, 2024, 10:07:04

Hello @user-4439bd ! It's great to hear you're exploring Pupil Invisible. To export your recordings to a laptop, you can follow the detailed guide here. Once exported, you can use Pupil Player to view and analyse the recordings.

Please also have a look at this message regarding some considerations you may want to keep in mind: https://discord.com/channels/285728493612957698/633564003846717444/1201851567071035392

user-ee70bf 29 April, 2024, 08:14:37

Hello everyone. I have started pre-testing my experiment, which uses Pupil Invisible (the protocol is: participants stare at a white wall while recalling personal memories). When looking at the data, even though the circle around the eye is dark blue the majority of the time, suggesting good detection, I have very, very small pupil diameter data (around 1 mm, which seems physiologically impossible). Do you know where the problem could come from, please? I am really puzzled by this. Thanks in advance for your help.

user-480f4c 29 April, 2024, 08:17:51

Hi @user-ee70bf! Thanks for reaching out. Could you please clarify which product you have? From your description, it sounds like you use Pupil Core, not Pupil Invisible, can you please confirm?

user-ee70bf 29 April, 2024, 08:23:31

Hi @user-480f4c , so sorry, yes I meant Pupil Core !

user-7aed79 30 April, 2024, 20:34:46

Hi folks, can I create a scanning recording to use in Pupil Cloud with a Neon and still have the reference image mapper use data collected with an Invisible?

user-480f4c 02 May, 2024, 09:54:44

Hi @user-7aed79 - Running a Reference Image Mapper with a scanning recording made with Neon and the eye tracking footage made with Invisible is possible. If I understand it correctly, you have both eye trackers, right? May I ask why you'd want to use eye tracking data from Invisible instead of using Neon for all your recordings? Regarding your second question, the Reference Image Mapper enrichment only lives on Pupil Cloud.

user-7aed79 30 April, 2024, 20:35:10

Also, is there a code-based method to create reference image mapper enrichments?

End of April archive