🕶 invisible


user-d23b52 01 October, 2024, 08:18:27

Amazing! Thank you so much @user-cdcab0 for your quick reply 😄 This is really helpful to get started. I'll start working and circle back with our progress!

user-cdcab0 01 October, 2024, 08:19:54

You're welcome, of course. Good luck 🙂

user-266087 02 October, 2024, 10:14:07

I have a question about the red circle that is visible in the recording in Pupil Cloud. The red circle in my recordings is moving very fast. Is this normal, or is something wrong with my recorded data?

user-480f4c 02 October, 2024, 10:37:31

@user-266087 may I ask: Is there any chance your participant is wearing third-party glasses under the PI glasses?

user-266087 02 October, 2024, 10:56:32

No, the participants were not wearing glasses under the PI glasses

nmt 02 October, 2024, 11:03:02

Hi @user-266087! When loading the recording into Pupil Player, we can see the eye videos, and it's quite evident the participant is wearing third-party glasses. You can see this for yourself by enabling the 'eye overlay' plugin in the Player Plugin menu. The third-party glasses lenses are obscuring the view of the eyes. Unfortunately, in our experience, an offset correction won't help in this case.

user-266087 02 October, 2024, 10:58:19

It is possible to correct the offset in the thumbnail in Pupil Cloud. Does this have anything to do with the shaking of the red circle?

user-df855f 04 October, 2024, 13:27:09

Hi! I have downloaded the Pupil Player app to analyse my recordings. However, when I open Pupil Player and drag and drop the folder that I downloaded from Pupil Cloud (in Pupil Player format), it says 'uploading format, this may take a while' and then nothing happens. Can you help me?

user-480f4c 04 October, 2024, 14:17:57

Hi @user-df855f - may I ask: which version of Pupil Player are you running?

user-bb95f9 07 October, 2024, 12:42:11

Hi, I have a question: will the infrared camera for the pupil monitoring work through polarised 3D lenses?

user-d407c1 07 October, 2024, 13:35:15

Hi @user-bb95f9 ! Could you clarify your goal a bit so we can provide more informed feedback? Are you planning to fit the lenses on the Pupil Invisible frame, or wear them underneath the glasses?

In general, fitting polarised lenses directly on the Pupil Invisible glasses’ frame wouldn’t have any effect, as the eye cameras are integrated into the frame itself. However, wearing polarised lenses underneath the Pupil Invisible might have some impact, though limited. The eye cameras in Pupil Invisible use infrared filters and operate on a different wavelength than 3D polarised lenses, which typically work with visible light (check yours).

For context, 3D polarised lenses filter light waves in different orientations to create a stereoscopic effect, allowing you to see distinct images in each eye for a 3D experience. Since polarised lenses only interact with visible light and not infrared, they won’t directly influence the eye-tracking system used by the Pupil Invisible glasses.

That said, the lenses themselves may introduce some additional distortions, which can degrade gaze estimation.

Let us know more details about your setup, and we’ll be happy to assist further!

user-df855f 08 October, 2024, 07:29:52

3.5.7

user-d407c1 08 October, 2024, 07:36:14

Hi @user-df855f 👋 ! Thanks for following up. May I ask how long the recordings are?

Additionally, could you kindly navigate to your pupil_player_settings folder on your computer and share the player.log file with us?

Lastly, you can try deleting the user_settings_* files so that Pupil Player starts with the default settings. This will help us rule out any configuration that might be preventing it from loading properly.
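For reference, a minimal Python sketch of that clean-up step; it assumes the default pupil_player_settings location in the home directory, so adjust the path if your installation differs (deleting the files manually works just as well):

```python
from pathlib import Path

# Assumed default settings location (e.g. ~/pupil_player_settings);
# adjust if Pupil Player stores its settings elsewhere on your system.
settings_dir = Path.home() / "pupil_player_settings"

for f in settings_dir.glob("user_settings_*"):
    print(f"Deleting {f}")
    f.unlink()  # Pupil Player will recreate default settings on next launch
```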

user-df855f 08 October, 2024, 07:43:56

They vary between 2 and 9 minutes. This is the player.log file:

player.log

user-d407c1 08 October, 2024, 07:48:18

Thanks for the info @user-df855f. I asked because longer recordings can take a bit of time to load, but your durations shouldn’t take too long.

Unfortunately, the log doesn’t contain any information—likely because Pupil Player was opened again, which clears the logs. Could you try loading it again, and if it fails, share the logs with us before reopening Pupil Player? That should help us figure out what’s going on.

user-df855f 08 October, 2024, 07:54:58

player.log

user-d407c1 08 October, 2024, 08:02:43

Thanks for the update! It seems like there’s a missing IMU file. If you’re okay with it, let’s move this conversation to a 🛟 troubleshooting ticket so we can keep this channel focused on general questions.

Please open a ticket, and if your recording is in the Cloud, share the recording ID there as well. We’ll take it from there!

user-d5a41b 10 October, 2024, 13:52:26

Hi! We used Pupil Neon and Pupil Invisible in our study. We did not have permission from our IRB to upload the data to Pupil Cloud, so recordings were saved on a computer and deleted from the app. We are now interested in using Pupil Cloud to analyze our data. Is there a way to get our recordings in the Cloud without the direct upload from the app?

user-480f4c 10 October, 2024, 14:50:26

Hi @user-d5a41b! Unfortunately, the recordings that have been deleted from the Companion App/Device cannot be uploaded to Cloud. Upload is only possible through the Companion App. In that case, you can use our desktop applications (Pupil Player for Invisible recordings and Neon Player for Neon recordings) to analyze/visualize your recordings.

user-d5a41b 10 October, 2024, 15:02:09

That is unfortunate. We are interested in the face mapping tool. This tool is not available on Pupil Player or Neon Player, right?

user-4439bd 11 October, 2024, 08:14:07

Hello! I need to find a way to anonymise my video data - e.g. to blur the faces of people captured in the scene video. Can you suggest a way to do it, please?

user-f43a29 11 October, 2024, 08:48:13

Hi @user-4439bd , are you asking if it is possible to do this with the recordings in Pupil Cloud? If so, then please send an email to info@pupil-labs.com to request Face Blurring (i.e., the Anonymization Add-On) for your account. See here for more info: https://discord.com/channels/285728493612957698/733230031228370956/1292784609938898954

user-4439bd 11 October, 2024, 10:19:48

yes, exactly that! Blurring of faces on Pupil Cloud. I will try the email. Thank you, Rob!

user-541efc 14 October, 2024, 03:41:30

error gazes

user-541efc 14 October, 2024, 03:41:39

Hello, I need help. I was just running a test with my Pupil Labs glasses, which was going well, but suddenly after 40 minutes the cell phone started to vibrate and the infrared light flashed, showing an error: 'Recording error'. I stopped it, and since I noticed the device was too hot, I waited 1 hour. I then tried another recording: it records, but the gaze is not shown; the circle that indicates the gaze no longer appears.

I uninstalled the app and reinstalled it, but it is still the same. It is not the battery, because it is at 90%, and I have 100 GB of space...

user-4439bd 21 October, 2024, 09:09:40

Hello! I'm struggling to open my recording in Pupil Player v3.0.7. I have downloaded the recording as Pupil Player format, unzipped the folder, and dragged it onto the Pupil Player window. It starts off accepting the file and telling me it will take a while, but after about 5 seconds it crashes and both Pupil Player windows close. Any idea why?

nmt 21 October, 2024, 09:17:23

Hi @user-4439bd! That's an older version of Pupil Player. The latest version is 3.5. Please download that from here and try again: https://github.com/pupil-labs/pupil/releases/tag/v3.5

user-5ab4f5 21 October, 2024, 11:06:53

Hi, one question. My project director and I made many recordings with our Pupil Invisible. We then downloaded the data for 36 subjects and merged dataframes containing data on each fixation (via fixations.csv) and whether the fixation is on a face or not (using the fixations-on-face csv). From that we defined our region of interest, meaning we used Python to define the outer boundary of where the face is supposed to be (this was plotted with Python on the heatmap we calculated before that). What we got out of it, however, is that the face (even considering slight movements etc.) looks absolutely nothing like a face or anything remotely like one. We get rectangular boundaries with jags on them, or, for some participants, weirdly long regions. We just wanted to double-check what the enrichment is doing and calculate additional measures, such as the mean distance from the region of interest (the face) between conditions, but it looks confusing and we don't understand it.

The basic setup of our experiment is the following: we have 30 trials per participant, covering 3 conditions (listening, speaking, and a break). We wanted to see the difference in gaze aversion between speaking and listening, which is why we used the face enrichment in the first place.

user-d407c1 21 October, 2024, 11:21:57

Hi @user-5ab4f5 👋 !

Firstly, the Face Mapper defines bounding boxes around the face. It does not segment faces pixel-wise but reports a region (bounding box) containing the face, along with certain landmarks as defined by the face_positions.csv. The fixations_on_face.csv file only indicates whether a fixation falls within that bounding box.

If you want additional metrics, such as the mean distance from gaze to the face, you could, for example, compute the centroid of the bounding box and then compute the Euclidean distance from gaze to that centroid position.
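For illustration, a minimal sketch of that centroid/distance computation, assuming a face_positions.csv row has already been matched to a gaze sample and using the Cloud export column names (adjust if yours differ):

```python
import numpy as np

def distance_to_face(face_row, gaze_row):
    """Euclidean distance (scene-camera pixels) from a gaze point
    to the centroid of a face bounding box."""
    cx = (face_row["p1 x [px]"] + face_row["p2 x [px]"]) / 2
    cy = (face_row["p1 y [px]"] + face_row["p2 y [px]"]) / 2
    return np.hypot(gaze_row["gaze x [px]"] - cx,
                    gaze_row["gaze y [px]"] - cy)
```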

Have you seen this Alpha Lab article on mapping gaze onto facial landmarks? (https://docs.pupil-labs.com/alpha-lab/gaze-on-face/) It seems relevant to what you’re aiming for.

Could you also share more about why you’re merging the fixations and fixations_on_face CSV files?

user-c0cd96 22 October, 2024, 02:34:47

I have a question. We conduct our research in a dark room, and we use this device to measure eye movements while the participant looks at something shown on a display. The brightness difference between the room and the display causes overexposure (blown-out highlights) in the scene camera. It would help if there were a way to solve this on the device side. Thank you in advance.

user-d407c1 22 October, 2024, 07:13:50

Hi @user-c0cd96 👋 ! Thank you for your question. In the future, please try to use English in this channel, so that we can assist you more effectively and everyone can follow along.

You mentioned that you are conducting research in a dark room and measuring eye movements while viewing something on a display. It seems like the bright contrast between the room and the display is causing overexposure (white-out) in the scene camera, is that correct?

Gaze data should not be affected by these contrast changes. As for the scene camera: while there are no exposure settings available on the Invisible scene camera, you could consider applying post-processing corrections to your recordings. If that doesn't resolve the issue, you might want to try placing a polarising filter in front of the scene camera to reduce the overexposure.
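As an illustration of what a simple post-processing correction could look like (only a sketch with assumed file names; a gamma curve tames bright regions but cannot recover pixels that are fully clipped):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("scene_video.mp4")  # hypothetical path to the exported scene video
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("scene_video_corrected.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

gamma = 1.8  # >1 darkens mid-to-bright regions; tune per recording
lut = np.array([(i / 255.0) ** gamma * 255 for i in range(256)]).astype("uint8")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(cv2.LUT(frame, lut))  # apply the gamma curve to every frame

cap.release()
out.release()
```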

user-5ab4f5 22 October, 2024, 06:48:47

@user-d407c1 Thank you for explaining. Do I get the bounding box out somewhere too? And how does that work if the participant moves their head or the person's face moves slightly? Will the face region be redefined and the coordinates for the previous face position discarded?

I did that (merging fixations and fixations on face) because I wanted to link the 'on face' value to the fixation itself (using the fixation id) and get the coordinates out of it. I want to see which fixations are on the face and which are not, to plot them all and additionally plot the boundary of the face alone within all fixations.

I just looked into the link you sent; it's very interesting, but we are not particularly interested in whether they look at the eyes or not. (I mean, we could maybe make use of it, but primarily we were interested in where the boundary of the face is and how it interacts with movement of the participant or of the person being looked at, to better understand how the coordinates work.) And to have an additional measure to validate what we are doing.

user-d407c1 22 October, 2024, 07:41:18

@user-5ab4f5 It seems there may be some confusion—either I didn’t explain myself clearly, or the documentation might not be detailed enough.

The face_positions.csv file contains one row per face detected for each scene camera frame with the bounding box coordinates. The first thing you would need to do is to check the timestamps (timestamp [ns]). If the timestamps are identical, these rows refer to the same frame, meaning multiple faces were detected in that frame.

Then, the following fields define the bounding box for that face in scene camera coordinates:

    p1 x [px]: x-coordinate of the starting point of the bounding box.
    p1 y [px]: y-coordinate of the starting point of the bounding box.
    p2 x [px]: x-coordinate of the ending point of the bounding box.
    p2 y [px]: y-coordinate of the ending point of the bounding box.

This means the data accounts for the wearer’s head movements or the subject’s movements, as the bounding boxes are reported in scene camera coordinates (egocentric), similar to gaze.

Since the scene camera operates at 30 fps while gaze data is captured at 200 Hz, you would need to either match fixations to the closest frame using timestamps or interpolate the face positions to align with the specific timestamp in between.

Once you have matched the scene camera frame (with its face bounding box) to the fixation, you can compute the distance to the center of that bounding box.

From your description, it sounds like you’re aiming to plot the data over a static frame or picture with a face—am I understanding that correctly? If so, you’ll want to normalise the coordinates relative to the frame you choose. For example, you could take the bounding box from the first frame, calculate the transformation for subsequent frames, and then apply that transformation to the fixation coordinates to map them onto the static frame.
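A rough sketch of the timestamp matching and static-frame mapping described above, assuming the Cloud column names for fixations.csv and face_positions.csv and a single face per frame (treat it as a starting point, not a finished script):

```python
import pandas as pd

fix = pd.read_csv("fixations.csv").sort_values("start timestamp [ns]")
faces = pd.read_csv("face_positions.csv").sort_values("timestamp [ns]")

# Match each fixation to the temporally closest face detection (scene camera runs at ~30 fps)
matched = pd.merge_asof(
    fix, faces,
    left_on="start timestamp [ns]", right_on="timestamp [ns]",
    direction="nearest",
)

# Flag fixations that fall inside the matched face bounding box
inside_x = matched["fixation x [px]"].between(matched["p1 x [px]"], matched["p2 x [px]"])
inside_y = matched["fixation y [px]"].between(matched["p1 y [px]"], matched["p2 y [px]"])
matched["on face"] = inside_x & inside_y

# Map fixations into the coordinate system of the first frame's bounding box,
# so they can be plotted over a single static picture of the face
ref = matched.iloc[0]
sx = (ref["p2 x [px]"] - ref["p1 x [px]"]) / (matched["p2 x [px]"] - matched["p1 x [px]"])
sy = (ref["p2 y [px]"] - ref["p1 y [px]"]) / (matched["p2 y [px]"] - matched["p1 y [px]"])
matched["x ref [px]"] = ref["p1 x [px]"] + (matched["fixation x [px]"] - matched["p1 x [px]"]) * sx
matched["y ref [px]"] = ref["p1 y [px]"] + (matched["fixation y [px]"] - matched["p1 y [px]"]) * sy
```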

user-5ab4f5 22 October, 2024, 06:50:18

The mean distance from the center / from the face is another measure to show the differences between fixations that occur on the face and ones that do not.

user-73ed95 22 October, 2024, 09:27:18

Hi! I have a question about cropping. I want to crop the first section of a recording (and run enrichments/plugins only on that) for separate analysis, but could not find a way to do that in Pupil Cloud. Now I read that I could use Pupil Player for that, but I cannot seem to find where or how to do it. Or can this only be done manually according to timestamps in the csvs? Thanks in advance 🙂

user-d407c1 22 October, 2024, 09:35:33

Hi @user-73ed95 👋 ! In Pupil Cloud, you’ll need to define events on your recording first. Then, when creating an enrichment, you can select the appropriate start and end events. You can find more details here

user-73ed95 22 October, 2024, 14:28:11

Thank you Miguel, greatly appreciated! I have another question right away. In the ‘face_detections.csv’ file created by Face Mapper, the coordinate values jump back and forth. For example, ‘nose x [px]’ from row 250 onwards has the following values: 250: 984.924, 251: 130.540, 252: 997.949, 253: 141.017, 254: 1011.425, i.e. it jumps back and forth between ~150 px and ~1000 px. Do you have any suggestions as to what could be the reason for this?

user-d407c1 22 October, 2024, 14:50:07

Without seeing the recording, it is a bit hard to judge, but it could be anything from a false detection to a head rotation. Could you share a screenshot?

user-73ed95 23 October, 2024, 09:08:00

Sadly, I cannot share a screenshot. However, I just noticed that this only happens when two faces are detected. I figure they are 'competing', and hence the switching back and forth happens?

user-d407c1 23 October, 2024, 09:18:24

can you check the timestamps of those rows?

The first thing you would need to do is to check the timestamps (timestamp [ns]). If the timestamps are identical, these rows refer to the same frame, meaning multiple faces were detected in that frame.

https://discord.com/channels/285728493612957698/633564003846717444/1298189426027008080
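To illustrate that check (assuming the 'timestamp [ns]' column name from the export), one can count how many faces were detected per scene frame:

```python
import pandas as pd

faces = pd.read_csv("face_detections.csv")

# Rows sharing a timestamp belong to the same scene frame, so the group
# size is the number of faces detected in that frame.
faces_per_frame = faces.groupby("timestamp [ns]").size()
print(faces_per_frame.value_counts())        # how many frames had 1, 2, ... faces
print(faces_per_frame[faces_per_frame > 1])  # frames with more than one detected face
```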

user-73ed95 23 October, 2024, 10:27:05

Yes, I did check the timestamps and they are indeed identical, thank you!

user-5ab4f5 25 October, 2024, 06:58:45

@user-d407c1 Thank you. I am not sure we completely understand all of it, but one more question: how do you calculate the onFace value (in the face enrichment, for fixations on face)? Do you keep matching against the last bounding box until a new one is calculated?

Another question: the fixation coordinates in fixations.csv are different from the gaze ones, right? At least I once asked this and used the fixation ones, since somebody told me, I think, that they account for head movement while the gaze coordinates don't, so I got confused when I saw this here.

And yes, the idea was to maybe use a picture with a face and plot the heatmap over it. Or just do a heatmap and plot the boundary of the face. Maybe exclude the face coordinates that are 3 standard deviations away (for when someone moves their head completely in another direction). But your method sounds good.

user-d407c1 28 October, 2024, 11:08:41

Hi @user-5ab4f5 !

How do you calculate the onFace value (in the face enrichment, for fixations on face)? Do you keep matching against the last bounding box until a new one is calculated?

We use the gaze and fixation timestamps to identify the nearest scene camera frame, and then check the gaze or fixation coordinates against the face positions detected in that frame.

Another question: the fixation coordinates in fixations.csv are different from the gaze ones, right?

Yes, they are different in that they average multiple gaze positions and use optic flow to account for head movement. Can you expand on what it is that confuses you?

If you struggle with this, feel free to ask, or, if you want us to develop the tool for you, feel free to explore our support packages.

user-5ab4f5 28 October, 2024, 11:42:41

@user-d407c1 I think i understood it now, thank you 🙂

user-a9f703 30 October, 2024, 12:42:42

Hi all, I transferred my exported data from the phone to the PC. I'm wondering how I can analyze my data now. I think I should use Pupil Player or Cloud, but how can I download the Pupil Labs software? Because the exported data currently has no circle cue showing where the participant looks. 🕶 invisible

user-d407c1 30 October, 2024, 13:40:45

Hi @user-a9f703 👋 ! There are a few ways to analyze your data from Pupil Invisible:

  • Pupil Cloud: If you’ve enabled Pupil Cloud uploads in the Companion App, your recordings are automatically backed up in Pupil Cloud. Just log in with the same account to start working with your data. For more details, please check out our documentation.

  • Offline Analysis: For offline analysis, you can use Pupil Player which you can download here, or alternatively, use our PL Rec Export tool to export data to CSV programmatically.

Let us know if you have any questions!

user-df855f 31 October, 2024, 10:14:09

Hi there! I am trying to figure out how to analyse my recordings from Pupil Invisible. I am not very good at programming. I was hoping there was a workable way for me to analyse the data using Pupil Player. It is very difficult to work with the exports from Pupil Player, as the pupil timestamps are different from the timestamps belonging to the frames. Maybe it is a very stupid question, but I am somewhat lost in the data.

user-d407c1 31 October, 2024, 10:27:53

Hi @user-df855f 👋! What type of analysis are you planning to conduct? This will help me guide you more effectively.

Regarding timestamps, Pupil Player does indeed convert them to the Pupil Core format (in seconds, with an arbitrary starting point). However, you can still access each frame's timestamp in seconds.

user-df855f 31 October, 2024, 10:30:46

Thanks for the quick reply. I have a number of participants who have each taken part in a number of scenarios. So, within one recording, I want to know, for specific time periods: the fixation duration, fixation rate, search rate, and fixation location (AOI).

user-d407c1 31 October, 2024, 10:38:40

How do you define your AOIs? Do you use April Tags? Is there any specific reason why you opt for using Pupil Player instead of Cloud, which has an Areas of Interest Tool directly embedded?

user-df855f 31 October, 2024, 10:44:03

Yes, I don't have access to Pupil Cloud anymore; my recordings are not on there anymore. My AOIs are defined as hands, legs, weapon, head, and chest.

user-df855f 31 October, 2024, 10:44:10

No, I don't use AprilTags.

user-d407c1 31 October, 2024, 10:50:30

I assume that you have them dynamically defined for each of your frames, is that correct? If you want to export the data in a time series as you would in Cloud, you can use https://github.com/pupil-labs/pl-rec-export. You will still get the world_timestamps.csv for your recordings, and from there you can match it with the fixations and gaze files using the timestamps, and then with your AOIs, to compute those metrics.

user-df855f 31 October, 2024, 10:55:42

I haven't dined my AOIs yet. At this point, I have just the recordings downloaded in pupil player format and the timeseries format

user-d407c1 31 October, 2024, 14:59:52

Thanks for following up, @user-df855f . Conceptually, the first step would be to locate those AOIs (Areas of Interest) within your videos.

In Cloud, we provide solutions to remap gaze onto different surfaces or images, allowing you to define AOIs in this 2D space (which is much easier), aggregate data from multiple subjects, and compute relevant metrics. However, Pupil Player only includes the surface tracker, which requires the use of AprilTags, so it won't suffice in your case.

You may want to explore object detection and segmentation models like Microsoft Florence, YOLO, or Segment Anything to detect objects in your videos and perform segmentation to refine the masks of these AOIs. Keep in mind that this process can be computationally intensive, so make sure your computer has the necessary capability. Once done, you would need to match the fixation coordinates to the scene video frames (which is straightforward, as we provide timestamps for all of our streams) to determine whether they fall within these AOIs.

If this approach seems too complex, there are software solutions like iMotions that can simplify the process.
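For a flavour of that pipeline, here is a sketch that uses the ultralytics YOLO package as one possible off-the-shelf detector together with Cloud/pl-rec-export style column names. The generic COCO model, file paths, and column names are assumptions; AOIs such as "weapon" or "chest" would need a custom-trained model:

```python
import cv2
import pandas as pd
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                    # generic COCO model, for illustration only
frames = pd.read_csv("world_timestamps.csv")  # one row per scene video frame
fix = pd.read_csv("fixations.csv")

# Match each fixation to the nearest scene frame index via timestamps
frames = frames.sort_values("timestamp [ns]")
frames["frame index"] = range(len(frames))
matched = pd.merge_asof(
    fix.sort_values("start timestamp [ns]"),
    frames,
    left_on="start timestamp [ns]", right_on="timestamp [ns]",
    direction="nearest",
)

cap = cv2.VideoCapture("scene_video.mp4")
for _, row in matched.iterrows():
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(row["frame index"]))
    ok, frame = cap.read()
    if not ok:
        continue
    detections = model(frame, verbose=False)[0]
    # Report which detected object (if any) the fixation lands on
    for box, cls in zip(detections.boxes.xyxy, detections.boxes.cls):
        x1, y1, x2, y2 = box.tolist()
        if x1 <= row["fixation x [px]"] <= x2 and y1 <= row["fixation y [px]"] <= y2:
            print(row["fixation id"], "->", model.names[int(cls)])
cap.release()
```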

user-df855f 31 October, 2024, 10:55:53

But I struggle with how to proceed from here.

user-df855f 31 October, 2024, 10:57:15

*defined

user-3b51dc 31 October, 2024, 14:39:13

Hello! In Pupil Cloud, is there a way to adjust the minimum duration, maximum duration, and maximum dispersion for fixations data?

user-d407c1 31 October, 2024, 16:20:59

Hi @user-3b51dc 👋! Currently, there is no way to modify or adjust the fixation detector parameters within Pupil Cloud.

Please note that Pupil Invisible's fixation detector is velocity-based, not dispersion-based, as detailed in our documentation and the accompanying white paper. For additional context, you can also refer to this previous message: Discord reference.

These parameters represent what we identify as the best average candidates. That said, the fixation detector is open-sourced, so you can run it locally and tweak the parameters as needed.
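For intuition only, a toy velocity-threshold detector on Cloud-style gaze data might look like the sketch below. This is not Pupil Labs' open-source implementation, just an illustration of the velocity-based idea, and the thresholds are arbitrary placeholders:

```python
import numpy as np
import pandas as pd

def toy_velocity_fixations(gaze: pd.DataFrame,
                           vel_thresh_px_s: float = 900.0,
                           min_dur_s: float = 0.06):
    """Label fixations as runs of samples whose gaze speed stays below a threshold."""
    t = gaze["timestamp [ns]"].to_numpy() * 1e-9
    x = gaze["gaze x [px]"].to_numpy()
    y = gaze["gaze y [px]"].to_numpy()

    # Sample-to-sample gaze speed in pixels per second
    dt = np.maximum(np.diff(t), 1e-9)  # guard against duplicate timestamps
    speed = np.hypot(np.diff(x), np.diff(y)) / dt
    slow = np.concatenate([[True], speed < vel_thresh_px_s])

    # Collect consecutive "slow" runs that last long enough
    fixations, start = [], None
    for i, s in enumerate(slow):
        if s and start is None:
            start = i
        elif not s and start is not None:
            if t[i - 1] - t[start] >= min_dur_s:
                fixations.append((t[start], t[i - 1]))
            start = None
    if start is not None and t[-1] - t[start] >= min_dur_s:
        fixations.append((t[start], t[-1]))
    return fixations
```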

user-cb4bf5 31 October, 2024, 14:50:13

Hi! I’m trying to find some information about the dataset that the machine learning algorithm for gaze detection was trained on. I’ve looked around but can’t find anything; can you assist me? 🙂

user-d407c1 31 October, 2024, 16:23:22

Hi @user-cb4bf5 👋 ! You can find details about the dataset in our whitepaper but kindly note that the training dataset is not publicly available.

user-df855f 31 October, 2024, 15:39:20

OK, thank you! If we go back one step: I want to dive into the fixations. I have the fixations csv with the fixation ids and the start and end timestamps. I want to relate this to the video recording or the Pupil Player recording so I can determine the fixations at specific time points. However, the UTC time points and the video time points confuse me. Do you see my confusion, and could you help me?

user-d407c1 31 October, 2024, 16:31:54

I'd recommend working directly with the output from Cloud or pl-rec-export, as these do not transform timestamps into the Pupil Core time format.

The timestamps from these outputs are in UTC Unix epoch, which means they represent the number of nanoseconds that have elapsed since January 1, 1970 (the Unix epoch). This format is commonly used for precise timekeeping and synchronization.

You can then merge the dataframes using these timestamps. For example, you can use the pandas function pd.merge_asof to align data based on the closest matching timestamps. This is particularly useful for joining gaze data to video frame timestamps, ensuring that each data point is matched to the frame it falls within. The pd.merge_asof function performs an asof merge, which joins the two dataframes based on the nearest previous timestamp, facilitating synchronization of data streams.
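A small example of that asof merge, assuming the Cloud/pl-rec-export file and column names (adjust if your export differs):

```python
import pandas as pd

frames = pd.read_csv("world_timestamps.csv").sort_values("timestamp [ns]")
gaze = pd.read_csv("gaze.csv").sort_values("timestamp [ns]")

frames["frame index"] = range(len(frames))

# Asof merge: each gaze sample gets the nearest previous video frame timestamp,
# i.e. the frame during which that gaze sample was captured.
gaze_with_frames = pd.merge_asof(
    gaze, frames,
    on="timestamp [ns]",
    direction="backward",
)

# Seconds into the video, handy for seeking to a given sample in a player
t0 = frames["timestamp [ns]"].iloc[0]
gaze_with_frames["video time [s]"] = (gaze_with_frames["timestamp [ns]"] - t0) * 1e-9
print(gaze_with_frames[["timestamp [ns]", "frame index", "video time [s]"]].head())
```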

End of October archive