πŸ•Ά invisible


user-a209d9 02 October, 2023, 17:01:19

Hi Pupil Labs team! Quick questions: are there any parameters in the Reference Image Mapper that can be changed to make the detection/mapping to the reference image more sensitive? This could be useful for some of my recordings. Also, just wondering: is the Reference Image Mapper code open-source? Would be cool to play around with it and fine-tune parameters for my use-case. Thanks and keep up the good work!

user-94f03a 03 October, 2023, 08:15:13

Hello, can we use a OnePlus 10 with a Pupil Invisible?

user-480f4c 03 October, 2023, 08:18:32

Hi @user-94f03a πŸ‘‹πŸ½ ! Pupil Invisible is compatible with OnePlus 6, OnePlus 8, and OnePlus 8T. You can find all the details regarding compatible devices and Android versions for Pupil Invisible in our documentation: https://docs.pupil-labs.com/invisible/glasses-and-companion/companion-device/#companion-device

user-94f03a 03 October, 2023, 08:24:31

Thanks Nadia – I saw that Neon is compatible with the OnePlus 10 and was wondering if the documentation was up-to-date. Anyway, thanks!

user-2b79c7 04 October, 2023, 05:48:03

Hi Pupil Team (@user-480f4c and @user-d407c1 ), I have a question regarding AOI generation. I referred to the code (https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/#map-fixations-to-aois), but does it consider the following:

1) What if a participant is moving their head while looking at a poster, given that AOI generation is based on position/coordinates?

2) Does it account for the participant moving/walking slightly while analysing the photo/poster, etc.?

If yes, could you explain which part of the code takes care of that? If not, is there a way to incorporate this?

user-480f4c 04 October, 2023, 06:43:27

Hi @user-2b79c7 πŸ‘‹πŸ½ ! The AOI tutorial works with the data exported after applying the Reference Image Mapper enrichment on Pupil Cloud.

This tool essentially allows you to map gaze data from a 3D environment (e.g., your eye-tracking recording while looking at paintings in a gallery) onto a 2D image (e.g., a photo of the paintings). The mapping applies only when the 2D reference image is visible to the scene camera of your eye-tracking recording.

As you can see in the data exported from the Reference Image Mapper, there is a column "fixation detected in reference image" with True/False values that indicates whether the image was detected in the video. By looking at the fixation coordinates (fixation x [px] and fixation y [px]), you can get information on where your participant was looking at a given time point (you can find this info in the table of fixations on the AOI tutorial page: https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/#map-fixations-to-aois). Therefore, to answer your questions, you will obtain fixation information even if the participant is slightly moving their head, as long as the reference image is detected in the scene camera.
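For example, a minimal pandas sketch (assuming the export column names mentioned above) that keeps only the fixations mapped onto the reference image:

```python
import pandas as pd

# Load the fixations exported by the Reference Image Mapper enrichment
fixations = pd.read_csv("fixations.csv")

# Keep only fixations that were actually mapped onto the reference image
mapped = fixations[fixations["fixation detected in reference image"]]

# The mapped coordinates are in reference-image pixels
print(mapped[["fixation x [px]", "fixation y [px]"]].head())
```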

I encourage you to have a look at the Reference Image Mapper documentation to see more examples of how this enrichment works: https://docs.pupil-labs.com/enrichments/reference-image-mapper/

I hope this helps, but please let me know if you have more questions!

user-2b79c7 04 October, 2023, 10:08:04

Thanks @user-480f4c !! Your first line itself clarified my question. I have another small query: I suppose that in the case of a dynamic environment, such as videos, the existing AOI algorithm will not work, as the location of the AOI itself will change on the screen. However, is it possible to include AOI detection using image processing for small deviations (let's say the AOI's screen position does not move too much in the video)?

Thanks again for the clarification.

user-2d66f7 04 October, 2023, 08:37:54

Hi! I am searching for a good blink detection algorithm for my experiment. The participants, however, make quite a few vertical eye movements during the performed task. Therefore, I am a bit hesitant to use your blink detection. We also used Pupil Player to get the data from the Pupil Invisible, since we are not allowed to upload our data to the cloud. I looked around for some alternative options, but have not found a solution yet. Do you have any recommendations?

user-480f4c 04 October, 2023, 08:51:58

Hi @user-2d66f7 πŸ‘‹πŸ½ ! Since you cannot use Cloud, you could consider our command-line tool that allows you to easily export gaze data, blinks, fixations, and saved events as CSV files locally on your computer. You can find it here: https://github.com/pupil-labs/pl-rec-export/tree/main

May I ask why you are hesitant to use the blink detection algorithm? I'm not sure I fully understand how the vertical eye movements in your setup would affect blink detection.

user-2d66f7 04 October, 2023, 08:53:59

I read that vertical eye-movements may cause many false positive blinks (instead of saccades) with the current version of the blink detection algorithm

user-480f4c 04 October, 2023, 09:14:09

Thanks for the clarification @user-2d66f7. A higher number of false positives in blink detection can occur in highly dynamic settings with fast vertical eye movements (e.g., football). Could you elaborate a bit on your setup? What kind of task are your participants performing?

user-480f4c 04 October, 2023, 10:30:18

I'm not sure I fully understand your question. Let me explain the workflow of how you can use the AOI tutorial:

1) Apply the Reference Image Mapper enrichment on Pupil Cloud.

2) Run the AOI tutorial.
- This will first present you with the reference image in a pop-up window, where you will be able to manually define your AOIs.
- After the AOIs are defined on the image, you will be able to run the remaining code snippets and calculate metrics per AOI.

Therefore, the metrics for the AOI are calculated based on the reference image, rather than the video.
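To make this concrete, here is a minimal sketch of the mapping step (the actual tutorial lets you draw the AOIs interactively on the reference image; the rectangles and file names below are purely illustrative):

```python
import pandas as pd

# Hypothetical rectangular AOIs in reference-image pixels: name -> (x, y, width, height).
# In the tutorial these are drawn interactively rather than hard-coded.
aois = {
    "poster_left": (100, 200, 400, 600),
    "poster_right": (600, 200, 400, 600),
}

fixations = pd.read_csv("fixations.csv")
fixations = fixations[fixations["fixation detected in reference image"]]

def aoi_for(row):
    # Return the first AOI whose rectangle contains the fixation point, if any
    for name, (x, y, w, h) in aois.items():
        if x <= row["fixation x [px]"] <= x + w and y <= row["fixation y [px]"] <= y + h:
            return name
    return None

fixations["AOI"] = fixations.apply(aoi_for, axis=1)

# Example metric: total fixation duration per AOI
print(fixations.groupby("AOI")["duration [ms]"].sum())
```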

user-2b79c7 04 October, 2023, 11:29:06

@user-480f4c OK, I will explain my question again. Suppose I am driving a motorbike on a road, and I have placed a marker on the motorbike's body within the camera's video frame. However, as I am driving, my vehicle moves laterally slightly, so the position of the road boundaries also changes slightly. Or let's say a participant is watching a video on a screen in which the object of interest is moving slightly. Is there a way to accommodate that in the AOI algorithm based on image processing?

user-480f4c 04 October, 2023, 12:52:05

@user-2b79c7 thanks for clarifying. Some points below:

  • For optimal mapping output using the Reference Image Mapper, you would need a scanning recording that is similar to your eye-tracking recording (e.g., in terms of scene lighting, features etc.).
  • Additionally, the scene needs to have relatively static features in the environment. If there is a lot of movement or the objects change in appearance or shape (like in your motorbike example), the mapping can fail.
  • Specifically for the screen example you mentioned, we do offer a tutorial that allows you to use the Reference Image Mapper enrichment and then map and visualise gaze onto a screen with dynamic content, e.g. a video. You can find it here: https://docs.pupil-labs.com/alpha-lab/map-your-gaze-to-a-2d-screen/

user-2b79c7 05 October, 2023, 08:03:49

@user-480f4c Thanks for sharing the information. It was indeed helpful.

user-91d7b2 05 October, 2023, 15:08:32

Hey! I recorded some videos (not on wifi) but then was able to get on wifi - they should upload correct?

user-d407c1 05 October, 2023, 15:12:35

Hi @user-91d7b2 ! Yes! If you have internet access and Cloud upload was enabled, they will start uploading when connected. Is that not the case? If not, would you mind performing the following test, https://speedtest.cloud.pupil-labs.com/, to ensure you have enough upload speed to our servers? (Some company/institutional networks may have limitations on uploads.) Also, please have a look at the Recordings view of the app in case a recording is stuck.

user-91d7b2 05 October, 2023, 16:34:09

Upload speed seems fine!

nmt 05 October, 2023, 16:47:17

Hi @user-91d7b2!
1. Double-check that automatic cloud uploads are enabled in the app settings.
2. If they are, try logging out and back into the app. That usually triggers the upload.

user-91d7b2 05 October, 2023, 17:53:50

Thank you!

user-1af4b6 05 October, 2023, 19:00:08

Hi Pupil Team,

I recently followed the tutorial at https://docs.pupil-labs.com/alpha-lab/map-your-gaze-to-a-2d-screen/#running-the-code, and I encountered an issue during the process. After uploading the data and selecting the screen corners, I received an error message in the "dynamic_rim.py" file on this line:

for packet in out_video.encode(out_frame):

Upon inspecting the code, I found that the error occurs when the "out_video.width" parameter (line 418) is an odd number. This leads to a program crash.

To resolve this, I manually adjusted my reference picture dimensions from 1600x1204 to 1599x1204, which worked. However, I'm wondering if there's a better and automatic solution to this problem.
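Something like rounding the output dimensions down to the nearest even number before opening the encoder seems like it might work, e.g. with PyAV (the names below are just illustrative of the idea, not the actual script):

```python
import av

# Many encoders (e.g. H.264 with the yuv420p pixel format) reject odd frame
# dimensions, so force both width and height to be even before encoding.
width, height = 1601, 1204  # illustrative odd width

container = av.open("output.mp4", mode="w")
out_video = container.add_stream("h264", rate=30)
out_video.width = width - (width % 2)
out_video.height = height - (height % 2)

# Each frame would then be reformatted to the (even) stream size, e.g.:
# out_frame = out_frame.reformat(width=out_video.width, height=out_video.height)
# for packet in out_video.encode(out_frame):
#     container.mux(packet)
```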

Any guidance or suggestions you can provide would be greatly appreciated.

Thank you, Maayan Bahar

user-2d66f7 06 October, 2023, 07:40:45

Hi Nadia, I have a few different tasks, such as cycling in an urban environment and walking in a shopping street. But I think the task with the most vertical eye movements is one where my participants have to walk down a hallway and avoid obstacles on the ground while also detecting targets placed on the walls of the corridor. Here, the participants have to make a lot of eye movements, including vertical ones, to see both the ground objects and the targets, which are placed more at shoulder level.

user-480f4c 06 October, 2023, 08:53:14

Hi @user-2d66f7, thanks for sharing more info about your research. We'd recommend giving our blink detector a go, and we'd be happy to get some feedback on its performance in these tasks.

user-bda2e6 06 October, 2023, 19:56:59

Hello! I'm running the Reference Image Mapper in Pupil Cloud on a one-minute recording. It's been almost three hours and it's still running. Is this normal?

user-480f4c 09 October, 2023, 06:33:45

Hi @user-bda2e6 πŸ‘‹πŸ½ ! Regarding the issue with your enrichment on Pupil Cloud, how long is the recording (or part of it) to which you tried to apply the Reference Image Mapper? Could you also please try to hard-refresh the browser page where Pupil Cloud is open ("Ctrl" + "F5" (Windows) or "Command" + "Shift" + "R" (Mac))?

As for your second question, could you please clarify which data you are downloading from Pupil Cloud? The gaze.csv file should be available after downloading the Timeseries Data.

user-bda2e6 06 October, 2023, 19:57:37

Also, I have a Pupil Invisible and I'm trying to export my recordings. I don't have the gaze.csv in my exported files. Is this something I have to generate manually?

user-bda2e6 06 October, 2023, 19:58:32

Thank you!

user-e40297 09 October, 2023, 10:38:04

Hi, I'm not sure if this is the right place for my question... We are working with a Pupil Invisible. However, the Invisible seems to create multiple small videos covering different periods in time. Different segments appear to be indicated by an increasing index. I couldn't play them back in Pupil Player.

user-d407c1 09 October, 2023, 10:45:43

Hi @user-e40297 ! It is normal for the Invisible Companion to split the video into different parts; one of the reasons for this is file size limits on Android. Given that you plan on using Pupil Player, I assume you do not want to, or are not allowed to, use Pupil Cloud, right? Any specific reason?

Nevertheless, both Pupil Player and Cloud can handle multi-part videos. So, do you see any error while loading the folder into Pupil Player?

Also, and just to be 100% on the same page, are you exporting the recordings from the phone as explained here https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/transfer-recordings-via-usb.html ?

user-e40297 09 October, 2023, 11:03:34

I talked to my colleague... She followed the same steps. The only difference is that the file is stored in a different map.

user-e40297 09 October, 2023, 10:50:57

Our institute is still unsure about using the cloud based on privacy issues... I'll look at your suggestion and ask my colleague

user-d407c1 09 October, 2023, 11:11:12

Pupil Cloud is GDPR compliant and follows standard security measures; you can read more about it here: https://docs.google.com/document/d/18yaGOFfIbCeIj-3_GSin3GoXhYwwgORu9_7Z-grZ-2U/export?format=pdf

What do you mean by map? Is the structure of the folder different? Might it be that the folder copied onto the computer is not the exported one (Documents/Pupil Invisible Export/foldername) but rather the one at Documents/Pupil Invisible/foldername?

user-e40297 09 October, 2023, 11:46:49

The file is not displayed in /Pupil Invisible Export but in 'Documents'. This has been like this since we started using the Invisible.

Maybe unnecessary, but the file is already split up into multiple files on the Companion.

user-d407c1 09 October, 2023, 11:57:30

The original recordings made by the Invisible Companion App are located in Internal Storage > Documents > Pupil Invisible, but those are in a RAW/binary format. To make them compatible with Pupil Player, you have to:

Export:
- For single recordings, the export button is found by clicking on the 3 vertical dots to the right of the cloud symbol.
- For multiple recordings, click the download symbol at the bottom of the screen.

As outlined in the previous link I shared (https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/transfer-recordings-via-usb.html), this will generate a new folder in Internal Storage > Documents > Pupil Invisible Export, which is the one you have to copy to the computer.

Please ensure that you follow these steps and let us know if you can still not load it into Pupil Player. If not, please let us know what error message Player prompts you with or share the Pupil Player log file.

user-e40297 09 October, 2023, 12:20:06

Thank you! The eye tracker is currently somewhere else. I'll get back to you.

nmt 10 October, 2023, 03:47:45

Hi @user-2fc592. Responding to your message https://discord.com/channels/285728493612957698/285728493612957698/1161142714696478740 here.

Unfortunately, we don't have a method to correct gaze offsets in Pupil Cloud post-hoc. The current workflow requires these to be set in the Companion App before recording.

May I ask which analysis tools you are using in Cloud or iMotions software? We do have a method to correct gaze offsets for Invisible recordings post-hoc, but this requires our desktop software. Once done, there is no way to reimport them to Cloud or iMotions, which might somewhat limit your analysis.

user-5fc3c6 11 October, 2023, 16:05:55

Hello, is the script ready?

user-d407c1 11 October, 2023, 16:09:35

Hi @user-5fc3c6 yes! It is here https://github.com/pupil-labs/pl-rec-export

user-bda2e6 13 October, 2023, 00:20:38

Can I ask how long it's supposed to take for the Reference Image Mapper to run on a 20-second video?

user-bda2e6 13 October, 2023, 01:08:55

My reference mapper keeps failing and giving me errors. I thought I followed the instructions. Is it okay if I can talk to someone directly to ask some questions? Thank you!

user-c2d375 13 October, 2023, 10:09:33

Reference Image Mapper fail

user-c03258 17 October, 2023, 08:20:13

Hello, I have been trying to make adjustments to the raw data files and then re-calculate gaze_on_face; however, I could not align the face_detections.csv timestamps to the gaze.csv timestamps. Is there a script for that? Or how can we get info to correctly align the two?

user-8a677a 17 October, 2023, 08:31:31

thanks @user-c03258 I am having a similar issue, maybe @nmt or @user-d407c1 have ideas? Thanks

user-d407c1 17 October, 2023, 14:07:08

Hi @user-c03258 and @user-8a677a ! The face detections CSV file contains the timestamps of scene camera frames where a face is detected. The scene camera runs at a frame rate of 30 FPS, while gaze is sampled at 200 Hz, meaning there is no exact match. What you would need to do is, for each gaze row, find the closest scene frame using the timestamp column.

One way to achieve this is with the pandas function merge_asof, as is done in the densepose library https://github.com/pupil-labs/densepose-module/blob/main/src/pupil_labs/dense_pose/main.py#L157 or in our tutorial on syncing with external sensors https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/syncing-sensors/#match-video-and-heart-rate
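A minimal sketch of that matching step (assuming both CSVs use the Cloud export's "timestamp [ns]" column):

```python
import pandas as pd

gaze = pd.read_csv("gaze.csv")
faces = pd.read_csv("face_detections.csv")

# merge_asof requires both tables to be sorted on the key column
gaze = gaze.sort_values("timestamp [ns]")
faces = faces.sort_values("timestamp [ns]")

# For each gaze sample (200 Hz), attach the face detection from the
# nearest scene camera frame (30 FPS) by timestamp
merged = pd.merge_asof(gaze, faces, on="timestamp [ns]", direction="nearest")
```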

user-8a677a 17 October, 2023, 14:16:12

that's very helpful! thank you!!

user-c03258 17 October, 2023, 14:40:56

@user-d407c1 thank you!

user-2fc592 18 October, 2023, 19:58:11

Thank you for your reply, Neil. I have tried to use the Enrichment tools like the reference mapper and others, but none have proven successful.

nmt 19 October, 2023, 06:21:38

Unsuccessful because the gaze is offset, or for another reason, like the enrichment itself failed to compute?

user-563c41 19 October, 2023, 19:06:33

@nmt I used the real-time API to send events to my Invisible during a recording: https://docs.pupil-labs.com/invisible/real-time-api/track-your-experiment-progress-using-events/. But when I loaded the recording in Pupil Player, there is only an option to export annotations, not events. I think Pupil Cloud might help me extract the event information, but I can't upload the data to the cloud. Is there any way that I can extract the event information?

user-6c7210 20 October, 2023, 10:26:50

Hi there - I'm trying to export a video from my workspace so that I can edit it side-by-side with a static camera that is showing what the wearer is doing from a distance. When I do that, the scene information (gaze, fixation etc) isn't overlaid on the video. Is there a way for me to export in a way that will let me keep that information 'on' the video? Thanks in advance πŸ™‚

user-d407c1 20 October, 2023, 10:35:07

Hi @user-6c7210 ! You will need to generate a Video Renderer visualization. You can find this option when you create a project, under the Analysis tab > "+ New Visualisation".

user-6c7210 20 October, 2023, 10:57:19

Thank you so much! I've found it and it all makes perfect sense πŸ™‚

user-80e968 20 October, 2023, 15:04:20

Hi, some beginner questions... I'm trying to export video from the Invisible glasses. Is there no way to do it directly from the phone (with the gaze circle)? I've found Pupil Player, but my recordings are corrupted:
- Only 1 of 3 captured both eyes.
- The one with both eyes is 12 minutes long, but the world video is not complete (approx. 4:30 is OK); it plays in the Invisible Companion (the first part) and on the cloud, but the player crashes.

What can I do? I need this video! Thank you, Jan

user-480f4c 24 October, 2023, 11:19:02

Hi @user-80e968 πŸ‘‹πŸ½ ! I'm sorry to hear you're having issues with your Invisible recordings. Please reach out to info@pupil-labs.com and a member of the team will help you diagnose the issue as soon as possible.

user-00729e 23 October, 2023, 11:57:52

Hello, I was wondering if there is an example of how to use the Invisible with Unity for real-time input (mapping the gaze input to screen coordinates). I would be happy for any help πŸ™‚

user-5576d2 23 October, 2023, 20:50:12

Hey πŸ‘‹, does anyone know how to check if a surface is observed in Pupil Capture via Python? Any help would be greatly appreciated πŸ˜„

user-cdcab0 25 October, 2023, 05:18:22

Hi, @user-5576d2 - are you interested in this post-hoc (from a recording)? If so, you just need to use the Surface Tracker plugin, which outputs data files you'll be interested in (see: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker)
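If instead you need this in real time, a minimal sketch using Pupil Capture's network API (assuming Pupil Remote is enabled on its default port 50020) could look like this:

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the port of the data subscription socket
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to surface events published by the Surface Tracker plugin
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("surfaces")

# Each message arriving here means a surface is currently detected
while True:
    topic, payload = subscriber.recv_multipart()
    surface = msgpack.loads(payload)
    print(f"Surface '{surface['name']}' detected")
```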

user-7a517c 24 October, 2023, 15:11:53

Hello @user-480f4c ! I have a question regarding the synchronization of the gaze cameras, both with each other and with an outside event, namely the clap of a clapperboard. We managed to add an "Event" in the Project Editor in Pupil Cloud, which we can find in the events.csv file. However, we had trouble moving frame-by-frame in the video player to precisely locate the clap. The left/right arrows move by 5 seconds. We were advised to use the , and . keys to scroll left and right frame-by-frame, but that is not working. Another question concerns whether the speed with which the enrichments are processed depends on your server or on the server where our device is. Many thanks in advance 😊

user-480f4c 24 October, 2023, 16:05:12

Hi @user-7a517c πŸ‘‹πŸ½ ! Regarding your first question, please note that you can find the full list of shortcuts for navigating a recording frame-by-frame in the help menu of Pupil Cloud. You can find it in the top-right of the UI (the question-mark icon) > Keyboard Shortcuts. There you can see that you can jump in steps of 0.03 seconds using Shift + Left/Right Arrow Key. This lets you jump roughly frame by frame through the video, as the scene camera's framerate is 30 Hz.

As for your second question, the duration of your recordings and the current workload on our servers are two major variables. Longer recordings or high server usage can extend processing times. Please also have a look at this relevant message for more info on that: https://discord.com/channels/285728493612957698/633564003846717444/1154018234299850813

I hope this helps, but please do let us know if you experience any issues with the completion of your enrichments πŸ™‚

user-334d9a 24 October, 2023, 17:30:55

Hello, just getting started here with Pupil Invisible. 1. Is there a way of exporting a simple video file that displays the World Camera view along with Gaze and Fixation? We were getting ready to try screen recording the playback window of Pupil Cloud as a workaround, but it would be great if we had a simpler way to export this.

2. Is there a way to reduce the number of fixations that remain on the screen during playback? This would help us keep the screen less cluttered.
user-480f4c 24 October, 2023, 17:39:59

Hi @user-334d9a πŸ‘‹πŸ½ ! Regarding your questions:

1) Yes, you can create this video with the gaze and fixations overlaid on the world camera video. This feature is called Video Renderer. To use it, create a project with the recordings of interest, click on "Analysis" at the bottom left, then on "+ New visualisation" at the top, and select "Video Renderer". There you will be able to configure and download it.

2) Pupil Cloud doesn't allow selective display of fixations. However, you can disable the fixations overlay during playback (on the panel below the video playback, click on the eye symbol next to fixations - this allows you to enable/disable the fixation overlay).

user-334d9a 24 October, 2023, 17:42:07

Thanks!

user-334d9a 24 October, 2023, 19:01:14

In Pupil Invisible, is there a way to adjust the millisecond threshold for detecting a fixation? I'm trying to adjust the system so that fewer fixations appear on the Fixation Overlay, to keep the screen less cluttered while we examine moving text that is being read by the subject.

user-480f4c 26 October, 2023, 18:27:25

Hi @user-334d9a! Apologies for the delayed reply. Adjusting the parameters of the fixation detector is not possible. However, to create a less cluttered visualization of your video with only some fixations, you could download the Timeseries data + Scene video from Pupil Cloud, select a specific set of fixations from the fixations.csv file (e.g., all fixations longer than 200 ms), and re-render the scene video overlaying only the fixations you've selected.
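A rough sketch of that pipeline (assuming the standard Timeseries export column names and a scene video whose frames line up with world_timestamps.csv; the file names are illustrative, and this is not a ready-made tool):

```python
import cv2
import pandas as pd

# Keep only fixations of at least 200 ms
fixations = pd.read_csv("fixations.csv")
fixations = fixations[fixations["duration [ms]"] >= 200]

# One timestamp per scene video frame
world_ts = pd.read_csv("world_timestamps.csv")

video = cv2.VideoCapture("scene_video.mp4")  # hypothetical file name
fps = video.get(cv2.CAP_PROP_FPS)
size = (int(video.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)))
writer = cv2.VideoWriter("filtered_fixations.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

for ts in world_ts["timestamp [ns]"]:
    ok, frame = video.read()
    if not ok:
        break
    # Draw every selected fixation active at this frame's timestamp
    active = fixations[(fixations["start timestamp [ns]"] <= ts)
                       & (fixations["end timestamp [ns]"] >= ts)]
    for _, f in active.iterrows():
        center = (int(f["fixation x [px]"]), int(f["fixation y [px]"]))
        cv2.circle(frame, center, 25, (0, 0, 255), 3)
    writer.write(frame)

video.release()
writer.release()
```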

user-20a5eb 25 October, 2023, 10:55:31

Hi Pupil Team, we have a pretty urgent issue here, as we are conducting an experiment today. The Pupil Invisible doesn't connect to the Companion app (when it is plugged in, the scene and eye cameras do not register and stay greyed out). Everything was working smoothly yesterday, and nothing notable has changed about the setup since then.

We've tried the following troubleshooting options already:
- restarting the companion device
- using alternative USB-C cables
- testing the black/other cables on other devices (all work fine)
- testing the cables on the companion device (all work to charge the device)
- unplugging/replugging the cables between the glasses/device
- removing/reattaching the scene camera from the glasses

The specs are as follows:
- unit = Pupil Invisible
- companion = OnePlus 8T
- software = Android 11
- build number = Oxygen OS 11.0.6.9.KB05BA
- app = 1.4.30-prod

Any help/suggestions would be highly appreciated!

user-480f4c 25 October, 2023, 12:04:26

Hi @user-20a5eb πŸ‘‹πŸ½ ! Sorry to hear you're experiencing issues with your Pupil Invisible glasses. Let's test whether there is a hardware issue with the cameras. Please install the app "USB Camera" and see if the cameras provide stable streams with it or if the errors persist.

https://play.google.com/store/apps/details?id=com.shenyaocn.android.usbcamera

user-480f4c 25 October, 2023, 13:15:05

Debugging - cameras not working

user-781874 25 October, 2023, 15:44:46

Hi, I am currently using the Invisible device for research purposes at university, but the Neon device would suit my research better. Is it possible to swap? This would help my progress massively.

nmt 25 October, 2023, 16:11:21

Hi @user-781874 πŸ‘‹. Please reach out to sales@pupil-labs.com in this regard πŸ™‚

user-23899a 26 October, 2023, 10:59:08

I have a problem with downloading one recording

Chat image

nmt 26 October, 2023, 13:28:20

Hi @user-23899a. If the recording is not present on the Companion Phone, that indicates it was manually deleted. But that wouldn't have affected the recording in Cloud. Could you please DM me the recording ID and I'll ask the Cloud team to investigate

user-23899a 26 October, 2023, 10:59:28

Also, I can't find this file on the mobile.

user-334d9a 26 October, 2023, 18:32:28

Thank you!

user-334d9a 26 October, 2023, 18:52:05

Nadia, could you please give me a detailed description of how to do that?

user-480f4c 26 October, 2023, 19:00:21

What I've described in the previous message would be a very high-level overview of how you would be able to get a less cluttered visualization of fixations overlayed on your scene video. However, we do not have step-by-step instructions for doing that.

user-334d9a 26 October, 2023, 19:03:29

When I go to download raw data, I do see Scene Video on the list, but I don't see Timeseries data or a fixations.csv file. Am I maybe looking in the wrong place?

Chat image

user-480f4c 26 October, 2023, 19:09:12

You can download the raw data by right-clicking on the recording in the main view on Cloud and then selecting Download > Timeseries Data + Scene Video. In the folder, you should be able to find a fixations.csv file, but if you don't please let us know.

user-334d9a 26 October, 2023, 19:14:07

Thank you. That way of downloading worked. If I wanted any fixations below 200 milliseconds not to be displayed, would I basically delete the entire row from the spreadsheet for each fixation that I didn't want to display?

nmt 27 October, 2023, 20:46:43

Hi @user-334d9a! Unfortunately, the process is not as simple as deleting fixations from the Excel file. To provide some context, the Excel file is generated as an export and is not used for any subsequent visualisations. If you wanted to modify the visualisation in the way you described, you'd need to parse the modified fixation data and render out a custom video with a gaze + fixation overlay. It is technically possible, but we don't have anything set up to do it.

user-08c118 30 October, 2023, 08:39:35

Dear Pupil Labs Team. I finished the data recording in a field study in which we investigated eye-tracking parameters in the urban environment. Two questions came up in this process:

(1) I have a recording which was uploaded to the cloud. This video contains sensitive information, so I wanted to cut off the last 10 minutes of the recording. Is it possible to cut the video in the cloud? I don't want to lose the data by deleting the entire video...

(2) Do you have any background information regarding the sensitivity and accuracy of the Pupil Labs Invisible? This would be really helpful to argue for the removal of outliers in the recordings. In other words, how accurate is the device in detecting a fixation or an eye blink? Was this ever tested?

Best regards Anton

user-d407c1 30 October, 2023, 11:23:45

Hi @user-08c118 !

If you only want to share the recording outside Pupil Cloud, you can use events in the Video Renderer to crop sections. If you would like to completely remove that section, unfortunately there is currently no way to do this in Cloud. I have suggested it here: https://feedback.pupil-labs.com/pupil-cloud/p/remove-scene-camera-in-a-portion-of-a-recording. Feel free to upvote it!

In the meantime, you can download it and work with it in Pupil Player which allows you to trim recordings.

Regarding your second question, here is a white paper describing the accuracy and precision of Pupil Invisible: https://arxiv.org/pdf/2009.00508.pdf. You can also find our fixation detector analysis here: https://docs.google.com/document/d/1dTL1VS83F-W1AZfbG-EogYwq2PFk463HqwGgshK3yJE/export?format=pdf and the blink detector analysis here: https://docs.google.com/document/d/1JLBhC7fmBr6BR59IT3cWgYyqiaM8HLpFxv5KImrN-qE/export?format=pdf

user-08c118 31 October, 2023, 09:41:31

Thank you so much! The papers are super helpful πŸ™‚ I upvoted your suggestion

user-ffd278 31 October, 2023, 12:36:38

Hi, I would like to know if any of you has the documentation for Pupil Invisible. I need to understand the types of movement that can be registered.

user-480f4c 31 October, 2023, 12:37:55

Hi @user-ffd278 πŸ‘‹πŸ½ ! You can find the full documentation for Pupil Invisible here: https://docs.pupil-labs.com/invisible/

user-1423fd 31 October, 2023, 15:10:41

Hi, I am having some problems with my Invisible gaze tracker. The red circle was no longer showing the gaze while wearing the glasses, and it was stuck in the left corner when I attempted to reset the calibration. I do not even see a red circle (the gaze) on the live preview. I wonder what is going on? Any ideas? I had to cancel my experiment, and I also have some planned for tomorrow.

user-480f4c 31 October, 2023, 15:14:17

Hi @user-1423fd πŸ‘‹πŸ½ ! I'm sorry to hear you're experiencing issues with your Pupil Invisible. Could you please send us an email at [email removed]? A member of the team will help you diagnose the issue as soon as possible.

user-1423fd 31 October, 2023, 15:23:51

sent! thanks @user-480f4c

user-480f4c 31 October, 2023, 15:25:29

I'm following up right now with instructions for debugging/diagnosing πŸ™‚

End of October archive