πŸ•Ά invisible


user-a97d77 06 March, 2025, 14:45:49

Hi, what is the resolution of the front camera of our device (like the camera facing the screen, not facing our eyes)?

user-d407c1 06 March, 2025, 14:49:24

Hi @user-a97d77! The scene camera resolution of Pupil Invisible is 1088 by 1080 px. You can find Pupil Invisible's specifications here.

user-ce8702 11 March, 2025, 15:52:08

Hello, I would like to know if there is any repository or resource that contains template scripts to analyse the downloadable outputs from the created Pupil Cloud enrichments, as I would like to use Python scripts for a more custom analysis. This would help greatly, as I do not have much experience in Python. Thanks!

nmt 12 March, 2025, 02:05:22

Hello, @user-ce8702! What exactly are you looking to do with the enrichment data?
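
For anyone landing here with the same question: the enrichment downloads are plain CSV files, so pandas is usually enough to get started even without much Python experience. A minimal sketch, assuming a fixations.csv export (the path and column names are illustrative, not guaranteed):

```python
# Minimal sketch: load a Pupil Cloud enrichment CSV export with pandas.
# The file path and column names below are assumptions; check the header
# row of your own download before relying on them.
import pandas as pd

fixations = pd.read_csv("enrichment_export/fixations.csv")  # hypothetical path

print(fixations.columns.tolist())       # inspect which columns are present
print(f"{len(fixations)} fixations loaded")

# Example aggregate, assuming a 'duration [ms]' column exists in the export
if "duration [ms]" in fixations.columns:
    print("mean fixation duration:", fixations["duration [ms]"].mean(), "ms")
```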

user-5346ee 11 March, 2025, 21:15:07

Hello, I use different workspaces on Pupil Cloud to separate my student research teams' data. One team forgot to switch their workspace on the phone, and their data were accidentally uploaded to another workspace. Is there any way to move the data to a different workspace? I couldn't find out how. Thanks!

nmt 12 March, 2025, 02:06:08

Hey @user-5346ee! Moving recordings between workspaces is not currently possible. But you can upvote this feature request here: https://discord.com/channels/285728493612957698/1212410344400486481/1212410344400486481

user-f03094 17 March, 2025, 09:36:49

Hello, I’m currently conducting an experiment using Pupil Invisible.

I downloaded the data from Pupil Cloud in the Pupil Player Format, but I noticed that the number of files varies between different datasets. Do you know why this happens?

As shown in the attached image, some datasets contain specific files, while others do not. However, I followed the same procedure when acquiring the data.

I’d appreciate your help in understanding this. Thank you!

Chat image Chat image

user-d407c1 17 March, 2025, 09:50:49

Hi @user-f03094 πŸ‘‹ ! May I ask whether the first one was opened with Pupil Player already?

user-f03094 17 March, 2025, 09:58:35

Hi @user-d407c1 Thank you for your response! I just opened the folder in Pupil Player, and everything is working fine.

user-f03094 17 March, 2025, 10:03:09

Both files are working in Pupil Player.

user-f03094 17 March, 2025, 10:27:39

@user-d407c1 The issue has been resolved. Thank you for your support!

user-f6ea66 19 March, 2025, 13:53:44

Hi, I have some questions about using the Pupil Cloud website to add an enrichment (Marker Mapper) to some videos (most of the videos are 30-60 minutes). Some might be simple/stupid, but I'd like to optimise the process, and it makes a very big difference to the time I'll be investing in starting and monitoring these processes...

  1. Can I close the browser windows or will it stop or slow down the running of the Enrichment processing?
  2. Can I start more than one Enrichment process at a time?
  3. If so, will they run simultaneously or in sequence, one after the other?
  4. Is it correct that this kind of video, let's say 45 minutes, with marker mapping and most of the screen selected as three big AOIs, will take several hours each to complete?

  5. Generally, are there smart ways to do it as fast as possible? : )

Thanks!

user-f6ea66 19 March, 2025, 13:55:48

P.S. So far I've completed about 5 of these enrichment processes, and on top of that, there were two that failed due to some "error". After that, it seems I could restart them from scratch (defining the surface, naming, making AOIs) and they would complete the next time. How can I minimize the risk of processes failing (after hours of processing)? Thanks!

user-f43a29 19 March, 2025, 15:46:41

Hi @user-f6ea66, don't hesitate to ask such questions!

Here are my responses:

  • When you start an Enrichment, all processing happens on Pupil Cloud's servers. You can even turn off your computer and the processing will continue, with no effect on processing speed.
  • You can certainly start more than one Enrichment at a time.
  • Whether they run simultaneously or in sequence depends on a number of factors, such as the overall load on Pupil Cloud and where your requests sit in the processing queue. The servers automatically schedule Enrichments.
  • The size and number of AOIs will not affect the Enrichment processing speed. You can even save AOI creation until after the Enrichment has completed and you can change the AOIs as often as you wish after processing has completed.
  • The longer the recordings and/or the more recordings there are in an Enrichment, the longer it will take.
  • Dependent on these factors, it could take some hours for Enrichments to complete.

Generally, with respect to Marker Mapper, the key points are:

  • If the Markers are not clearly visible to you in the scene camera video, then there is little chance that they will be well detected by the algorithm.
  • The more Markers, the better.

To that end, if possible, it can also help to do some pilot tests with shorter recordings before proceeding to larger batches of processing. If you see any problems at that stage, it is quicker to adapt and enhance the experimental setup (see the detection-check sketch after this message).

With respect to the Enrichments that failed, if you'd like, you can invite us as Collaborators to your Workspace and we can provide more direct feedback.
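
One way to run such a pilot check locally, before committing hours of Cloud processing, is to test marker detection on a single exported scene frame. A sketch using the pupil-apriltags package (tag36h11 is the AprilTag family Marker Mapper works with; the frame path is illustrative):

```python
# Quick offline sanity check: are the AprilTags detectable in a scene frame?
# Requires: pip install pupil-apriltags opencv-python
import cv2
from pupil_apriltags import Detector

frame = cv2.imread("scene_frame.png", cv2.IMREAD_GRAYSCALE)
assert frame is not None, "could not read the frame image"

detector = Detector(families="tag36h11")  # AprilTag family used by Marker Mapper
detections = detector.detect(frame)

print(f"{len(detections)} markers detected")
for d in detections:
    # A low decision_margin suggests the marker is too small, blurred, or dim
    print(f"tag {d.tag_id}: center={d.center.round(1)}, margin={d.decision_margin:.1f}")
```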

user-f6ea66 21 March, 2025, 08:56:44

Thank you very much! I'll proceed to do some more, and try starting several at a time. If there's a high 'fail rate', I'll probably ask you to join for more direct feedback, thanks.

user-297950 20 March, 2025, 10:27:23

Hi, I am currently working with some previously recorded videos from the Pupil Invisible; I am exploring blinks in particular. For this, I am using the blinks.csv that is available from Pupil Cloud, but I was also checking these against the eye videos from the recordings. In these videos, I noticed that there are quite a few β€œhalf-blinks” (the eyelids don’t close completely). Do you know how these are usually handled by the blink detection algorithm? Does it count them as blinks? Thanks!

user-f43a29 21 March, 2025, 08:41:15

Hi @user-297950, the details of the Blink Detector are described here.

If the default settings are not sufficient for your particular case, then you could tweak the settings in the pl-rec-export implementation of the Detector.

But before putting effort there, could you share an example recording with [email removed]? Then, we can provide more direct feedback.
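
In the meantime, one low-effort way to scope the issue is to flag suspiciously short events in blinks.csv and review them against the eye video by hand. A minimal sketch, assuming the export has a 'duration [ms]' column (verify against your own file; the 100 ms threshold is only illustrative):

```python
# Sketch: surface unusually short blinks from blinks.csv for manual review.
# This only flags candidates for inspection; it does not re-classify anything.
import pandas as pd

blinks = pd.read_csv("blinks.csv")
print(blinks.columns.tolist())  # verify the actual column names first

short = blinks[blinks["duration [ms]"] < 100]  # threshold is illustrative
print(f"{len(short)} of {len(blinks)} blinks are shorter than 100 ms")
```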

user-a578d9 20 March, 2025, 11:57:25

Hi, I'm attempting to upload a reference image for my Reference Image Mapper. However, whenever I attempt to upload the image, it remains a black screen (the uploaded image is a JPEG file). Any help with this issue would be greatly appreciated. Thanks!

user-480f4c 20 March, 2025, 12:11:54

Hi @user-a578d9, can you please hard refresh your browser ("Ctrl" + "F5" on Windows, or "Command" + "Shift" + "R" on Mac) and let me know if the issue persists?

user-a578d9 20 March, 2025, 12:36:38

Hi @user-480f4c unfortunately I'm still getting a black screen.

user-480f4c 20 March, 2025, 12:37:47

Can you please open a ticket in our πŸ›Ÿ troubleshooting channel? I'll assist you there in a private chat.

user-a578d9 20 March, 2025, 16:24:52

Hi, sorry to bother you with another question. I've completed an enrichment using a reference image of the object of interest, along with a 2-minute video of me scanning the object while holding the glasses. How can I now use a recording of someone wearing the glasses and looking at the object to generate a heat map on the reference image?

user-480f4c 20 March, 2025, 16:37:05

Hi @user-a578d9. Let me clarify a few points:

The Reference Image Mapper (RIM) needs 3 things:

  1. A Scanning Recording, where you can hold the glasses to easily make a scan of your scene from various viewpoints
  2. An Eye Tracking Recording, where a person did a task
  3. A Reference Image, which is a snapshot of your scene from a viewpoint of interest, including the region where the participant looked during the task

The scanning recording (1) allows us to build a 3D representation of the environment, which then allows us to map gaze/fixations from the eye tracking recording (2) onto the reference image (3).

Therefore, the mapping always needs to include one or more eye tracking recordings.

To this end, please upload your eye tracking recordings and your scanning recording to Pupil Cloud, add them to the same project, and start an enrichment. Upload your reference image, select which of the project's recordings is the scanning recording, and hit run. Once the enrichment completes, the mapping will be applied to the remaining recordings in the project.

user-a578d9 20 March, 2025, 16:42:57

Thank you once again!

user-a9f703 21 March, 2025, 11:33:45

Hi all. I recorded my experiment using the Pupil Invisible eye tracking glasses. When I check it in Pupil Player, I see the error "player overread" for the entire recorded video. Do you know what the issue is?

user-3c26e4 22 March, 2025, 09:04:43

Hi, do you plan to add dynamic AOIs to Pupil Cloud that follow a moving object which has QR markers on it? That would be extremely important for research.

user-f43a29 24 March, 2025, 13:17:54

Hi @user-3c26e4, as long as the AprilTags are securely fixed on the surface of interest and the surface does not bend, this is in principle already possible with the Marker Mapper Enrichment. Have you already given that a try, and it did not work as expected?

user-3c26e4 27 March, 2025, 12:17:06

Hi Rob, that's the way I've been working for 5 years, but I am impressed by what iMotions has done with dynamic moving AOIs. It would be great if you could implement this.

user-19fd19 25 March, 2025, 15:51:38

Hello all, I am trying to map gaze data to a screen with Invisible using AprilTags and the Marker Mapper enrichment. I created a short recording (~50 seconds) and uploaded it to Pupil Cloud. I then created a Marker Mapper enrichment with the "Begin Event" and "End Event" set to some custom events, so that not the full 50 seconds would be processed, but only the first 8 seconds.

This enrichment has been running for over an hour now, and I'm wondering if there is some fixed time overhead for every processing run, or if I can expect this to scale linearly with video duration, because that would mean that a 15-minute experiment would take over a week of processing time, which doesn't seem like it should be the case.

Is there maybe any known reason why the enrichment might get "stuck" during processing?

user-d407c1 25 March, 2025, 16:27:55

Hi @user-19fd19 πŸ‘‹ ! Cloud is currently under an exceptionally high load, which might result in longer enrichment processing times.

First of all, apologies, as you may experience some delays until results are available. The team is actively working to speed things up and prevent this from happening in the future.

We appreciate your patience!

user-19fd19 25 March, 2025, 16:33:59

I see, then there's nothing to worry about. Thanks for the quick response, and the perfect reason to call it a day for today 😁

user-b14b09 26 March, 2025, 13:33:52

Hi @user-480f4c Last time you suggested going with 7-Zip. I tried that, but it was not working. I am using Ubuntu 22.04.3 LTS and I am still getting the same error.

Chat image

user-b14b09 26 March, 2025, 20:48:12

@user-d407c1 Could anyone answer the above question? I tried other methods as well, like downloading using FDM and extracting the files using JDK-based methods, but it is not working. I think the error is from Pupil Cloud, as it is providing corrupted files.

user-d407c1 27 March, 2025, 07:33:11

Hi @user-b14b09 πŸ‘‹ ! Sorry to hear that you are having trouble opening the ZIP files. Could you create a ticket on πŸ›Ÿ troubleshooting and share the recording or enrichment IDs that you are trying to download, so that we can investigate further? You can access them by right-clicking on their names.
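
As an aside, whether a downloaded archive really is corrupted can be checked locally before filing a ticket; Python's standard zipfile module is enough for a first pass (the path below is a placeholder):

```python
# Sketch: verify a downloaded ZIP locally with only the standard library.
# testzip() returns the name of the first corrupt member, or None if
# every file's CRC checks out.
import zipfile

path = "download.zip"  # placeholder for the downloaded export archive
try:
    with zipfile.ZipFile(path) as zf:
        bad = zf.testzip()
        print("archive OK" if bad is None else f"first corrupt member: {bad}")
except zipfile.BadZipFile:
    print("not a valid ZIP (likely a truncated or corrupted download)")
```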

user-a9f703 30 March, 2025, 09:41:20

Hi. I am using the Invisible eye tracker for my study. Is there any way that I can open the eye tracking videos in other coding software, like Boris, and see the fixation point for coding? I know that I can upload the scene video to any software, but there is no fixation point on that video alone. Thank you

user-c2d375 31 March, 2025, 06:47:58

Hi @user-a9f703 πŸ‘‹πŸ» You can use the Video Renderer visualization feature to customize the scene video by overlaying gaze and/or fixation scanpath, then download the video for use with Boris.

user-a9f703 31 March, 2025, 08:37:51

Dear Eleonora. Thank you for your reply. Should I download separate software, or can I save the scene video with the gaze and/or fixation scanpath overlay in Pupil Player and run it in Boris?

user-c2d375 31 March, 2025, 09:05:19

@user-a9f703 The fixation detector plugin on Pupil Player is specifically designed for Pupil Core recordings, so it won't generate fixation data - or a fixation scanpath visualization - for your Pupil Invisible recordings. Please take a look at our website for a list of the product-specific plugins available in Pupil Player.

The most effective solution in this case would be to use Pupil Cloud. You can generate the scene video with the fixation scanpath overlay using the Video Renderer, then download the video and upload it to your third-party software for further analysis.

Alternatively, instead of fixations, you can overlay the gaze points on your Pupil Invisible recording’s scene video using the Vis Circle visualization plugin. Simply enable the World Video Exporter plugin, and then proceed to export the recording data. The exported files will include the scene video with the visualizations you activated in Pupil Player.
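
For readers who would rather script this step (for example, to batch many recordings), the gaze CSV can also be burned into the scene video with OpenCV. A minimal sketch; the paths, column names, and the deliberately naive frame-to-gaze time alignment are all assumptions rather than the supported workflow:

```python
# Sketch: draw gaze circles onto the scene video with OpenCV.
# Real recordings ship per-frame world timestamps that give more accurate
# alignment than the nominal-fps approximation used here.
import cv2
import pandas as pd

gaze = pd.read_csv("gaze.csv")  # assumed: 'timestamp [ns]', 'gaze x [px]', 'gaze y [px]'
ts = gaze["timestamp [ns]"].to_numpy()

cap = cv2.VideoCapture("scene.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("scene_with_gaze.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # naive alignment: nearest gaze sample to this frame's nominal timestamp
    t = ts[0] + int(frame_idx / fps * 1e9)
    i = int(abs(ts - t).argmin())
    x = int(gaze["gaze x [px]"].iloc[i])
    y = int(gaze["gaze y [px]"].iloc[i])
    cv2.circle(frame, (x, y), 20, (0, 0, 255), 3)  # red circle at gaze point
    out.write(frame)
    frame_idx += 1

cap.release()
out.release()
```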

user-a9f703 03 April, 2025, 06:57:34

Thank you for your explanation. I tried and it worked. πŸ‘

End of March archive