Hi, what is the resolution of the front camera of our device (the camera facing the screen, not facing our eyes)?
Hi @user-a97d77! The scene camera resolution of Pupil Invisible is 1088 by 1080 px. You can find Pupil Invisible's specifications here.
Hello, I would like to know if there is any repository or resource that contains template scripts for analysing the downloadable outputs from Pupil Cloud enrichments, as I would like to use Python scripts for a more custom analysis. This would help greatly, as I do not have much experience with Python. Thanks!
Hello, @user-ce8702! What exactly are you looking to do with the enrichment data?
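In the meantime, here is a minimal sketch of how a downloaded enrichment CSV can be loaded in Python with pandas. The folder path, file name, and column names below are assumptions based on a typical Pupil Cloud export, so check the headers of your own download and adjust accordingly:

```python
# Minimal sketch: load one CSV from a Pupil Cloud enrichment download with
# pandas and compute a few summary statistics. File and column names are
# assumptions; check the headers of your actual export.
import pandas as pd

fixations = pd.read_csv("export/fixations.csv")  # hypothetical path

# Distribution of fixation durations
print(fixations["duration [ms]"].describe())

# Number of fixations per recording in the enrichment section
print(fixations.groupby("recording id").size())
```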
Hello, I use different workspaces on Pupil Cloud to separate my student research teams' data. One team forgot to switch their workspace on the phone, and their data were accidentally uploaded to another workspace. Is there any way to move the data to a different workspace? I couldn't find out how. Thanks!
Hey @user-5346ee! Moving recordings between workspaces is not currently possible. But you can upvote this feature request here: https://discord.com/channels/285728493612957698/1212410344400486481/1212410344400486481
Hello, I'm currently conducting an experiment using Pupil Invisible.
I downloaded the data from Pupil Cloud in the Pupil Player Format, but I noticed that the number of files varies between different datasets. Do you know why this happens?
As shown in the attached image, some datasets contain specific files, while others do not. However, I followed the same procedure when acquiring the data.
I'd appreciate your help in understanding this. Thank you!
Hi @user-f03094! May I ask whether the first one was opened with Pupil Player already?
Hi @user-d407c1 Thank you for your response! I just opened the folder in Pupil Player, and everything is working fine.
Both files are working in Pupil Player.
@user-d407c1 The issue has been resolved. Thank you for your support!
Hi, I have some questions about using the Pupil Cloud website to add an enrichment (Marker Mapper) to some videos (most of them 30-60 minutes long). Some might be simple/stupid, but I'd like to optimise the process, and it makes a very big difference to the time I'll be investing in starting and monitoring these enrichments...
Is it correct that a video of this kind, let's say 45 minutes, with marker mapping and most of the screen selected as three big AOIs, will take several hours to complete?
Generally, are there smart ways to do it as fast as possible? : )
Thanks!
P.S. So far I've completed about 5 of these enrichment processes, and on top of that, there were two that failed due to some "error". After that, it seems I could restart them from scratch (defining the surface, naming it, making AOIs) and they would complete the next time. How can I minimize the risk of processes failing (after hours of processing)? Thanks!
Hi @user-f6ea66, don't hesitate to ask such questions!
Here are my responses:
Generally, with respect to Marker Mapper, the key points are:
To that end, if possible, it can also help to do some pilot tests with shorter recordings before proceeding to larger batches of processing. If you see any problems at that stage, then it is quicker to adapt & enhance the experimental setup.
With respect to the Enrichments that failed, if you'd like, you can invite us as Collaborators to your Workspace and we can provide more direct feedback.
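In the meantime, once a pilot enrichment completes, a quick sanity check of the exported surface-mapped gaze can tell you whether the markers were tracked well before you commit to hours of processing. Below is a minimal sketch; the file and column names are assumptions based on a typical Marker Mapper export, so verify them against your own download:

```python
# Minimal sketch: check how much of the gaze was mapped onto the
# marker-defined surface in a Marker Mapper export. Column names are
# assumptions; verify them against your own gaze.csv.
import pandas as pd

gaze = pd.read_csv("export/gaze.csv")  # hypothetical path

# The detection flag may load as booleans or as the strings "true"/"false"
on_surface = gaze["gaze detected on surface"].astype(str).str.lower() == "true"
print(f"{on_surface.mean():.1%} of gaze samples fall on the surface")

# Surface coordinates are normalized (0 to 1); inspect their spread
cols = ["gaze position on surface x [normalized]",
        "gaze position on surface y [normalized]"]
print(gaze.loc[on_surface, cols].describe())
```

A very low on-surface percentage in a pilot usually points to marker visibility or size issues, which are much cheaper to fix before processing the long recordings.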
Thank you very much! I'll proceed to do some more. And try starting several at a time. If there's a high 'fail rate', I'll probably ask you to join for more direct feedback, thanks.
Hi, I am currently working with some previously recorded videos from the Pupil Invisible, exploring blinks in particular. For this, I am using the blinks.csv which is available from Pupil Cloud, but I was also checking these against the eye video from the recordings. In these videos, I noticed that there are quite a few "half-blinks" (the eyelids don't close completely). Do you know how these are usually handled by the blink detector algorithm? Does the blink detector count them as blinks? Thanks!
Hi @user-297950 , the details of the Blink Detector are contained here.
If the default settings are not sufficient for your particular case, then you could tweak the settings in the pl-rec-export implementation of the Detector.
But before putting effort there, could you share an example recording with [email removed]? Then, we can provide more direct feedback.
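In the meantime, if you want to cross-check the detected blinks against the eye video yourself, here is a minimal sketch that flags unusually short blink events (candidate half-blinks) for manual review. The column names and the 100 ms threshold are assumptions, not part of the detector; verify the headers against your own blinks.csv:

```python
# Minimal sketch: flag short blink events in a Pupil Cloud blinks.csv for
# manual comparison with the eye video. Column names are assumptions;
# verify them against your export.
import pandas as pd

blinks = pd.read_csv("export/blinks.csv")  # hypothetical path

# Compute duration in milliseconds from the start/end timestamps (nanoseconds)
blinks["duration [ms]"] = (
    blinks["end timestamp [ns]"] - blinks["start timestamp [ns]"]
) / 1e6

# Flag events below an arbitrary 100 ms threshold for manual review
short = blinks[blinks["duration [ms]"] < 100]
print(f"{len(short)} of {len(blinks)} blinks are shorter than 100 ms")
print(short[["start timestamp [ns]", "duration [ms]"]].head())
```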
Hi, I'm attempting to upload a reference image for my Reference Image Mapper; however, whenever I attempt to upload the image, it remains a black screen (the uploaded image is a JPEG file). Any help with this issue would be greatly appreciated. Thanks!
Hi @user-a578d9, can you please hard refresh your browser ("Ctrl" + "F5" on Windows, or "Command" + "Shift" + "R" on Mac) and let me know if the issue persists?
Hi @user-480f4c, unfortunately I'm still getting a black screen.
Can you please open a ticket in our troubleshooting channel? I'll assist you there in a private chat.
Hi, sorry to bother you with another question. I've completed an enrichment using a reference image of the object of interest, along with a 2-minute video of me scanning the object while holding the glasses. How can I now use a recording of someone wearing the glasses and looking at the object to generate a heat map on the reference image?
Hi @user-a578d9. Let me clarify a few points:
The Reference Image Mapper (RIM) needs 3 things:
1. A Scanning Recording, where you hold the glasses and scan your scene from various viewpoints
2. An Eye Tracking Recording, where a person performed a task
3. A Reference Image, which is a snapshot of your scene from a viewpoint of interest, including the region where the participant looked during the task
The scanning recording (1) allows us to build a 3D representation of the environment which then allows us to map gaze/fixations from the eye tracking recording (2) onto the reference image (3).
Therefore, the mapping always needs to include one or more eye tracking recordings.
To this end, please upload your eye tracking recordings and your scanning recording to Pupil Cloud, add them to the same project, and start an enrichment. Upload your reference image, select which of the project's recordings is the scanning recording, and hit run. Once the enrichment completes, the mapping will be applied to the remaining recordings in the project.
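If you later want to build the heatmap yourself rather than use the built-in visualization, here is a minimal sketch that plots the mapped gaze over the reference image. The file and column names are assumptions based on a typical Reference Image Mapper export; check your download for the exact headers:

```python
# Minimal sketch: plot a gaze heatmap over the reference image from a
# Reference Image Mapper export. File and column names are assumptions;
# check them against your own download.
import pandas as pd
import matplotlib.pyplot as plt

ref_img = plt.imread("export/reference_image.jpeg")  # hypothetical path
gaze = pd.read_csv("export/gaze.csv")

# Keep only samples that were successfully mapped onto the reference image
mapped = gaze[
    gaze["gaze detected in reference image"].astype(str).str.lower() == "true"
]

plt.imshow(ref_img)
plt.hist2d(
    mapped["gaze position in reference image x [px]"],
    mapped["gaze position in reference image y [px]"],
    bins=50,
    cmap="inferno",
    alpha=0.6,
)
plt.axis("off")
plt.savefig("heatmap_on_reference_image.png", dpi=200)
```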
Thank you once again!
Hi all. I recorded my experiment using the Pupil Invisible eye tracking glasses. When I check it in Pupil Player, I see the error "player overread" for the entire recorded video. Do you know what the issue is?
Hi, do you plan to add dynamic AOIs to Pupil Cloud that follow a moving object which has QR markers on it? That would be extremely important for research.
Hi @user-3c26e4, as long as the AprilTags are securely fixed on the surface of interest and the surface does not bend, this is in principle already possible with the Marker Mapper Enrichment. Have you already given that a try and found that it did not work as expected?
Hi Rob, that's the way I have been working for 5 years, but I am impressed by what iMotions have done with dynamic moving AOIs. It would be great if you could implement this.
Hello all, I am trying to map gaze data to a screen with Invisible using AprilTags and the Marker Mapper enrichment. I created a short recording (~50 seconds) and uploaded it to Pupil Cloud. I then created a Marker Mapper enrichment with the "Begin Event" and "End Event" set to some custom events, so that only the first 8 seconds would be processed rather than the full 50 seconds.
This enrichment has been running for over an hour now, and I'm wondering if there is some fixed time overhead for every processing or if I can expect this to scale linearly with video duration. Because that would mean that for a 15 minute experiment I would get over a week of processing time, which doesn't seem like it should be the case.
Is there maybe any known reason why the enrichment might get "stuck" during processing?
Hi @user-19fd19! Cloud is currently under an exceptionally high load, which might result in longer enrichment processing times.
First of all, apologies: you may experience some delays until results are available. The team is actively working to speed things up and to prevent this from happening in the future.
We appreciate your patience!
I see, then there's nothing to worry about. Thanks for the quick response, and the perfect reason to call it a day for today.
Hi @user-480f4c Last time you suggested going with 7-Zip. I tried that, but it was not working. I am using Ubuntu 22.04.3 LTS and am still getting the same error.
@user-d407c1 Could anyone answer the above question? I tried other methods as well, such as downloading with FDM and extracting the files with JDK-based tools, but it is not working. I think the error is from Pupil Cloud, as it is providing corrupted files.
Hi @user-b14b09! Sorry to hear that you are having trouble opening the ZIP files. Could you create a ticket in the troubleshooting channel and share the recording or enrichment IDs that you are trying to download, so that we can investigate further? You can access the IDs by right-clicking on their names.
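In the meantime, it can help to check whether the downloaded archive itself is readable before trying yet another extraction tool; if the test below already fails, the download is likely truncated or corrupted. A minimal sketch in Python (the file name is a placeholder for your actual download):

```python
# Minimal sketch: test the integrity of a downloaded ZIP archive before
# blaming the extraction tool. The file name is a placeholder.
import zipfile

path = "pupil_cloud_download.zip"

try:
    with zipfile.ZipFile(path) as zf:
        bad = zf.testzip()  # returns the first corrupt member, or None
        if bad is None:
            print("Archive looks intact")
        else:
            print(f"First corrupt file inside the archive: {bad}")
except zipfile.BadZipFile:
    print("Not a valid ZIP archive (possibly a truncated download)")
```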
Hi. I am using the Invisible eye tracker for my study. Is there any way that I can open the eye tracking videos in other coding software like Boris and see the fixation points for coding? I know that I can upload the scene video to any software, but there is no fixation point on that video alone. Thank you.
Hi @user-a9f703! You can use the Video Renderer visualization feature to customize the scene video by overlaying gaze and/or fixation scanpath, then download the video for use with Boris.
Dear Eleonora, thank you for your reply. Should I download the software, or can I save the scene video with the gaze and/or fixation scanpath overlaid in Pupil Player and run it in Boris?
@user-a9f703 The fixation detector plugin on Pupil Player is specifically designed for Pupil Core recordings, so it won't generate fixation data - or a fixation scanpath visualization - for your Pupil Invisible recordings. Please take a look at our website for a list of the product-specific plugins available in Pupil Player.
The most effective solution in this case would be to use Pupil Cloud. You can generate the scene video with the fixation scanpath overlay using the Video Renderer, then download the video and upload it to your third-party software for further analysis.
Alternatively, instead of fixations, you can overlay the gaze points on your Pupil Invisible recording's scene video using the Vis Circle visualization plugin. Simply enable the World Video Exporter plugin, and then proceed to export the recording data. The exported files will include the scene video with the visualizations you activated in Pupil Player.
Thank you for your explanation. I tried and it worked.