Hello, while doing a recording using pupil invisible during an experiment recently, this error appeared. May I know how this can be rectified/prevented in the future?
Hi! Could you please check if the raw recording includes an android.log file and share it with [email removed]? The log file might contain further information about what went wrong.
Okay sure, thank you!
Are there any recommendations for maximum humidity? Would you expect Pupil Invisible to take damage if used in a building with 75-80% humidity over a longer period of time?
Hi @user-6c1e7f! I do not foresee that being a problem. We use Pupil Invisible in tropical locations where humidity levels are always above 70% year-round.
The key is to not let water get into the glasses especially around the cameras and seams. Pupil Invisible is not water resistant.
I am interested in utilizing head-mounted eye tracking technology to identify and describe gaze patterns among novice, intermediate, and expert emergency physicians who are leading trauma resuscitations. Is this something Pupil could be useful for? What would the data obtained look like? Can you provide other examples where Pupil was used to generate visual heat maps in a real-world complex environment?
Hi @user-ae0f24! Thanks for your message! Our eye tracking glasses could indeed be useful for your medical application area in trauma rooms. There is a wealth of research on expert and novice eye movements in different medical use cases. In order to provide more tailored advice, it would be useful to know the specific tasks and situations you envisage. Please contact [email removed] with a bit more information addressing the kinds of questions raised below:
We look forward to hearing from you!
Hi, I have some doubts about the coordinates in the following figure in the data exported by Pupil Cloud. The coordinates in the usage guide are (0, 0) in the upper left corner and (1, 1) in the lower right corner. What is the relationship between these two coordinate systems?
Hi @user-4bc389! I am assuming the screenshot is of the gaze.csv file from the Raw Data Exporter? You can find documentation on this export format here:
https://docs.pupil-labs.com/invisible/reference/export-formats.html#raw-data-exporter
The fixation columns are reported in pixels of the scene video. This is different to how fixations are reported in Pupil Player, where instead the coordinate system is normalized and ranges from (0, 0) to (1, 1).
Thank you for your answer, what is the coordinate origin of the fixation in the picture below?
The pixel location (0, 0) is the top-left corner of the image, and the bottom-right corner would be (1088, 1080). It is possible that some gaze points are located slightly outside of the range covered by the scene camera.
Thanks! Also, is my understanding of the surface coordinate system correct in the picture below?
Yes!
Can Pupil Cloud export data such as fixation duration in the surface coordinate system, or only the fixation duration of the entire viewing process?
Please see the export format documentation of the Marker Mapper here: https://docs.pupil-labs.com/invisible/reference/export-formats.html#marker-mapper
The export includes fixation data in surface coordinates.
OK, thanks
Hi Pupil Labs team, when I used my Pupil Invisible, no red circle appeared at all on the OnePlus device or in Pupil Monitor. But it appeared when I viewed the video in Pupil Cloud. Is there any set-up that can solve this problem? Thanks in advance.
And when I look at the recordings I find that the red circles are very poorly calibrated. How should I calibrate it?
Hi, when playing back recordings on the cloud I can't pick up any sound. What am I doing wrong? Thanks, Gary
Hi, could you please check if you have enabled the Recording Audio option in the app settings? It is turned off by default and might be the reason why there is no recorded audio.
Will do, Papr, and thanks.
Hi @user-8d365d! Being able to see the gaze signal in Cloud but not locally on the phone is odd, if the gaze signal exists it should be visible everywhere. Just to confirm, you do have a recording for which you can see gaze in Cloud, but if you play back the recording locally on the phone there is no gaze?
FYI note that the glasses recognize if they are not being worn and stop visualizing a gaze signal, so if you are looking for a gaze signal make sure the glasses are actually being worn at that time.
There is no calibration for Pupil Invisible and it should work out of the box. Gaze accuracy can suffer if the subject is looking at things at a distance of less than 1 m; in this case there is a noticeable parallax error, which is essentially an offset to the left. For a small percentage of subjects there can also be a general constant offset in their predictions, due to their eye physiology. In both cases you can apply an offset correction. To do this, navigate to the live preview in the Companion app, ask the subject to look at a specific landmark such that the offset becomes apparent, then click and hold the screen and drag the gaze circle to the correct location. This offset will be saved in the wearer profile.
Hi Marc, thanks for your reply. I have checked again. When I played back the recording locally on the phone, there is no gaze signal. But there is a gaze signal in Cloud.
Hi, I checked whether the Recording Audio option in the app settings had been enabled, which it had, but the level of sound recording is low and we are planning to eye track in a clinical setting where we want to pick up the dialogue between the clinicians. Is there a way to increase the level of sound recording or attach an external microphone? Thanks, Gary
How far away are the clinicians from each other during the recording?
It's within a morgue setting with Thiel cadavers using ultrasound and regional anaesthesia, with a trainee performing the procedure and the trainer within a 1.5 m radius of the location of the trainee.
Audio in that range should be audible. Unfortunately, there is no in-app setting to increase the audio. The audio is provided by the operating system and stored as is. Please contact info@pupil-labs.com in this regard.
Thanks Papr and will do
Hi Marc, I re-installed the companion. Now the problem is solved! Thank you!
Just to be clear: in order to get the Pupil Labs Player (for Pupil Invisible) I should download the Pupil Labs Core software, right? I ask because it's not clearly stated anywhere, nor the most intuitive step.
Yes, Pupil Player only comes as part of the Pupil Core software bundle. I would be happy to clarify the documentation in this regard. Could you point me to the location where you would have expected this information to be written down?
Thank you!
I would have expected a direct download link on top of the Pupil Player page at https://docs.pupil-labs.com/core/software/pupil-player/
and possibly I'd also add links to the Pupil Player page (link above) in the Invisible user guide, where that software is mentioned, e.g. here in the local transfer guide https://docs.pupil-labs.com/invisible/user-guide/invisible-companion-app/#local-transfer
Thank you for the feedback!
Hi, I'm looking for an automated face redaction/blurring software that would work for Pupil Invisible footage/mp4 footage. I can't find anything that works well. Can anyone here offer any suggestions? TIA
Hey everyone. We recently encountered a problem with the eye tracker (see the attached video). Does anyone know the reason for this poor quality?
Please contact info@pupil-labs.com in this regard.
Hi! Pupil Cloud will support automatic face blurring on upload soon! We will announce it in the announcements channel when it becomes available.
Oh fantastic, thanks!
Is it possible to change the cloud account in the Pupil Invisible Companion application that videos are being uploaded to?
Yes, go to settings, scroll down to the bottom and press log out. Afterwards, you can log in with a new account.
Hello! Compared to eye trackers where you have to do a calibration step to establish a reference plane, how does Invisible work? How do you define the plane represented by the computer screen that participants will have to look at?
Hi @user-679744! Do I understand you correctly in that you wish to analyse gaze in relation to a computer screen? If so, we have a workflow that can facilitate this. You can add April Tag markers to the screen (physically or digitally) and use the Marker Mapper enrichment in Pupil Cloud. This enrichment automatically detects markers, defines surfaces, and obtains gaze positions relative to these surfaces. You can also generate heatmaps. Further details here: https://docs.pupil-labs.com/invisible/explainers/enrichments/#marker-mapper
Fantastic, thank you so much! Exactly what I needed!
Hi, I am having some problems with my Invisible gaze tracker. The red light started blinking and I got an error. The error said that capturing could not start; the red circle stopped showing the gaze while the glasses were being worn and was stuck in the left corner for a while. I can still see what I am looking at, but now I do not even see a red circle (the gaze) anymore. I wonder what is going on? Any ideas? I had to cancel my experiment and I also have some sessions planned tomorrow.
Hi @user-04f0a4! Please send an email to info@pupil-labs.com with a screenshot of the error dialogue and we can assist!
Sent an email!
Hi, we were doing a test with 2 pairs of Pupil Invisible the other day, but upon finishing only one of the POVs could upload. The other says it's uploaded and the length is 14 min in the companion app, but when you view it in the companion you can only see 3 minutes. In Cloud it says the file is 99% uploaded.
I suspect the file is just corrupted but my teacher recommended giving you a shout to see if there is anything we can do to fix it on the backend?
Hi @user-783c3d! Could you please DM me the recording ID of the potentially corrupted recording? You should be able to get it e.g. in Cloud by right-clicking on the recording -> View Details. Then we can take a look at what is wrong with it!
Is it possible to open the glasses? I wanted to see if there's any spare space and a spare part on the USB line in the glasses to hack an eSIM into it.
No, it is not possible to open them up without damaging them. The physical space left in the plastic parts for the electronics is also pretty maxed out I think.
Does Pupil Invisible collect a timestamp for each frame? We need this info to sync with other cameras (two ZED cameras).
Yes, every sensor collects UTC timestamps for every sample! Using them to sync with external sensors should be straightforward.
Thanks @marc. Can we sync two ZED cameras and two Pupil Invisibles without a timestamp-based approach? That is, are there any tools that can record synchronized video streams from these four devices?
Using the corresponding timestamps is the only way to synchronize. In principle an arbitrary number of devices can be synced this way, but I am not sure what exactly the ZED cameras provide.
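To illustrate the timestamp-based approach, here is a minimal sketch (not an official tool) that matches each sample of one stream to the nearest sample of another stream, assuming both devices provide sorted UTC timestamps in nanoseconds:

```python
import numpy as np

def match_nearest(timestamps_a, timestamps_b):
    """For each timestamp in stream A, return the index of the nearest
    timestamp in stream B. Both inputs are sorted UTC timestamps in ns."""
    idx = np.searchsorted(timestamps_b, timestamps_a)
    idx = np.clip(idx, 1, len(timestamps_b) - 1)
    left = timestamps_b[idx - 1]
    right = timestamps_b[idx]
    # step back by one where the left neighbour is closer
    idx -= (timestamps_a - left) < (right - timestamps_a)
    return idx

# Hypothetical timestamps from a Pupil Invisible and a ZED camera
ts_invisible = np.array([0, 5_000_000, 10_000_000, 15_000_000])
ts_zed = np.array([1_000_000, 9_000_000, 16_000_000])
print(match_nearest(ts_invisible, ts_zed))  # -> [0 1 1 2]
```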
Thank you very much. We collected some data using Pupil Invisible. Could you please tell us how to get the timestamps of each frame? I noticed some .time files when we download a recorded session. One more thing: when we download the data it has the raw video without the gaze circle overlay. How can we download the video with the gaze overlay?
The easiest way to get access to timestamps is via the Pupil Cloud Raw Data Exporter enrichment: https://docs.pupil-labs.com/invisible/reference/export-formats.html#world-timestamps-csv. Read more about enrichments here: https://docs.pupil-labs.com/invisible/explainers/enrichments/ (see also the Gaze Overlay enrichment at the bottom of this document).
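For example, once the Raw Data Exporter download is on disk, the scene video timestamps can be read with pandas roughly like this (a sketch; the path is a placeholder and the column name should be checked against the export-format docs linked above):

```python
import pandas as pd

# Placeholder path to an unzipped Raw Data Exporter download
df = pd.read_csv("raw-data-export/world_timestamps.csv")

# One row per scene video frame, with UTC timestamps in nanoseconds
ts_utc = pd.to_datetime(df["timestamp [ns]"], unit="ns", utc=True)
print(ts_utc.head())
```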
Hi team, I am trying to upload recordings from the companion device to Pupil Cloud. On the device it looks like the recordings have finished uploading (tick on a cloud icon), but on Pupil Cloud there are none.
> but on Pupil Cloud there are none
Hi, could you clarify if none of the recordings are listed in Cloud or if none of the recordings listed in Cloud have a tick mark?
none of the recordings are listed in Cloud
Two possible issues come to mind: 1) You are not logged in with the same account as on the phone. 2) The recordings have been uploaded to a different workspace than the one selected in Pupil Cloud. Could you please check that neither is the case?
I am logged in on the same account and I have only one workspace on the companion app
I've just created a new workspace on Cloud to test if it will also be visible on the companion app, but it isn't and both are empty
@user-80fbf8 We had an issue in the past where the uploaded recordings were falsely put into the trash. Could you select the Trash filter from the saved searches in the top right next to the search bar and check if the recordings show up there?
I don't have any saved searches
The icon with the three lines in the top right should give you a dropdown that contains a "trash" filter
ah ok, but there are still no recordings..
Okay, thanks for checking anyway! It would be helpful to know one of the recording IDs to check the recording's state in Cloud. Could you send us one? You will find the recording ID in the info.json file in the recording folder, which you can access directly on the phone. See also this guide https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/transfer-recordings-via-usb.html
8f9884f8-036a-4717-b8d9-ae17bde99da6 this is one
Thanks! I'll get back to you as soon as I know more!
thank you
@user-80fbf8 Could you please DM me the email address of your Pupil Cloud account?
I have
Hi all, I want to run Reference Image Mapper. How do I make the reference image?
Hi! You can take the reference image with any camera, e.g. the camera of the companion phone!
thank you!
Hi! I was able to de-distort my world video to counter the fisheye effect of the world camera using OpenCV. Now I am facing a question: is the image distortion considered in the projection of the gaze orientation? In other words, should I trust the gaze position that is projected on the output video of Pupil Player or the pixel gaze position (x, y) from the output file gaze.csv?
Hi, could you elaborate on your workflow? To me, it sounds like you need to undistort the gaze data, too, before being able to render it into the manually undistorted scene video.
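As a rough sketch of what undistorting the gaze data could look like with OpenCV (the camera matrix K and distortion coefficients D below are placeholders; you would plug in the intrinsics of your specific scene camera, the same ones used to undistort the video):

```python
import cv2
import numpy as np

# Placeholder intrinsics -- replace with your scene camera's calibration
K = np.array([[766.0, 0.0, 544.0],
              [0.0, 766.0, 540.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.12, 0.10, 0.0, 0.0, -0.04])

def undistort_gaze(gaze_px):
    """Map gaze pixel coordinates from the distorted to the undistorted image."""
    pts = np.asarray(gaze_px, dtype=np.float64).reshape(-1, 1, 2)
    # P=K re-projects the undistorted normalized points back to pixel coordinates,
    # matching an image undistorted with cv2.undistort(img, K, D)
    return cv2.undistortPoints(pts, K, D, P=K).reshape(-1, 2)

print(undistort_gaze([[800.0, 300.0]]))
```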
@marc is the real-time API available?
Yes! We are a bit behind on the official announcement, but the new real-time API is now available. Please install the newest version of the app and see here: https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/introduction/
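For anyone looking for a quick start, a minimal sketch with the simple client from the pupil-labs-realtime-api package might look like this (method names as I understand them from the docs linked above; the phone and the computer need to be on the same network):

```python
from pupil_labs.realtime_api.simple import discover_one_device

# Find a Pupil Invisible Companion device on the local network
device = discover_one_device()
print(f"Connected to {device}")

# Blockingly receive a single gaze datum
gaze = device.receive_gaze_datum()
print(gaze.x, gaze.y, gaze.worn, gaze.timestamp_unix_seconds)

# Receive a scene frame together with the temporally closest gaze datum
frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
print(frame.bgr_pixels.shape, gaze.x, gaze.y)

device.close()
```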
Nice! Does it also have the option to replay pre-made recordings as real-time test data?
This option is not available on the phone. Simulating the whole host API would mean rebuilding the app from scratch. Instead, we could provide a partial simulation of the Python client API. Specifically, we could provide a Python module with the same receive_* data functions as the original, with the difference that the data would be read from a recorded file instead of from a host in real time. The control API, e.g. start/stop recording, etc., would not be available. Would that be sufficient for your development?
that would be the best!
ok, it will take me a few days to build that given my list of other todos. I will let you know once it is available!
Is it possible to fetch the eye camera video, audio, and IMU data from the real-time API too? https://pupil-labs-realtime-api.readthedocs.io/en/stable/_modules/pupil_labs/realtime_api/streaming/gaze.html#GazeData
No, that is currently not possible.
@user-0ee84d https://gist.github.com/papr/1341a7f8badbb2da8e0e7dddda126c99#file-realtime_api_simple_simulation-py-L419
Here you go.
```python
with SimulatedDeviceFromRecording(path_to_recording) as device:
    while True:
        frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
```
The code includes a working example that can be called from the command line:
```
usage: realtime_api_simple_simulation.py [-h]
                                         [--mode {matched,scene-only,gaze-only}]
                                         [--debug] [--timeout TIMEOUT]
                                         recording

positional arguments:
  recording

optional arguments:
  -h, --help            show this help message and exit
  --mode {matched,scene-only,gaze-only}, -m {matched,scene-only,gaze-only}
  --debug
  --timeout TIMEOUT
```
thank you!!!!!!!!!!
Thanks for your answer @papr, I think I have my answer then.
1. I guess I can trust the gaze position that is projected on the world video.
2. My current workflow only undistorts the world video images, but now I will also undistort the gaze positions along with the images from the video.
3. If possible, could you just explain to me quickly how, in Pupil, the gaze position that is projected on the video is modified to account for the fisheye effect? (Is it directly in the trained AI, or is it a post-processing step?)
The gaze estimation algorithm of Pupil Invisible initially outputs a 3D gaze direction originating in the scene camera. This value is independent of the scene camera distortion. During manufacturing every scene camera is calibrated, and those values are used to project the 3D direction to a 2D gaze point in pixel coordinates of the scene camera. Due to the accurate calibration, this takes the distortion into account.
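Conceptually, that projection step is the standard camera model with distortion. A small sketch of the idea with OpenCV (the intrinsics below are placeholders; the real values are per-device factory calibration):

```python
import cv2
import numpy as np

# Placeholder intrinsics -- real values come from the per-device factory calibration
K = np.array([[766.0, 0.0, 544.0],
              [0.0, 766.0, 540.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.12, 0.10, 0.0, 0.0, -0.04])

# A 3D gaze direction in scene camera coordinates (z pointing forward)
gaze_dir = np.array([[0.1, -0.05, 1.0]], dtype=np.float64)

# Project to 2D pixel coordinates, taking lens distortion into account
rvec = tvec = np.zeros(3)
gaze_px, _ = cv2.projectPoints(gaze_dir, rvec, tvec, K, D)
print(gaze_px.reshape(2))  # comparable to the gaze point in gaze.csv
```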
Hi, I'm trying out the enrichments on Pupil Cloud and at the moment the one for Marker Mapper. When I downloaded a recording from the cloud last year and used Pupil Player it had no problem detecting the markers, but the Cloud can't detect the markers on the same recording and I'm wondering why there is a discrepancy? Thanks, Gary
A potential explanation might be the "tag family" you used. There are different families of AprilTags with slightly different properties. While the Surface Tracker in Pupil Player allows you to specify which family is used, the Marker Mapper in Pupil Cloud only supports one family: tag36h11. This is used by default in Player too, though.
Besides that, a difference in detection results would be odd as both implementations use the same detection algorithm. We would need an example recording to see what is going on in detail (you could share one with [email removed]).
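If in doubt about which family a set of printed markers belongs to, one quick local check is to run a detection pass with the pupil-apriltags package on a photo of the markers (a sketch; the image path is a placeholder):

```python
import cv2
from pupil_apriltags import Detector

# Photo of the printed markers, loaded as grayscale (placeholder path)
image = cv2.imread("markers_photo.jpg", cv2.IMREAD_GRAYSCALE)

# tag36h11 is the family supported by the Marker Mapper in Pupil Cloud
detector = Detector(families="tag36h11")
detections = detector.detect(image)

print(f"Detected {len(detections)} tag36h11 markers")
for d in detections:
    print(d.tag_id, d.center)
```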
Hello, I had a longer recording (I know, not best practice) that works great on the companion device. Uploading to the cloud on the device says complete (green tick) but within Pupil Cloud it shows a circle. On hover it states "uploading 100%". This has not changed for several days. Is there a way to reattempt upload, or to fix this problem?
Hi @user-6499eb! In principle also long recordings should work just fine. Could you send me a DM with the ID of the recording in question please? You can see the ID when you right-click a recording and select "View Details".
Hello, I hope it's OK to ask for help with this in this channel. I am currently completing my first ever eye-tracking research project, and I was wondering if there are any tools built into Pupil Cloud/Player that will make the coding process faster? I know there is a heat map available, which is a great help as well. I am looking at lateral eye movements specifically.
Hi @user-5a2110! There is no tool for manual coding available in Pupil Player or Cloud. As you noted, there are several tools for automatic mapping though: https://docs.pupil-labs.com/invisible/explainers/enrichments/#marker-mapper https://docs.pupil-labs.com/invisible/explainers/enrichments/#reference-image-mapper
Those require a bit of preparation before the recordings are made, though, to provide appropriate inputs for the algorithms. If you are looking to code recordings that have already been made, they may no longer be applicable.
Hi all, my lab recently acquired two pairs of Pupil Invisible glasses and we have been running into some issues with one of them. The app does not always recognize the glasses when they are connected; it works sometimes and other times it doesn't. On occasion, when the phone does connect, it vibrates seemingly unprovoked and is sometimes accompanied by a "recording/sensor failure" error. Looking through the Discord, it seems like there might be a hardware malfunction? The glasses are fairly new so I just wanted to check in and make sure there was no easy fix! Thanks in advance
Hi @user-46e93e, this could be a software, cable, or hardware issue. I see that you already emailed us as well. Let's continue the discussion there. One of my colleagues will follow up to help troubleshoot today.
Is it also possible to access the feed from the eye camera?
No, this is currently not possible. Neither in the simulated nor in the real API.
Maybe I can add it as a feature request?
It is one of many planned features but I am not able to share a release date yet.
Both the feed from the eye camera and the scene camera need to be processed. It would be nice to have such a feature.
Out of interest, what kind of processing are you planning for the eye videos?
I managed to improvise pupil detection in dark environments using a custom detector/segmentation. I need access to the eye videos to be able to test offline/online.
The only way to get access to the eye video is through loading it from a recording at the moment.
Could you please point me in that direction, or share some snippets?
This is how I would load frames from a video file: https://gist.github.com/papr/1341a7f8badbb2da8e0e7dddda126c99#file-realtime_api_simple_simulation-py-L381-L386
You should be able to reuse the whole VideoStream class by only changing the name property to the corresponding video file name.
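Alternatively, reading eye video frames straight from a recording folder works with plain OpenCV as well; a small sketch below (the file name follows the usual Pupil Invisible recording layout, but please check the actual names in your recording):

```python
import cv2

# Eye video inside the recording folder (file name may differ in your recording)
cap = cv2.VideoCapture("recording/PI left v1 ps1.mp4")

while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of the video
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # run your custom pupil detector / segmentation on `gray` here

cap.release()
```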
@user-0ee84d not sure what you mean. You should have access to the recording files already, correct?
ah perfect! I will do those changes
I might have asked this before, but is there a hard requirement for processing the data in real time? It is much more reliable to process the data from the recording, and you do not have to wait for the app to implement eye video streaming. Did you know that you can download recordings programmatically from Pupil Cloud? You could automate the processing of new recordings in this way.
thank you!
Thank you so much for your response! We are indeed using the Marker Mapper! I've run into a small problem with it, however - it seems like the brightness needs to be at a certain level for the markers to be recognized. I have a couple of videos that needed to be recorded in a low-light setting (you can still see the QR codes though, see image attached). Does that mean this video is useless for marker mapping, or can I possibly run it through a brightness adjustment filter and then re-upload it to run the marker mapping?
When recording screens it can be very difficult to achieve good exposure. In an ideal setting the environment light would be bright enough, such that the screen does not dominate the lighting conditions as much. To accommodate difficult settings you can however switch to manual exposure mode in the settings and optimize the exposure for the markers. This may lead to the screen contents being less legible though.
The recordings like this that you already have are unfortunately not really fixable. Processing the video first and re-uploading it is not possible (and likely would not help much). For future recordings I recommend improving the lighting or the exposure settings, and placing the markers such that the screen cannot shine through them, i.e. either using more opaque material to print on, or placing them fully outside the illuminated area. Increasing their size a bit could also improve the detection rate.
Hi Marc, thanks for your response and yes you were right, it was the older square tag family I was using, which was detected when I specified it in Pupil Player but wasn't picked up by Pupil Cloud. Is there a link to the tag36h11 family to print them out? Also, is it possible to export the fixations for each defined surface area from Pupil Cloud when using the surface tracker, as you can do for Pupil Player?
Please see here for markers of the tag36h11 family to print: https://docs.pupil-labs.com/invisible/explainers/enrichments/#setup
Fixations are included in the Marker Mapper export. See the fixations.csv file here: https://docs.pupil-labs.com/invisible/reference/export-formats.html#marker-mapper
Perfect Marc and thanks again for the help. Regards Gary
Hi, can anyone help me understand why there are sometimes missing world_index values (video frames) in the gaze data? For example, in this picture, the gaze for the first 53 video frames is missing.
Hi, this happens if the gaze estimation starts later than the scene video recording. This is normal and nothing to worry about.
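If you want to verify this yourself, a quick sketch with pandas on the Pupil Player export (column name per the gaze_positions.csv export format; the path is a placeholder):

```python
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # placeholder path

last_frame = int(gaze["world_index"].max())
frames_with_gaze = set(gaze["world_index"].astype(int))
frames_without_gaze = sorted(set(range(last_frame + 1)) - frames_with_gaze)

print(f"{len(frames_without_gaze)} frames up to frame {last_frame} have no gaze sample")
print(frames_without_gaze[:10])  # e.g. the first 53 frames in the example above
```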
OK. I also notice that sometimes the world.mp4 video in the exports folder is longer than the original world.mp4 video. So, I think that gaze estimation can also end later than the scene video recording, right?
Mmh, the only reason I can see for that is if the original recording has multiple parts. But I would need to check. If you could share an example with [email removed] then I could confirm it specifically.
Unfortunately, this is private data. But we don't use that data after all. Thanks for your help anyway!
Hello, I just bought the Pupil Invisible glasses for a research project. Initially, the first recordings I took with the glasses were uploaded automatically to Pupil Cloud. However, I deleted them in the Pupil Cloud web application, and now I am unable to upload new recordings to Pupil Cloud via the mobile app. The Pupil Invisible Companion on the smartphone is running, but nothing is uploading. Do you know where the problem might come from? I tried to shut down the app, but I always get the notifications "RTSP service started" and "Upload service running" once I restart it.
Thanks in advance for the help !
Hi @user-51951e! I see you have reached out via email as well. We will answer your inquiry there!
I use this code to create a scan path visualization video, but I ran into a KeyError and I have no idea how to fix it. Is there any suggested way to approach this problem? Thank you~
Hi @user-a9b74c! Thanks for reporting the error. I have revised the script and updated the gist. Please try again with the new version! https://gist.github.com/marc-tonsen/d8e4cd7d1a322a8edcc7f1666961b2b9
Hello! Are there any plans in the future that the single eye images of the Pupil Invisible will be streamed as well?
Hey @user-fb5b59! This is definitely on the roadmap for the new real-time API, but will take some time to get implemented. Streaming eye video is possible via NDSI though and you could also run NDSI and the new API in parallel to use the new one for everything else.
Thanks for your fast response. Maybe I did not get it completely: utilizing the current app and the implementation based on NDSI, I can already receive the eye images? I know that there was a setting in the communication protocol which I could set, but I thought it would do nothing.
Yes, using NDSI (the older, now mostly deprecated real-time API) it is possible to stream eye video. This was available for a longish time and we will keep the Companion app compatible with NDSI for the foreseeable future.
The new real-time API, that was just released, does not yet support streaming eye video, but this is planned for the future.
Just an additional question: the new real-time API is "only" for the communication part between the Pupil Invisible and the mobile app, am I right? There is nothing integrated related to live pupil radius or eye opening estimation (or any other eye signal)?
I would also like to clarify the components and how they work together:
1. Pupil Invisible glasses connect via USB to the Android phone
2. The Pupil Invisible Companion mobile app
   - processes the video from the glasses
   - provides access to the Network API
3. A possible client connects to and receives data from the app via a network connection
The new API allows you to
- stream gaze and scene video data
- remote control Pupil Invisible devices to start/stop recordings or save events.
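As an illustration of the remote-control part, a minimal sketch with the simple client of the pupil-labs-realtime-api package (method names as documented there; treat this as an example rather than a complete script):

```python
import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# Start a recording on the Companion phone
recording_id = device.recording_start()
print(f"Started recording {recording_id}")

# Save events into the recording (timestamped on arrival at the phone)
device.send_event("stimulus.onset")
time.sleep(5)
device.send_event("stimulus.offset")

# Stop and keep the recording (it will upload to Cloud if uploads are enabled)
device.recording_stop_and_save()
device.close()
```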
No other data stream is currently available. Pupil Invisible will never be able to estimate the pupil radius, as the pupil is often not visible at all in the eye video. Blink detection is about to be released (probably today) for Pupil Cloud. Blink data will only be available as download from Pupil Cloud though, not in real-time via the API.
Hi, due to my client's cybersecurity constraints we cannot use cloud services. Is there a way to use Pupil Invisible with iMotions without connecting to any cloud service?
Hi Mohamed! Yes, you can disable cloud uploads and transfer recordings directly off of the phone via USB and import them to iMotions. See here for a guide on USB transfer: https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/transfer-recordings-via-usb.html
Note that this comes with limitations, however: you will not be able to access fixation, blink, and 200 Hz gaze data, which are all generated in Pupil Cloud after upload. The 65 Hz gaze signal, which is generated on the phone in real time, is available however. AFAIK iMotions is currently not importing the fixation or blink data anyway, and has an alternative fixation detection algorithm of their own which may be applicable.
Does your client have a specific issue with cloud uploads that may be fixable? We already have a very restrictive privacy policy and are looking into certification options to prove security and privacy. If there are any specific requirements you can name, I would be very interested!
Hi Pupil team! We have a potential participant who is blind in one eye; would this exclude them from being able to wear the Invisible?
Hi @user-ae76c9! We have not had the chance to test Pupil Invisible in this condition. Since the gaze estimation algorithm utilizes the images of both eyes simultaneously, there is a good chance that gaze estimation accuracy for this subject would be reduced. It's a bit difficult to predict how it will behave exactly, though. If that is a possibility, I'd recommend just trying it to see how well it works.
Got it, thank you!
Hi @user-1936a5! I'll respond to your question (https://discord.com/channels/285728493612957698/285728493612957698/958360062533976186) in this channel. Pupil Cloud is free for Pupil Invisible customers!
@nmt Thank you for your reply. We are considering purchasing hardware products.
Btw, what is the frequency at which these devices record data?
Pupil Invisible Gaze data is available at 200 Hz once recordings are uploaded to Pupil Cloud. If Cloud uploads are disabled, gaze data is available at around 66 Hz (generated in real-time on the Companion Device)
Tech-specs available here: https://pupil-labs.com/products/invisible/tech-specs/
thank you!
Hi, I notice that the gaze confidence of Invisible exported from Pupil Player is always either 0 or 1, nothing in between (unlike Core, which has numbers in between). Is this always the case for Invisible, i.e. just binary values for the confidence score?
Please see this message for reference: https://discord.com/channels/285728493612957698/633564003846717444/849260790954983424
Thank you
I have Pupil Invisible and recorded a few participants, the footage is now on Pupil Cloud so I can see the gaze overlay. I want to take these short clips out and put them together in one video. I can't seem to download the gaze overlay footage from Pupil Cloud, and when I try to download the raw footage and upload it to Pupil Player, that doesn't work. Do I need to get this footage on Pupil Player to cut clips, and if so, how do I do that?
Welcome to the community @user-8cef73! Check out the Gaze Overlay enrichment in Pupil Cloud: https://docs.pupil-labs.com/invisible/explainers/enrichments/#gaze-overlay. You can use that to export the gaze overlay footage, in sections or for the whole recording.
Thanks Neil, I did read this page but it wasn't helpful. It mentions downloading the videos but I can't seem to do that with gaze overlay, there's no option. I have noted some sections with timestamps in my gaze overlay footage, but can't see any way to separate them from my full-length videos.
Hello! I use a Pupil Invisible for my research, and I have a problem with a recording. The world video is cut into 2 parts (PS1 and PS2). Why does this happen? Can I get back the whole recording for my data? Thank you
Hi Hanna! It sounds like the USB connection to the smartphone could have been inadvertently disconnected during recording. This would cause the multi-part recordings you see. See these messages for reference:
https://discord.com/channels/285728493612957698/633564003846717444/814330706045566997 https://discord.com/channels/285728493612957698/633564003846717444/814429047554703360
Once you have run the Gaze Overlay enrichment, you can right-click on the enrichment and hit download. When setting up the enrichment, you can choose which events to run the enrichment between. Have you looked at the 'Analysis in Pupil Cloud' section of the Getting Started guide? You can find it here: https://docs.pupil-labs.com/invisible/getting-started/analyse-recordings-in-pupil-cloud/. It gives step-by-step instructions on how to run enrichments. It actually uses the Face Mapper as an example, but you can follow the guide just using the Gaze Overlay.
Yes, I've followed those instructions but there's no download option when I right click. Hopefully you can see from the attached screenshot
I see. You'll also need to click the arrow symbol to run the enrichment. Once that's processed, the download option should be available π
Oh I see, the guide made it sound like it was ready once you reached that page and I thought it was a play button for that video clip. Thank you! And just back to my original question, can I edit these videos together in Pupil Cloud or Pupil Player, or do I need to do that on some separate video editing software?
That's actually an additional step. For the face mapper, you don't need to press the arrow. We'll get the docs updated to clarify that!
You'd have to do that using video editing software.
Thanks!
Hi! I am wondering what happens if you use the Pupil Invisible glasses in an area without internet? Will the recordings upload to the cloud once you do get to an area with internet connection (if cloud enable is selected)? Or do you have to record data in an area with connection for this to take place?
Hi @user-4efa6b! If no internet is immediately available, recordings will upload later, once it becomes available. Recordings can be made fully offline. The only hard requirement for internet is when you first log in to the app during the initial setup.
Thank you!