Hi Pupil Labs team! Quick questions: are there any parameters in the Reference Image Mapper that can be changed to make the detection/mapping to the reference image more sensitive? This could be useful for some of my recordings. Also, is the Reference Image Mapper code open-source? It would be cool to play around with it and fine-tune parameters for my use case. Thanks and keep up the good work!
Hello, can we use a OnePlus 10 with a Pupil Invisible?
Hi @user-94f03a 👋! Pupil Invisible is compatible with OnePlus 6, OnePlus 8, and OnePlus 8T. You can find all the details regarding compatible devices and Android versions for Pupil Invisible in our documentation: https://docs.pupil-labs.com/invisible/glasses-and-companion/companion-device/#companion-device
thanks Nadia! I saw that Neon is compatible with the OnePlus 10 and was wondering if the documentation was up-to-date. Anyway, thanks!
Hi Pupil Team (@user-480f4c and @user-d407c1 ) I have a doubt regarding AOI generation. I referred to the code (https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/#map-fixations-to-aois) but does it consider the following:
1) What if a participant is moving their head while looking at the poster, given that AOI generation is based on position/coordinates?
2) Does it account for the participant moving/walking slightly while viewing the photo/poster, etc.?
If yes, could you explain which part of the code takes care of that? If not, is there a way to incorporate this?
Hi @user-2b79c7 👋! The AOI tutorial works with the data exported after applying the Reference Image Mapper enrichment on Pupil Cloud.
This tool essentially allows you to map gaze data from a 3D environment (e.g., your eye-tracking recording while looking at paintings in a gallery) onto a 2D image (e.g., a photo of the paintings). The mapping applies only when the 2D reference image is visible to the scene camera of your eye-tracking recording.
As you can see in the data exported from the Reference Image Mapper, there is a column "fixation detected in reference image" with True/False values that indicates whether the image was detected in the video. By looking at the fixation coordinates ("fixation x [px]" and "fixation y [px]"), you can get information about where your participant was looking at a given time point (you can find this info in the table of fixations on the AOI tutorial page: https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/#map-fixations-to-aois). Therefore, to answer your questions, you will obtain fixation information even if the participant is slightly moving their head, as long as the reference image is detected in the scene camera.
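For example, a quick sketch with pandas (assuming the default fixations.csv from the Reference Image Mapper export; please double-check the column names in your own export):

```python
import pandas as pd

# Keep only the fixations that were actually mapped onto the reference image
fixations = pd.read_csv("fixations.csv")
mapped = fixations[fixations["fixation detected in reference image"]]
print(mapped[["fixation x [px]", "fixation y [px]"]].head())
```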
I encourage you to have a look at the Reference Image Mapper documentation to see more examples of how this enrichment works: https://docs.pupil-labs.com/enrichments/reference-image-mapper/
I hope this helps, but please let me know if you have more questions!
Thanks @user-480f4c!! Your first line itself clarified the doubt. I have another small query: I suppose that in the case of a dynamic environment such as videos, the existing AOI algorithm will not work, as the location of the AOI itself will change on the screen. However, is it possible to include AOI detection using image processing for small deviations (let's say the AOI's screen position does not move too much in the video)?
Thanks again for the clarification.
Hi! I am searching for a good blink detection algorithm for my experiment. The participants, however, make quite a lot of vertical eye movements during the performed task. Therefore, I am a bit hesitant to use your blink detection. We also used Pupil Player to get the data from the Pupil Invisible, since we are not allowed to upload our data to the cloud. I have looked around for some alternative options, but have not found a solution yet. Do you have any recommendations?
Hi @user-2d66f7 👋! Since you cannot use Cloud, you could consider our command-line tool that allows you to easily export gaze data, blinks, fixations, and saved events as CSV files locally on your computer. You can find it here: https://github.com/pupil-labs/pl-rec-export/tree/main
May I ask why you are hesitant to use the blink detection algorithm? I'm not sure I fully understand how the vertical eye movements in your setup would affect blink detection.
I read that vertical eye movements may be detected as blinks rather than saccades, causing many false positives with the current version of the blink detection algorithm.
Thanks for the clarification @user-2d66f7. A higher number of false positives in blink detection can occur in highly dynamic settings with fast vertical eye movements (e.g., football). Could you elaborate a bit on your setup? What kind of task are your participants performing?
I'm not sure I fully understand your question. Let me explain the workflow of how you can use the AOI tutorial:
1) Apply the Reference Image Mapper enrichment on Pupil Cloud.
2) Run the AOI tutorial. This will first present you with the reference image in a pop-up window, where you can manually define your AOIs. Once the AOIs are defined on the image, you can run the remaining code snippets and calculate metrics per AOI.
Therefore, the metrics for the AOIs are calculated based on the reference image, rather than the video.
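To illustrate the idea, here is a simplified sketch (not the tutorial's exact code; the AOI rectangles and column names are examples that you should adapt to your own reference image and export):

```python
import pandas as pd

# Assign each mapped fixation to a rectangular AOI on the reference image and
# aggregate simple metrics per AOI.
fixations = pd.read_csv("fixations.csv")  # Reference Image Mapper export
fixations = fixations[fixations["fixation detected in reference image"]]

# Hypothetical AOIs as (x_min, y_min, x_max, y_max) in reference-image pixels
aois = {
    "title": (0, 0, 1600, 300),
    "figure": (0, 300, 800, 1200),
}

def aoi_of(row):
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= row["fixation x [px]"] <= x1 and y0 <= row["fixation y [px]"] <= y1:
            return name
    return None

fixations["AOI"] = fixations.apply(aoi_of, axis=1)
print(fixations.groupby("AOI")["duration [ms]"].agg(["count", "sum", "mean"]))
```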
@user-480f4c Ok, I will explain my question again. Consider that I am riding a motorbike on a road and have placed a marker on the motorbike's body, which is within the camera's video frame. However, as I am driving, my vehicle moves slightly laterally, so the position of the road boundaries also changes slightly. Or let's say a participant is watching a video on a screen in which the object of interest is moving slightly. Is there a way to accommodate that in the AOI algorithm based on image processing?
@user-2b79c7 thanks for clarifying. Some points below:
@user-480f4c Thanks for sharing the information. It was indeed helpful.
Hey! I recorded some videos while not on Wi-Fi, but then was able to get on Wi-Fi. They should upload, correct?
Hi @user-91d7b2 ! Yes! If you have internet access and Cloud upload was enabled, they will start uploading when connected.
Is that not the case? If so, would you mind performing the following test (https://speedtest.cloud.pupil-labs.com/) to ensure you have enough upload speed to our servers? (Some company/institutional networks may have limitations on uploads.) Also, please have a look at the Recordings view of the app in case a recording is stuck.
Upload speed seems fine!
Hi @user-91d7b2!
1. Double-check that automatic cloud uploads are enabled in the app settings.
2. If they are, try logging out and back into the app. That usually triggers the upload.
Thank you!
Hi Pupil Team,
I recently followed the tutorial at https://docs.pupil-labs.com/alpha-lab/map-your-gaze-to-a-2d-screen/#running-the-code, and I encountered an issue during the process. After uploading the data and selecting the screen corners, I received an error message in the "dynamic_rim.py" file on this line:
for packet in out_video.encode(out_frame):
Upon inspecting the code, I found that the error occurs when the "out_video.width" parameter (line 418) is an odd number. This leads to a program crash.
To resolve this, I manually adjusted my reference picture dimensions from 1600x1204 to 1599x1204, which worked. However, I'm wondering if there's a better and automatic solution to this problem.
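In case it helps, here's a rough sketch of what I imagine an automatic fix could look like (assuming PyAV, as used in dynamic_rim.py; the helper function is just illustrative):

```python
import av

# libx264 requires even frame dimensions, so the computed output width/height
# could be rounded down to the nearest even number before the stream is opened,
# and frames resized to match before encoding.
def make_even_stream(container: av.container.OutputContainer, width: int, height: int, fps: int = 30):
    even_w = width - (width % 2)    # e.g. an odd combined width 3199 -> 3198
    even_h = height - (height % 2)
    stream = container.add_stream("libx264", rate=fps)
    stream.width = even_w
    stream.height = even_h
    stream.pix_fmt = "yuv420p"
    return stream

# ...then, inside the encoding loop, each frame would be resized to the stream size:
# out_frame = out_frame.reformat(width=out_video.width, height=out_video.height)
# for packet in out_video.encode(out_frame): ...
```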
Any guidance or suggestions you can provide would be greatly appreciated.
Thank you, Maayan Bahar
Hi Nadia, I have a few different tasks, such as cycling in an urban environment and walking in a shopping street. But I think the task with the most vertical eye movements is one where my participants have to walk in a hallway and avoid some obstacles on the ground while also detecting targets placed on the walls of the corridor. Here, the participants have to make a lot of eye movements, including vertical ones, to see both the objects on the ground and the targets placed at roughly shoulder level.
Hi @user-2d66f7, thanks for sharing more info about your research. We'd recommend giving our blink detector a go, and we'd be happy to get some feedback on its performance in these tasks.
Hello! I'm running the Reference Image Mapper in Pupil Cloud on a one-minute video. It's been almost three hours and it's still running. Is this normal?
Hi @user-bda2e6 👋! Regarding the issue with your enrichment on Pupil Cloud, how long is the recording (or the part of it) to which you tried to apply the Reference Image Mapper? Could you also please try to hard-refresh the browser page where Pupil Cloud is open ("Ctrl" + "F5" on Windows or "Command" + "Shift" + "R" on Mac)?
As for your second question, could you please clarify which data you are downloading from Pupil Cloud? The gaze.csv file should be available after downloading the Timeseries Data.
Also, I have a Pupil Invisible and I'm trying to export my recordings. I don't have the gaze.csv in my exported files. Is this something I have to generate manually?
Thank you!
Hi, I'm not sure if this is the right place for my question... We are working with a Pupil Invisible. However, the Invisible seems to create multiple small videos covering different periods of time. Different segments appear to be indicated by an increasing index. I couldn't play them back in Pupil Player.
Hi @user-e40297! It is normal for the Invisible Companion to split the video into different parts; one of the reasons for this is file size limits on Android. Given that you plan to use Pupil Player, I assume you do not want to, or are not allowed to, use Pupil Cloud, right? Any specific reason?
Nevertheless, both Pupil Player and Cloud can handle multi-part videos. So, do you see any error while loading the folder to Pupil Player?
Also, and just to be 100% on the same page, are you exporting the recordings from the phone as explained here https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/transfer-recordings-via-usb.html ?
I talked to my colleague. She followed the same steps. The only difference is that the file is stored in a different map.
Our institute is still unsure about using the cloud due to privacy concerns... I'll look at your suggestion and ask my colleague.
Pupil Cloud is GDPR compliant and follows standard security measures; you can read more about it here: https://docs.google.com/document/d/18yaGOFfIbCeIj-3_GSin3GoXhYwwgORu9_7Z-grZ-2U/export?format=pdf
What do you mean by map? Is the structure of the folder different? Might it be that the folder copied onto the computer is not the exported one (Documents/Pupil Invisible Export/foldername) but rather the one at Documents/Pupil Invisible/foldername?
The file is not located in /Pupil Invisible Export but in 'Documents'. It has been like this since we started using the Invisible.
Maybe unnecessary to mention, but the file is already split up into multiple files on the Companion.
The original recordings made by the Invisible Companion App are located in Internal Storage > Documents > Pupil Invisible, but those are in a RAW/binary format. To make them compatible with Pupil Player, you have to export them:
- For single recordings, the export button is found by clicking on the 3 vertical dots to the right of the cloud symbol.
- For multiple recordings, click the download symbol at the bottom of the screen.
As outlined in the link I shared previously (https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/transfer-recordings-via-usb.html), this will generate a new folder in Internal Storage > Documents > Pupil Invisible Export, which is the one you have to copy to the computer.
Please ensure that you follow these steps and let us know if you still cannot load the recording into Pupil Player. If it still doesn't load, please let us know what error message Player shows, or share the Pupil Player log file.
Thank you. The eye tracker is currently somewhere else. I'll get back to you.
Hi @user-2fc592. Responding to your message https://discord.com/channels/285728493612957698/285728493612957698/1161142714696478740 here.
Unfortunately, we don't have a method to correct gaze offsets in Pupil Cloud post-hoc. The current workflow requires these to be set in the Companion App before recording.
May I ask which analysis tools you are using in Cloud or iMotions software? We do have a method to correct gaze offsets for Invisible recordings post-hoc, but this requires our desktop software. Once done, there is no way to reimport them to Cloud or iMotions, which might somewhat limit your analysis.
Hello, is the script ready?
Hi @user-5fc3c6 yes! It is here https://github.com/pupil-labs/pl-rec-export
Can I ask how long it's supposed to take for the reference mapper to run on a 20 second video?
My Reference Image Mapper keeps failing and giving me errors. I thought I followed the instructions. Is it okay if I talk to someone directly to ask some questions? Thank you!
Reference Image Mapper fail
Hello, I have been trying to make adjustments to the raw data files and then recalculate gaze_on_face; however, I could not align the face_detections.csv timestamps to the gaze.csv timestamps. Is there a script for that, or how can we correctly align the two?
Thanks @user-c03258. I am having a similar issue; maybe @nmt or @user-d407c1 have ideas? Thanks
Hi @user-c03258 and @user-8a677a! The face detections csv file contains the timestamps of the scene-camera frames in which a face was located. The scene camera runs at a frame rate of 30 FPS, while gaze is sampled at 200 Hz, meaning there's no exact match. What you would need to do is, for each gaze row, find the closest scene frame using the timestamp column.
One way to achieve this is to use the pandas function merge_asof, like it is done in the densepose library (https://github.com/pupil-labs/densepose-module/blob/main/src/pupil_labs/dense_pose/main.py#L157) or in our tutorial on how to sync with external sensors: https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/syncing-sensors/#match-video-and-heart-rate
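For example, a minimal sketch of that matching step (assuming both exports use the standard "timestamp [ns]" column name; adjust to your actual column names):

```python
import pandas as pd

# Match each 200 Hz gaze sample to the nearest 30 FPS scene frame with a face
gaze = pd.read_csv("gaze.csv").sort_values("timestamp [ns]")
faces = pd.read_csv("face_detections.csv").sort_values("timestamp [ns]")

matched = pd.merge_asof(
    gaze,
    faces,
    on="timestamp [ns]",       # both tables must be sorted on this key
    direction="nearest",       # closest scene frame, earlier or later
    suffixes=("_gaze", "_face"),
)
print(matched.head())
```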
that's very helpful! thank you!!
@user-d407c1 thank you!
Thank you for your reply, Neil. I have tried to use the Enrichment tools like the reference mapper and others, but none have proven successful.
Unsuccessful because the gaze is offset, or for another reason, like the enrichment itself failed to compute?
@nmt I used the real-time API to send events to my Invisible during a recording: https://docs.pupil-labs.com/invisible/real-time-api/track-your-experiment-progress-using-events/. But when I loaded the recording in Pupil Player, there is only an option to export annotations, not the events. I think Pupil Cloud might help me extract the event information, but I can't upload the data to the cloud. Is there any way I can extract the event information?
Hi there - I'm trying to export a video from my workspace so that I can edit it side-by-side with footage from a static camera showing what the wearer is doing from a distance. When I do that, the scene information (gaze, fixations, etc.) isn't overlaid on the video. Is there a way for me to export so that I keep that information 'on' the video? Thanks in advance!
Hi @user-6c7210! You will need to generate a Video Renderer visualisation. You can find this option when you create a project, under the Analysis tab > "+ New Visualisation".
Thank you so much! I've found it and it all makes perfect sense.
Hi, some beginner questions... I'm trying to export video from the Invisible glasses. Is there no way to do it directly from the phone (with the gaze circle)? I've found Pupil Player, but my recordings are corrupted:
- Only 1 of 3 captured both eyes.
- The one with both eyes is 12 minutes long, but the world video is not complete (approx. the first 4:30 is OK). It plays in the Invisible Companion (the first part) and it plays on the cloud, but Pupil Player crashes.
What should I do? I need this video! Thank you, Jan
Hi @user-80e968 👋! I'm sorry to hear you are having issues with your Invisible recordings. Please reach out to info@pupil-labs.com and a member of the team will help you diagnose the issue as soon as possible.
Hello, I was wondering if there is an example of how to use the Invisible with Unity for real-time input (mapping the gaze input to screen coordinates). I would be grateful for any help.
Hey, does anyone know how to check if a surface is observed in Pupil Capture via Python? Any help would be greatly appreciated.
Hi, @user-5576d2 - are you interested in this post-hoc (from a recording)? If so, you just need to use the Surface Tracker plugin, which outputs data files you'll be interested in (see: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker)
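If you need this in real time instead, here is a rough sketch using Pupil Capture's Network API (it assumes Pupil Capture is running locally with the default Pupil Remote port 50020, and that the Surface Tracker plugin is enabled with at least one surface defined):

```python
import zmq
import msgpack

# Connect to Pupil Remote and ask for the subscription port of the IPC backbone
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to all surface events
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("surfaces.")

while True:
    topic, payload = subscriber.recv_multipart()
    surface = msgpack.loads(payload)
    # Receiving a "surfaces.<name>" message means that surface is currently
    # detected in the scene; "on_surf" tells you whether gaze falls within it.
    for gaze in surface.get("gaze_on_surfaces", []):
        print(topic.decode(), "gaze on surface:", gaze["on_surf"])
```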
Hello @user-480f4c! I have a question regarding the synchronization of the gaze cameras both with each other and with an outside event, namely the clapping of a clapboard. We managed to add an "Event" in the Project Editor in Pupil Cloud, which we can find in the events.csv file. However, we had trouble moving frame-by-frame in the video player to precisely locate the clap. The left/right arrows move by 5 seconds. We were advised to use the , and . keys to scroll left and right frame-by-frame, which is not working. Another question concerns whether the speed with which the enrichments are processed depends on your server or on the server/network on which our device is. Many thanks in advance!
Hi @user-7a517c 👋! Regarding your first question, please note that you can find the full list of shortcuts for navigating a recording frame-by-frame in the help menu of Pupil Cloud. You can find it in the top-right of the UI (the question mark icon) > Keyboard Shortcuts. There you can see that you can jump in steps of 0.03 seconds using Shift + Left/Right Arrow Key. This allows you to move roughly frame by frame through the video, as the scene camera's framerate is 30 Hz.
As for your second question, the duration of your recordings and the current workload on our servers are two major variables. Longer recordings or high server usage can extend processing times. Please also have a look at this relevant message for more info on that: https://discord.com/channels/285728493612957698/633564003846717444/1154018234299850813
I hope this helps, but please do let us know if you experience any issues with the completion of your enrichments!
Hello, just getting started here with Pupil Invisible. 1. Is there a way of exporting a simple video file that displays the World Camera view along with Gaze and Fixation? We were getting ready to try screen recording the playback window of Pupil Cloud as a workaround, but it would be great if we had a simpler way to export this.
Hi @user-334d9a 👋! Regarding your questions:
1) Yes, you can create this video with the gaze and fixations overlaid on the world camera video. This feature is called the Video Renderer. To use it, create a project with the recordings of interest, click on "Analysis" at the bottom left, then on "+ New visualisation" at the top, and select "Video Renderer". There you will be able to configure it and download the result.
2) Pupil Cloud doesn't allow selective display of fixations. However, you can disable the fixations overlay during playback (on the panel below the video playback, click on the eye symbol next to fixations - this allows you to enable/disable the fixation overlay).
Thanks!
In Pupil Invisible, is there a way to adjust the millisecond threshold for detecting a fixation? I'm trying to adjust the system so that fewer fixations appear in the Fixation Overlay, to keep the screen less cluttered while we examine moving text being read by the subject.
Hi @user-334d9a! Apologies for the delayed reply. Adjusting the parameters of the fixation detector is not possible. However, to create a less cluttered visualization of your video with only some fixations, you could download the Timeseries Data + Scene Video from Pupil Cloud, select a specific set of fixations from the fixations.csv file (e.g., all fixations longer than 200 ms), and re-render the scene video overlaying only this specific set of fixations.
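The selection step itself could be as simple as this sketch (assuming the standard fixations.csv columns from the Timeseries export; the output file name is just an example):

```python
import pandas as pd

# Keep only fixations of at least 200 ms and save them for later re-rendering
fixations = pd.read_csv("fixations.csv")
long_fixations = fixations[fixations["duration [ms]"] >= 200]
long_fixations.to_csv("fixations_filtered.csv", index=False)
```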
Hi Pupil Team, we have a pretty urgent issue here, as we are conducting an experiment today. The Pupil Invisible doesn't connect to the Companion app (when it is plugged in, the scene and eye cameras do not register and stay greyed out). Everything was working smoothly yesterday, and nothing notable has changed about the setup since then.
We've tried the following troubleshooting options already:
- restarting the companion device
- using alternative USB-C cables
- testing the black/other cables on other devices (all work fine)
- testing the cables on the companion device (all work to charge the device)
- unplugging/replugging the cables between the glasses/device
- removing/reattaching the scene camera from the glasses
The specs are as follows:
- unit = Pupil Invisible
- companion = OnePlus 8T
- software = Android 11
- build number = OxygenOS 11.0.6.9.KB05BA
- app = 1.4.30-prod
Any help/suggestions would be highly appreciated!
Hi @user-20a5eb 👋! Sorry to hear you're experiencing issues with your Pupil Invisible glasses. Let's test whether there is a hardware issue with the cameras. Please install the "USB Camera" app (https://play.google.com/store/apps/details?id=com.shenyaocn.android.usbcamera) and see whether the cameras provide stable streams in it, or whether the errors persist.
Debugging - cameras not working
Hi, I am currently using the Invisible device for research purposes at university, but the Neon device would suit my research better. Is it possible to swap? This would help my progress massively.
Hi @user-781874 👋. Please reach out to sales@pupil-labs.com in this regard.
I have a problem with downloading one recording.
Hi @user-23899a. If the recording is not present on the Companion Phone, that indicates it was manually deleted. But that wouldn't have affected the recording in Cloud. Could you please DM me the recording ID and I'll ask the Cloud team to investigate
also can't find this file on the mobile
Thank you!
Nadia, could you please give me a detailed description of how to do that?
What I described in the previous message is a very high-level overview of how you could get a less cluttered visualization of fixations overlaid on your scene video. However, we do not have step-by-step instructions for doing that.
When I go to download raw data, I do see Scene Video on the list, but I don't see Timeseries data or a fixations.csv file. Am I maybe looking in the wrong place?
You can download the raw data by right-clicking on the recording in the main view on Cloud and then selecting Download > Timeseries Data + Scene Video. In the folder, you should be able to find a fixations.csv file; if you don't, please let us know.
Thank you. That way of downloading worked. If I wanted any fixations below 200 milliseconds not to be displayed, would I basically delete, in Excel, the entire row for each fixation that I didn't want to display?
Hi @user-334d9a! Unfortunately, the process is not as simple as deleting fixations from the Excel file. To provide some context, the Excel file is generated as an export and is not used for any subsequent visualisations. If you wanted to modify the visualisation in the way you described, you'd need to parse the modified fixation data and render out a custom video with a gaze + fixation overlay. It is technically possible, but we don't have anything set up to do it.
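To give a rough idea of what that could look like, here is a sketch (not an official tool) using OpenCV and pandas. It assumes the Timeseries Data + Scene Video download, with world_timestamps.csv providing one scene-camera timestamp per frame, and that you have already saved your filtered fixations; the file names fixations_filtered.csv and overlay.mp4, and the scene video file name, are placeholders, and the column names should be checked against your own export:

```python
import cv2
import pandas as pd

fixations = pd.read_csv("fixations_filtered.csv")
frame_ts = pd.read_csv("world_timestamps.csv")["timestamp [ns]"].to_numpy()

cap = cv2.VideoCapture("scene_video.mp4")  # adjust to your scene video file name
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("overlay.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

for i in range(len(frame_ts)):
    ok, frame = cap.read()
    if not ok:
        break
    ts = frame_ts[i]
    # Draw every selected fixation that is active at this frame's timestamp
    active = fixations[
        (fixations["start timestamp [ns]"] <= ts)
        & (fixations["end timestamp [ns]"] >= ts)
    ]
    for _, fx in active.iterrows():
        center = (int(fx["fixation x [px]"]), int(fx["fixation y [px]"]))
        cv2.circle(frame, center, 25, (0, 0, 255), 3)
    writer.write(frame)

cap.release()
writer.release()
```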
Dear Pupil Labs Team, I finished the data recording in a field study in which we investigated eye-tracking parameters in the urban environment. Two questions came up in this process:
(1) I have a recording which was uploaded to the cloud. This video contains sensitive information, so I wanted to cut off the last 10 minutes of the recording. Is it possible to cut the video in the cloud? I don't want to lose the data by deleting the entire video...
(2) Do you have any background information regarding the sensitivity and accuracy of the Pupil Labs Invisible? This would be really helpful to argue for the removal of outliers in the recordings. In other words, how accurate is the device in detecting a fixation or an eye blink? Was this ever tested?
Best regards Anton
Hi @user-08c118 !
If you only want to share the recording outside Pupil Cloud, you can use events in the Video Renderer to crop sections. If you would like to completely remove that section, unfortunately there is currently no way to do this in Cloud. I have suggested it here: https://feedback.pupil-labs.com/pupil-cloud/p/remove-scene-camera-in-a-portion-of-a-recording. Feel free to upvote it!
In the meantime, you can download it and work with it in Pupil Player which allows you to trim recordings.
Regarding your second question, here is a white paper describing the accuracy and precision of Pupil Invisible (https://arxiv.org/pdf/2009.00508.pdf), a link to our fixation detector analysis (https://docs.google.com/document/d/1dTL1VS83F-W1AZfbG-EogYwq2PFk463HqwGgshK3yJE/export?format=pdf), and one to the blink detector analysis (https://docs.google.com/document/d/1JLBhC7fmBr6BR59IT3cWgYyqiaM8HLpFxv5KImrN-qE/export?format=pdf).
Thank you so much! The papers are super helpful. I upvoted your suggestion.
Hi, I would like to know if any of you has the documentation for Pupil Invisible; I need to understand the types of movement that can be registered.
Hi @user-ffd278 👋! You can find the full documentation for Pupil Invisible here: https://docs.pupil-labs.com/invisible/
Hi, I am having some problems with my Invisible gaze tracker. The red circle was no longer showing the gaze while wearing the glasses and was stuck in the left corner when I attempted to reset the calibration. I do not even see a red circle (the gaze) in the live preview. I wonder what is going on? Any ideas? I had to cancel my experiment, and I also have some sessions planned for tomorrow.
Hi @user-1423fd 👋! I'm sorry to hear you're experiencing issues with your Pupil Invisible. Could you please send us an email to [email removed]? A member of the team will help you diagnose the issue as soon as possible.
sent! thanks @user-480f4c
I'm following up right now with instructions for debugging/diagnosing.