Thank you for your assistance. Could you please tell me the following?
Q1. The time-series data labelled with saccade IDs overlaps with the time-series data labelled with blink IDs (in some places a saccade ID spans the entirety of a blink). Which of the following is the correct interpretation?
(1) Saccades also occur during blinks, so it is correct that they overlap. (2) Blink periods are mistakenly recognized as saccades, so the true saccade locations are the saccade-ID samples that remain after excluding the blink-ID samples.
Hi @user-5c56d0! The blink detector uses a machine-learning-based pipeline to identify blink events. This involves analyzing optical flow patterns in eye camera videos and applying a multi-stage post-processing approach to minimize false positives and maximize recall. However, the system may still produce false positives in edge cases like squinting, partial blinks, or combined rapid head-eye movements.
Given the potential for these false positives, we don't automatically remove data from other streams. Instead, we recommend manually filtering blink events based on your specific needs. A reliable approach is to cross-reference blink detections with eye camera video footage (you can do this in Pupil Player for example). Once you're confident in the accuracy of the detected blink events, you can simply remove associated data from other streams using their IDs and timestamps.
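For example, if you decide that blink periods should be excluded, a minimal sketch along these lines could work with the Timeseries CSV exports (I'm assuming the usual `start timestamp [ns]` / `end timestamp [ns]` columns here; adjust the file and column names to match your export):

```python
import pandas as pd

# A minimal sketch: drop saccades whose interval overlaps any detected blink.
# Assumption: blinks.csv and saccades.csv come from the Timeseries export and
# both contain "start timestamp [ns]" and "end timestamp [ns]" columns.
blinks = pd.read_csv("blinks.csv")
saccades = pd.read_csv("saccades.csv")

def overlaps_any_blink(row):
    """True if this saccade's time interval intersects any blink interval."""
    overlap = (blinks["start timestamp [ns]"] <= row["end timestamp [ns]"]) & (
        blinks["end timestamp [ns]"] >= row["start timestamp [ns]"]
    )
    return bool(overlap.any())

saccades["overlaps_blink"] = saccades.apply(overlaps_any_blink, axis=1)
clean_saccades = saccades[~saccades["overlaps_blink"]]
print(f"Kept {len(clean_saccades)} of {len(saccades)} saccades")
```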
Lastly, I wanted to mention that the blink detector is open-source, allowing you to modify it as needed for your use case.
Now, why do you see a saccade during a blink?
Gaze estimation during blinking is inherently complex. The neural network used for gaze estimation continuously tries to provide an output, even when the eyelid is closed. As a result, the gaze estimation often shifts downward rapidly, which can be mistakenly interpreted as a saccade.
Determining whether a true downward saccade occurs during a blink is challenging. However, if you are interested, there are studies, such as this EOG investigation, that explore how blinking affects saccadic movements and note overshoots in saccade amplitude in such scenarios.
Let me know if you have any other questions.
Hi! I would like to ask whether the circle highlighting the gaze position in Pupil Cloud has a specific dimension. That is, does it represent the projection of a visual angle onto a surface at a specific distance? If not, what does that circle represent exactly? Thank you for your help.
Hi @user-0f28d7 , the red circle is meant for visualization purposes. The center of the circle is the current gaze point. The size of the red circle itself has no meaning beyond that.
The main idea is that the circle's hollowed middle allows you to more easily see what the person was actually looking at.
For example, if you use the Video Renderer Visualization, then you can change the circle's size, color, and thickness as you wish. The data it represents will remain the same.
Thank you.
Hey there. Is it possible to use the same cloud analytics software from Neon for Invisible?
Hi @user-7fc432 , yes, Pupil Cloud is fully compatible with Neon and Pupil Invisible recordings. You can even mix recordings from the two devices in the same Workspaces, Projects, and Enrichments.
Thank you
and is it possible to get data of the recordings outside of your cloud service?
like is this open data?
Hi @user-7fc432 , yes, all data provided by our eye trackers are open, documented, and also stored on the hard drive of the Companion device. The open binary data can be exported via USB cable.
You can find a description of the data formats here:
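As a rough sketch of what working with the raw files can look like, something like this might help you get started. I'm assuming here that the gaze files use the binary layout described in the recording-format documentation (little-endian float32 x/y pixel pairs plus uint64 nanosecond timestamps); please double-check against that description:

```python
import numpy as np

# A minimal sketch for reading raw gaze data copied off the Companion device via USB.
# Assumption: gaze samples are stored as little-endian float32 (x, y) scene-camera
# pixel pairs in "gaze ps1.raw", with matching UTC timestamps as uint64 nanoseconds
# in "gaze ps1.time". Verify against the recording-format documentation.
gaze_xy = np.fromfile("gaze ps1.raw", dtype="<f4").reshape(-1, 2)
gaze_ts_ns = np.fromfile("gaze ps1.time", dtype="<u8")

n = min(len(gaze_xy), len(gaze_ts_ns))
print(f"{n} gaze samples; first sample at {gaze_ts_ns[0]} ns UTC -> {gaze_xy[0]}")
```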
Hello everyone,
I have the problem that when creating the enrichments for the AOIs, the QR codes from the recording are not recognized by the system. This involves a night drive, and my assumption is that this is caused by the overexposure of the QR codes.
My question: what can I do about this?
Hi @user-8fc9d4 , if you'd like, you can invite us to your workspace [email removed] and we can provide some feedback, but generally, if the AprilTag markers are not well illuminated and clearly visible in the scene camera video feed, then this is not optimal for the Marker Mapper Enrichment.
You could potentially post-process the videos to make the markers more visible and try Pupil Player's Surface Tracker plugin, but these edited recordings will not be usable on Pupil Cloud.
For future recordings, you can manually adjust the exposure of Pupil Invisible's scene camera to be more fitting for your use case.
The QR codes are from the simulation software.
Hi. I am planning to use eye tracking glasses for my study. I want to know whether people who wear glasses can use eye tracking glasses or not. Can you please help me?
Hi @user-a9f703 , may I assume that you are planning to use Neon?
Can anyone from the Pupil Invisible side suggest why the Marker Mapper enrichment's pointer position (gaze position) is often not correct, even though the number of activated markers is more than 3, as shown in the attached screenshot?
@user-a9f703 Thanks for the update.
We generally do not recommend wearing Pupil Invisible over or under third-party glasses, as this can block the cameras and also put undesirable pressure on the frames.
Having said that, Pupil Invisible ships with swappable tinted and clear lens variants (no optical power). We also sell a prescription lens kit that comes with -3 to +3 diopters in 0.5 diopter steps. If you need additional diopters, you can take the Pupil Invisible glasses to an optometrist to have custom lenses fitted, just like a normal pair of glasses. The frame can accommodate -8/+8 diopter lenses.
Otherwise, if you did ever have interest in upgrading, you could consider the "I can see clearly now" frame for Neon, which comes with quick-swap magnetic lenses. You can arrange a Demo Call here, if you'd like.
Hi @user-787054! It can be challenging to pinpoint the exact reason why they are not being recognized, as multiple factors affect AprilTag detection. However, given your setup/image, these might be the reasons:
You can try to address these factors and see if detection improves.
As mentioned on the website https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/marker-mapper/, the white border plays a role in marker detection. But in our case the markers are being detected and the number of detected markers is greater than or equal to 3, yet the gaze position still does not come out correct in the reference image. One thing I'd like to ask: after defining the surface and before running the enrichment, can we manually activate markers that appear red? That is what I am doing in my case. Does it affect the correct recognition of the gaze position in the reference image? For example, when I define the surface only three markers are activated; after defining the surface, we check the video by pausing it and manually activate the red markers by clicking on them.
I see! May I ask if you used any of the other markers to define the surface?
It might be helpful to create a ticket in the troubleshooting channel so we can follow up more effectively. If possible, I'd also like to request an invitation to your workspace so we can investigate further.
Hi everyone. We are planning an experiment to explore people's cooking behaviour, and I'm currently learning Pupil Cloud for this purpose. I have watched online videos and read the materials available on your website, so I have a general idea of the process. We'll likely use the enrichment feature and mark events there. However, I'm struggling to fully understand how to build my own enrichment and analyse the data. I found the Master Class video; however, Cloud's features have changed since then. Could you point me toward detailed tutorials or resources? Thank you
Hi @user-269e8c , have you already had your Onboarding session?
@user-f43a29, I probably haven't read it, could you share the link to me? So far I went through this: https://docs.pupil-labs.com/invisible/pupil-cloud/
@user-f43a29 sorry, I re-read your sentence. No, I haven't had any Onboarding session (if you are talking about the 30mins).
No problem!
If you send an email to info@pupil-labs.com with the original Order ID, then we can organize it for you & walk through these steps together over a video call.
thank you
Hi @user-f43a29 , hi @user-d407c1 . Could we open a ticket for this issue, so that you can try to tweak it? The problem is that we need to work with these recordings (22) and not make new ones. Of course I can invite you, if BHan69 hasn't done this already. Just let me know how to do it.
Hi @user-3c26e4 , yes, you can invite us to the Workspace [email removed]
If all the recordings look like that image, then you could try a video editor's brightness & contrast post-processing tools in combination with Pupil Player's Surface Tracker plugin.
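As a rough sketch of what that post-processing could look like with ffmpeg (the file names and filter values below are only illustrative, so treat this as a starting point rather than a recipe):

```python
import subprocess

# A rough sketch of brightness/contrast post-processing using ffmpeg's "eq" filter.
# Tune the values until the AprilTag markers are clearly visible, then open the
# edited video with Pupil Player and use the Surface Tracker plugin. (Edited
# recordings cannot be re-uploaded to Pupil Cloud.)
subprocess.run(
    [
        "ffmpeg",
        "-i", "scene_video.mp4",                  # hypothetical input file name
        "-vf", "eq=brightness=0.15:contrast=1.4", # illustrative filter values
        "-c:a", "copy",
        "scene_video_brightened.mp4",
    ],
    check=True,
)
```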
But in this case I won't be able to evaluate the results with Pupil Cloud.
If you'd like to see video post-processing tools in Pupil Cloud, you can upvote this Feature Request: https://discord.com/channels/285728493612957698/1300397072712863835
I will invite you now
Or should I send you the 22 IDs?
We do not have access to your data unless you invite us as Collaborators on your Workspace. The IDs will not allow me to look at or assess the videos, so inviting would be the preferred option here.
Which way is better for you?
OK, fine.
I upvoted and I sent you an invitation.
Would you be willing to open a brief ticket in the troubleshooting channel?
hello, not sure which channel I should use... I have a problem downloading a video from Cloud. I would like to download the one with visible gaze paths and fixation bubbles and numbers, so far unsuccessfully. Could you help me? I have already visited docs.pupil-labs.com and haven't found anything.
Hi @user-e66c8c! To get the video with the gaze and fixations overlay, you'll need to run a Video Renderer visualization.
Simply add the recordings of interest to a Project (select them on the main view on Cloud > right click > Add to Project), then go to your project and click on the Visualizations tab and start a new visualization "Video Renderer". You can find more details here: https://docs.pupil-labs.com/neon/pupil-cloud/visualizations/video-renderer/
Thank you Nadia!
Hi @user-f43a29 Rob, 1. How can it be possible to use the Reference Image Mapper with pedestrians but not, for example, with cyclists? The pedestrians are always moving their heads and their bodies are also not static. 2. I have one study involving driving a car at night where the QR codes are not visible at all. Would the Reference Image Mapper work? I was planning to scan the car interior at night and see what will happen.
Hi @user-3c26e4!
With pedestrians, the Reference Image typically captures what they are approaching, such as a building or a crossing. When they turn their heads away from that area, their gaze is reported as "gaze not mapped", since they are no longer looking at the building, crossing, etc. If you need something more like a 360 degree mapping, then I recommend consulting this Alpha Lab guide.

On a bike, if your intention is to know where they look as they approach or cross an intersection, then you can of course use Reference Image Mapper, in the same way as it is used with pedestrians. However, if the goal is to know where they are looking throughout the whole duration of the recording, as they explore a city for example, then there are likely to be few(er) features shared with the Reference Image. The recording in that case will predominantly be dynamic content that does not correspond to features in the Reference Image, which is sub-optimal for Reference Image Mapper. It can certainly be tried, though, perhaps in combination with Enrichment Sections.
Without seeing those recordings, I cannot say 100% for sure, but if you are already planning to do it, then it is of course worth a try. Running Reference Image Mapper on some test recordings is often the best way to find out, since due to the nature of the problem, it is not possible in general to say "definitely will work" or "definitely won't work" for all situations.
Thank you @user-f43a29 ! 1. The pedestrians are crossing the street, so they have to look to the left and right as well. 2. I will give it a try and will let you know whether it works.
You are welcome!
If you need to map gaze when they look left and right, then taking a wider reference image from a bit further back could mitigate some of the issue, depending on occlusions & how far left/right they must turn. If that does not resolve it, then I recommend adapting the approach in this Alpha Lab guide about Mapping Gaze Throughout an Entire Room. You can have Reference Images for "straight ahead", "left", and "right", and three corresponding Reference Image Mapper Enrichments.
Feel free to update us on how it works out!
That's a great idea!
@user-3c26e4 Just to clarify one point that my colleague, @nmt, pointed out:
You could even analyze multiple sections of a bike riding recording this way, with multiple Enrichments and usage of Enrichment Sections.
For example, if you want to know whether they look at a stop sign or traffic light at an intersection.
I updated my earlier response accordingly: https://discord.com/channels/285728493612957698/633564003846717444/1333401484087394377
Hi @user-f43a29, the proposed steps to export fixations work until I try to run the tool in the directory with the Pupil-format files. Then I get this error message:
I have added the script's location to my PATH. Please help me with this issue.
Inconsistent connection to scene cam/calibration
Hi @user-f43a29, should I use multiple Enrichments or different AOIs in one Enrichment in order to know whether they look at a stop sign or traffic light at an intersection? I think I don't understand the idea of multiple Enrichments well. Signs and traffic lights keep getting bigger as a cyclist approaches them. How can I analyze that using multiple Enrichments, and will there be different AOIs in them?
Hi @user-3c26e4 :
Right, I understand the second point and it is something I will definitely try even in my previous recordings with cyclists. Regarding the first point I don't understand how Reference Image Mapper can account for changes in perspective. Please explain it to me. That would be more than great, because the AOIs don't work too well for such changes.
Hi @user-3c26e4 , the 3D model that it builds allows it to deal with perspective changes.
Do I understand correctly that you already tried Reference Image Mapper at an intersection? If you want, you can invite us to the Workspace and we can provide direct feedback [email removed]
So far I just tried the reference image mapper in the simulator, where you already have access.
I had removed myself after we had resolved the issue. If you'd like me to inspect again, you can re-invite the support account.
@user-3c26e4 But we might be able to clarify here. May I briefly ask, are you trying to map gaze onto objects that are moving by in the simulator while the people are sitting still?
If so, we can open a thread to discuss that.
Actually I would like to map gaze onto objects that are moving by in real driving and riding bikes etc. as well as in the driving simulator.
Good morning, I'm currently using the Invisible glasses in remote locations that don't have access to Wi-Fi (football pitches, golf courses). I thought that if the phone connected to the glasses was on a 4G hotspot, the footage would be uploaded; however, this doesn't seem to be the case. Is there any way around this?
Hi @user-a578d9 ! If you connect the companion device to a hotspot, it should function the same as a Wi-Fi connection and allow recordings to upload. Could you double-check if Cloud Uploads are enabled in the settings? And whether it shows as a Wi-Fi connection?
Note that direct uploads through the phone's network (i.e. 4G) are not supported on the Pupil Invisible companion app.
Additionally, from the companion device's browser, could you try accessing this speed test and letting us know the results? This will help us determine if the issue is related to connectivity.
Feel free to open a ticket in the troubleshooting channel to follow up on the conversation, if the recordings are not uploading.
Hi, I have a question concerning the variable frame rate of the video. For my analysis I use pl-rec-export to extract the fixations, blinks, etc. I then merge the gaze and video documents to get a complete overview including the frame numbers, so I can perform a frame-by-frame analysis. I wanted to do this because I also have another (non-eye-tracking) video of the same scenarios, and I want to synchronise the Pupil video with this other video to get a complete picture. However, I have now found out that the frame rate of the Pupil video is variable and is therefore very difficult to synchronise with another video. When I try to make the frame rate constant using ffmpeg, the output of pl-rec-export no longer makes sense. Do you understand my trouble, and do you have any tips or ideas for how I can approach this problem?
Hi. I am using eye tracking glasses (Invisible) for my study and I am wondering how I can do the calibration at the end of my study. My study is in the real world and I do not run a task on a screen. Should I download the Pupil Calibration Marker v0.4 (https://docs.pupil-labs.com/core/software/pupil-capture/#calibration-process) and just show it to the participant? Thank you for your help in advance.
Hi @user-df855f! Since the scene camera frames are timestamped in the world.csv, is there anything that precludes you from using these to sync with other sensors? Could you share some additional information about what kind of external camera you are trying to sync with and what the ultimate aim is?
If the external camera is not timestamped and you only need it for visualisation purposes, you might want to check this plugin for overlaying it with temporal alignment in Pupil Player.
Otherwise, this gist shows you how to read and work with frames using the Timeseries format or pl-rec-export, which exports a similar format.
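For example, rather than re-encoding to a constant frame rate, you could match each external-camera frame to the nearest scene-camera frame by timestamp. A minimal sketch of that idea (assuming world.csv has a `timestamp [ns]` column per scene frame and that you can build a comparable timestamp table for the external camera; the file and column names here are hypothetical):

```python
import pandas as pd

# A minimal sketch of timestamp-based matching instead of forcing a constant frame rate.
# Assumptions: world.csv has one row per scene frame with a "timestamp [ns]" column,
# and external_camera_frames.csv has one row per external frame with UTC nanosecond
# timestamps. Adjust names to your files.
scene = pd.read_csv("world.csv").sort_values("timestamp [ns]").reset_index(drop=True)
scene["scene_frame"] = scene.index

external = pd.read_csv("external_camera_frames.csv").sort_values("timestamp [ns]")

# For each external frame, find the nearest scene-camera frame in time.
matched = pd.merge_asof(
    external,
    scene[["timestamp [ns]", "scene_frame"]],
    on="timestamp [ns]",
    direction="nearest",
    tolerance=50_000_000,  # 50 ms; frames without a match within this window get NaN
)
print(matched.head())
```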
I am doing research into police shooting decision-making. I am using a high-speed camera and I think I can indeed sync it based on the timestamps. The problem with this is that the frame rate here is 120 fps. The high-speed camera has recorded the finger movements and I am trying to figure out whether deviating finger movements are related to different gaze behaviour. So that is the analysis I want to perform.
I have an additional question, I am afraid that the calibration is not 100% correct and that there is a small offset to the right. Is it correct that the post-hoc calibration plugin is not available for recordings of the pupil invisible?
Hi @user-a9f703! Pupil Invisible uses a neural network to estimate gaze, so it does not require calibration and cannot be calibrated like traditional eye trackers.
If you're looking to correct a gaze estimation offset, you can do so for example in Pupil Cloud by following these steps.
Let me know if you need any further assistance!
Thank you very much for your reply. But I saw one article that used it to check the accuracy ("Eye-tracking. Eye movements were recorded using a Pupil Invisible Eye-Tracker (Pupil Labs GmbH, Berlin, Germany). These mobile eye-tracking glasses simultaneously record gaze using two eye cameras with 200 Hz sampling rate, as well as a scene camera with 30 Hz sampling rate. The eye-tracker does not require separate calibration, because it implements an automatic calibration algorithm. However, at the end of the experimental session, to investigate the accuracy of the mobile eye-tracker, the participants were asked to look at a printed calibration marker (v0.4 marker design, Pupil Labs) in a 5-point calibration type disposition from approximately 2 m distance."). I want to know whether I can do something like this manually, because I mentioned that in my registered report and I have to follow the methodology.
Custom Gaze Validation
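If you want to quantify the offset from such a 5-point check yourself, a rough sketch along these lines could work. The column names, annotation format, and focal-length value below are assumptions on my part; replace them with your export's columns and your scene camera's intrinsics:

```python
import numpy as np
import pandas as pd

# A rough sketch of a manual accuracy check, not an official validation routine.
# Assumptions: gaze.csv has "timestamp [ns]", "gaze x [px]", "gaze y [px]" columns,
# and you annotate, per printed marker, the fixation interval and the marker's
# pixel position in the scene video. SCENE_FOCAL_PX is a placeholder -- use the
# focal length (in pixels) from your scene camera intrinsics.
SCENE_FOCAL_PX = 760.0

gaze = pd.read_csv("gaze.csv")

# Hypothetical annotations: (start_ns, end_ns, marker_x_px, marker_y_px) per target.
targets = [
    (1700000000_000000000, 1700000002_000000000, 544.0, 540.0),
]

errors_deg = []
for start_ns, end_ns, mx, my in targets:
    window = gaze[gaze["timestamp [ns]"].between(start_ns, end_ns)]
    dx = window["gaze x [px]"].mean() - mx
    dy = window["gaze y [px]"].mean() - my
    # Pinhole small-angle approximation; reasonable for targets near the image centre.
    errors_deg.append(np.degrees(np.arctan(np.hypot(dx, dy) / SCENE_FOCAL_PX)))

print(f"Mean angular offset: {np.mean(errors_deg):.2f} deg over {len(errors_deg)} targets")
```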
Good afternoon,
I've recently gathered eye movement data with the drone pilot division of my local police force. As I am quite new to using the Pupil Cloud software for data analysis, I would greatly appreciate a brief explanation of how to trim the footage to the specific sections the police need to review. Additionally, I need guidance on downloading and sending that footage with the fixation data attached. Your assistance would be invaluable, as I am working under tight time constraints. Thank you.
Hi @user-a578d9 , have you seen the Video Renderer Visualization and Enrichment Section documentation? You can export segments of the recordings with the gaze + fixation data overlaid.
Or, do you mean that you want to trim the length of the recordings as they are, directly on Pupil Cloud?
Also, if you have not yet had your free 30-minute Onboarding session, just let us know!
Unfortunately, my university purchased the glasses a couple of years ago, so I don't qualify for the free 30-minute onboarding session. I wish to download our recordings with the fixation/gaze data overlay onto my Surface Pro in order to send the footage to certain parties.
I see! The Video Renderer Visualization will allow you to do that.
It will produce a video output, with gaze + fixations overlaid, just like you see in the Pupil Cloud interface. You can download the batch processed set of recordings and view the videos locally on your device or send them to others.
Perfect, thank you.