I am using the Neon player. When comparing the fixations export with the fixations on surface export, the timestamps differ slightly in the last 3 places. The duration of the fixations is the same, indicating that the IDs are correctly assigned. What is interesting is that all of the missing fixations are in one block at the end. This means that the last fixations are always missing, e.g. 3000 fixations are found in the fixations export and fixations 1-2800 are found in the fixations on surface export. There are no cases in which a fixation is missing in between. This seems really odd to me. Additionally, in most cases where fixations are missing, about 10% of fixations are lost, but in some cases less than 30% of the total fixations are listed in the fixations on surface export.
Hi, @user-ab3403 - would you be able to share a recording with us? If it's on Pupil Cloud, you can invite [email removed]. If it's not on Pupil Cloud, you can upload it to a file sharing service of your choice and share it with data@pupil-labs.com
Hi, @user-ab3403 - thanks for sending the recording. I manually examined the first 10 missing fixation IDs (49, 82, 155, 158, 163, 165, 167, 168, 171, 175) and found that all of these occur during segments of the recording where the surface is not detected (see attached image). This is typically due to motion blur. That's always going to occur when the head is moving, although larger markers are a little more robust in this regard.
Looking at your data, I think this case applies to 102 of your fixations. However, starting from fixation ID #1580 to the end (1,901 fixations), a different problem occurs where the fixation event is reported without a position. This will require more investigation.
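In case it helps to check this on your end, here is a minimal sketch for listing which fixation IDs are missing from the surface export. It assumes the standard export column name "fixation id"; the file names are placeholders, so adjust them to your download:
import pandas as pd

# Fixations export and fixations-on-surface export from the same recording
# (file names are placeholders; column name "fixation id" assumed from the standard export)
fixations = pd.read_csv("fixations.csv")
on_surface = pd.read_csv("fixations_on_surface.csv")

missing_ids = sorted(set(fixations["fixation id"]) - set(on_surface["fixation id"]))
print(f"{len(missing_ids)} of {len(fixations)} fixations are not in the surface export")
print("First missing IDs:", missing_ids[:10])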
Dear Pupil team, we had this problem for the second time now. What could be the problem?
Hi @user-9a8fd8! Could you kindly create a ticket in the troubleshooting channel so we can follow up with some troubleshooting steps? When doing so, please include the serial number of your device and, if the recording is uploaded to Cloud, the recording ID.
I've seen that the Neon Companion app is also supported by the Samsung Galaxy S25. If I had a Samsung Galaxy S24, could it work (worse, but still work) as well? Or is that a question that quickly devolves into every dev asking whether their mobile phone works?
Hi @user-1391e7 , the Neon Companion app is not supported on the S24. Only the devices listed in this part of the Documentation are supported.
thank you!
Frame IDs are correlated to frame names; these are written on the frame, so you can't change them.
Hi everyone, I have observed an issue with the world camera timestamps and was hoping you could help me understand what is happening. My understanding is that the file world_timestamps.csv contains an entry/row with a timestamp [ns] corresponding to each world camera video frame (Neon Scene Camera v1 ps1.mp4). However, when comparing the length of world_timestamps.csv with the length of the "Neon Scene Camera v1 ps1.time" timestamps, world_timestamps.csv always has more entries/rows. I discovered this when exporting the video frames from the mp4 file (using the method recommended in the tutorial 07_frame_extraction.ipynb). Interestingly, the number of extracted video frames matches the length of the "Neon Scene Camera v1 ps1.time" file, but there are always fewer frames/timestamps than entries/rows in the corresponding world_timestamps.csv files. The offset is different for each recording I checked (see screenshot below for more examples). Any idea what is going on? I was planning to use the world_timestamps.csv timestamps to match the gaze entries corresponding to each world camera frame, but since world_timestamps.csv does not match the world camera video frames, I am unsure how to proceed.
Hi @user-33134f
I think you might be mixing a few things together from different resources.
First, the 07_frame_extraction tutorial you're referencing is for Pupil Core data, not Neon.
Regarding the mismatch between Native and Cloud data: this has come up a few times before; you can check out past discussions here:
- https://discord.com/channels/285728493612957698/1047111711230009405/1355104213067370566
- https://discord.com/channels/285728493612957698/633564003846717444/1227150589352218687
- https://discord.com/channels/285728493612957698/1047111711230009405/1364251433612087416
In short: sensor data (like gaze or IMU) can be available before the scene video starts. Instead of discarding that early data, Cloud fills the video timeline with gray frames; you can see those in the Cloud player.
The world_timestamps.csv file is usually generated in Cloud and includes timestamps for all frames, including the gray fillers.
On the other hand, the native format doesn't contain those gray frames, which is why the timestamps don't align.
Hope that helps clear things up! If you share whether you prefer to work with the Native data or the time series, I can follow up with some frame extraction examples.
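For reference, here is a minimal sketch of how the two could be lined up, assuming the extra rows in world_timestamps.csv are the leading gray-filler frames and that the native .time file stores one uint64 timestamp [ns] per frame (adjust the paths and dtype if your recording differs):
import numpy as np
import pandas as pd

# Cloud Timeseries export: one row per frame of the Cloud video, including gray fillers
cloud_ts = pd.read_csv("world_timestamps.csv")

# Native recording: one timestamp per frame of the native scene mp4 (dtype assumed)
native_ts = np.fromfile("Neon Scene Camera v1 ps1.time", dtype="<u8")

n_gray = len(cloud_ts) - len(native_ts)
print(f"{n_gray} leading gray-filler frames in the Cloud timeline")

# Rows of world_timestamps.csv that correspond to frames in the native mp4
aligned = cloud_ts.iloc[n_gray:].reset_index(drop=True)
assert len(aligned) == len(native_ts)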
Ah ok, thanks! I was using the native mp4 file, since some of the data is a bit older, so the time series scene video still had the gaze cursor enhancement. But this makes a lot of sense and I will just use the time series scene video instead. Sorry about that, I totally missed this explanation. Just to double check: if I wanted to synchronize the pupil camera recording (Neon Sensor Module mp4) from the native data as well, should each frame of the sensor correspond to the gaze.csv entries/rows? Or do I need to use a different video file here as well?
Hi everyone, I am very new to all of the Neon features and trying to wrap my head around them. I am utilising it for my research project on eye tracking of participants whilst completing a simulated surgical procedure.
I am wondering if anyone might be able to give me a general idea of how best to begin setting it up. I mainly need to track fixations, saccades, blinks, focus on key areas, pupil dilation, and gaze patterns. I had a read through, and ideally heatmaps for the key areas would be awesome, as well as being able to tabulate my results.
I have no background in coding or using similar software, so any help would be very much appreciated.
Hi @user-96cc18! Setting up Neon to start acquiring eye tracking data is actually very straightforward. I'm not sure how far you've got already, but in principle, it's really just a matter of connecting it to the Companion device, putting the system on a wearer, and hitting record; fixations, saccades, blinks, and gaze patterns are all recorded by default.
It's what comes after, in terms of analysis and how you use these data streams, that requires more careful thought. A good place to start would be to clarify what you hope to achieve with this data in the context of surgical simulations. You mentioned heatmaps; these can be powerful tools for visualising which areas of an environment attracted attention. Perhaps you could elaborate on your research question, or is this more of an exploratory study? Either way, the more details you can share, the better we'll be able to assist.
Moreover, could you describe your surgical simulation environment in more detail? What does the set-up look like, which areas are you interested in mapping gaze to, and so on? A photo would be extremely helpful!
My set-up is quite simple: it will be to track a rectangular block of gelatin on a bench, and the operator's eye movements whilst completing a venipuncture procedure. There will also be an ultrasound system set up to help with the placement and correct insertion of the needle.
There have been some studies that have shown a correlation between the operator's fixations, saccades, and blinks and their expertise (i.e. experts fixate less on unnecessary tools and environmental distractions, as shown by longer fixations on the key task at hand, faster saccades, etc.).
The key areas I would like to map gaze to would be the gelatin mould in the middle, as well as the computer once that has been set up properly.
Thanks for describing your project in more detail and for sharing the images. That's very helpful. One more question: do you plan to use Pupil Cloud for your analysis?
Hello Neon team. We are batch processing several recordings for enrichment, each with 20 events. Is it possible to have a table with all the events (annotations in Pupil Core) and their corresponding timestamps, not just between two events?
Hi @user-b3b1d3! Just to make sure I'm following: are you looking for a table with events and their corresponding timestamps? That is available in the events.csv when you download the Timeseries.
You can download it for all your recordings in a project, by going to the downloads tab of the project.
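If it helps, here is a minimal sketch for combining the per-recording events.csv files from a Timeseries download into a single table. It assumes the download unpacks into one folder per recording, each containing an events.csv with "name" and "timestamp [ns]" columns; double-check those names against your export:
from pathlib import Path
import pandas as pd

download_dir = Path("timeseries_download")  # placeholder path to the unpacked download

tables = []
for events_file in download_dir.glob("*/events.csv"):
    df = pd.read_csv(events_file)
    df["recording"] = events_file.parent.name  # keep track of which recording each event came from
    tables.append(df)

all_events = pd.concat(tables, ignore_index=True)
print(all_events[["recording", "name", "timestamp [ns]"]])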
Hi! Is the main difference between Pupil Core and Neon that there is no need for calibration?
Hi @user-7c5b51 , apologies for letting that slip off the radar. To answer your question:
Neon's calibration-free nature is a big difference that simplifies many things, but there are also other main differences, some of which derive from that calibration-free nature:
@user-7c5b51 May I ask what your research goals are?
There are also other reasons, but these are a few that come to mind.
Our research is on sound localization in space. With the eye trackers, we are hoping to understand how well patients with hearing loss are able to identify different sounds in space. The eye tracker would be useful in helping us map accuracy.
I see. This is another area where Neon can be helpful.
Neon has microphones, whereas Pupil Core does not. The audio streams, when enabled, are part of a standard Neon recording.
With Neon, you don't need to give any instructions, which can be helpful if the participants have hearing loss. You simply put it on and you are eyetracking. With Pupil Core, you have to instruct the participant on how to do the calibration (i.e., "first, roll your eyes around in a circle").
I could also imagine Neon's IMU being helpful here, since you can study how head & eye movements interact when localizing sound.
Thank you @user-f43a29
Hi, I would like to download the recorded eyetracking video including the gaze overlay + fixations from the cloud. Somehow I cannot manage. I have already tried the video renderer visualization but it says 0/19 recordings included and I am stuck. Thank you for the help in advance.
Hi @user-f4d2b3! May I ask which events you selected in the Temporal Selection section of the Video Renderer visualization? Since 0 out of your 19 recordings were included, it seems they may not contain the selected events. Could you please double-check this?
I selected recording.beginning and recording.end. Now I first created a Reference Image Mapper as an enrichment and now I think it looks more promising. But from the docs it wasn't clear to me that I need to make an enrichment first before using the renderer. Maybe I also just misunderstood.
You do not need to create an enrichment to use the Video Renderer visualization. Once you've added your recordings to a project, the Video Renderer will automatically generate a video with gaze and fixation overlays for each recording included.
Since you selected the default events recording.begin and recording.end, all recordings added to the project should be included in the visualization, as these events are always present.
If you're still seeing 0 out of 19 recordings included despite selecting the default events, I'd like to take a closer look to better understand the issue. Could you please open a ticket in https://discord.com/channels/285728493612957698/1203979608563650650 ? I'll be happy to continue the conversation there.
Hi. I found this video IMU visualisation interesting and useful. However, with the code that you have provided, I am only able to generate the animation path on the side. Is there any version that allows us to visualise the gaze overlay with the animation beside it, as shown in this link? Link: https://youtu.be/3OdkHo3ThAE
Hi @user-15edb3, to make that video, the ffmpeg command-line tool was used to horizontally stack them side-by-side.
Code like this is essentially what was used, but you may need to modify it if your videos have different width/height:
ffmpeg -i video1.mp4 -i video2.mp4 \
-filter_complex "[0:v][1:v]hstack=inputs=2" \
-c:v libx264 -crf 23 output.mp4
You could also use any other video editing tool to achieve the same effect.
The gaze + eye overlay was based on code from these pl-neon-recording examples:
https://drive.google.com/file/d/1OhyfLiLJMVQIwEZReFjdzjekFWcA41L0/view?usp=sharing Here are the world video and the IMU visualisation video. They don't seem to match, I believe. For example, when the wearer moved their head up and down, I don't see the sky-earth part being captured correctly. However, the imu.csv file has recorded the data correctly.
Hi @user-15edb3 , I double checked. Apologies, as it has been some time since I have looked at that code.
Would you be able to share the data from that recording with us? You can put it on Google Drive and share it with [email removed]
For now, you can change the line that defines relative_demo_video_ts to the code below, and that should include an additional 10 seconds of data in your case:
# Sample the full range of scene video timestamps at 30 fps
relative_demo_video_ts = np.arange(
    world["relative ts [s]"].iloc[0], world["relative ts [s]"].iloc[-1], 1/30
)
That line had been added to make the visualisation in the Alpha Lab guide; only a subsection of the recording was used to make the animation, for demonstration & pedagogical purposes.
Hello! I have a question for the staff and other researchers here: is it possible to extract pupillometry measurements, and if so, how reliable are they? Did anyone look at pupillometry data of infants? Thanks :)
Hi @user-21cddf! Yes, pupillometry is available with Neon, both in real time and post-hoc. It's reported in millimeters and accounts for gaze angle. For details on accuracy and robustness, I'd recommend checking out our whitepaper, where we replicate several traditional experiments. If you're working with infants, we recommend measuring their IPD and entering it in the wearer's profile to improve the accuracy.
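If you end up working with the downloaded Timeseries data, a minimal sketch for inspecting the pupil size signal could look like the following. The file and column names are assumed from the standard export (3d_eye_states.csv with per-eye pupil diameter in millimeters), so double-check them against your download:
import pandas as pd

# 3d eye states from the Timeseries download (file/column names assumed)
eye_states = pd.read_csv("3d_eye_states.csv")

left = eye_states["pupil diameter left [mm]"]
right = eye_states["pupil diameter right [mm]"]
print(f"Mean pupil diameter: left {left.mean():.2f} mm, right {right.mean():.2f} mm")

# Relative time in seconds, e.g. for plotting the pupil trace over the recording
t_s = (eye_states["timestamp [ns]"] - eye_states["timestamp [ns]"].iloc[0]) / 1e9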