πŸ‘“ neon



user-ab3403 01 June, 2025, 10:26:47

I am using the Neon player. When comparing the fixations export with the fixations on surface export, the timestamps differ a little bit in the last 3 places. The duration of the fixations is the same, indicating that the IDs are correctly assigned. What is interesting is that all of the missing fixations are in one block at the end. This means that the last fixations are always missing, e.g. 3000 fixations are found in the fixations export and fixations 1-2800 are found in the fixations on surface export. There are no cases in which a fixation is missing in between. This seems really odd to me. Additionally, in most cases where fixations are missing, about 10% of fixations are lost, but in some cases less than 30% of the total fixations are listed in the fixations on surface export.

user-cdcab0 02 June, 2025, 02:04:07

Hi, @user-ab3403 - would you be able to share a recording with us? If it's on Pupil Cloud, you can invite [email removed]. If it's not on Pupil Cloud, you can upload it to a file sharing service of your choice and share it with data@pupil-labs.com

user-cdcab0 03 June, 2025, 14:11:52

Hi, @user-ab3403 - thanks for sending the recording. I manually examined the first 10 missing fixation IDs (49, 82, 155, 158, 163, 165, 167, 168, 171, 175) and found that all of these occur during segments of the recording where the surface is not detected (see attached image). This is typically due to motion blur. That's always going to occur when the head is moving, although larger markers are a little more robust in this regard.

Looking at your data, I think this case applies to 102 of your fixations. However, starting from fixation id #1580 to the end (1,901 fixations), a different problem occurs where the fixation event is reported without a position. This will require more investigation.

Chat image
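
In case it's useful, a minimal pandas sketch along these lines can list which fixation IDs are missing from the surface export. The file names and the "fixation id" column are assumptions based on a typical Pupil Cloud download, so adjust them to match your data:

import pandas as pd

# Hypothetical file names from a typical Pupil Cloud download; adjust to your export
fixations = pd.read_csv("fixations.csv")
on_surface = pd.read_csv("fixations_on_surface.csv")

# Fixation IDs present in the general export but absent from the surface export
missing_ids = sorted(set(fixations["fixation id"]) - set(on_surface["fixation id"]))
print(f"{len(missing_ids)} of {len(fixations)} fixations have no surface position")
print("first few missing IDs:", missing_ids[:10])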

user-9a8fd8 02 June, 2025, 06:39:22

Dear pupil team, we had this problem for the second time now. What could be the problem?

Chat image

user-d407c1 02 June, 2025, 06:41:54

Hi @user-9a8fd8 ! Could you kindly create a ticket in the πŸ›Ÿ troubleshooting channel so we can follow up with some troubleshooting steps? When doing so, please include the serial number of your device and, if the recording has been uploaded to Cloud, the recording ID as well.

user-1391e7 02 June, 2025, 09:54:05

I've seen that the Neon Companion app is also supported on the Samsung Galaxy S25. If I had a Samsung Galaxy S24, could it work as well (worse, but work)? Or is that a question that quickly devolves into every dev asking about their mobile phone and whether it works? πŸ™‚

user-f43a29 02 June, 2025, 09:56:08

Hi @user-1391e7 , the Neon Companion app is not supported on the S24. Only the devices listed in this part of the Documentation are supported.

user-1391e7 02 June, 2025, 09:57:13

thank you!

user-d407c1 02 June, 2025, 13:40:51

Frame IDs are correlated to frame names; these are written on the frame, so you can't change them.

user-33134f 03 June, 2025, 15:36:01

Hi everyone, I have observed an issue with the world camera timestamps and was hoping you can help me understand what is happening. My understanding is that the file world_timestamps.csv contains an entry/row with a timestamp [ns] corresponding to each world camera video frame (Neon Scene Camera v1 ps1.mp4). However, when comparing the length of the β€œworld_timestamps.csv” with the length of the β€œNeon Scene Camera v1 ps1.time” timestamps, the world_timestamps.csv always has more entries/rows. I discovered this when exporting the video frames from the mp4 file (using the method recommended in the tutorial 07_frame_extraction.ipynb). Interestingly, the number of extracted video frames matches the length of the β€œNeon Scene Camera v1 ps1.time” file, but there are always fewer frames/timestamps than entries/rows in the corresponding world_timestamps.csv file. The offset is different for each recording I checked (see screenshot below for more examples). Any idea what is going on? I was planning to use the world_timestamps.csv timestamps to match the gaze entries corresponding to each world camera frame, but since the world_timestamps.csv does not match the world camera video frames, I am unsure how to proceed.

Chat image

user-d407c1 03 June, 2025, 15:59:32

Hi @user-33134f πŸ‘‹
I think you might be mixing a few things together from different resources πŸ˜…

First β€” the 07_frame_extraction tutorial you're referencing is for πŸ‘ core data, not πŸ‘“ neon.

Regarding the mismatch between Native and Cloud data: this has come up a few times before β€” you can check out past discussions here:
- https://discord.com/channels/285728493612957698/1047111711230009405/1355104213067370566
- https://discord.com/channels/285728493612957698/633564003846717444/1227150589352218687
- https://discord.com/channels/285728493612957698/1047111711230009405/1364251433612087416

In short: sensor data (like gaze or IMU) can be available before the scene video starts. Instead of discarding that early data, Cloud fills the video timeline with gray frames β€” you can see those in the Cloud player.

The world_timestamps.csv file is usually generated in Cloud and includes timestamps for all frames β€” including the gray fillers.

On the other hand, the native format doesn't contain those gray frames, which is why the timestamps don't align.
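
If you'd rather keep working with the native video, a rough sketch along these lines shows how you could trim world_timestamps.csv so it lines up with the native frame count. It assumes the standard file names, that the .time file holds 64-bit nanosecond timestamps, and that all of the surplus Cloud rows are the gray fillers at the start:

import numpy as np
import pandas as pd

# Assumed file names from a Neon recording; the .time file is assumed to hold
# little-endian 64-bit nanosecond timestamps, one per native scene-video frame
world_ts = pd.read_csv("world_timestamps.csv")  # Cloud export, incl. gray fillers
native_ts = np.fromfile("Neon Scene Camera v1 ps1.time", dtype="<u8")

# Assumption: the surplus rows are the gray frames Cloud prepends before the
# scene video starts, so dropping them from the front aligns the two sources
n_extra = len(world_ts) - len(native_ts)
aligned = world_ts.iloc[n_extra:].reset_index(drop=True)
print(f"dropped {n_extra} filler rows; {len(aligned)} timestamps now match the native frames")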

Hope that helps clear things up! If you share whether you prefer to work with the Native data or the time series, I can follow up with some frame extraction examples

user-33134f 03 June, 2025, 16:19:27

ah ok, thanks! I was using the native mp4 file, since some of the data is a bit older and the time series scene video still had the gaze cursor enhancement. But this makes a lot of sense and I will just use the time series scene video instead. Sorry about that, I totally missed this explanation. Just to double check, if I wanted to synchronize the pupil camera recording (Neon Sensor Module mp4) from the native data as well, should each frame of the sensor correspond to the gaze.csv entries/rows? Or do I need to use a different video file here as well?

user-96cc18 04 June, 2025, 01:08:35

Hi everyone, I am very new to all of the Neon features and trying to wrap my head around it. I am utilising it for my research project on eye tracking of participants whilst completing a simulated surgical procedure.

I am wondering if anyone might be able to give me a general idea of how best to begin setting it up. I mainly need to track fixations, saccades, blinks, focus on key areas, pupil dilation, and gaze patterns. I had a read through, and ideally heatmaps for the key areas would be awesome, as well as being able to tabulate my results.

user-96cc18 04 June, 2025, 01:08:52

I have no background in coding or using similar software, so any help would be very much appreciated

nmt 04 June, 2025, 02:50:01

Hi @user-96cc18! Setting up Neon to start acquiring eye tracking data is actually very straightforward. I’m not sure how far you’ve got already, but in principle, it’s really just a matter of connecting it to the Companion device, putting the system on a wearer, and hitting recordβ€”fixations, saccades, blinks, and gaze patterns are all recorded by default.

It’s what comes after, in terms of analysis and how you use these data streams, that requires more careful thought. A good place to start would be to clarify what you hope to achieve with this data in the context of surgical simulations. You mentioned heatmaps; these can be powerful tools for visualising which areas of an environment attracted attention. Perhaps you could elaborate on your research question, or is this more of an exploratory study? Either way, the more details you can share, the better we’ll be able to assist.

Moreover, could you describe your surgical simulation environment in more detail? What does the set-up look like, which areas are you interested in mapping gaze to, and so on? A photo would be extremely helpful!

user-96cc18 04 June, 2025, 03:01:15

My set-up is quite simple: it will be to track a rectangular block of gelatin on a bench, and the operator's eye movements whilst completing a venepuncture procedure. There will also be an ultrasound system set up to help with the placement and correct insertion of the needle.

Chat image Chat image

user-96cc18 04 June, 2025, 03:02:26

there have been some studies that have shown a correlation between the operator's fixations, saccades, and blinks and their expertise (i.e. experts fixate less on unnecessary tools and environmental distractions, shown by longer fixations on the key task at hand, faster saccades, etc.)

user-96cc18 04 June, 2025, 03:03:02

the key areas I would like to map gaze to would be the gelatin mould in the middle, as well as the computer once that has been set up properly

nmt 04 June, 2025, 05:56:38

Thanks for describing your project in more detail and for sharing the images. That's very helpful. One more question: do you plan to use Pupil Cloud for your analysis?

user-b3b1d3 04 June, 2025, 07:32:49

Hello Neon team. We are batch processing several recordings for enrichment, each with 20 events. Is it possible to have a table with all the events (annotations in Pupil Core) related to each timestamp, not just between two events?

user-d407c1 04 June, 2025, 08:29:42

Hi @user-b3b1d3 πŸ‘‹ ! Just to make sure I’m following β€” are you looking for a table with events and their corresponding timestamps? That is available in the events.csv when you download the Timeseries.

You can download it for all your recordings in a project, by going to the downloads tab of the project.
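
If you then want a single overview across all recordings, a small sketch like this could combine them. It assumes the standard Timeseries layout of one events.csv per recording folder, with "name" and "timestamp [ns]" columns; adjust the path and names to match your download:

from pathlib import Path
import pandas as pd

# Assumption: the unzipped Timeseries download has one folder per recording,
# each containing an events.csv with "name" and "timestamp [ns]" columns
download_dir = Path("timeseries_download")

tables = []
for events_file in download_dir.glob("*/events.csv"):
    df = pd.read_csv(events_file)
    df["recording"] = events_file.parent.name  # keep track of which recording each event came from
    tables.append(df)

all_events = pd.concat(tables, ignore_index=True)
print(all_events[["recording", "name", "timestamp [ns]"]])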

user-7c5b51 04 June, 2025, 14:10:57

Hi! Is the main difference between the Pupil Core and the Neon the "no need" for calibration?

user-f43a29 04 June, 2025, 14:13:31

Hi @user-7c5b51 , apologies for letting that slip off the radar. To answer your question:

user-f43a29 04 June, 2025, 14:23:55

Neon's calibration-free nature is a big difference that simplifies many things, but there are also other main differences, some of which derive from that calibration-free nature:

  • Modular: The Neon eyetracker itself is this module, which contains all the sensors. You can freely pop it into different frames and VR headsets, as often as you wish. This makes it easy to use the same eyetracking pipeline in various scenarios.
  • Datastreams: Neon has more datastreams than Pupil Core, including a 9-DoF IMU and more comprehensive 3D Eye State.
  • Development: While we still support Pupil Core and will help you with it, we do not develop new features for it. All development effort is now towards Neon. For instance, check out our Alpha Lab guides.
  • Easier API: Neon's Real-time API is much easier to use than Pupil Core's Network API.

user-f43a29 04 June, 2025, 14:23:57

  • Fit: Pupil Core has one shape with rigid arms. With Neon, we offer frames that actually are glasses, as well as frames with swappable magnetic lenses to accommodate people who must wear prescription lenses. Using glasses with Pupil Core is sub-optimal. With Neon, you can also use frames with flexible headbands, useful in sports and for patients with bandages. We also have frames for children ages 2-8.
  • Headset slippage resistant: Due to the calibration-free nature, you can take the glasses off in the middle of an experiment and put them back on. This is especially useful when working with kids. For example, if Pupil Core is bumped or slips, you need to pause the experiment, reset the headset, and re-calibrate.
  • Truly mobile: You could plug Pupil Core into a laptop and put that in your backpack, but since Neon plugs into a phone, you can simply slip that in your pocket and walk around. It is easy to study sports activities with Neon, for example.
  • Analysis tools: Our latest & most powerful analysis tools are not available for Pupil Core, such as the Reference Image Mapper.

There are also other reasons, but these are a few that come to mind.

user-f43a29 04 June, 2025, 14:27:24

@user-7c5b51 May I ask what your research goals are?

user-7c5b51 04 June, 2025, 14:39:02

Our research is sound localization in space. With the eye trackers, we are hoping to understand how well patients with hearing loss are able to identify different sounds in space. The eye tracker would be useful in helping us map accuracy.

user-f43a29 04 June, 2025, 14:47:45

I see. This is another area where Neon can be helpful.

Neon has microphones, whereas Pupil Core does not. The audio streams, when enabled, are part of a standard Neon recording.

With Neon, you don't need to give any instructions, which can be helpful if the participants have hearing loss. You simply put it on and you are eyetracking. With Pupil Core, you have to instruct the participant on how to do the calibration (i.e., "first, roll your eyes around in a circle").

I could also imagine Neon's IMU being helpful here, since you can study how head & eye movements interact when localizing sound.

user-7c5b51 04 June, 2025, 15:10:29

Thank you @user-f43a29

user-f4d2b3 05 June, 2025, 09:42:05

Hi, I would like to download the recorded eyetracking video including the gaze overlay + fixations from the cloud. Somehow I cannot manage to do it. I have already tried the Video Renderer visualization, but it says 0/19 recordings included and I am stuck. Thank you in advance for the help.

user-c2d375 05 June, 2025, 11:08:52

Hi @user-f4d2b3 πŸ‘‹πŸ» May I ask which events you selected in the Temporal Selection section of the Video Renderer visualization? Since 0 out of your 19 recordings were included, it seems they may not contain the selected events. Could you please double-check this?

user-f4d2b3 05 June, 2025, 11:30:26

I selected recording.begin and recording.end. I have now created a Reference Image Mapper as an enrichment first, and now I think it looks more promising. But from the docs it wasn't clear to me that I need to create an enrichment first before using the renderer. Maybe I also just misunderstood πŸ™‚

user-c2d375 05 June, 2025, 12:13:37

You do not need to create an enrichment to use the Video Renderer visualization. Once you've added your recordings to a project, the Video Renderer will automatically generate a video with gaze and fixation overlays for each recording included.

Since you selected the default events recording.begin and recording.end, all recordings added to the project should be included in the visualization, as these events are always present.

If you're still seeing 0 out of 19 recordings included despite selecting the default events, I’d like to take a closer look to better understand the issue. Could you please open a ticket in https://discord.com/channels/285728493612957698/1203979608563650650 ? I'll be happy to continue the conversation there.

user-15edb3 05 June, 2025, 13:32:51

hi. I found this IMU visualisation video interesting and useful. However, with the code that you have provided, I am only able to generate the animation that appears on the side. Is there any version that allows us to visualise the gaze overlay with the animation beside it, as shown in this link: https://youtu.be/3OdkHo3ThAE

user-f43a29 05 June, 2025, 13:58:34

Hi @user-15edb3 , to make that video, the ffmpeg command-line tool was used to horizontally stack them side-by-side.

Code like this is essentially what was used, but you may need to modify it if your videos have different width/height:

ffmpeg -i video1.mp4 -i video2.mp4 \
  -filter_complex "[0:v][1:v]hstack=inputs=2" \
  -c:v libx264 -crf 23 output.mp4

You could also use any other video editing tool to achieve the same effect.

The gaze + eye overlay was based on code from these pl-neon-recording examples:

user-15edb3 05 June, 2025, 14:24:15

https://drive.google.com/file/d/1OhyfLiLJMVQIwEZReFjdzjekFWcA41L0/view?usp=sharing here are the world video and IMU visualisation video. They don't seem to match, I believe. For example, when the wearer moved their head up and down, I don't see the sky-earth part being captured correctly. However, the imu.csv file has recorded the data correctly

user-f43a29 05 June, 2025, 20:28:14

Hi @user-15edb3 , I double checked. Apologies, as it has been some time since I have looked at that code.

Would you be able to share the data from that recording with us? You can put it on Google Drive and share it with [email removed]

For now, you can change the line that defines relative_demo_video_ts to the code below and that should include an additional 10 seconds of data in your case:

# sample the full range of scene-video relative timestamps at 30 Hz,
# instead of only the shorter demo window
relative_demo_video_ts = np.arange(
    world["relative ts [s]"].iloc[0], world["relative ts [s]"].iloc[-1], 1/30
)

That line had been added to make the visualisation in the Alpha Lab guide, where only a subsection of the recording, intended for demonstration & pedagogical purposes, was used to make the animation.

user-21cddf 09 June, 2025, 12:05:10

Hello! I have a question for the staff and other researchers here: is it possible to extract pupillometry measurements, and if so, how reliable are they? Did anyone look at pupillometry data of infants? Thanks :)

user-d407c1 09 June, 2025, 12:30:37

Hi @user-21cddf πŸ‘‹ ! Yes, pupillometry is available with Neon β€” both in real time and post-hoc. It's reported in millimeters and accounts for gaze angle. For details on accuracy and robustness, I’d recommend checking out our whitepaper, where we replicate several traditional experiments. If you're working with infants, we recommend measuring their IPD and entering it in the wearer’s profile to improve the accuracy.
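
For post-hoc work, the pupil diameter values are included in the Timeseries download; a minimal sketch like this (assuming a 3d_eye_states.csv with per-eye pupil diameter columns in millimetres, which may be named differently in your export) shows how you could load and summarise them:

import pandas as pd

# Assumed file and column names from a Neon Timeseries download; adjust if your export differs
eye_states = pd.read_csv("3d_eye_states.csv")
left = eye_states["pupil diameter left [mm]"]
right = eye_states["pupil diameter right [mm]"]

print(f"left eye:  mean {left.mean():.2f} mm, range {left.min():.2f}-{left.max():.2f} mm")
print(f"right eye: mean {right.mean():.2f} mm, range {right.min():.2f}-{right.max():.2f} mm")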

End of June archive