👓 neon



user-13d297 23 October, 2025, 18:42:11

Hi! Is it possible to apply a gaze offset correction from one recording to another recording in Pupil Cloud? For instance, a participant completes a validation procedure in a recording and an experiment in a second recording. Can gaze offset correction conducted posthoc in Pupil Cloud during the validation recording be applied to the experimental recording? Relatedly, is it possible to extract the size of the offset correction from Pupil Cloud? Thank you!!

user-d407c1 28 October, 2025, 12:15:05

Hi @user-13d297 ! This is currently not possible through the UI, but we are looking into it. In the meantime, you can use this utility, which lets you copy the gaze offset from one recording to multiple others via the Cloud API.

user-f43a29 27 October, 2025, 10:17:42

Ok, then that is already provided by Neon by default. It is indeed in the 3d_eye_states.csv file, if using the Timeseries data from Pupil Cloud or the exports from Neon Player. A high level overview of that data is here. Let us know if you need any help parsing it.

user-5c48a3 27 October, 2025, 10:31:07

Is there any way I could somehow project the optical axis so as to have it with respect to the scene camera? (Sorry for the many questions, but I am kind of stuck with this 😄) Otherwise, do I have to use the optical axis of the camera directly?

user-f43a29 27 October, 2025, 10:32:19

Feel free to ask questions. So, as documented here, the optical axes are already specified in the scene camera coordinate system (so, with respect to the scene camera).
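
For reference, a minimal Python sketch of reading those per-eye optical axes from the Timeseries export. The column names ("optical axis left x", etc.) are taken from the 3d_eye_states.csv format and should be checked against your file's header; the angle computation assumes the documented scene-camera convention (x right, y down, z forward).

import numpy as np
import pandas as pd

eye_states = pd.read_csv("3d_eye_states.csv")

# Left-eye optical axis as unit vectors in scene-camera coordinates
left_axis = eye_states[
    ["optical axis left x", "optical axis left y", "optical axis left z"]
].to_numpy()

# Angles of the optical axis relative to the scene camera's forward (+z) direction.
# Signs depend on the camera convention (y points down), hence the minus for elevation.
azimuth_deg = np.degrees(np.arctan2(left_axis[:, 0], left_axis[:, 2]))
elevation_deg = np.degrees(np.arctan2(-left_axis[:, 1], left_axis[:, 2]))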

user-5c48a3 27 October, 2025, 10:37:39

thank you🫶

user-90e66b 27 October, 2025, 10:55:49

Hi, can I transfer a recording from one Workspace to another in order to put it in a single project in Pupil Cloud?

user-d407c1 27 October, 2025, 10:58:38

Hi @user-3c26e4 👋 ! Currently, you cannot transfer recordings across workspaces. You can add it to any project you want within that workspace.

user-9c4952 27 October, 2025, 12:02:43

Hi, I have a query about Neon eye tracker storage on Pupil Cloud. I'd appreciate some help!

user-d407c1 27 October, 2025, 12:05:41

Hi @user-9c4952 ! What question do you have?

user-9c4952 27 October, 2025, 12:06:26

Thanks, I already created a support ticket!

user-58c1ce 27 October, 2025, 12:20:06

Hello, I am working with NEON to study golfers’ visual behavior. In my recordings, I created custom event markers such as “first shot,” “second shot,” etc. I have exported the gaze, fixation, saccade, and blink data to CSV/Excel files. I would like to identify all gaze and fixation data that occur within specific time windows relative to these events, for example 2 seconds before and 2 seconds after each shot. Could you please advise:
what is the best approach to align event timestamps with gaze/fixation timestamps in Excel or another tool, to extract those ± time-window segments manually?

Thank you.

user-d407c1 27 October, 2025, 12:38:09

Hi @user-58c1ce 👋 ! All streams are already aligned in UTC nanoseconds, so you can directly match data across files. You can open gaze, fixations, and events in a worksheet and filter with ">" and "<" comparisons against the event timestamp of interest (e.g., event timestamp ± 2 s).
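
As an illustration, here is a minimal pandas sketch of that same filtering done programmatically for ±2 s windows around each "shot" event. File and column names follow the Pupil Cloud Timeseries export ("events.csv", "gaze.csv", "fixations.csv", "timestamp [ns]", "start timestamp [ns]"); check them against your own export.

import pandas as pd

events = pd.read_csv("events.csv")
gaze = pd.read_csv("gaze.csv")
fixations = pd.read_csv("fixations.csv")

window_ns = 2_000_000_000  # 2 seconds in nanoseconds

segments = []
for _, event in events[events["name"].str.contains("shot")].iterrows():
    t0 = event["timestamp [ns]"] - window_ns
    t1 = event["timestamp [ns]"] + window_ns
    gaze_win = gaze[(gaze["timestamp [ns]"] > t0) & (gaze["timestamp [ns]"] < t1)]
    fix_win = fixations[
        (fixations["start timestamp [ns]"] > t0)
        & (fixations["start timestamp [ns]"] < t1)
    ]
    segments.append((event["name"], gaze_win, fix_win))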

user-45f4b0 27 October, 2025, 14:54:16

Hi, can Neon do real-time data acquisition and processing in sync with data streams from other platforms? If so, how can this be realized? Thanks!

user-f43a29 27 October, 2025, 15:24:20

Hi @user-45f4b0 , yes, both of these are possible and are fundamental features of Neon. To best assist you, may I ask what your research goals are and what hardware you want to synchronize with?

user-45f4b0 27 October, 2025, 18:07:29

Hi @user-f43a29 , thanks. I want to run an experiment in a real car, so the signals may include CAN, ROS, and some other physiological data.

user-f43a29 28 October, 2025, 12:48:31

Hi @user-45f4b0 , then you have a few choices:

These methods can also be intermixed, and they work over WiFi, hotspot, and Ethernet connections. All are available programmatically in different languages via Neon's Real-time API.
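
For illustration, a minimal sketch using the pupil-labs-realtime-api package (simple interface): connect to the Companion device by IP, start a recording, and send a named event that you also log on your CAN/ROS side for post hoc alignment. The IP address is a placeholder; for tighter synchronization, apply the clock-offset estimation discussed further below.

import time

from pupil_labs.realtime_api.simple import Device

# Placeholder address -- use the one shown in the Companion app's Streaming section
device = Device(address="192.168.1.50", port="8080")

device.recording_start()

# One sync point: timestamp on the PC clock, sent to Neon and logged by the other systems
t_sync_ns = time.time_ns()
device.send_event("sync.start", event_timestamp_unix_ns=t_sync_ns)
print(f"sync.start at {t_sync_ns} (log this value in your CAN/ROS pipeline too)")

# ... drive / experiment ...

device.recording_stop_and_save()
device.close()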

user-3c26e4 27 October, 2025, 18:40:58

Oh, that's a pity. Do you plan to make this transfer possible at some point?

user-4c21e5 28 October, 2025, 00:31:21

Hi @user-3c26e4! Feel free to upvote this in feature requests: https://discord.com/channels/285728493612957698/1212410344400486481

user-2798d6 27 October, 2025, 20:02:57

I'm still getting this message. Both my laptop and the Pupil device are connected to the same hotspot.

Chat image

user-2798d6 27 October, 2025, 20:04:28

I've also tried putting in the Neon IP address, and it doesn't work either.

user-2798d6 27 October, 2025, 20:07:12

I also tried using my personal phone for the Monitor app instead of my laptop and it says "Video Preview may not be available in iOS".

user-f43a29 27 October, 2025, 20:49:59

Hi @user-2798d6 , that is a current limitation of browsers on iOS and iPadOS. Apple has already been notified about it and should eventually correct it.

Regarding the URL, yes, I meant to replace the text IP_ADDRESS with the numbers shown in the Streaming section of the Neon Companion app.

May I ask if you are running the latest macOS? Did you perhaps at some point tell it not to allow Chrome to search for devices on the local network?

user-fa126e 28 October, 2025, 02:55:36

I'm trying to put together a simple Python script that will search for the two sets of Neon glasses we have (by IP address), start the recording on both of them, send an event to both recordings indicating when they're synchronised (accounting for device offset), then let me stop the recording on both glasses at any point.

I'm having issues with getting it to discover the devices sometimes, and also with understanding how the offset should work. Could I get some help with this?

user-cdcab0 28 October, 2025, 03:57:00

If you know the IP addresses, then you don't need to use discovery

Regarding the offset, don't think about clock differences between the two Neons at recording time. During the recording and event generation, you only need to consider the offset between the PC and Neon 1 and the offset between the PC and Neon 2. When you are ready to send an event to both Neons, generate a single timestamp in the PC's clock and send it to each Neon with its respective offset (to the PC clock) applied.

The events will be recorded with different timestamps within each device's recording, but since you know they came from the same original timestamp when you recorded them, you can easily determine the clock offset between the two Neons post hoc.
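
A minimal sketch of that procedure with the pupil-labs-realtime-api simple interface, assuming two known IP addresses (placeholders below). The sign convention used here (Companion clock = PC clock + offset, via estimate_time_offset()) should be verified against the Time Echo protocol documentation for your API version.

import time

from pupil_labs.realtime_api.simple import Device

neons = [
    Device(address="192.168.1.61", port="8080"),  # Neon 1 (placeholder)
    Device(address="192.168.1.62", port="8080"),  # Neon 2 (placeholder)
]

# Offset of each Companion clock relative to this PC, in nanoseconds
offsets_ns = [
    round(device.estimate_time_offset().time_offset_ms.mean * 1e6) for device in neons
]

for device in neons:
    device.recording_start()

# One timestamp in the PC's clock, translated into each Companion's clock
t_pc_ns = time.time_ns()
for device, offset_ns in zip(neons, offsets_ns):
    device.send_event("sync", event_timestamp_unix_ns=t_pc_ns + offset_ns)

input("Press Enter to stop both recordings... ")
for device in neons:
    device.recording_stop_and_save()
    device.close()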

user-3c26e4 28 October, 2025, 09:08:08

Hi, I would like to know whether it is possible to define areas of interest in Neon Player not with the mouse but with coordinates, so that the areas of interest of all tested subjects would be in exactly the same place.

user-cdcab0 28 October, 2025, 09:14:29

Areas of interest within Neon Player only exist in the form of the Surface Tracker plugin.

As long as your AOIs exist on flat planes that can have AprilTag markers affixed to them, the Surface Tracker plugin will work for you. You still define the AOI using the mouse, but the corners of the AOI are defined in relation to the AprilTag markers in the scene. So if the eye tracker moves or the surface moves, the AOI moves as well.
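
If numerically identical AOIs across participants are the goal, one option is to skip drawing AOIs in the UI and instead do the hit-testing post hoc on the surface-mapped gaze export, where AOIs can be defined as plain coordinate rectangles. Below is a minimal sketch; the file and column names ("gaze_positions_on_surface_<name>.csv", "x_norm", "y_norm", "on_surf") are assumptions based on the Surface Tracker export format and should be checked against your Neon Player export.

import pandas as pd

# Gaze mapped onto the tracked surface, in normalized surface coordinates (0..1)
gaze = pd.read_csv("gaze_positions_on_surface_Screen.csv")
gaze = gaze[gaze["on_surf"] == True]

# AOIs as (x_min, x_max, y_min, y_max) in normalized surface coordinates,
# identical for every participant by construction
aois = {
    "left_half": (0.0, 0.5, 0.0, 1.0),
    "top_right": (0.5, 1.0, 0.5, 1.0),
}

for name, (x0, x1, y0, y1) in aois.items():
    hits = gaze[gaze["x_norm"].between(x0, x1) & gaze["y_norm"].between(y0, y1)]
    print(f"{name}: {len(hits)} gaze samples")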

user-e33073 28 October, 2025, 11:04:21

Hello, my lab's subscription ends on October 31 and we're waiting for the renewal, but it may take a few more days. Can we still use our Neon and make new recordings after our subscription runs out? Do we risk losing any kind of data from those new recordings?

user-c2d375 28 October, 2025, 11:32:29

Hi @user-e33073 👋🏻 No worries if you don’t purchase a new Unlimited Storage add-on immediately after the current one expires. You won’t lose any data, as you will still be able to create and upload new recordings to Pupil Cloud, although access to them will be restricted, as your account will revert to the free tier of 2 recording hours until the new add-on is purchased.

user-e33073 28 October, 2025, 11:34:15

Alright, thank you!

user-937ec6 28 October, 2025, 14:55:21

Thanks for adding audio support to the latest Realtime API. I have been exploring the audio support. I am able to 1) show a live video preview with audio playback, 2) save a single mp4 file using pyav with the audio data muxed in, and 3) successfully play back the mp4 file with audio. However, the audio playback is delayed in the preview and further delayed in the mp4 file by about a second when I play back the file. Is there a way to reduce or eliminate the audio delay for live playback and also when muxing with video? I do see there is a timestamp associated with each audio sample. I believe it may be possible to accumulate the video frames and gaze data, then match them up to the audio frames, but I wanted to get your thoughts. Thank you!

user-2798d6 28 October, 2025, 15:36:24

@user-f43a29 SUCCESS! I had to go change the settings to allow Chrome to search for devices. THANK YOU!

user-3c26e4 28 October, 2025, 16:20:22

Hello again, there is a difference in the position of every fixation between the yellow circle and the red dot. How should this be interpreted (for example, Fixation No. 64)?

Chat image

user-cdcab0 28 October, 2025, 21:17:27

Fixations are described with a point and a duration, but the scene can move during that time (e.g., the wearer moves their head while fixating on a target).

Neon's fixation algorithm accounts for head movement by integrating motion data from the IMU. These data are incorporated in the calculation of the fixation point and duration.

The visualization of the fixation (yellow circle) is rendered at the fixation point but then adjusted for scene-wide optic flow.

That isn't always going to align well with the gaze point, and the difference is likely especially pronounced in dynamic scenes/situations

user-5bd924 28 October, 2025, 19:36:58

Has anyone else who uses the Python API been having an issue with recordings failing to stop and save correctly? When I run the simple api command device.recording_stop_and_save(), the companion app seems to pause but not stop and save the current recording. The app gets ‘stuck’ and thinks it is still recording, based on device._get_status(). This has only been happening in the last couple days, so it may have something to do with the recent update to the companion app (now 2.9.20-prod). I opened a ticket about this, but since I planned to collect data tomorrow I also wanted to ask if anyone else has experienced this issue or might have a quick fix. Many thanks!

user-f43a29 29 October, 2025, 09:05:15

Hi @user-5bd924 , we have left a message in the Support Ticket. We can continue communication there. Thanks!

user-d407c1 29 October, 2025, 09:29:50

Hi @user-937ec6 👋 Great to hear you’re already experimenting with the new audio support!

A few points that might help clarify and optimize your setup:

  • Audio latency: The audio stream is multiplexed with video and encoded as AAC at 8 kHz, with 1024 samples per frame, which alone introduces ~128 ms of unavoidable latency (that's the minimum you'll get even in ideal conditions, before considering processing or audio driver latency). Additional delay can come from playback buffering or your system's audio driver. Are you using the bundled AudioPlayer class from the library? That one uses a single-frame buffer and usually keeps things near real time.

  • Live preview delay: Do you experience it with the example code or somewhere else? If you're rendering both video and audio live, most of the perceived desync comes from trying to consume the streams as fast as possible. Typically, on a modern computer both would play at almost the same time (see the note above), but as you mentioned, you can use the timestamps and slightly delay the video to match the audio if you experience a more significant delay.

  • Muxing and post hoc alignment: For recording, similarly, each audio frame has a precise timestamp, so, in principle, you can use those timestamps to align or resample when muxing with PyAV. However, if you're after minimal latency and the best sync, I'd instead recommend starting a recording directly on the device via the Realtime API's remote control endpoint and using the recording from the device. Those recordings use 48 kHz audio and are perfectly synchronized internally, eliminating the ~1 s drift you noticed.

That should give you near-optimal performance both for live and saved playback.

user-937ec6 31 October, 2025, 16:06:09

5) Calculate the fractional base for the video (1/30) and audio (1024/8000) and set that in each video and audio frame via time_base.

audio_stream_fraction = Fraction(1024, 8000)
video_stream_fraction = Fraction(1, scene_camera_frames_per_second)

6) Get the most recent video frame and gaze data so as not to use a now-old initial frame.
7) Offset the first video presentation timestamp (pts) by an appropriate amount such that the first video frame pts is roughly synchronized with the audio.

# outside the loop
audio_delay_seconds = 0.3
# calculate starting video pts to align with audio delay and apparent misalignment
video_pts = int(audio_delay_seconds / (1 / scene_camera_frames_per_second))
audio_pts = 0

# inside the loop
pyav_video_frame = av.VideoFrame.from_ndarray(array=bgr_buffer, format="bgr24")
pyav_video_frame.time_base = video_stream_fraction
pyav_video_frame.pts = video_pts
for packet in video_stream.encode(pyav_video_frame):
    container.mux(packet)
video_pts += 1

while not queue_audio.empty():
    _ts, audio_frame = queue_audio.get_nowait()
    av_frame = audio_frame.av_frame
    av_frame.time_base = audio_stream_fraction
    av_frame.pts = audio_pts
    for packet in audio_stream.encode(av_frame):
        container.mux(packet)
    # there is only one packet per frame
    audio_pts += 1

user-937ec6 31 October, 2025, 16:05:43

Thanks for the response. I hope the below helps others; I am posting my solution here for the edification of others and also to get feedback so that I can refine it.

1) Audio latency: Yes, I read the documentation. I am using the bundled AudioPlayer for live audio.
2) Live preview delay: Yes, the live preview delay is present in the example code. I have pretty much accepted this as unavoidable given #1. Delaying the video to align the two does indeed work but comes at the expense of delayed video.
3) Muxing and post hoc alignment: I wrote some simple code (see below) to delay writing the video until the audio aligns.

Aligning audio and video: I was able to get the audio and video mp4 synchronized by doing the following:
1) Create separate audio queues for live preview and saving the mp4 file.
2) When an audio chunk is received, place it into both queues.
3) When the first video frame is received, note the timestamp.
4) Drain and wait on the mp4 audio queue until the timestamps for the first video frame align with an observed audio chunk.

Note that even doing this, the sound is still significantly delayed by a few tenths of a second. Timestamp alignment alone simply does not seem to be sufficient for synchronized audio and video.

if not audio_is_matched:
    # wait for the first audio frame
    video_datetime, video_frame = await get_most_recent_item(queue_video)
    _, gaze_datum = await get_closest_item(queue_gaze, video_datetime)
    audio_datetime, audio_frame = await queue_audio.get()
    while audio_datetime <= video_datetime:
        audio_datetime, audio_frame = await queue_audio.get()
        print(f"Matching audio at {audio_datetime} to video at {video_datetime}")
    audio_is_matched = True
    queue_audio.put_nowait((audio_datetime, audio_frame))

user-7fe82b 29 October, 2025, 19:20:48

Dear all, We are interested in purchasing an eye tracker for our research lab, which focuses on software engineering. Could you please provide information on the differences between the Neon and Core models, and advise which one would be more suitable for our research needs?

Thank you for your time and assistance.

user-3c26e4 29 October, 2025, 21:51:05

Hi, could you please advise how I can light up the QR markers for night driving? I used small LED lamps from above, but it didn't work. Is there a way to light them from behind across the whole area?

user-4a1dfb 30 October, 2025, 06:27:59

Hello,

I work in a research lab for HCI embodiment research and we’re interested in the Neon. But currently, the price is quite out of our reach.

I was curious whether Pupil Labs is open to discussing or negotiating prices for the Neon. Do you guys still offer demo devices?

user-d407c1 30 October, 2025, 07:25:21

Hi @user-4a1dfb 👋 ! Thanks for your interest in Neon and for reaching out! We don't typically negotiate on pricing; our prices are already highly competitive for the level of accuracy, robustness, and usability Neon offers. But we do offer discounts for academic and research institutions, as well as volume-based options depending on your needs.

We currently don't loan demo units, though we'd be glad to arrange a guided demo call and help determine the most cost-effective setup for your research.

You can book a slot here

user-d407c1 30 October, 2025, 07:10:12

Hi @user-7fe82b 👋 ! We’d generally recommend Neon, our latest system. It’s more portable, robust, and easy to use, and it includes additional sensors compared to Pupil Core.

If you’d like to explore the differences in more detail or discuss how Neon could fit your specific research setup, you can visit our website and schedule a chat with one of our product specialists.

user-d407c1 30 October, 2025, 07:19:45

Hi @user-3c26e4 👋 !

Lighting AprilTags (the type of fiducial markers we support) for night driving can be quite challenging: they need even, diffuse illumination without glare or specular reflections.

A few ideas you can try:

  • Backlighting is only viable if the material is translucent and evenly frosted; printing the markers on such material could work well.
  • Use diffused LED strips placed around (not directly above) the markers to minimize glare and hotspots.
  • You can also experiment with IR illumination; while the scene camera mainly captures visible light, it still picks up a small amount of IR, which might improve contrast without being visible to the driver.

Unfortunately, there’s no perfect off-the-shelf solution for this setup, but testing a few lighting configurations should help you find a workable balance.

user-3c26e4 02 November, 2025, 14:47:11

Thank you @user-d407c1 , it is something I have been trying to do for a long time. Maybe you could consider making such markers, like Dikablis does.

user-30f8d5 30 October, 2025, 08:12:28

Hello, at the moment, after importing CSV data from enrichments, the only way I can identify my participants is via the 'recording id' column. Is there a way to also import the wearer ID? I'm trying to sync my participants' responses on Qualtrics with their data in Pupil Cloud through subject ID.

user-d407c1 30 October, 2025, 08:18:50

Hi @user-30f8d5 ! We aim to keep the CSV files minimal to avoid cluttering with excessive columns.

That said, you can easily get that information. Do you do it manually or programmatically? I can definitely guide you on that.

user-30f8d5 30 October, 2025, 08:28:39

How would I do it manually?

user-d407c1 30 October, 2025, 08:38:51

Manually, you can cross-check recording IDs against wearers in the UI and add the wearer wherever you are going to work with the data, but that's tedious work and prone to errors.

Instead, you can:

A) If you have the native recordings, you can check the info.json, which contains both the recording ID and the wearer ID.

B) Leverage the Cloud API to get the wearer of a recording.
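
For option A, a minimal sketch that builds a recording-ID-to-wearer lookup from the native recordings' info.json files and merges it onto an enrichment CSV. The field names ("recording_id", "wearer_id", "wearer_name") and folder layout are assumptions; open one info.json to confirm them for your recording format version.

import json
from pathlib import Path

import pandas as pd

# Map recording id -> wearer, read from each native recording folder
mapping = {}
for info_path in Path("native_recordings").glob("*/info.json"):
    info = json.loads(info_path.read_text())
    mapping[info["recording_id"]] = info.get("wearer_name") or info.get("wearer_id")

# Append a wearer column to any Cloud CSV that has a "recording id" column
df = pd.read_csv("fixations.csv")
df["wearer"] = df["recording id"].map(mapping)
df.to_csv("fixations_with_wearer.csv", index=False)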

user-7fe82b 30 October, 2025, 09:04:49

Thank you so much. Could you offer us academic prices for the Neon model? There are academic prices for the Core model.

user-d407c1 30 October, 2025, 09:05:41

Hi @user-7fe82b ! We also offer academic discounts for Neon, as noted on our webpage.

user-d407c1 30 October, 2025, 10:47:20

@user-30f8d5 I have a little gift/snippet for you.

If you follow the README, you can run that script to append a new wearer name column to any CSV file from Cloud with a recording id column.

It leverages the Cloud API to fetch the wearer ID for each recording and then the wearer's name, in case you have modified the wearer on Cloud. For that, you would need to get an API token.

user-f6d4a6 30 October, 2025, 11:14:20

Hello, Which chip model is installed in the Neon? Does it have enough power to analyse more data? Is the data analysed in real time on the device itself, or is it sent elsewhere, e.g. to a phone or PC? I would like some additional equipment to go with the NEON setup in my glasses, which would require NN and CV, but it would need to work in real life without any additional devices. Is it possible to easily integrate NEON with other equipment?

user-f43a29 30 October, 2025, 12:55:07

Hi @user-f6d4a6 , if you are looking to integrate Neon into your system, then please check out our Integrators page and schedule a call there.

If you are not looking for such a deep integration, then you can reference Neon's Real-time API. It is network enabled, while being network- & programming-language agnostic. You can connect to Neon via WiFi, hotspot, or Ethernet cable.

You can even run an additional Android app on the phone that locally receives Neon's real-time data for processing. A basic Kotlin implementation of such an app can be found here.

To clarify some points:

  • We cannot share details of Neon's internals.
  • However, we can clarify that the processing does not happen on the module itself. The module is simply where the sensors are housed and acts as the "raw data collection" point.
  • Processing happens in the Neon Companion app on a compatible Android phone.
  • The Neon module is connected to the phone by USB cable, and much of the processing happens in real time.

user-80c70d 30 October, 2025, 11:46:01

Hi, we have a question about Pupil Cloud. We created two different accounts, as we had issues with the verification for one Google account. For one of these accounts, we bought more storage. Would it be possible to merge the two accounts or transfer data from one account to the other?

user-f43a29 30 October, 2025, 12:49:45

Hi @user-80c70d , could you send an email to info@pupil-labs.com about this? Thanks!

user-d4c059 30 October, 2025, 20:45:25

Hello, I am connecting the device via IP address and I am purchasing a router. I wonder what Wi-Fi frequency band is preferred. Thank you.

user-d407c1 31 October, 2025, 07:53:31

Hi @user-d4c059 👋 ! All Companion Devices support Wi-Fi 6 and can connect over the 5 GHz band.
The latest model, the Samsung S25, even supports Wi-Fi 7, so if your router provides it, you’ll definitely benefit from the faster and more stable connection in busy areas. 🚀

End of October archive