👓 neon



user-af95e6 01 November, 2025, 16:41:00

Hi all, why do I not see "Eyelid aperture" and "Pupil diameter" in Pupil Cloud?

user-77aaeb 02 November, 2025, 09:18:29

Hello! I wanted to ask what type of battery Neon uses and its capacity. I'm preparing technical documentation for an experiment and can't find that info on the website. Thank you

user-f43a29 02 November, 2025, 11:15:55

Hi @user-77aaeb , Neon uses the battery in the attached Companion device, so it depends on which phone you have.

user-77aaeb 02 November, 2025, 15:43:17

So there's no inbuilt battery? And the Companion device could be any phone/tablet attached by cord?

user-f43a29 02 November, 2025, 15:45:02

There is no battery in the module. It simply contains the sensors and receives power via USB cable from the attached Companion device. You can see an explanation & technical overview here.

We only support specific devices and specific Android versions. We maintain an up-to-date list here.

user-77aaeb 02 November, 2025, 15:45:34

Thank you! 👍

user-5ef6c0 02 November, 2025, 16:56:10

Quick question about uploading recordings from Pupil Neon into Pupil Cloud. We had not realized we were out of storage, so we are getting this in Pupil Cloud:

Chat image

user-4c21e5 03 November, 2025, 04:19:03

Hi @user-5ef6c0 👋. The recordings you made on Friday are safely stored in Cloud. But it indeed looks like they are over the free quota. To derestrict them, you can either purchase a Cloud plan, or alternatively, delete recordings from the free 2-hour quota to free up space. Note that to permanently delete recordings from Cloud, you'll need to also remove them from the trash. Be sure to have them backed up offline if they're important, because Cloud re-uploads are not possible. We also have offline tools to work with your recordings - check out this message for reference: https://discord.com/channels/285728493612957698/1047111711230009405/1431119082001793095

user-5ef6c0 02 November, 2025, 16:57:32

we just deleted lots of older recordings. 1) can we expect these recordings we did on Friday and today to be uploaded (the ones with the ! icon)? Or are they lost?

user-3c26e4 02 November, 2025, 18:12:54

Hello, I am wondering why I don't see Eyelid aperture, pupil diameter, audio and blinks anymore. Please help. Yesterday everything was OK.

user-77aaeb 02 November, 2025, 18:57:25

Can I ask for the MSDS of the 'is this thing on' model?

user-4c21e5 03 November, 2025, 04:36:03

Hi @user-3c26e4! I have a few follow-up questions to help understand why you're not seeing these data streams:

1. Are you working with recordings loaded directly into Neon Player, or in Pupil Cloud?
2. Did you have real-time eye state enabled in the Neon Companion app at the time of recording? This is in the app settings.
3. Similarly, did you have audio capture enabled in the Companion app at the time of recording?

user-3c26e4 03 November, 2025, 07:12:45

Hi @user-4c21e5,

1. Pupil Cloud
2. Yes
3. Yes

However, I tried the Alpha Lab "Dynamic AOI Tracking With Neon and SAM2" tutorial yesterday. It didn't work. Could it have affected these channels? As I told you, all channels could be seen before.

user-3c26e4 03 November, 2025, 07:14:58

Here is a screenshot

user-3c26e4 03 November, 2025, 07:15:00

Chat image

user-480f4c 03 November, 2025, 07:28:53

Hi @user-3c26e4! Nadia here, stepping in for Neil. 🙂 I'd like to request some additional information:

  • Could you share the recording ID with us?
  • Can you please elaborate on the issue you experienced when using the SAM2 Alpha Lab tool?

Just to clarify: any issue you might have experienced when trying to run the Alpha Lab tutorials does not affect the data streams in your Neon recordings.

user-3c26e4 03 November, 2025, 07:32:52

OK, then it should be some kind of server problem on your side. I will share the ID of one recording, but it applies to all recordings I have with Neon. Where should I share it? Can you open a ticket?

user-3c26e4 03 November, 2025, 07:36:03

By the way, do I have defective Neon glasses? I always have to set the exposure manually, because Balanced or the other automatic exposure modes lead to this. It is impossible to see the scenery. If I set it manually, it doesn't react to different lighting conditions or times of day when I am driving for a long time. Pupil Invisible was way better regarding the automatic exposure. Is my Neon defective?

user-3c26e4 03 November, 2025, 07:36:08

Chat image

user-480f4c 03 November, 2025, 07:41:13

yes, feel free to open a ticket in our 🛟 troubleshooting channel.

user-3c26e4 03 November, 2025, 07:47:39

I shared an ID.

user-480f4c 03 November, 2025, 07:43:42

Your Neon is not defective. In high-contrast environments, it's expected that you may need to manually adjust the exposure for optimal visibility.

Also, just to let you know, we recently released a new feature on Pupil Cloud that lets you adjust playback brightness directly within the platform. The settings are saved per recording, making it easier to fine-tune visibility during review. You can find the summary of this feature here: https://discord.com/channels/285728493612957698/733230031228370956/1433650414041043025

user-3c26e4 03 November, 2025, 07:46:59

But how come Pupil Invisible changed the exposure automatically so well? If I make it darker, one still can't see anything. This is at 0.2:

Chat image

user-4c21e5 03 November, 2025, 07:51:18

For very bright illumination, like sunlight coming in through a car windscreen, if you want visibility out of the windscreen, you should try the 'highlights' autoexposure mode. It will optimise for this environment.

user-3c26e4 03 November, 2025, 07:56:58

But one should consider that a recording on a real road can take 3-4 or more hours, so you can't adjust the exposure for the whole duration of the recording. What can be done about that?

user-4c21e5 03 November, 2025, 07:52:22

If the scene is already overexposed, then it's unlikely post-hoc brightness settings will help. It's important to choose the right exposure mode at the time of recording.

user-4c21e5 03 November, 2025, 07:58:33

'Highlights' is an autoexposure mode, meaning it will adapt to optimise for outside the windscreen. That said, if there is a significant change, e.g. it goes dark outside during the drive, you might need to switch to another mode.

user-3c26e4 03 November, 2025, 08:03:40

Can I do this without stopping the recording?

user-af95e6 03 November, 2025, 08:22:42

Hi all, do you know why I don't see "Eyelid aperture" and "Pupil diameter" in Pupil Cloud anymore?

Chat image

user-3c26e4 03 November, 2025, 08:54:18

Same with me!

user-d407c1 03 November, 2025, 09:31:39

Hi @user-af95e6 @user-3c26e4! We are aware of this issue affecting the plot visualizations. We are currently working on it and will let you know as soon as it is resolved.

user-d407c1 03 November, 2025, 10:07:30

Hi @user-937ec6 👋! May I ask what kind of computer you're using? From our internal tests, the delay is typically minimal, but there are a few factors that can influence it. Could you check that your audio drivers are up to date and let us know what kind of audio interface or sound card you're using, as well as your CPU utilization during playback? That'll help us get a clearer picture.

As I mentioned before, achieving zero latency isn't currently possible in real time due to the 8 kHz AAC sampling rate (1024 samples per AudioFrame): capturing one AudioFrame takes about 128 ms. With the scene camera running at around 30 FPS (~33 ms per frame), you're effectively getting almost four video frames per audio frame.
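
As a quick sanity check, the arithmetic behind those figures is simply the numbers already quoted above:

```python
# Back-of-the-envelope check of the latency figures quoted above
samples_per_audio_frame = 1024      # samples in one AAC AudioFrame
audio_sample_rate_hz = 8_000        # real-time audio sampling rate
scene_fps = 30                      # approximate scene camera frame rate

audio_frame_ms = 1000 * samples_per_audio_frame / audio_sample_rate_hz  # 128.0 ms
video_frame_ms = 1000 / scene_fps                                       # ~33.3 ms

print(audio_frame_ms, video_frame_ms, audio_frame_ms / video_frame_ms)  # 128.0, 33.3, ~3.8
```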

That's why I suggested relying on the data recorded directly on the device, which is captured at 48 kHz and already synchronized, giving you clean alignment without the need to delay or mux the streams post hoc.

Keep in mind that delaying and storing the data, even when threaded, can increase CPU load and potentially become a bottleneck for real-time playback.

By the way, I noticed you offset the scene video frame by 0.3 s in your code, but then you loop, matching and discarding audio frames until one is captured after the first video frame, which is contradictory.

Instead of manually offsetting and using incremental counters, you should keep the time_base for each stream, pick a zero time, and calculate PTS relative to that time, using the Unix timestamp in seconds already provided with the frames.

E.g. AudioFrame has an av_frame and timestamp_unix_seconds, so the pts for muxing the audio stream should be int((audioframe.timestamp_unix_seconds - start_time) * 8000).
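
A minimal sketch of that approach with PyAV is below; the container settings, resolution, and helper names are illustrative assumptions, not code from this thread:

```python
import av

AUDIO_RATE = 8_000   # Hz, live audio sample rate (see above)
VIDEO_RATE = 30      # FPS, approximate scene camera rate

container = av.open("synced.mp4", mode="w")

video_stream = container.add_stream("h264", rate=VIDEO_RATE)
video_stream.width, video_stream.height = 1600, 1200   # assumed scene resolution
video_stream.pix_fmt = "yuv420p"

audio_stream = container.add_stream("aac", rate=AUDIO_RATE)

start_time = None  # unix seconds of the very first frame received; the shared zero point


def audio_pts(timestamp_unix_seconds: float) -> int:
    # PTS in audio samples (time_base 1/8000), exactly the formula given above
    return int((timestamp_unix_seconds - start_time) * AUDIO_RATE)


def video_pts(timestamp_unix_seconds: float) -> int:
    # PTS in video frame ticks (time_base 1/30, set via rate=VIDEO_RATE)
    return int((timestamp_unix_seconds - start_time) * VIDEO_RATE)


# In the receive loop (sketch): take start_time from the first frame seen, then
#   av_frame = audio_frame.av_frame                 # AudioFrame exposes this (see above)
#   av_frame.pts = audio_pts(audio_frame.timestamp_unix_seconds)
#   for packet in audio_stream.encode(av_frame):
#       container.mux(packet)
# and analogously for the scene video frames via video_stream.
```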

user-937ec6 03 November, 2025, 18:36:18

Computer and drivers

I am using a computer that I built myself, of no particular make or model, running Windows 11.

  • CPU: Intel 12700k
  • RAM: 32 GB
  • GPU: RTX 3080

Note that the live audio delay seems the same whether I output audio to the graphics card or onboard sound.

CPU utilization

I want to be clear and elucidate the conditions under which I am running.

  • Python debugging enabled
  • Software video encoding with pyav and ultrafast baseline h.264 profile
  • Live preview with audio and video is enabled
  • Saving mp4 file using pyav with audio and video

CPU utilization is nominal and in the 4-15% range. When the video encoding is more dynamic, the utilization increases as one would expect. When the scene is relatively static, CPU utilization is less than 5%.

Drivers

I do believe the drivers to be up to date for both the onboard sound as well as the graphics card.

Latency

My main concern isn't even the live audio delay, although less delay would, of course, be nice! Rather, it's synchronization in the saved video file.

Relying on device recordings

I have a bunch of custom software, built atop the API, that does various processing including video overlays, data logging, and LSL input/output. Thus, and as I have said in prior posts, I cannot rely on device recordings for synchronized audio.

Observed audio delay

Even after matching the underlying timestamps of the audio and video frames, I still see a readily visible audio delay when playing back the saved video. Hence offsetting (delaying) the first video frame.

Calculating presentation timestamps

Yes, one could calculate the presentation timestamp from the Unix timestamp. I agree that this is mildly superior to incrementing a counter, since it's possible that the encoder could fall behind. Thank you for the suggestion, and I did adjust my code accordingly.

user-8fe6ae 03 November, 2025, 17:35:36

Hi, I just got a Neon, can I ask a really basic question? If I want to monitor and edit the stream on a PC, is the only way to do that by streaming the data from the phone / Neon Companion app to the PC over WiFi?

user-8fe6ae 03 November, 2025, 17:36:56

I've worked with a Core before; we had that connected directly to a PC and were able to set up recordings directly and edit events.

user-937ec6 03 November, 2025, 18:58:10

In the real-time Async API audio examples on GitHub there is the following text: "On the simple API examples you can also find how to use the audio for Speech-to-Text using the whisper library."

When I browse to the simple API audio examples on GitHub, I cannot find the example referenced above. Can someone kindly refer me to the right place for the Whisper example?

user-d407c1 04 November, 2025, 08:04:02

Hi @user-937ec6 👋! Thanks for following up. That's a more than capable computer; I highly doubt your CPU is the issue here.

Let's focus on synchronization after receiving the streams live, although it is a workflow that I would typically not recommend, as you're also subject to network jitter. I would recommend buffering at least 250 ms so there are no frame drops (but note that if you use the same buffer size for preview, there will be more latency on playback).

The key step is to derive presentation timestamps (PTS) from the capture timestamps, not from incremental counters. This isn't just slightly better; it's the only correct way to maintain sync, as it ties timing to when each frame was actually recorded, not when it arrived. Did that already solve the delay you observed?

For reference, here's the Whisper example you mentioned: examples/simple/audio_transcript.py. Note that this is not optimized at all; it is just meant to show the possibilities.
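
For anyone else looking, the basic whisper usage that example builds on is roughly the following; the audio path is a placeholder, not a file from the repository:

```python
import whisper

# Load a pretrained Whisper model ("tiny", "base", "small", ...)
model = whisper.load_model("base")

# Transcribe audio previously saved from a Neon recording or live stream
result = model.transcribe("neon_audio.wav")  # placeholder path
print(result["text"])
```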

user-480f4c 04 November, 2025, 06:32:35

Hi @user-8fe6ae! Your understanding is correct. Neon always needs to be tethered to the smartphone (Companion Device), which runs the Neon Companion App. The Neon Companion App allows you to collect data and stream to other devices. For streaming, you can use one of the following:

1) the Neon Monitor App - a web app allowing you to monitor your data collection in real time and remote-control all your Neons from another device.

2) the Real-Time API - offers flexible approaches to access data streams in real time, remote control your device, send/receive trigger events and annotations, and even synchronise with other devices.

Note that for every Neon purchase we offer a 30-minute Onboarding Workshop to help you set up and walk you through our software ecosystem. If you haven't already booked yours, feel free to reach out at info@pupil-labs.com and we can schedule a meeting 🙂
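
For option 2, a minimal remote-control sketch using the pupil-labs-realtime-api simple interface might look like this, assuming the PC and the Companion phone are on the same local network:

```python
from pupil_labs.realtime_api.simple import discover_one_device

# Find a Neon Companion device on the local network
device = discover_one_device()
print(f"Connected to {device}")

# Remote-control the recording and annotate it with events
device.recording_start()
device.send_event("stimulus_onset")    # event names are up to you
device.send_event("stimulus_offset")
device.recording_stop_and_save()

device.close()
```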

user-8fe6ae 04 November, 2025, 11:37:19

Ok, thanks for the message. We have restrictions on what we can do on our network at the university. Ideally we want to just have the eye tracker unit directly connected to a PC from which we can start/stop recordings and send events. Is this only possible with the Core system?

user-d407c1 04 November, 2025, 08:07:07

@user-af95e6 @user-3c26e4 The timelines visualisation should now be restored; please refresh the page and let us know if it is not.

user-67b98a 04 November, 2025, 08:16:11

Hi all! We're conducting a banner ad study using Neon glasses. I'm getting an average time to first fixation of 21 seconds for one AOI. Does this mean that, for about 20 seconds, the respondents didn't look at anything within that AOI? Please help - TIA!

user-f43a29 04 November, 2025, 10:59:27

Hi @user-67b98a , that is in principle correct. They did not fixate the AOI for the first ~20 seconds of the recording on average. This means that some respondents looked a bit earlier and some a bit later, depending on the amount of variability in your dataset. You can look at the data on a per respondent basis, if you need to know more than the average.
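
If you need the per-respondent values, a rough pandas sketch over an exported fixation/AOI table could look like this; the filename and column names are assumptions about your export, so adjust them to your actual files:

```python
import pandas as pd

# Hypothetical export: one row per fixation, with the respondent's recording id,
# the fixation start time relative to recording start (s), and the AOI it hit.
fixations = pd.read_csv("fixations_with_aoi.csv")      # placeholder filename

banner = fixations[fixations["aoi"] == "banner"]       # assumed AOI label

# Earliest fixation on the AOI per respondent = time to first fixation
ttff = banner.groupby("recording id")["start time [s]"].min()

print(ttff)           # per-respondent values
print(ttff.mean())    # the ~21 s average reported in Cloud
print(ttff.std())     # spread across respondents
```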

End of November archive