👓 neon


user-af95e6 01 November, 2025, 16:41:00

Hi all, why don't I see "Eyelid aperture" and "Pupil diameter" in Pupil Cloud?

user-77aaeb 02 November, 2025, 09:18:29

Hello! I wanted to ask what type of battery Neon uses and its capacity. I'm preparing technical documentation for an experiment and can't find that info on the website. Thank you

user-f43a29 02 November, 2025, 11:15:55

Hi @user-77aaeb , Neon uses the battery in the attached Companion device, so it depends on which phone you have.

user-77aaeb 02 November, 2025, 15:43:17

So there's no inbuilt battery? And the Companion device could be any phone/tablet attached by cord?

user-f43a29 02 November, 2025, 15:45:02

There is no battery in the module. It simply contains the sensors and receives power via USB cable from the attached Companion device. You can see an explanation & technical overview here.

We only support specific devices and specific Android versions. We maintain an up-to-date list here.

user-77aaeb 02 November, 2025, 15:45:34

Thank you! 👏

user-5ef6c0 02 November, 2025, 16:56:10

Quick question about uploading recordings with Pupil Neon into Pupil Cloud. We hadn't realized we were out of storage, so we are getting this in Pupil Cloud:

Chat image

user-4c21e5 03 November, 2025, 04:19:03

Hi @user-5ef6c0 👋. The recordings you made on Friday are safely stored in Cloud, but it indeed looks like they are over the free quota. To derestrict them, you can either purchase a Cloud plan, or alternatively, delete recordings from the free 2-hour quota to free up space. Note that to permanently delete recordings from Cloud, you'll need to also remove them from trash. Be sure to have them backed up offline if they're important because Cloud re-uploads are not possible. We also have offline tools to work with your recordings - check out this message for reference: https://discord.com/channels/285728493612957698/1047111711230009405/1431119082001793095

user-5ef6c0 02 November, 2025, 16:57:32

We just deleted lots of older recordings. Can we expect the recordings we made on Friday and today to be uploaded (the ones with the ! icon), or are they lost?

user-3c26e4 02 November, 2025, 18:12:54

Hello, I am wondering why I don't see Eyelid aperture, pupil diameter, audio and blinks anymore. Please help. Yesterday everything was OK.

user-77aaeb 02 November, 2025, 18:57:25

Can I ask for the MSDS of the 'is this thing on' model?

user-4c21e5 03 November, 2025, 04:36:03

Hi @user-3c26e4! I have a few follow-up questions to help understand why you're not seeing these data streams:

1. Are you working with recordings loaded directly into Neon Player, or in Pupil Cloud?
2. Did you have real-time eye state enabled in the Neon Companion app at the time of recording? This is in the app settings.
3. Similarly, did you have audio capture enabled in the Companion app at the time of recording?

user-3c26e4 03 November, 2025, 07:12:45

Hi @user-4c21e5 , 1. Pupil Cloud 2. Yes 3. Yes

However, I tried the Alpha Lab "Dynamic AOI Tracking With Neon and SAM2" yesterday and it didn't work. Could it have spoiled these channels? As I told you before, all channels could be seen.

user-3c26e4 03 November, 2025, 07:14:58

Here is a screenshot

user-3c26e4 03 November, 2025, 07:15:00

Chat image

user-480f4c 03 November, 2025, 07:28:53

Hi @user-3c26e4! Nadia here, stepping in for Neil. 🙂 I'd like to request some additional information:

  • Could you share the recording ID with us?
  • Can you please elaborate on the issue you experienced when using the SAM2 Alpha Lab tool?

Just to clarify: any issue you might have experienced when trying to run the Alpha Lab tutorials does not affect the data streams in your Neon recordings.

user-3c26e4 03 November, 2025, 07:32:52

OK, then it should be some kind of server problem on your side. I will share the ID of one recording, but it applies to all recordings I have with Neon. Where should I share it? Can you open a ticket?

user-3c26e4 03 November, 2025, 07:36:03

By the way, do I have defective Neon glasses? I always have to set the exposure manually, because balanced or the other automatic exposure modes lead to this. It is impossible to see the scenery. If I do it manually, it doesn't react to different lighting conditions or times of day if I am driving for a long time. Pupil Invisible was way better regarding the automatic exposure. Is my Neon defective?

user-3c26e4 03 November, 2025, 07:36:08

Chat image

user-480f4c 03 November, 2025, 07:41:13

yes, feel free to open a ticket in our 🛟 troubleshooting channel.

user-3c26e4 03 November, 2025, 07:47:39

I shared an ID.

user-480f4c 03 November, 2025, 07:43:42

Your Neon is not defective. In high-contrast environments, it's expected that you may need to manually adjust the exposure for optimal visibility.

Also, just to let you know, we recently released a new feature on Pupil Cloud that lets you adjust playback brightness directly within the platform. The settings are saved per recording, making it easier to fine-tune visibility during review. You can find the summary of this feature here: https://discord.com/channels/285728493612957698/733230031228370956/1433650414041043025

user-3c26e4 03 November, 2025, 07:46:59

But how come Pupil Invisible adjusted the exposure automatically so well? If I make it darker, one still can't see anything. This is at 0.2:

Chat image

user-4c21e5 03 November, 2025, 07:51:18

For very bright illumination, like sunlight coming in through a car windscreen, if you want visibility out of the windscreen, you should try the 'highlights' autoexposure mode. It will optimise for this environment.

user-3c26e4 03 November, 2025, 07:56:58

But one should consider that a recording on a real road can take 3-4 or more hours, so you can't keep adjusting the exposure for the whole duration of the recording. What can be done about that?

user-4c21e5 03 November, 2025, 07:52:22

If the scene is already overexposed, then it's unlikely post-hoc brightness settings will help. It's important to choose the right exposure mode at the time of recording.

user-4c21e5 03 November, 2025, 07:58:33

Highlights is an autoexposure mode, meaning it will adapt to optimise for outside the windscreen. That said, if there is a significant change, e.g. it goes dark outside during the drive, you might need to change to another mode.

user-3c26e4 03 November, 2025, 08:03:40

Can I do this without stopping the recording?

user-af95e6 03 November, 2025, 08:22:42

Hi all, do you know why I don't see "Eyelid aperture" and "Pupil diameter" in Pupil Cloud anymore?

Chat image

user-3c26e4 03 November, 2025, 08:54:18

Same with me!

user-d407c1 03 November, 2025, 09:31:39

Hi @user-af95e6 @user-3c26e4 ! We are aware of this issue affecting the plot visualizations. We are currently working on it and will let you know as soon as it is resolved.

user-d407c1 03 November, 2025, 10:07:30

Hi @user-937ec6 👋 ! may I ask what kind of computer you’re using? From our internal tests, the delay is typically minimal, but there are a few factors that can influence it. Could you check that your audio drivers are up to date and let us know what kind of audio interface or sound card you’re using, as well as your CPU utilization during playback? That’ll help us get a clearer picture.

As I mentioned before, achieving zero latency isn’t currently possible in real time due to the sampling rate at 8 kHz AAC (1024 samples), as capturing one AudioFrame takes about 128 ms. With the scene camera running around 30 FPS (~33 ms per frame), you’re effectively getting almost four video frames per audio frame.

That’s why I suggested relying on the recorded data directly on the device, which is captured at 48 kHz and already synchronized, giving you clean alignment without the need to post-hoc delay or mux the streams.

Keep in mind that delaying and storing the data, even when threaded, can increase CPU load and potentially become a bottleneck for real-time playback.

By the way, I noticed you offset the scene video frame by 0.3s in your code, but then you loop matching and discarding audio frames until one is captured after the first video frame which is contradictory.

Instead of manually offsetting and using incremental counters, you should keep the time_base for each stream, pick a zero time, and calculate PTS relative to that time, using the timestamp in Unix seconds already provided with the frames.

E.g. AudioFrame has an av_frame and timestamp_unix_seconds, so the pts for muxing the audio stream should be int((audioframe.timestamp_unix_seconds - start_time)* 8000)
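For readers landing here later, here is a minimal, untested sketch of that pattern (PTS derived from timestamp_unix_seconds rather than counters) when muxing matched scene and audio frames with PyAV. The bgr_pixels attribute, the 90 kHz video time base, and the 1600x1200 scene resolution are assumptions; adapt them to your actual frame objects.

from fractions import Fraction

import av

AUDIO_RATE = 8000   # streamed AAC sample rate
VIDEO_TB = 90000    # assumed video time base (ticks per second)

def mux_synced(scene_frames, audio_frames, start_time, out_path="synced.mp4"):
    with av.open(out_path, mode="w") as container:
        v = container.add_stream("h264", rate=30)
        v.width, v.height, v.pix_fmt = 1600, 1200, "yuv420p"  # Neon scene camera resolution
        v.time_base = Fraction(1, VIDEO_TB)
        a = container.add_stream("aac", rate=AUDIO_RATE)
        a.time_base = Fraction(1, AUDIO_RATE)

        # Note: for long recordings, interleave video and audio packets rather
        # than writing all video first, as done here for brevity.
        for f in scene_frames:
            vf = av.VideoFrame.from_ndarray(f.bgr_pixels, format="bgr24")  # assumed attribute
            # PTS relative to a shared zero time, in the stream's time base
            vf.pts = int((f.timestamp_unix_seconds - start_time) * VIDEO_TB)
            vf.time_base = v.time_base
            container.mux(v.encode(vf))

        for f in audio_frames:
            af = f.av_frame
            # Depending on the incoming sample format, an av.AudioResampler
            # may be needed before encoding to AAC.
            af.pts = int((f.timestamp_unix_seconds - start_time) * AUDIO_RATE)
            af.time_base = a.time_base
            container.mux(a.encode(af))

        container.mux(v.encode())  # flush both encoders
        container.mux(a.encode())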

user-937ec6 03 November, 2025, 18:36:18

Computer and drivers

I am using a computer that I built myself, of no particular make or model, running Windows 11.

  • CPU: Intel 12700K
  • RAM: 32 GB
  • GPU: RTX 3080

Note that the live audio delay seems the same whether I output audio to the graphics card or onboard sound.

CPU utilization

I want to be clear and elucidate the conditions under which I am running.

  • Python debugging enabled
  • Software video encoding with pyav and ultrafast baseline h.264 profile
  • Live preview with audio and video is enabled
  • Saving mp4 file using pyav with audio and video

CPU utilization is nominal and in the 4-15% range. When the video encoding is more dynamic, the utilization increases as one would expect. When the scene is relatively static, CPU utilization is less than 5%.

Drivers

I do believe the drivers to be up to date for both the onboard sound as well as the graphics card.

Latency

My main concern isn’t even the live audio delay, although less delay would, of course, be nice! Rather, it’s synchronization in the saved video file.

Relying on device recordings

I have a bunch of custom software, built atop the API, that does various processing including video overlays, data logging, and LSL input/output. Thus, and as I have said in prior posts, I cannot rely on device recordings for synchronized audio.

Observed audio delay

Even after matching the underlying timestamps of the audio and video frames, I still see a readily visible audio delay when playing back the saved video. Hence offsetting (delaying) the first video frame.

Calculating presentation timestamps

Yes, one could calculate the presentation time stamp from the unix time stamp. I agree that this is mildly superior to incrementing a counter since it’s possible that the encoder could fall behind. Thank you for the suggestion, and I did adjust my code accordingly.

user-8fe6ae 03 November, 2025, 17:35:36

Hi, I just got a Neon. Can I ask a really basic question? If I want to monitor and edit the stream on a PC, is the only way to do that by streaming the data from the phone/Neon Companion app to the PC over WiFi?

user-8fe6ae 03 November, 2025, 17:36:56

I've worked with a Core before; we had that connected directly to a PC and were able to set up recordings directly and edit events.

user-937ec6 03 November, 2025, 18:58:10

In the real-time Async API Audio examples on Github there is the following text: - On the simple API examples you can also find how to use the audio for Speech-to-Text using the whisper library.

When I browse to the Simple API Audio examples on Github, I cannot find the example referenced above. Can someone kindly refer me to the right place for the Whisper example?

user-d407c1 04 November, 2025, 08:04:02

Hi @user-937ec6 👋 ! Thanks for following up. That's a more than capable computer; I highly doubt your CPU is the issue here.

Let's focus on synchronization after receiving the streams live, although this is a workflow that I would typically not recommend, as you're also subject to network jitter. I would recommend buffering at least 250 ms so there are no frame drops (but note that if you use the same buffer size for preview, there will be more latency on playback).

The key step is to derive presentation timestamps (PTS) from the capture timestamps, not from incremental counters. This isn't just slightly better; it's the only correct way to maintain sync, as it ties timing to when each frame was actually recorded, not when it arrived. Did that already solve the delay you observed?

For reference, here's the Whisper example you mentioned: examples/simple/audio_transcript.py. Note that this is not optimized at all; it is just meant to show the possibilities.

user-480f4c 04 November, 2025, 06:32:35

Hi @user-8fe6ae! Your understanding is correct. Neon always needs to be tethered to the smartphone (Companion Device), which runs the Neon Companion App. The Neon Companion App allows you to collect data and stream to other devices. For streaming, you can use one of the following:

1) the Neon Monitor App - a web app allowing you to monitor your data collection in real-time and remotely control all your Neons from another device.

2) the Real-Time API - offers flexible approaches to access data streams in real time, remotely control your device, send/receive trigger events and annotations, and even synchronise with other devices.

Note that for every Neon purchase we offer a 30-minute Onboarding Workshop to help you set up and walk you through our software ecosystem. If you haven't already booked yours, feel free to reach out at info@pupil-labs.com and we can schedule a meeting 🙂

user-8fe6ae 04 November, 2025, 11:37:19

Ok, thanks for the message. We have restrictions on what we can do on our network at the university. Ideally, we want to just have the eye tracker unit directly connected to a PC from which we can start/stop recordings and send events. Is this only possible with the Core system?

user-d407c1 04 November, 2025, 08:07:07

@user-af95e6 @user-3c26e4 The timelines visualisation should have been restored, please refresh the page and let us know otherwise.

user-67b98a 04 November, 2025, 08:16:11

Hi all! We’re conducting a banner ad study using Neon glasses. I’m getting an average time to first fixation of 21 seconds for one AOI. Does this mean that, for about 20 seconds, the respondents didn’t look at anything within that AOI? Please help — TIA!

user-f43a29 04 November, 2025, 10:59:27

Hi @user-67b98a , that is in principle correct. They did not fixate the AOI for the first ~20 seconds of the recording on average. This means that some respondents looked a bit earlier and some a bit later, depending on the amount of variability in your dataset. You can look at the data on a per respondent basis, if you need to know more than the average.

user-4c1bae 04 November, 2025, 12:14:08

Hey all, we have been using PsychoPy to send record/stop commands to align eye tracking with stimulus presentation, and things have been working mostly smoothly. Sometimes, though, one of the eye tracking recordings seems to get stuck on the phone and not upload to the cloud. How can I get the phone to push the recording to the cloud?

user-f43a29 04 November, 2025, 12:22:48

Hi @user-4c1bae , try using the three button menu next to the recording to pause its upload. If other uploads then start, first let them complete. Then, once they are done, use the same three button menu to then Upload the recording.

user-4c1bae 04 November, 2025, 12:24:33

Will try that! If I try to select the folder button (where I expect to find what you are describing), my application always crashes for some reason.

user-f43a29 04 November, 2025, 12:24:54

Could you open a Support Ticket in 🛟 troubleshooting about this? Thanks

user-4c1bae 04 November, 2025, 12:25:39

you bet!

user-2eefe1 04 November, 2025, 14:41:50

I'm on a new PC with Windows 11. I installed Python, then ran pip install pupil-labs-realtime-api. I then copied this example: Scene Camera Video with Overlayed Gaze to a file named test.py. I run python test.py and this is the error I get: no module named 'cv2'. On another PC I did the same and everything is OK.

user-d407c1 04 November, 2025, 14:47:06

Hi @user-2eefe1 👋 ! You are almost there. As you can see in the error message there, you are missing one dependency to run the example.

Namely, you need OpenCV to display the video.

A quick way to do so would be to run pip install opencv-python

user-2eefe1 04 November, 2025, 14:47:37

I did it, but it still does not work.

user-d407c1 04 November, 2025, 14:49:02

Are you using a virtual environment? Could you share the output of pip show and where pip?

user-d407c1 04 November, 2025, 14:49:35

Also could you share whether you are using CMD or Powershell in the terminal?

user-2eefe1 04 November, 2025, 14:53:26

No virtual environment. I'm on CMD. The output is:

D:\>pip show
WARNING: ERROR: Please provide a package name or names.

D:\>where pip
C:\Users\Ale\AppData\Local\Microsoft\WindowsApps\pip.exe
C:\Users\Ale\AppData\Local\Python\pythoncore-3.14-64\Scripts\pip.exe

user-d407c1 05 November, 2025, 07:32:12

@user-2eefe1 It looks like you’re using Python 3.14, which was released not so long ago, and OpenCV (along with other dependencies) may not yet have prebuilt wheels for it.

Additionally, running Python globally can easily cause dependency conflicts, so it’s best to work inside a virtual environment.

I'd recommend using uv by Astral to manage virtual environments, Python versions, etc.; it's fast and makes managing Python oddities easy.

On Windows, you can install and set it up following any of the steps here.

Using powershell:

powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

or WinGet:

winget install --id=astral-sh.uv  -e

Then, you simply navigate to the folder where you want to run it and you can

uv venv -p 3.13 # Create virtual environment on Python 3.13 (It will download required version)
.venv\Scripts\activate # Activate the environment
uv pip install pupil-labs-realtime-api --group examples # Install the realtime library with additional dependencies to run the examples
python example/script.py # Run the example you want.

Are you also using JetBrains or any IDE? If so, you would need to select the correct environment.

user-2d542b 05 November, 2025, 11:26:28

Hi there! This isn't directly related to the eye tracker, but I couldn't find a better channel to ask this. I'm working with this for the first time, and I can't seem to open Neon Player on my MacBook. An error message appears saying that Apple can't check the Neon Player application for malware.

I've tried to override the security settings with the 'open anyway' feature, but that hasn't worked for me either. Now, when I click on the Neon Player app, nothing happens. Is this an incompatibility issue with Mac/Apple products? Or am I doing something wrong here? Thanks!

user-d407c1 05 November, 2025, 12:17:11

Hi @user-2d542b ! What version of macOS do you have? In newer versions, you need to open "Settings" and navigate to "Privacy & Security"; upon trying to open Neon Player, a new section will appear asking whether to proceed.

user-2d542b 05 November, 2025, 12:55:37

It's version 12.7.6! I got to the new section, which is an option to 'open anyway'. Unfortunately, all this has done is to make the application unresponsive, that is, now when I double click on it, nothing happens... 🙁

user-d407c1 05 November, 2025, 12:59:54

which version of Neon Player are you trying to install?

user-2d542b 05 November, 2025, 13:00:45

The file says v5.0.5

user-2d542b 05 November, 2025, 13:07:34

Hi there! I've managed to bypass this problem by downloading v4.8.4 instead

user-d407c1 05 November, 2025, 14:09:08

Hi @user-2d542b 👋 ! Just a heads-up, macOS 12 reached end-of-life in November last year, and is no longer supported by Apple, which makes it harder to support.

The runners we use to compile builds dropped support for it last year, so starting from v4.5.3 onward (including v4.8.4 and all v5.x releases), all mac builds are made on macOS 13.

Honestly, you’re lucky that v4.8.4 runs at all on macOS 12. If it does everything you need, that’s fine, but make sure you’re aware that recordings made with newer Companion app versions (with built-in detectors) won’t include fixations or blinks when processed on that older version of Neon Player. I'd recommend updating your version of MacOS if that's an option.

user-eebf39 05 November, 2025, 18:38:35

I have an issue that pops up every once in a while, as shown in the image. I haven't found any related content on this Discord or the GitHub repo. Should I make a minimal working example and open an issue in the simple-realtime-api repo?

Chat image

user-f7408d 05 November, 2025, 22:59:43

Hi Team, just wondering if the Demo Workspace data is counted towards the quota for cloud storage and if so, is there a way to delete it as it appears to be disabled -

Chat image

user-f7408d 06 November, 2025, 03:18:52

I have also noticed I can't download the blocked videos, I can only trash them to remove them from the site.

user-4c21e5 06 November, 2025, 03:50:41

Hi @user-f7408d! The demo workspace does not count towards your quota. You're correct, it's not possible to download the recordings that are over the quota as they become restricted. To derestrict recordings, you can either purchase a Cloud plan, or alternatively, delete recordings from the free 2-hour quota to free up space. This will cause some of the restricted recordings to become available to work with. To permanently delete recordings, you'll also need to remove them from trash. Be sure to have them backed up offline if they're important because Cloud re-uploads are not possible. We also have offline tools to work with your recordings - check out this message for reference: ⁠https://discord.com/channels/285728493612957698/1047111711230009405/1431119082001793095

user-f7408d 06 November, 2025, 03:53:00

Once you go over the quota, does it restrict all videos?

user-4c21e5 06 November, 2025, 03:57:16

No - only those over the 2 hour free quota

user-f43a29 06 November, 2025, 10:04:25

Hi @user-67b98a , the Time To First Fixation metric means the time until a fixation finally lands on each respective AOI. It is not the time until the very first fixation happens in the recording overall. Do you mean to say that they fixate the specific AOI almost instantly?

user-67b98a 12 November, 2025, 08:27:39

Thank you

user-f43a29 06 November, 2025, 10:05:37

Hi @user-eebf39 , are you using the timeout_seconds parameter?

user-eebf39 07 November, 2025, 16:35:56

No.

user-c855e9 06 November, 2025, 14:58:15

I recently received my Neon device and I'm experiencing an issue that may be related to either the hardware or Neon Player version 5.0.5. I recorded a video, downloaded it via Pupil Cloud, and then imported the file into Neon Player for processing. The AprilTags were detected correctly, and I was also able to create my AOIs without any problem. However, when I export the final fixation data, some of the AOIs produce duplicate fixation results. For example, if I define four AOIs, two of them end up showing identical fixation data in the output, even though their AOI regions are not spatially close to each other. What could be causing this, and what am I doing incorrectly?

user-c855e9 06 November, 2025, 15:08:20

Here is what I am talking about. Fixation x and y are different, but fixation ids, start times, end times, and durations are identical for 2 distinct AOIs.

fixations_on_surface_Surface_1.csv fixations_on_surface_Surface_2.csv

user-f43a29 06 November, 2025, 15:13:58

Hi @user-c855e9 , would you be able to share the data for that recording with [email removed]? Then, we can provide better feedback.

user-c855e9 06 November, 2025, 15:17:39

I just uploaded two files (see above). Is that what you need?

user-f43a29 06 November, 2025, 15:18:09

That is the output from Neon Player, but it would be helpful to see the actual recorded raw data that you initially loaded into Neon Player to produce those two files.

user-c855e9 06 November, 2025, 15:20:58

just sent it

user-b8afe7 06 November, 2025, 15:38:13

Hi Pupil Labs team, I have a quick question about the Neon frames for motion capture: Are the arms removable, in case of some experiments that do not require motion capture? Thank you for your help.

user-f43a29 06 November, 2025, 15:39:52

Hi @user-b8afe7 , yes, they are. If you would like to see them in action, make sure to schedule a Demo and Q&A call with us.

user-b8afe7 06 November, 2025, 15:41:14

Hi @user-f43a29 , thanks for the clarification


user-299825 07 November, 2025, 02:34:37

Hi team! I've been going through the NeonNet doc (https://pupil-labs.com/products/neon/neonnet) and the accuracy test report.

I imagine the training involves labeled data (mapping binocular IR eye images to gaze coordinates), collected across diverse viewing conditions. I'm curious whether, and how, the ground-truth target locations were defined in dynamic cases, beyond static lab calibration conditions?

Also, is NeonNet still being actively updated, or has the deployed model remained fixed since its initial release? And could you give a quick sense of what the network’s architecture looks like? Thanks!

user-4c21e5 07 November, 2025, 03:21:09

Hi @user-299825 👋. Thanks for your questions. You're correct that diverse training data is used. The challenge of achieving accuracy in dynamic conditions is overcome through our proprietary methods, which ensure NeonNet remains exceptionally accurate and slippage-invariant during real-world use. Regarding updates, we are always working to refine and improve NeonNet outputs. In fact, we added 3D eye poses and pupillometry to Neon's measurements just last year. However, the specifics of our ongoing development process and NeonNet's architecture are confidential. We appreciate your deep interest in the technology!

user-5a90f3 07 November, 2025, 04:26:58

Hello team! I would like to know if Neon can calculate the percentage of eyelid closure time (PERCLOS), as I don't seem to see it in the output file. Is this something I need to calculate myself using the 3D eyeball pose file?

user-4c21e5 07 November, 2025, 04:41:01

Hi @user-5a90f3! PERCLOS isn't provided as a turnkey measurement. But we have an Alpha Lab guide that shows you how to compute it from Neon's data streams. Check it out here: https://docs.pupil-labs.com/alpha-lab/perclos/
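For anyone searching later, a rough sketch of the kind of computation that guide performs. The column names below ("timestamp [ns]", "eyelid aperture left [mm]") are assumptions about the 3d_eye_states.csv export and should be checked against your files; the Alpha Lab guide above is the authoritative reference.

import pandas as pd

def perclos(csv_path, closure_fraction=0.8, window="60s"):
    # Load eye-state export and index it by time for rolling windows
    df = pd.read_csv(csv_path)
    df["t"] = pd.to_datetime(df["timestamp [ns]"], unit="ns")
    df = df.set_index("t").sort_index()

    aperture = df["eyelid aperture left [mm]"]           # assumed column name
    open_baseline = aperture.quantile(0.95)              # crude "fully open" baseline
    # Count a sample as "closed" when the lid is at least closure_fraction shut
    closed = aperture < (1.0 - closure_fraction) * open_baseline

    # PERCLOS = fraction of closed samples within each rolling time window
    return closed.astype(float).rolling(window).mean()

# Example usage: perclos("3d_eye_states.csv").plot()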

user-299825 07 November, 2025, 04:57:43

Thanks @user-4c21e5 for your response!

We're early users working on a project with a scientific focus, and we'd like to ensure that the model generating the raw outputs in gaze.csv (not the derived signals such as fixations, 3D poses, pupillometry, or other measures) has remained unchanged since Neon's release. Could you confirm whether any updates have been limited to components derived from gaze.csv, rather than affecting the raw values themselves?

user-9a1aed 07 November, 2025, 08:19:46

Hi team, I am trying to install the PsychoPy program on my computer. However, I realized that it does not have the plugins/packages option under the Tools menu, which means I cannot install the eye tracking component on my PC. May I know which version of PsychoPy I should be using? Thanks!

user-480f4c 07 November, 2025, 08:50:53

Hi @user-9a1aed! May I ask, which version are you trying to download?

user-9a1aed 10 November, 2025, 05:52:29

2025.1.1 the latest one

user-edb34b 07 November, 2025, 11:30:46

Hi team, I would like to make sure I understood it right and ask for a clarification concerning a statement on the website acknowledging digital clock drifts that may cause time synchronization issues (https://docs.pupil-labs.com/neon/data-collection/time-synchronization/). Let's say we do a computerized experiment where task event messages are sent from the experiment computer using the Python API. Am I correct to suppose that their timestamps are assigned according to the computer's clock? As gaze and eye event timestamps come from the Companion device, these two timing systems may be out of sync. So, if I did not force a fresh sync-up of the Companion before running the experiment, I should expect a slight difference in timing between the eye events and the task events. Did I get that right? Furthermore, are the Companions' clocks automatically readjusted to internet time from time to time (like when switching off/on), or is it only up to us to do it?

user-f43a29 07 November, 2025, 13:15:54

With respect to sync:

  • If you use basic Events and do not do any sync (e.g., both devices are completely disconnected from the internet or have automatic time disabled), then you could have any kind of difference. It probably will not be slight.
  • If you use basic Events and have the devices connected to the Internet with automatic time enabled, then you could expect a difference of up to ~1s, maybe more, depending on system and relative sync frequency.
  • If you use basic Events and force a fresh sync, then you can expect ~3 ms difference, depending on Operating System, and that difference can grow, again with a rate depending on Operating System, until the devices do another automatic NTP sync. The Companion devices are flagship Android smart phones, so they will readjust their time using NTP as necessary.
  • If you use offset-corrected Events, then you should see millisecond precision sync.
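As a minimal sketch of what an offset-corrected Event looks like with the Python real-time API: this assumes the simple API's estimate_time_offset() and the event_timestamp_unix_ns argument of send_event() as documented for Neon, and the sign convention should be verified against the time-synchronization docs for your version.

import time

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# Estimate the clock offset between this PC and the Companion device
estimate = device.estimate_time_offset()
clock_offset_ns = round(estimate.time_offset_ms.mean * 1_000_000)

# Timestamp the event on the PC's clock, then convert it to the Companion timebase
event_time_ns = time.time_ns() - clock_offset_ns
device.send_event("trial start", event_timestamp_unix_ns=event_time_ns)

device.close()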
user-f43a29 07 November, 2025, 13:15:50

Hi @user-edb34b , your understanding of the basic principle is correct. Some clarifications:

  • Timestamps for basic events come from the Companion device when it receives the event, not from the experiment PC's clock.
  • However, timestamps used for precise sync are assigned by you. In that case, you use the clock of the experiment PC and convert that timestamp to the timebase of the Companion device. This mitigates transmission latency.
  • Timestamps for gaze, eye, and all other Neon data come from the Companion device.
user-5f3c38 07 November, 2025, 11:40:05

Hi, I have data from Pupil Labs Neon, and I used Neon Player to obtain the CSV files for fixations, saccades, etc. However, when I download the 'Exports' folder, the 'Fixation' file is always empty. I have enabled the "Fixations and Saccades" plugin in the Plugin Manager correctly, but I am only getting saccade files and not fixation files. What could be the reason for this? Thanks for the help!

user-f43a29 07 November, 2025, 13:16:38

Hi @user-5f3c38 , did you have Compute fixations enabled in the Neon Companion app during that recording?

user-5f3c38 07 November, 2025, 13:40:26

Yes, I did. Both eye state and fixation computation were enabled.

user-f43a29 07 November, 2025, 13:42:02

Would you be able to share the recording data with data@pupil-labs.com so that we can provide more direct feedback?

user-5f3c38 07 November, 2025, 13:42:49

Sure, I am sending an email now. Thank you!

user-edb34b 07 November, 2025, 14:06:59

Thanks for the answer. When using basic events and having devices connected to internet with automatic time enabled, can the difference of up to ~1s also be a result of the wireless connection between the experiment computer and the companion? Therefore, timestamps of basic events would have potential delays as compared to gaze or eye data.

user-f43a29 07 November, 2025, 14:11:58

To clarify, the NTP sync method is an easy way to have the devices synced well enough for many use cases, without needing to use Events, triggers, markers, etc. for a "manual post-hoc sync".

user-f43a29 07 November, 2025, 14:08:33

That is not related to transmission latency, but rather independent of it. It is the difference between the time reported by the clocks on both devices.

Transmission latency will depend on the type and quality of your network. If you need low latency, then you can use Ethernet with Neon, but this is not necessary for precise sync, which can be achieved with offset-corrected Events.

user-f43a29 07 November, 2025, 14:09:36

So, I only refer to clock offsets above. You will then need to also factor in transmission latency when using basic Events (but not with offset-corrected Events), if that is relevant for your use case. I will edit my message above to be clearer about that.

user-edb34b 07 November, 2025, 14:20:04

Ok, so there are two different processes potentially impacting timestamps of basic events, clock offset and transmission latency. Both are of interest to me as I am trying to figure out all the factors that could alter the time duration between a basic event (like "trial start") and an eye-event (like "saccade start"), and how to optimize it as a desync or delay in the reception of the basic event could lead to a shorter difference duration, which would be interpreted as a faster eye movement onset than in reality.

user-f43a29 07 November, 2025, 14:22:54

Then, offset-corrected Events are what you are looking for. They simultaneously:

  • Correct for clock offset
  • Mitigate transmission latency

You can use them with or without NTP sync. The final precision is the same, so long as you check and update the time offset estimate every now and then, say at the end of every trial or even every block.

@user-edb34b , if I remember correctly, you are using MATLAB?

user-edb34b 07 November, 2025, 14:31:23

I see, thanks for the answer. Indeed, if the offset correction also estimates transmission latency, it would be quite useful. Unfortunately for us, I guess there is no way to use it post-hoc if the procedure was not implemented during data collection. And yes, we used MATLAB in our experiment.

user-f43a29 07 November, 2025, 14:49:34

No, it is not possible to implement it post-hoc, unless you have some sort of other external physical event that can be used as a shared sync point. Some people use a loud clap or a flash of light, but that might not be precise enough for your purposes.

user-f43a29 07 November, 2025, 14:48:56

Hi @user-edb34b , it does not estimate transmission latency, but rather mitigates it. I have to be in a meeting for a moment, but I will explain afterwards.

user-9a1aed 10 November, 2025, 05:54:00

Now I changed to a different laptop, but the original gaze-contingent feature does not work anymore.

I tried the gaze-contingent demo program; the feature worked perfectly, but my program did not, although my program is basically copying the demo program. Here is the detailed program: https://hkustconnect-my.sharepoint.com/:u:/g/personal/yyangib_connect_ust_hk/EbXkWgJhCQxNvKP0uBtIx7EB0xKIkMeRdZxsvl7UBflOTg?e=Dj6aSH

May I know the solution to this, please? Not sure what leads to the problem. Thanks!

user-f43a29 10 November, 2025, 14:18:27

Hi @user-9a1aed , can you open a Support Ticket about this and share the full PsychoPy logs when you try to run that example? The plugin is compatible with 2025.1.1 and has been tested as working on a fresh install.

user-9a1aed 10 November, 2025, 06:08:23

Chat image

user-579c7c 10 November, 2025, 07:49:36

Hi, I am Leena from Panoramix. We are trying to download a 36-minute file from Pupil Cloud, but it has not finished downloading even after 1.5 hours. Is that normal?

user-f43a29 10 November, 2025, 14:19:16

Hi @user-579c7c , that is not expected. Could you send an email to [email removed]? Thanks

user-2d542b 10 November, 2025, 11:36:24

Hi there! Is it possible to change how I define a fixation in Neon Player (i.e., min/max duration, max dispersion)? I recall being able to do this on pupil player under the 'Fixation Detector' plugin. However, I can't seem to find this in Neon Player when I look at the 'Fixations and Saccades' plugin. Thanks!

user-f43a29 10 November, 2025, 14:20:56

Hi @user-2d542b , the fixation detector runs in real-time in the Neon Companion app on the phone. Neon Player then loads that saved data. If you want to tweak the parameters of the fixation detector, then you could try modifying the open-source implementation in pl-rec-export.

user-52e548 10 November, 2025, 12:41:40

Hello team, I exported a world video with Vis Circles overlaid via Neon Player. In the middle of the video, I noticed that the circle is missing for about 10 seconds consecutively. However, upon checking gaze_positions.csv, I found that coordinates were recorded for this time period. I would appreciate any advice on the cause and possible solutions.

I'll list the parameters from export_info.csv that are useful. Player Software Version 5.0.4.0
Data Format Version 4.1

user-f43a29 10 November, 2025, 14:22:24

Hi @user-52e548 , could you try loading the recording into the latest version of Neon Player (5.0.6) and exporting again? If you still experience the issue, could you then share that recording with [email removed]? Thanks!

user-f43a29 10 November, 2025, 14:18:53

Hi @user-eebf39 , may I ask then if you are running that function in a thread and also running other receive_* functions in that thread?

user-eebf39 11 November, 2025, 22:18:19

Yes it is in a thread. No I am not calling other receive_* functions in that thread.

user-159183 10 November, 2025, 17:29:37

Hi, does Pupil Cloud have an API so that we can access/download our data automatically via a script?

user-f43a29 10 November, 2025, 20:43:13

Hi @user-159183 , yes, it does. It works via standard HTTP GET/POST/DELETE requests. You can find the details here.
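As an illustration only: scripted access looks roughly like the sketch below. The base URL, endpoint path, auth header, and response shape here are assumptions; the documentation linked above is authoritative.

import requests

API_BASE = "https://api.cloud.pupil-labs.com/v2"  # assumed base URL - check the docs
API_TOKEN = "YOUR-API-TOKEN"                      # generated in your Cloud account settings
WORKSPACE_ID = "YOUR-WORKSPACE-ID"

headers = {"api-key": API_TOKEN}

# List recordings in a workspace (assumed endpoint path)
resp = requests.get(f"{API_BASE}/workspaces/{WORKSPACE_ID}/recordings", headers=headers)
resp.raise_for_status()
print(resp.json())  # inspect the returned recording metadata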

user-16e110 10 November, 2025, 21:24:49

Hi, I had a quick question on video recordings stored on Pupil Labs Cloud. How long do I have until those recordings are deleted? Additionally, are future recordings affected by this? Our Unlimited Storage Add On recently expired and I am wondering how to keep the files without losing them? Currently there is no option to download them off of the site

user-a84f25 10 November, 2025, 22:07:47

I can't use Neon Player 5.0.6 - it crashes with the following traceback:

Traceback (most recent call last):
  File "pyi_rth_pkgres.py", line 177, in <module>
  File "pyi_rth_pkgres.py", line 44, in _pyi_rthook
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "pyimod02_importers.py", line 457, in exec_module
  File "pkg_resources/__init__.py", line 90, in <module>
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "pyimod02_importers.py", line 457, in exec_module
  File "setuptools/_vendor/jaraco/text/__init__.py", line 231, in <module>
  File "pathlib.py", line 1058, in read_text
  File "pathlib.py", line 1044, in open
FileNotFoundError: [Errno 2] No such file or directory: '/opt/neon_player/_internal/setuptools/_vendor/jaraco/text/Lorem ipsum.txt'
[PYI-277543:ERROR] Failed to execute script 'pyi_rth_pkgres' due to unhandled exception!

I'm on Ubuntu 24.04.

user-f43a29 11 November, 2025, 09:04:03

Hi @user-a84f25 , could you open a Support Ticket about this in 🛟 troubleshooting ? Thanks!

user-f89b02 10 November, 2025, 22:14:18

Hi - my app/glasses don't seem to be recording fixation data anymore. When I export the data, I have a fixations dtype file but no fixations ps1.raw or .time file. It was working before, so I'm not sure what changed.

user-f89b02 10 November, 2025, 23:01:55

I uninstalled and reinstalled the app, which seemed to fix the problem. However, I can reproduce the problem when I switch the gaze frequency from 200 Hz to 33 Hz. I would like to use 33 Hz to minimize phone heat. Could this be looked into?

user-4c21e5 11 November, 2025, 03:10:05

Hi @user-f89b02 👋. This is expected behaviour - you'll need to keep the gaze rate set to 200 Hz to obtain real-time fixations. Where do you have the phone located during testing? I ask because phone warmth shouldn't be an issue during typical use. It's common just to slip the phone into the participant's pocket, for example.

user-f89b02 11 November, 2025, 03:12:24

Ok, good to know, thank you! Out of curiosity, why is this expected behavior? We usually have it in the user's back pocket, but they are thin and the recordings go on for hours, which is why heat has come up before as an issue, so I was hoping to minimize this.

user-4c21e5 11 November, 2025, 03:23:54

It's expected because the Fixation Detector algorithm performs optimally around this sampling frequency. In terms of warming, while continuous processing for extended periods will cause the phone to warm, the app operates within the device's internal thermal operating range, meaning the phone will handle the increased load safely. That said, it's also possible to have the phone in an external holster so it's not directly in contact with the wearer.

user-4c21e5 11 November, 2025, 03:28:35

Hi @user-16e110! We are reserving the right to delete data that is not covered by a plan. But for now, no automatic deletion or anything is scheduled. Yes, all future recordings that exceed the two hour free plan will be restricted. If you would like to work with these, you can purchase a Cloud plan. Alternatively, you can simply download recordings from within the two hour free quota, and then delete them from Cloud. Once deleted, it will free up space and other recordings will become available in turn. Note to permanently delete recordings from Cloud, you'll need to also remove them from trash. Be sure to have them backed up offline if they're important because Cloud re-uploads are not possible. Once downloaded, we also have offline tools to work with your recordings - check out this message for reference: https://discord.com/channels/285728493612957698/1047111711230009405/1431119082001793095

user-16e110 11 November, 2025, 03:36:58

Thank you for the response. I do have an additional question and I apologize for my confusion, but what do you mean by 2 hour free plan? We currently have several recordings (that are roughly 10 minutes each) but I am still unable to download any of them even after deleting several older files on the cloud to make room. I am just worried months of data will be lost. In the meantime I will contact my team to work on a possible solution to either purchase a plan (based on pricing) or to offload onto a physical storage.

user-ffef53 11 November, 2025, 13:26:39

Could someone explain what the duration column of the annotations.csv is for? Whenever I manually add annotations in Neon Player or Monitor (while conducting experiments live), I get 0.0 for the duration. It would be awesome if the durations (or windows between each annotation) were also exported to the CSV, but I'm not sure how to do it. https://docs.pupil-labs.com/neon/neon-player/annotation-player/

user-f43a29 11 November, 2025, 14:20:05

Hi @user-ffef53 , the duration column in the Neon Player export is a vestige from Pupil Player. Neon's Events mark instantaneous moments in time and do not have a duration as such, so that column is not relevant here.

If you want to use Events to estimate the duration of something, such as a trial in an experiment, then you need 2 Events: one to denote the start and one to denote the end. Then, you just take the difference between their timestamps to get the duration.

user-ffef53 11 November, 2025, 14:21:34

That's what we've been doing so we can calculate our own durations, just thought it might be cool to find another way. Regardless, thanks for your help and quick response!

user-f43a29 11 November, 2025, 14:21:50

You are welcome 🙂

user-a84f25 11 November, 2025, 19:04:00

I have a question about pl-neon-recording: the gaze offset is saved in an info file, but is that offset applied post-hoc to all gaze data, or is it included in the gaze data by default, with any additional offsets needing to be applied manually after the fact? Thanks

user-f43a29 13 November, 2025, 15:13:23

Hi @user-a84f25 , this one slipped off our radar. Sorry about that.

So, Neon Player and pl-neon-recording always treat recordings as read-only. They do not, and will not ever, modify recording data.

The rest of your question can be summarised as follows:

  • If you have used a Wearer Profile with a saved Gaze Offset Correction at the time of recording, then it has already been applied to the raw gaze data, as well as the data on Pupil Cloud.
  • If you have not applied it during the recording and instead apply it afterwards on Pupil Cloud or in Neon Player, then the exported gaze CSV files will have the gaze offset already applied. However, the raw recording data from the phone, as loaded by pl-neon-recording for example, will not have it applied.
  • If you did use a Gaze Offset Correction at the time of recording, but then modified it afterwards on Pupil Cloud or in Neon Player, then the exported gaze CSV files will have the updated gaze offset applied. However, the raw recording data from the phone, as loaded by pl-neon-recording, will still have the previous Gaze Offset Correction.

In all cases, the values in the info.json file are simply for your reference.

user-2429e8 12 November, 2025, 05:16:28

While starting the recording, we get an issue called "sensor failure", and it keeps repeating, so we were not able to collect any data.

Chat image

user-4c21e5 12 November, 2025, 06:39:41

Hi @user-2429e8! I can see you've opened a ticket. That's the correct place - we'll continue there 🙂

user-53b146 12 November, 2025, 10:17:54

Hi, Is there an API that can retrieve the coordinates of the provided markers in real time?

user-f43a29 12 November, 2025, 10:22:42

Hi @user-53b146 , you want to know the positions of the AprilTags?

user-53b146 12 November, 2025, 10:24:55

YES, that's correct. I want to obtain the tag's coordinates in real time.

user-f43a29 12 November, 2025, 10:25:25

Do you need them in physical 3D coordinates or in pixel coordinates on the monitor? Do you display them as images on the monitor?

user-f43a29 12 November, 2025, 10:36:54

It is fine if they extend beyond the borders, but then what is the ultimate goal of knowing the pixel coordinates? If they extend beyond the border, then their coordinates will be in "virtual display" space.

user-a4aa71 12 November, 2025, 14:48:17

Hi everyone, I’m analyzing the IMU data and I noticed in the yaw traces (but also quaternions obv) some portions of the signal where the subject is moving in which the sensor gives me rotation angles that are clearly excessive compared to the actual rotation performed by the subject — I can also see this from the world camera view. This doesn’t always happen, but occasionally. Could this be an issue caused by the magnetometer? Thanks!

user-f43a29 12 November, 2025, 14:54:59

Hi @user-a4aa71 , could you share the recording with [email removed]? Then, we can take a closer look. Also, just to be sure, was the IMU calibrated before running the task?

user-a4aa71 12 November, 2025, 15:02:23

Yes, I followed the procedure and performed the figure-eight movement. I have some doubts about the magnetometer, also because I can't really be sure that the calibration was successful. Is there a way to verify it without having to stream data in real time to check whether the values make sense?

user-edb34b 13 November, 2025, 15:14:15

[email removed] , here is the diagram. Let

user-314b02 14 November, 2025, 08:51:50

Hi team

The USB C connector end of the glasses has started fraying. Is there a DIY fix for the same?

user-f43a29 14 November, 2025, 08:52:33

Hi @user-314b02 , please send an email to info@pupil-labs.com about this and a member of the Operations team will assist. It will be helpful if you include the original Order ID.

user-f9bb4c 14 November, 2025, 14:00:18

Hi, there is a product called Sugru that should work well for this.

user-314b02 14 November, 2025, 08:54:05

Thanks. Will do.

user-f89b02 14 November, 2025, 14:52:45

Hi- we had a flashing red light during one of our recordings the other day. I was not present for it but there was a warning about a connectivity issue in the app when this occurred. I was wondering if I could get more information on what this could have been and also if there is a way to turn off the flashing light warning (it can be distracting to other users during our recordings). Thanks!

user-f43a29 14 November, 2025, 14:58:07

Hi @user-f89b02 , could you open a Support Ticket in 🛟 troubleshooting about this? Thanks

user-4c21e5 15 November, 2025, 12:20:53

You can read about the exposure modes here: https://docs.pupil-labs.com/neon/data-collection/scene-camera-exposure/#scene-camera-exposure Cycle through them in the live preview and see which one works for your setting!

user-5803e0 15 November, 2025, 12:21:51

Thank you, I'll try it.

user-77aaeb 16 November, 2025, 12:43:25

Do the Neon glasses have CE biomedical certification in EU? Or just CE?

user-4c21e5 17 November, 2025, 03:07:51

Hi @user-77aaeb 👋. Neon is CE- and FCC-certified, but not medically certified. That said, many users employ Neon for medical research, and for applications that require medical certification, our partner Reyedar offers an EU-compliant solution: https://www.reyedar.com/

user-5c48a3 16 November, 2025, 13:49:34

Hi! To enhance my gaze estimation project, I am trying to convert the gaze of the person recorded from the Neon scene camera to world coordinates using IMU data. I was thinking about using yaw, pitch, and roll to build up the rotation matrix R for the conversion, but I am struggling with signs and conventions. Can you help me? Thank you 🫶

user-4c21e5 17 November, 2025, 03:14:10

Hi @user-5c48a3 👋, good idea and totally doable.

You can transform gaze orientation into the 'global coordinate system' using Neon's onboard IMU data. We have written a handy guide that shows you how to do this: https://docs.pupil-labs.com/alpha-lab/imu-transformations/. It includes several code snippets, explanations and a visualisation guide at the end.

Note that this provides orientation only, namely heading (yaw) and the vertical axis (pitch and roll), because of the nature of IMU data.

If you want to go further and compute absolute position, take a look at this guide instead: https://docs.pupil-labs.com/alpha-lab/tag-aligner/. It is a bit more advanced, but enables full spatial tracking.
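To give a sense of the shape of the computation, here is only a skeleton with assumed imu.csv/gaze.csv column conventions; the exact IMU-to-scene-camera rotation and the sign conventions are defined in the linked guide and should be taken from there.

import numpy as np
from scipy.spatial.transform import Rotation as R

def gaze_direction_in_scene(azimuth_deg, elevation_deg):
    # Unit gaze vector in scene-camera coordinates from gaze angles (assumed convention)
    az, el = np.deg2rad(azimuth_deg), np.deg2rad(elevation_deg)
    return np.array([
        np.cos(el) * np.sin(az),   # x: right
        -np.sin(el),               # y: down - verify the sign against the guide
        np.cos(el) * np.cos(az),   # z: forward
    ])

def gaze_direction_in_world(quat_wxyz, gaze_scene, scene_to_imu=np.eye(3)):
    # Rotate a scene-camera gaze vector into the IMU's world frame
    w, x, y, z = quat_wxyz                      # imu.csv quaternion, assumed w, x, y, z order
    world_from_imu = R.from_quat([x, y, z, w])  # scipy expects x, y, z, w order
    return world_from_imu.apply(scene_to_imu @ gaze_scene)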

user-ebd8d5 17 November, 2025, 12:48:41

Chat image Chat image

user-ffef53 18 November, 2025, 14:32:28

I'm playing around with the surface tracker and I exported surfaces. However, it also seems to generate a world.mp4.canceled file - does anyone know why? It has happened twice.

Also, I'm noticing that due to the exposure of our experiment room, not all tags on the table are visible at all times. When I manually make a surface and try to drag the corners back into place, each frame generates a wonky interpretation of the surface (with directions changing) that's only accurate sometimes. Is there a resource that discusses best practices with AprilTags and the surface tracker plugin beyond this part of the documentation? https://docs.pupil-labs.com/neon/neon-player/surface-tracker/#setup I think I need to revise our setup, but want to know the best way. Thanks!

user-4c21e5 19 November, 2025, 02:59:06

Hi @user-ffef53 👋. That file is created when you start exporting a 'world' video file in Neon Player, but something causes it to cancel before it finishes. If you don't need to export world videos, you can disable the video exporter plugin in the Plugin Manager to prevent these files from being generated.

Could you share a recording showing your marker set up? That way we can provide concrete feedback.

user-ffef53 19 November, 2025, 14:46:15

The video is pretty long so no need to watch the whole thing, but I just wanted to demonstrate the lag I'm experiencing and the direction changes of the surface.

user-a6c56f 19 November, 2025, 15:21:05

Hey everyone, I'm really new to Neon and I'm currently setting up a study with PsychoPy where I want to use it. I'm showing an AprilTag Frame with your plugin in my experiment so I get the gaze data mapped onto my screen in the MonocularEyeSampleEvent part of the HDF5 file output by PsychoPy. Unfortunately, I couldn't find any documentation on that specifically, so I have a few questions on how to interpret the data.

  • I think what I'm mainly interested in is 'gaze_x' and 'gaze_y', right? So which coordinates do these values use? I thought it would be 'norm' from PsychoPy, so ranging from -1 to 1 in both axes (or maybe even the coordinates set in the experiment settings, which would be 'height' in my case), but sometimes I get values that don't really match what I was expecting (for example, values on the x-axis going from -0.8 to -2.4).
  • Following that, is there any way to find out whether PsychoPy/the Pupil Labs plugin was actually able to properly map the gaze onto the screen? I found a 'confidence_interval' column in the data, but it's always -1 for me.
  • Is this even the proper way to approach it? I like that doing the mapping inside PsychoPy gives me the eye tracking data in the same place as my other data, but it seems like it's not very flexible and I can't monitor if the mapping is working.

Any help is much appreciated, also please let me know if there is any documentation that I'm missing!

user-f43a29 19 November, 2025, 16:58:41

Hi @user-a6c56f , are you running this on a system with two monitors? May I also confirm that you are using PsychoPy Builder?

user-f43a29 19 November, 2025, 17:25:13

Also, the confidence_interval column is not relevant for Neon.

To check if mapping is going well, you can briefly display a target on the screen that is located at the surface-mapped gaze coordinates. We have an example of this in our Gaze Contingent PsychoPy Demo.

user-a6c56f 19 November, 2025, 15:47:06

Maybe to add to that, this is a plot I'm currently getting from gaze_x/y when I'm looking at all 4x3 April Tags at the edge of the screen one after another. So it actually looks quite good, but the coordinates don't really make sense to me, both x- and y-axis seem to be shifted into the negative area by quite a bit. Hope this helps!

Chat image

user-ffef53 20 November, 2025, 08:34:47

just liked because it does look great so far and relevant to what I'm doing as well. best of luck to you and your team!

user-a6c56f 19 November, 2025, 17:05:18

Hi @user-f43a29, yes I have two monitors and I am using the PsychoPy builder.

user-f43a29 19 November, 2025, 17:20:03

Ok, thanks, because those look like coordinates for a monitor that is to the left and a bit down from the primary monitor. Do you have AprilTags on both monitors? PsychoPy may have labeled the monitors backwards. I think this can be changed in their Experiment Settings.

With respect to your questions, the format of the PsychoPy HDF5 file, as produced by Builder, is developed and documented by the PsychoPy team. Our plugin sends Neon's data to it such that it conforms to their specification. In other words, in this context, yes, gaze_x and gaze_y are in normalized monitor coordinates.

Overall, what you are doing is workable and you are in principle doing nothing wrong. However, we would also recommend running a native Neon recording in parallel. The phone is capable of recording & streaming to PsychoPy simultaneously. Then, you have the (open) raw data in our format, sampled at full fidelity and which is compatible with our ecosystem of tools, such as Pupil Cloud and pl-neon-recording.

user-a6c56f 20 November, 2025, 08:51:04

Thanks for the detailed response! That makes sense, but is also a bit weird, because the monitor is actually to the right and a bit up from the primary monitor 😄 (and I checked that I don't have the axes flipped) There are no AprilTags on the other monitor. However, when I disable the other monitor it works as expected, so thank you for the help!

I am already also running the recording with the phone as a backup. I'm just hoping to be able to use the data mapped onto the screen in PsychoPy because this makes everything a bit easier and I don't have to deal with mapping myself.

user-f43a29 20 November, 2025, 08:56:44

Hi @user-a6c56f , if you want the data mapped onto the screen with our raw data post-hoc, then you can use the Marker Mapper on Pupil Cloud or the Surface Tracker of Neon Player.

Regarding two monitors, yes, our experience is that PsychoPy and eyetracking setups work optimally with one monitor.

user-ffef53 20 November, 2025, 15:40:17

Ok, will do. My understanding was that he wanted a recording of our marker setup, so I just sent that screen recording to demonstrate the issues I'm facing as well as the setup.

user-f43a29 20 November, 2025, 15:40:47

~~No worries. The intention is to see if we can replicate your issue on our end and more quickly debug it here.~~

user-f43a29 20 November, 2025, 15:41:57

@user-ffef53 Apologies, I re-read the message. One second. What you sent should be fine. Let me take a look.

user-f43a29 20 November, 2025, 15:45:49

Ok, I see. So here are answers to your questions

  • Those AprilTags are just a bit too small and they are also overexposed, causing some glow artifacts at the white-black edges that interfere with the AprilTag detection algorithm. Basically, they appear somewhat blurred and smeared out. Neon Player has an Image Adjustments plugin that might help with this, but it is best to have exposure set well from the beginning. You can use the live preview in the app and also run a brief test to be sure your AprilTag setup is optimal.

  • The lag you are seeing is not exactly lag. Please see this message for clarification: https://discord.com/channels/285728493612957698/446977689690177536/1438945704881029150

user-f43a29 20 November, 2025, 15:46:42

You can also place the tags further out, along the edges of the display, to maximize screen real-estate for your stimuli.
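
If you want a quick sanity check before a longer session, here is a minimal sketch using the pupil-apriltags package on a saved scene camera frame (the filename is just a placeholder). If the detector finds all of your tags in a representative still, their size and exposure are likely also fine for the Surface Tracker.

import cv2
from pupil_apriltags import Detector

# A representative frame from the scene camera, saved as an image
gray = cv2.imread("scene_frame.png", cv2.IMREAD_GRAYSCALE)

detector = Detector(families="tag36h11")  # the family used by our markers
detections = detector.detect(gray)

print(f"Detected {len(detections)} tags")
for d in detections:
    print(d.tag_id, d.center)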

user-ffef53 21 November, 2025, 12:51:38

Thanks for the help on this @user-f43a29 ! Regarding my last question: is it possible to somehow export the marker detections and create my own surfaces in a Python script (given that we already recorded some of the data like this)?

user-f43a29 21 November, 2025, 13:18:15

Hi @user-ffef53 , I feel that will not solve the issue. The reason your surfaces are not stable is that the markers are not reliably detectable. Our Surface Tracker uses methods behind the scenes to increase robustness. You can take a look into that code if you want to extract it and try it yourself, but it is substantial code, and whether it is worth the time investment is up to you, of course.
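
To illustrate the core idea anyway (a conceptual sketch only, not what the Surface Tracker actually does): once you have marker corner positions in the scene camera image and know where those corners sit on your screen, a homography maps gaze from scene pixels into screen coordinates. All numbers below are placeholders.

import cv2
import numpy as np

# Dummy corner positions; in practice, take these from your marker detections
tag_corners_px = np.array(
    [[105, 83], [1490, 90], [1478, 1012], [98, 1005]], dtype=np.float32
)  # scene camera pixels
tag_corners_screen = np.array(
    [[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float32
)  # the same corners in normalized screen coordinates

H, _ = cv2.findHomography(tag_corners_px, tag_corners_screen)

gaze_px = np.array([[[812.0, 460.0]]], dtype=np.float32)  # one gaze point in scene pixels
gaze_on_screen = cv2.perspectiveTransform(gaze_px, H)[0, 0]
print(gaze_on_screen)  # (x, y) on the screen, only meaningful if the tags were detected well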

user-ebd8d5 21 November, 2025, 13:14:13

Hi, sorry, is there any Alpha Lab project that integrates a mocap (Vicon) system with Neon? I remember reading a previous post about Vicon or another mocap system, but I cannot find any linked project. Thanks a lot

user-f43a29 21 November, 2025, 13:19:30

Hi @user-ebd8d5 , we are working on it and plan to have it out by December at the latest. There is working code and it supports multiple motion capture systems, including Vicon, but it is not yet ready for public release.

user-ebd8d5 21 November, 2025, 13:21:40

@user-f43a29 : Great thank you. I will keep an eye out for it. Thanks

user-4c1bae 21 November, 2025, 13:41:41

Hey everyone! Is there any way to change the color of the gaze circle in the live view, or can I only change the size of the circle?

user-f43a29 21 November, 2025, 13:42:55

Hi @user-4c1bae , do you mean the live view in Neon Monitor or in the Neon Companion app?

user-4c1bae 21 November, 2025, 13:46:35

thanks!

user-a84f25 21 November, 2025, 16:52:08

Do you happen to have 3D models of some of your other frames? We want to 3D print some to send home to participants for acclimation, if that's possible.

user-f43a29 24 November, 2025, 10:29:28

Hi @user-a84f25 , we provide the current open-source geometries as a reference for you to build your own frames/mounts. We do not have 3D models of the other frames to share.

user-eebf39 21 November, 2025, 21:39:35

When streaming with receive_eyes_video_frame, does this do anything with the GPU? I'm trying to chase down a bug where I'm streaming eye frames in a separate thread and using a PyQt GUI. The GUI crashes only when I'm fetching eye frames; however, there are no exceptions raised or error messages.

user-f43a29 24 November, 2025, 10:31:33

Hi @user-eebf39 , it does not use the GPU.

I have recently made a script that streams the eye frames in a separate thread and uses a PyQt GUI, without any crashes or issues.

What exactly is your code doing upon receipt of the eye frames?
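
In case it helps with debugging, here is a minimal sketch of the pattern I used (assumptions: PyQt5, the simple client of pupil-labs-realtime-api, that receive_eyes_video_frame accepts a timeout_seconds argument, and that the frame object exposes bgr_pixels). The key point is that the worker thread only emits a signal and never touches widgets; updating widgets from a background thread is a common cause of silent Qt crashes like the one you describe.

import sys

import numpy as np
from PyQt5.QtCore import QThread, pyqtSignal
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtWidgets import QApplication, QLabel
from pupil_labs.realtime_api.simple import discover_one_device


class EyeFrameWorker(QThread):
    frame_ready = pyqtSignal(np.ndarray)

    def run(self):
        device = discover_one_device()
        try:
            while not self.isInterruptionRequested():
                frame = device.receive_eyes_video_frame(timeout_seconds=0.5)
                if frame is not None:
                    # Emit a copy so the GUI thread owns its own buffer
                    self.frame_ready.emit(frame.bgr_pixels.copy())
        finally:
            device.close()


class EyePreview(QLabel):
    def show_frame(self, bgr):
        h, w, _ = bgr.shape
        image = QImage(bgr.data, w, h, 3 * w, QImage.Format_BGR888)
        self.setPixmap(QPixmap.fromImage(image))


app = QApplication(sys.argv)
view = EyePreview("Waiting for eye frames...")
view.show()

worker = EyeFrameWorker()
worker.frame_ready.connect(view.show_frame)  # queued across threads; slot runs in the GUI thread
worker.start()

exit_code = app.exec_()
worker.requestInterruption()
worker.wait()
sys.exit(exit_code)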

user-d5ee2f 24 November, 2025, 10:54:28

Hi everyone, has anyone encountered a download problem with Pupil Cloud? I can download through "Share -> Copy native recording data link", but when I click any download option under "Download", a "XXX started Download" notification pops up in the bottom-right corner, and then nothing actually begins to download 🥹

user-d407c1 24 November, 2025, 11:01:33

Hi @user-d5ee2f 👋 ! I'm sorry to hear that you are experiencing issues with downloading the data from Cloud. Is it transient or did you experience this before? Could you kindly open a ticket on 🛟 troubleshooting and share the workspace ID and recording you experience issues with?

user-13d297 24 November, 2025, 19:48:29

Hi Pupil Labs Team! I am using the Face Mapper and an Alpha Lab tool to map gaze onto facial landmarks. I noticed the timestamps in the 'merged_data' output file do not match the timestamps in my 'events' csv. Are the timestamps defined differently between these two files? How can I best add my events to this new data file? Thank you!

user-d407c1 25 November, 2025, 07:22:50

Hi @user-13d297 👋 ! In the merged_data.csv generated by that tutorial, you should get a column named timestamp [ns]. This is the same time format used in the events file. If you want to add events to that file, you can match on that column in both files and assign each row the nearest event within a chosen tolerance.
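
For example, a minimal sketch with pandas (assuming both files carry a timestamp [ns] column and your events file has a name column, as in the Cloud export):

import pandas as pd

merged = pd.read_csv("merged_data.csv").sort_values("timestamp [ns]")
events = pd.read_csv("events.csv").sort_values("timestamp [ns]")

# Attach the nearest event name to each row, within a 50 ms tolerance
out = pd.merge_asof(
    merged,
    events[["timestamp [ns]", "name"]],
    on="timestamp [ns]",
    direction="nearest",
    tolerance=50_000_000,  # nanoseconds
)
out.to_csv("merged_data_with_events.csv", index=False)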

user-5c56d0 25 November, 2025, 02:05:30

Hi, I would appreciate it if you could clarify the following technical detail: is there a built-in function available to calculate the eye openness level (distance between the upper and lower eyelids) on a per-frame basis? I would use this for PERCLOS estimation or similar measures.

user-480f4c 25 November, 2025, 07:31:46

Hi @user-5c56d0! Neon offers eye openness measurements that include eyelid opening angles for the upper and lower eyelids relative to the scene camera’s horizontal plane with values provided in radians. It also measures eyelid aperture in millimeters, quantifying the maximum arc length between the upper and lower eyelids. Feel free to look here: https://docs.pupil-labs.com/neon/data-collection/data-streams/#eye-openness

user-d407c1 25 November, 2025, 07:31:56

Hi @user-5c56d0 ! I see my colleague beat me to it, but if you’re looking to compute PERCLOS, kindly note we have a minimal Alpha Lab example that walks through it in realtime, have a look here.
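
For an offline version, a minimal sketch could look like the following. Note that the column names are assumptions based on the 3d_eye_states.csv export, so adjust them to match your file, and treat the 20 % closure threshold as a free parameter rather than a recommendation.

import pandas as pd

df = pd.read_csv("3d_eye_states.csv")
aperture = df[["eyelid aperture left [mm]", "eyelid aperture right [mm]"]].mean(axis=1)

# "Closed" = aperture below 20 % of this recording's typical (95th percentile) opening
threshold = 0.2 * aperture.quantile(0.95)
closed = aperture < threshold

# PERCLOS over a sliding 60 s window (eye state is sampled at 200 Hz)
window = 60 * 200
perclos = closed.rolling(window, min_periods=window).mean()
print(perclos.describe())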

user-cc70ed 25 November, 2025, 03:45:34

Hi, everyone! I am trying to analyze head movements while driving. However, when driving on curves, the yaw angle shows a displacement that clearly differs from the actual value, and consequently it does not return to the initial value when I return to the initial position. I took this to suggest that the geomagnetic field only influences the estimate at the start of the recording. Is this supposition correct? Also, is the error in estimating the yaw angle on curves at around 20 km/h due to the influence of acceleration? I am aware a similar question was asked two months ago, but no solution was provided. I appreciate your assistance.

user-d407c1 25 November, 2025, 07:40:57

Hi @user-cc70ed ! Thanks for the detailed question. Neon uses a 9-DoF IMU (accelerometer, gyroscope, magnetometer), and yaw estimation comes from a fusion of these sensors. A few things to keep in mind:

  • The yaw offset you’re seeing after taking curves could appear in some cases. In dynamic conditions (like driving), the fusion algorithm has to combine rotation, acceleration, and magnetic readings, and small errors may accumulate.
  • The magnetometer is used, but inside a vehicle it can be influenced by nearby metal and electronics, which reduces how well it can “lock in” to the Earth’s magnetic field.
  • At lower speeds or sustained turns (e.g., ~20 km/h on a curve), lateral acceleration can temporarily bias the orientation estimate, so the yaw may not return perfectly to its starting value.

If you can share a short segment of the recording or the relevant IMU export, we can take a closer look and confirm whether this matches typical behaviour or if there might be something else at play.

May I ask also whether you calibrate the IMU before starting?
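
In the meantime, here is a minimal sketch for quantifying the residual yaw offset from the IMU export (assuming the imu.csv file contains timestamp [ns] and yaw [deg] columns; adjust the names to your export):

import pandas as pd

imu = pd.read_csv("imu.csv")
t = (imu["timestamp [ns]"] - imu["timestamp [ns]"].iloc[0]) / 1e9  # seconds since start

# Yaw relative to the first sample, wrapped to [-180, 180) so the residual offset is easy to read
yaw = imu["yaw [deg]"] - imu["yaw [deg]"].iloc[0]
yaw = (yaw + 180) % 360 - 180

print(f"Yaw offset after {t.iloc[-1]:.1f} s: {yaw.iloc[-1]:.1f} deg")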

user-4558b4 26 November, 2025, 15:45:20

Hello,

I'm trying to test-run the Pupil Labs software from GitHub. I installed version 3.5, but when I try to drag and drop the sample recording from https://pupil-labs.com/products/neon/specs , it tells me the version is too old and I need to upgrade. I figured the data was probably older than the new version and the message was mistaken, so I installed version 2.6, which seems to be the right one for this data according to the info in the JSON, but I get exactly the same message ("This version of player is too old! please upgrade.").

Is there somewhere I can find a more up-to-date recording?

Thank you

user-cdcab0 26 November, 2025, 15:49:16

Hi, @user-4558b4 - for Neon recordings, you'll need to use Neon Player

user-4558b4 26 November, 2025, 16:01:43

Thank you, that one is on me. Another follow-up question: is there a version of the Android app with source code available? One of the projects we are considering is to build an application so museums can have a tour guide or ambient sounds that respond depending on what you are looking at, so I'm guessing we would have to modify the app heavily (I haven't tried the app yet). I suppose if the app can easily send out the data, that would work too, and we could make a companion app with either the phone alone or a PC.

user-f43a29 26 November, 2025, 16:11:29

Hi @user-4558b4 , the source code of the Neon Companion app is not available, but we have a basic Kotlin app example showing how to receive data via the Real-time API. If it runs on the same phone, it does not even need a network connection to receive the data. However, just be careful of resource management, as the Neon Companion app is designed with the expectation that it has full access to system resources. To avoid that concern, you could have your app run on a separate phone and receive streamed data over a hotspot made by one of the phones.

user-f43a29 26 November, 2025, 16:12:25

You could also use a PC, as you say. Check out our Python package for Neon's Real-time API.
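
For a quick feel of the PC route, here is a minimal sketch using the simple client of that package (assuming the PC and the Companion phone are on the same network):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # finds the Companion app on the local network
print(f"Connected to {device}")

gaze = device.receive_gaze_datum()  # blocks until the next gaze sample arrives
print(f"Gaze at ({gaze.x:.1f}, {gaze.y:.1f}) px in the scene camera image")

device.close()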

user-4558b4 26 November, 2025, 16:12:36

Perfect, that is exactly what i needed! Thanks!

user-ccf2f6 27 November, 2025, 16:14:45

Hi Pupil Labs, Could you share the specs on microphones on the Neon hardware please?

From the docs, I could only see “dual microphones” but I was wondering if there are more specs available about them

user-ccf2f6 28 November, 2025, 16:01:03

@user-4c21e5 @user-cdcab0 Would you or someone from the team be able to help me find this out please?

user-9a1aed 27 November, 2025, 16:24:41

Hi team, the code was all working before; why does it suddenly not work? Thanks!

Chat image

user-9a1aed 27 November, 2025, 17:19:16

This is the code. It was all working fine, but I'm not sure what is going on; it does not run and says GazeTimeseries has no attribute 'ts'.

Chat image

user-f43a29 27 November, 2025, 17:54:02

Hi @user-9a1aed , did you update your copy of pl-neon-recording? Some changes were made to the API in the recent update (v2.0.0) to make naming more consistent. For example, .ts is now .time, which is probably why it is giving that error.

The examples have been updated to reflect these changes and you can reference those.

If you are not able to update your code just yet, then do the following (although it should generally not be many changes necessary):

pip uninstall pupil-labs-neon-recording
pip install pupil-labs-neon-recording==1.0.3

user-9a1aed 28 November, 2025, 01:15:27

ohhh. I see. Thanks!

user-9a1aed 28 November, 2025, 01:15:42

But I noticed that the gaze structure was also changed?

user-d407c1 28 November, 2025, 07:41:46

Hi @user-9a1aed 👋 ! Yes, there are a few breaking changes. You can find the full list here. These reflect new conventions we’re adopting in an effort to keep naming consistent across all our libraries. If anything in the update isn’t clear, feel free to ask!

user-9a1aed 28 November, 2025, 10:40:03

Hi team, could you please help with this issue with the apriltag module? Thanks!

user-f43a29 28 November, 2025, 10:44:47

Hi @user-9a1aed , as the error message mentions, you need to install CMake to proceed.
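
One route that often works (just a suggestion; installing CMake through your OS package manager is equally fine) is to install CMake into the same Python environment and then re-run the install that failed, e.g. for the pupil-apriltags package:

pip install cmake
pip install pupil-apriltags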

user-9a1aed 28 November, 2025, 10:40:09

Chat image

user-ccf2f6 29 November, 2025, 06:22:35

Thanks @user-4c21e5! Is it possible to get the brand/model of the microphones as well?

user-9a1aed 30 November, 2025, 15:06:45

I ran into another issue related to detecting the AprilTags. I am using the same script to detect AprilTags and then map the fixations, but for some recordings the detection rate is 0. For example, the recording with the faces has a 98% detection rate, while the recording with the fixation at the center has a 0% detection rate. May I know why that is and how to fix it?

Chat image Chat image

user-cdcab0 30 November, 2025, 22:52:57

Your tags are simply too small to be detected reliably

End of November archive