👓 neon



user-15edb3 01 August, 2025, 10:12:00

Hello, so I have been using the GPS features launched recently, but I don't see a way to download the map video. Can it only be played online? Screen recording causes synchronisation issues.

user-f43a29 01 August, 2025, 10:20:38

Hi @user-15edb3 , you mean you want to make a video like is shown in the GPS Alpha Lab guide?

user-15edb3 01 August, 2025, 10:21:21

yes.

user-f43a29 01 August, 2025, 10:26:23

Ok. We simply used the built-in screen recording functionality of Ubuntu and macOS to record the map playback for that video. If that does not work for you, then you could also try a tool like OBS Studio.

If that also results in synchronization issues on your computer, then you could:

user-15edb3 01 August, 2025, 10:28:47

Ok but pls consider adding a download option.

user-f43a29 01 August, 2025, 10:33:27

Hi @user-15edb3 , I have noted the request, but just to clarify, the Alpha Lab tools are essentially provided as-is, showcasing how to go further with Neon, to spark ideas.

They serve as a basis for users to see what is possible and to build their own tools. If an issue is found in an Alpha Lab tool, we do investigate that to fix it, but further development to add new features to published Alpha Labs is not usually planned.

If you would like a customized version, then that could potentially fall under a Support Package, such as a Custom Consultancy.

user-04e026 04 August, 2025, 15:15:23

Hello! Our team was wondering if locking the phone while recording with the Neon glasses can cause any errors with recording over time

While testing stuff out, we had the phone locked while recording for several minutes.

After roughly ten minutes, the glasses began flashing a red light and the phone vibrated. We unlocked the phone and it displayed a sensor error. Our data stream seemed to still be receiving data from Neon.

Is locking the phone the potential cause of this? Or might there be something else?

This was on the Motorola Edge 40 Pro

user-d407c1 04 August, 2025, 15:21:59

Hi @user-04e026 ! Locking the phone screen shouldn't cause any issues. In fact, it’s the recommended workflow, as it helps reduce battery drain and keeps the phone's temperature down.

The sensor error you encountered likely stems from something else. Do you recall the exact error message that appeared on the phone? And is the recording saved? That info could help us pinpoint the issue.

In the meantime, I’d recommend double-checking that the app and phone have all the necessary permissions enabled. If the issue happens again and you catch the exact error, or you have the recording stored, feel free to open a ticket in 🛟 troubleshooting so we can dig deeper.

user-04e026 04 August, 2025, 15:23:56

Sadly we didn’t save any of the recordings nor did we write down the exact error message. However, there is likely to be more testing like that this week so I’ll reach back out if we are able to record any of that.

Thanks!

user-3e88a5 05 August, 2025, 10:53:31

Hello, is it possible to download audio data from Pupil Cloud? I now see that the audio trace is displayed in the cloud.

user-f43a29 05 August, 2025, 10:57:12

Hi @user-3e88a5 , yes. The audio track is in the scene camera MP4 video that you can download either via the Timeseries + Scene Video option or the Native Recording Data option.

You can also use the scene camera MP4 video exported from the phone.

If using the Native Recording Data or the raw data exported from the phone, then you may be interested in our pl-neon-recording Python library that makes it easier to extract the audio stream and work with it. See this script for an example.
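
If you only need the audio track out of the scene camera MP4, one simple alternative to that script is calling ffmpeg from Python. This is just a sketch: it assumes ffmpeg is installed, and the input file name below is a placeholder for your scene video file.

# Sketch: extract the audio track from the scene camera MP4 with ffmpeg.
# Assumes ffmpeg is on the PATH; "scene_video.mp4" is a placeholder file name.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "scene_video.mp4", "-vn", "audio.wav"],  # -vn drops the video stream
    check=True,
)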

user-f43a29 05 August, 2025, 11:09:00

@user-3e88a5 You can also see how we worked with the audio from the MP4 in the Timeseries Data download in our Audio-Based Event Detection With Pupil Cloud and OpenAI's Whisper Alpha Lab guide.

nmt 05 August, 2025, 11:03:42

Surface-mapped fixation validation

user-3e88a5 06 August, 2025, 09:04:02

Good morning, if I used the monocular option while recording with Pupil Neon, is there any possibility to change it from left to right or vice versa in the cloud or in the player?

user-f43a29 06 August, 2025, 09:06:52

Hi @user-3e88a5 , it needs to be set at the time of recording. Changing it afterwards in Pupil Cloud or Neon Player is not a supported workflow.

user-3e88a5 06 August, 2025, 10:58:20

Thank you very much. Sorry, another question: with Neon Player, is there any way to analyze and export multiple recordings simultaneously, something like a batch export? Or do I need to download the source code from GitHub and make some modifications myself?

user-f43a29 06 August, 2025, 11:09:12

Hi @user-3e88a5 , no problem, feel free to ask.

Neon Player does not support batch export, but you can achieve this, in the same Data Format, at the command line with this script from the pl-neon-recording Python package.

user-3e88a5 06 August, 2025, 11:11:45

Perfect thank you very much!

user-802786 06 August, 2025, 16:25:10

Hi, I'm looking at the Neon product. I would like to know if there is any intrinsic and extrinsic information for the two eye cameras near the nose pads.

user-f43a29 06 August, 2025, 17:48:42

Hi @user-802786 , do you mean that you would like the intrinsic & extrinsic camera parameters of each eye camera? In other words, values related to the camera calibration and the camera's position relative to the eyes?

user-802786 06 August, 2025, 20:18:11

Hi @user-f43a29 . Kind of, not all of them. I'm more interested in the two eye cameras' optical axes (vectors), or where they are looking. We would like to try different positions, but we want to make sure both eye pupils are clearly shown in both cameras (not clipping).

user-f43a29 07 August, 2025, 07:48:27

Hi @user-802786 , I can interpret this in a few ways, so I will provide each potential answer:

  • It sounds like you want the 3D geometry of the Neon module, so that you can simulate how it sits on different faces? Are you planning to build your own custom frame? While such a simulation is possible, I have also attached an image from our webpage, showing a wide sample of eyes that were imaged with Neon in our standard frames. This should hopefully provide an answer, as simulating this will only be so accurate.
  • We can also provide example intrinsics for a Neon eye camera, including the Field of View (FoV), if you want.

  • Since you mention "optical axis of where it is looking", there is also the possibility that you mean the optical axis vector of each eye with respect to each eye camera. If so, then yes, you can also measure this with Neon. Its deep-learning powered pipeline, NeonNet, provides the 3D eye poses, i.e., the extrinsics of each eye in scene camera coordinates. Knowing the 3D geometry of the Neon module and the intrinsics of an eye camera, you can transform those into the coordinate system of each eye camera.

  • Since you mention "clipping", you could also be referring to the intensity of pixels in each image. Neon works equally well from pure darkness to bright, sunny beaches. I refer again to this page showing examples (make sure to watch the videos at the bottom to the end). The eye cameras have auto-exposure and handle such wide variations in dynamic range fine, so you don't need to be concerned about "clipping" of pixel intensity values, if that was being asked.

Chat image

user-5a90f3 07 August, 2025, 09:38:12

Hi @nmt When they are combined, there are still gaps between the temples and the frame, and if you 3D print them in nylon, they will still fall apart. I would like to ask whether the original file of the glasses itself is made up of three parts (one frame and two temples), with these three parts not connected and not integrated.

nmt 07 August, 2025, 09:43:34

Correct. You'll need to fuse them in your CAD software.

nmt 07 August, 2025, 09:45:36

We can also provide dummy frames for prototyping - reach out to info@pupil-labs.com if that's of interest 🙂

user-5a90f3 07 August, 2025, 09:48:03

Okay, thank you very much!

user-ffef53 07 August, 2025, 10:16:43

I set some event buttons in my Neon Monitor but if I open the monitor on another device connected to the same network they do not appear. How can I reuse these?

user-f43a29 07 August, 2025, 10:37:49

Hi @user-ffef53 , please feel free to post this as a 💡 features-requests so we can pass it onto the team and keep track of it.

user-d407c1 07 August, 2025, 11:08:24

Hi @user-ffef53 👋 Just to add to what @user-f43a29 mentioned, events created in the Monitor App are saved locally in your browser’s localStorage.

That’s why they don’t sync across devices (PCs). It’s definitely worth posting a feature request in 💡 features-requests if you'd like to see that change!

That said, there is a quick (and a bit hacky 😅) workaround if you want to copy events over manually:

  1. Open the Monitor App in your browser
  2. Press F12 (or Cmd+Option+I on Mac / Ctrl+Shift+I on Windows) to open Developer Tools. Note: this might change depending on your web browser.
  3. Go to the Console tab and type the following, changing the events list to your choice:
const myEvents = [
  "Event1",
  "Event2",
  "Event3"
];  // your custom event names

// Store the presets under the key the Monitor App reads from localStorage
localStorage.setItem('plConstMonitor_presetEvents', JSON.stringify(myEvents));
  4. Hit Enter, then refresh the page; your custom events should now be available!

user-ffef53 07 August, 2025, 11:56:26

Thanks guys!

user-ff00f7 07 August, 2025, 16:26:59

Hi, I am a PhD student looking for an eye tracker for our user study. The project requires an eye tracker to return the 3d eye pose and pupil diameter of users in real time. I'm having trouble deciding between the neon and core trackers. Could I have some help understanding the differences between neon and core?

user-f43a29 07 August, 2025, 16:30:11

Hi @user-ff00f7 , briefly, Pupil Core is our original eyetracker, first introduced over 10 years ago. While Pupil Core is powerful, Neon takes everything we have learned from Pupil Core and improves on it in every way, while being easier to use, modular, and truly mobile.

Neon's deep-learning powered & calibration-free pipeline help to enable this. With Pupil Core, you must calibrate every time you put on the eyetracker, as well as when it has slipped. With Neon, you don't need to think about this. You simply put it on and you are accurately eyetracking.

For more details, please see these two messages for a comparison of both eyetrackers:

user-14fea1 07 August, 2025, 17:59:47

Hi, I'm a PhD student working with the Neon device. I was wondering if there's a way to capture the data from a laptop rather than the Companion device? Like with Pupil Capture. Thank you!

nmt 08 August, 2025, 03:39:00

Hi @user-14fea1! To make use of Neon's native recording format, it must be tethered to the Companion phone for operation. But you can easily stream all the data to your computer in real-time using Neon's real-time API and/or Monitor app, if that's of interest!

user-0001be 08 August, 2025, 12:35:27

Hi team! If my phone storage is low and each recording has a cloud sign with a tick in the middle, is it safe to say it's been uploaded to the cloud and I can delete the local phone recordings? This won't also sync and delete the recordings in the cloud?

user-d407c1 08 August, 2025, 12:44:55

Hi @user-0001be 👋 Yes, that means they’re backed up in Cloud.

And no, deleting them from the phone won’t remove them from Cloud. If you’d like, you can also transfer them locally first as an extra backup before removing them from the device.

user-d086cf 08 August, 2025, 17:05:36

Hey guys. Apologies if there ends up being multiple messages from me in a row, my last ones weren't showing up after pressing send.

We are having an intermittent problem with the Neon Companion app on the Motorola Edge 40 Pro where the app freezes when you press stop recording. When this occurs, the Android OS pops up a notification that the app has stopped responding, and you can choose wait/stop. We've tried waiting several minutes for it to respond, but it seems the only way to recover is to hit stop or switch out of the app. When we reopen it, the recording is lost...

I've tried searching the Documents/Neon directory on the phone to see if any of the data was saved, but I couldn't find anything matching the timestamps of the recording. Any idea if there's a way to recover this data? If not, is there any way to prevent this?

nmt 09 August, 2025, 00:40:34

Hi @user-d086cf 👋. Thanks for your message. Sorry to hear you're encountering app crashes.

Please try the following:

Press and hold the Companion App icon in the app launcher (home screen), then select:

App Info → Storage & cache → Clear cache

You can also clear storage.

Does that solve the issue?

user-d086cf 08 August, 2025, 17:19:49

Here's the log for the crash...

app_android1.log

nmt 09 August, 2025, 00:40:55

(Note that this won't delete any recordings)

user-f578a4 10 August, 2025, 11:55:49

Hello, I am an instructor and researcher at a pilot training & research facility.
Due to strict information security policies, our environment cannot connect to the external internet.
I am looking for an eye tracker that can be operated completely offline (standalone).
My questions:
1. Can Pupil Neon be used entirely offline for recording and analysis (no internet connection at all)?
2. Is a Pupil Cloud account required for data capture and/or analysis?
3. Are there any other Pupil Labs products that can operate in a fully standalone/offline mode?

Background:
- I previously used Pupil Neon during my graduate studies (internet connection available at that time).
- Now I work in a facility where external internet access is not possible.
- I was initially considering Tobii Pro Glasses 3 (wireless version) + Pro Lab perpetual license package,
but Glasses 3 will be discontinued within a few years.
- My use case is to measure the gaze of pilots during flight simulator sessions (e.g., GL-4000, AIR-FOX motion platforms).
- Desired features: live gaze monitoring from a nearby control room during sessions,
post-flight debriefing with gaze video overlay, and offline analysis later for cognitive/psychological research.

If anyone knows whether this is possible with Pupil Neon (or other Pupil Labs devices),
or can recommend a suitable configuration for offline use, I would greatly appreciate it.

nmt 11 August, 2025, 04:55:28

Hi @user-f578a4 👋. Great to hear you're considering Neon for this work! Since you're already familiar with Neon, you'll know about the Companion phone. You'll need to create and log into a Pupil Cloud account once when you set up the Companion app, but after that, you can use Neon for recording, live viewing, and data analysis without ever connecting to the internet. The workflow is straightforward:

  1. Recording: Data is captured using Neon and recorded onto the Companion phone. This process is self-contained and does not require an internet connection. You can disable Cloud uploads from within the Companion app.
  2. Data Transfer: After a session, transfer the recording from the Companion phone to a computer via USB.
  3. Analysis and Playback: For offline analysis, we provide Neon Player, a free desktop application. With Neon Player, you can load and visualize recordings, play back the video with the gaze overlay for debriefing, and export raw data, including fixations and saccades, for more detailed research.
  4. Live Gaze Monitoring: For live monitoring from a control room, you can connect the Neon Companion device to a local network (no internet required) and use Neon's Monitor app to see live gaze data.

If you'd like to discuss options in more detail, we can also schedule a demo and Q&A session via video. Just reach out to [email removed]

user-2d1b95 11 August, 2025, 04:23:20

New user: I recorded a pitcher throwing, but the target only shows prior to the throw

user-2d1b95 11 August, 2025, 04:23:20

how do I get the tracking

user-2d1b95 11 August, 2025, 04:23:36

tracking for the actual throw

nmt 11 August, 2025, 04:38:10

Hi @user-2d1b95! Thanks for your message. I'm not sure I understand what you mean. Could you elaborate?

user-ffef53 11 August, 2025, 12:57:33

Hi again! Ran into an issue with my blink files. We ran an experiment in the morning and in the afternoon. The morning recordings look fine and nothing is missing. The afternoon files I exported contain empty blink files. I tried to re-export from the Companion device, but I'm still getting a zero-byte file. Any idea what the problem could be with this?

user-d407c1 11 August, 2025, 13:00:18

Hi @user-ffef53 ! Could you kindly create a ticket at 🛟 troubleshooting so we can have a look?

user-e0026b 11 August, 2025, 13:42:25

Hi, I am using the Pupil Labs Neon for a research project. I just noticed there is a limitation of 2 hours. I've deleted some old footage but the used minutes don't go down. Are there any tips to expand this without the upgrade?

user-d407c1 11 August, 2025, 13:47:27

Hi @user-e0026b ! Yes, you can manually delete older recordings from Pupil Cloud to free up space. Just make sure to also empty the trash folder to permanently remove them. Once space is available, any newer recordings that were previously blocked will automatically become accessible. Please note, however, that recordings cannot be re-uploaded to Pupil Cloud once deleted

Chat image

user-d407c1 11 August, 2025, 13:49:57

Alternatively, you can work completely offline, transferring your recordings from the Companion Device and using Neon Player or pl-neon-recording to analyse them.

user-c704c9 12 August, 2025, 11:46:08

Hi, in Pupil Cloud, I can see the graphs for pupil diameter and blinks. How can I get a certain part of the graph with proper scales on the x and y axes? Is downloading the CSV file the only way? I'm not a programmer, btw, but I need to analyse the graph in certain parts of the events that I marked.

user-d407c1 12 August, 2025, 12:56:47

Hey @user-c704c9 👋 On Cloud, the graphs show two things:
- The current value at the playhead position
- The value at wherever you hover

They’re plotted with mean (right/left) values on the y-axis and time on the x-axis.

If you want more granular control over the plot, it’s often easier to build it yourself; you don’t need to be a programmer for that.

Are you familiar with Excel? If so, you can:

  1. Download the TimeSeries data from Cloud
  2. Open the 3d_eye_states.csv file in Excel
  3. Plot the values against the timestamps

That way, you can zoom in, filter, or format the graph exactly how you like.
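
If Excel becomes slow with many recordings, the same plot can be made with a few lines of Python. This is only a sketch: the column names below are assumptions about the Timeseries export, so check the header row of your own 3d_eye_states.csv and adjust.

# Sketch: plot pupil diameter over time from the Timeseries export.
# Column names are assumed; check your CSV header and adjust as needed.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("3d_eye_states.csv")
t = (df["timestamp [ns]"] - df["timestamp [ns]"].iloc[0]) / 1e9  # seconds from recording start

plt.plot(t, df["pupil diameter left [mm]"], label="left")
plt.plot(t, df["pupil diameter right [mm]"], label="right")
plt.xlabel("Time [s]")
plt.ylabel("Pupil diameter [mm]")
plt.legend()
plt.show()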

user-c704c9 12 August, 2025, 13:01:17

So there is no other way than using the CSV to plot. I thought there might be an easier way to plot faster, because I had to convert the data into plottable parts, and sometimes it becomes slow as I add sheets with different recordings. Thanks for the reply.

user-d407c1 12 August, 2025, 13:12:26

Right, Pupil Cloud doesn’t have turnkey tools for fine-tuning plots. It’s not designed to replace dedicated charting or full suite analysis tools.

That said, if having more flexible plotting options directly in Cloud would be useful for your workflow, it’s worth adding it as a suggestion in 💡 features-requests so the team can consider it for future updates.

user-c704c9 12 August, 2025, 13:18:09

Thank you. I'll add it as a suggestion. It would be a lot of help to plot things faster, and also for presentations. Also, how can I record the video with fixations and also those graphs underneath? I tried recording in OBS while opening the Pupil Cloud site in the browser, but the fps drops and the fixation points and video don't match.

user-f43a29 12 August, 2025, 13:38:20

Hi @user-c704c9 , apologies, I did not see that you also want the graphs in the video export. That is currently not supported. It would be recommended to make a custom visualization for that purpose.

user-f43a29 12 August, 2025, 13:41:10

To export the videos as they appear on Pupil Cloud, make sure to use the Video Renderer Visualization.

Also, if you have not yet had your free 30-minute Onboarding call, be sure to send an email to info@pupil-labs.com with the original Order ID to organize that.

user-c704c9 12 August, 2025, 14:02:08

Yes, I want to show the video as the playhead is moving, to show exactly what is happening in certain parts. I'll just have to use the Video Renderer like you said, record the graph from the cloud, and stitch them together, I guess.

I don't know about the onboarding call because the owner is my boss and I'm just assigned to do the work. Thanks for the reply @user-f43a29 @user-d407c1 .

user-f43a29 12 August, 2025, 14:20:13

You are welcome. Your boss is free to organize the Onboarding and they can invite the whole team to the call, if they like.

user-057596 12 August, 2025, 17:13:17

I’ve downloaded the Pupil Core Bundle onto a Microsoft Surface Pro, but neither the World View camera nor the eye camera feeds are coming through.

user-f43a29 12 August, 2025, 21:42:30

Hi @user-057596 , the Pupil Core Bundle is not built for ARM processors, which are found in many models of the Microsoft Surface. You could try running it from source (see this message: https://discord.com/channels/285728493612957698/285728493612957698/1354350692688592947), but may I ask, are you trying to use that software with Neon?

user-4ba9c4 12 August, 2025, 21:36:50

Hello, I have a Pupil Neon. I am trying to get gaze fixations in real time on a marked surface. However, I don't see any way to define the surface in the Companion app. I know how to do it post hoc, but that is not what I want.

Also, I tried to connect Neon to the Pupil Capture software on Windows; it fails to detect the hardware.

Is there anyway I can get real-time fixation data on a surface from Neon?

user-f43a29 12 August, 2025, 21:39:37

Hi @user-4ba9c4 , you want to look into our real-time-screen-gaze package for Neon.

user-a93c37 13 August, 2025, 14:10:14

Hi, I would like to get some recommendations:

Our team is using the eye tracker for a user study of prolonged duration, roughly 2 hours or more. I am wondering if anyone can recommend a way for us to make sure that we are not running out of battery. We have been thinking of using a wireless power bank that sticks to the back of the phone, but when we tried that, the phone got really hot. So if anyone has any recommendation on the method/power bank brand/cable splitter for power bank + tracker that works, that would be helpful. Thanks!

user-f43a29 13 August, 2025, 14:15:22

Although you can record for longer than that with Neon, if you want extended battery life, then here are some suggestions:

  • You can use a quick charger between sessions to charge the phone from 0% to 100% in a short time (usually ~25 mins).
  • You can try this hub, which we have tested, to simultaneously connect Neon and a power source to the phone. See here for details.
  • You could purchase a second phone and simply swap Neon to the other phone when one of them runs out of battery.
  • You could use a wireless charger, but the Moto Edge 40 Pro and Samsung S25 Companion Devices are not magnetic, so it will not stick, although there are phone cases that include magnets. Note that the supplied wattage is lower with this method. As you correctly note, this can also cause the phone to heat up, so you might want to try the above methods first. You could use a wireless charger that has a cooling fan, such as the one my colleague listed in this message (https://discord.com/channels/285728493612957698/1047111711230009405/1397177709037092964), but we cannot make an official recommendation about it.

user-a93c37 13 August, 2025, 15:00:11

I see. Thanks for the recommendation. Also, since I previously had an error with the sensor, could it be due to the phone overheating with the wireless charging?

user-057596 13 August, 2025, 14:38:16

@user-f43a29 Hi Rob, no, we are using it with Capture this time in a 2-day Mastery Learning course for regional anaesthesia, as our analytical software is as yet only designed to connect to the Core pipeline, and we are holding a world's-first clinical skills competition based on qualitative data, so we just wanted to have the new Surface Pro as a backup. But if it's not compatible, then that's fine. Thanks for getting back so promptly. 💪🏻

user-613332 13 August, 2025, 15:37:24

Has anyone else faced the issue of not being able to install the Neon Companion App?

user-f43a29 13 August, 2025, 17:13:57

Hi @user-613332 , what phone are you using?

user-613332 13 August, 2025, 17:14:10

Samsung S25

user-f43a29 13 August, 2025, 17:14:45

Did you initialize Android with a basic Gmail account?

user-613332 13 August, 2025, 17:16:01

I did, I set up the phone with a Gmail and Samsung account. Interestingly, as soon as you messaged me, it started downloading. Which is great! Will keep you posted.

user-613332 13 August, 2025, 18:05:53

It worked! How do I schedule the 30 minute demo?

user-b98aae 14 August, 2025, 10:25:33

Hi, I'm working with Mr. Ax's parabolic flight group - we conducted some research using the Neon device. We're aware that it is possible to elucidate blink completion metrics from the data and wondered how to compute these?

user-d407c1 14 August, 2025, 10:34:33

Hi @user-b98aae 👋 ! Neon has captured eyelid aperture since April, on the latest app version.

If you had "Compute Eye state" enabled, that metric should already be there; if the recordings were made with an older app version, they won't have it.

user-b98aae 14 August, 2025, 10:51:31

Hi Miguel! I see, the recordings were made in 2023/2024, during the parabolic flight campaign, so eyelid aperture was not automatically selected. Is there a way to re-run the analysis to retroactively see eyelid aperture in our data?

user-613332 14 August, 2025, 18:36:53

Is there a way to estimate how much time a recorded video will take to process in the background? Especially for videos that have the anonymization add-on.

user-f43a29 14 August, 2025, 18:45:15

Hi @user-613332 , there is no set time. It depends not only on the length of the recording, but also on where it sits in the processing queue, which is variable.

user-c529ed 15 August, 2025, 08:01:26

Hi, I would like to get some suggestions about preparing AprilTags for Marker Mapper.

I'm using the AprilTags distributed on the Pupil Labs website for surface detection, but the tags are not detected properly when I process them with Marker Mapper. There are multiple monitors in the data collection environment, and these monitors are right next to each other and very dense. I am trying to detect each monitor surface with 4 tags per monitor. The user stands 2/2.5 m from the monitors, so the tags become small in the scene camera. I have some questions about tag calibration and would like to ask for some tag recommendations:

  1. Is there any minimum tag pixel size (black-edge-to-black-edge length) within the scene camera for proper detection? Or do you have any experience with a tag size that works well?
  2. How can I make the tags' white and black contrast clear?
  3. Is it valid to invert the AprilTags' colors, i.e., a black quiet zone + white tag?
  4. Does Marker Mapper work with AprilTags in other colors?
  5. Any other solutions related to tag adjustment?

What I have done so far:
  • Changed the scene camera exposure mode (works a bit, but didn't change that much)
  • Adjusted the tag size slightly (but cannot make it much bigger due to the monitor restriction)

Thanks for your support!

user-cdcab0 15 August, 2025, 08:16:49

Hi, @user-c529ed - tag size is a very important factor in tag detection and localization, and from the sounds of it, this is definitely what you should focus on first.

You mention that the tags can't be made much bigger because of the monitor restriction, but would the configuration in this image work for you (where the pink squares are AprilTag markers)?

Chat image

user-ad7ce5 15 August, 2025, 18:42:52

Hi! I was attempting to get blink data but after viewing the data in the cloud, there is no blink data for my last two trials. A trial I ran a couple days earlier has blink data. The only difference is that the trial with blink data starts recording after the glasses are put on, while the ones without that data start recording a few minutes before the glasses are put on. I'm wondering if this is the cause of the issue, and if there's any way to rerun the blink analysis but starting from later in the recording i.e. when the glasses were worn.

nmt 16 August, 2025, 03:25:32

Hi @user-ad7ce5! Please open a ticket in 🛟 troubleshooting and share the recording ID. We can look into it there!

user-ccf2f6 15 August, 2025, 19:30:51

Hi Pupil Labs, we were working with the IMU data and found that the number of samples recorded for each second is not consistent. Shouldn't there be 110 rows (or thereabouts) for each second in the dataframes?

user-6c6c81 15 August, 2025, 19:49:14

I also observed this. According to Neon (as I'm keeping 120 Hz), I should receive 120 data frames per second, but over a whole 10 seconds I'm only receiving around 500-600. I assumed it is omitting the non-tracked data (like eyes closed, pupils not found, etc.).

user-d407c1 18 August, 2025, 06:03:49

Hi @user-ccf2f6 👋 ! Could you also kindly open a ticket at 🛟 troubleshooting so we can further investigate it?

user-ec4dab 15 August, 2025, 19:40:17

Is the automate_custom_events package still working? I am getting a "ValueError: invalid literal for int() with base 10" error no matter what prompts I use. Thanks!

user-6c6c81 18 August, 2025, 14:54:56

@user-d407c1 I did not get your last question. I am using the Motorola 5G for Neon, which came with Neon. And I'm using Neon XR for Unity, tracking and recording real-time time series data (pupil diameter, pupil centers, gaze point, gaze origin, gaze vectors, eyelid angles, etc.) and saving it into a CSV file.

user-d407c1 19 August, 2025, 06:30:14

@user-6c6c81 Just to clarify, the sampling rate you’re seeing is based on the received data from NeonXR, not the recorded data, correct?

If so, this is expected because the current library uses WebSockets and batches packets before sending (buffering). There’s a new beta version that uses UDP directly, which lets you send and receive the full sampling rate. You can try it here: UDP branch.

Since this is specific to 🤿 neon-xr and to avoid mixing with other topics, let’s continue the discussion there if you have more questions.

user-6c6c81 19 August, 2025, 06:39:10

Ok thank you

user-9a1aed 19 August, 2025, 08:21:31

Hi team, if the phone cannot connect to the WiFi, is it possible that some of the phone's settings have problems? I have three phones, two of which cannot connect to the WiFi.

user-f43a29 19 August, 2025, 11:42:39

Hi @user-9a1aed , were these two phones provided by us?

user-9a1aed 19 August, 2025, 09:30:32

Also, if I want to record the session, do I have to store the recording in the cloud to analyze the data? Which means I would have to purchase the add-on service, right?

user-f43a29 19 August, 2025, 11:44:43

Hi @user-9a1aed , while you can still use the 2 hour free space on Pupil Cloud, it is not necessary to use Pupil Cloud to collect or analyze data. You don’t even need an Internet connection. All the raw data collected during a recording are saved on the phone. You can then export them via USB cable and load them into Neon Player or our pl-neon-recording Python library.

However, our most powerful analysis tools, such as Reference Image Mapper are only available on Pupil Cloud. If using Pupil Cloud, you also benefit from the data logistics it provides, such as Workspaces and sharing. It is designed to streamline the process from data collection, to qualitative review, to quantitative analysis & visualization.
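
As a rough illustration of the offline route, here is a minimal sketch of loading a recording transferred from the phone with the pl-neon-recording library. The function and attribute names are written from memory and should be treated as assumptions; please check the library's README for the exact API.

# Sketch only: the names below are assumptions; see the pl-neon-recording README.
import pupil_labs.neon_recording as nr

recording = nr.load("/path/to/recording_folder")  # folder exported from the phone via USB

# Iterate over gaze samples (assumed attribute names)
for sample in recording.gaze:
    print(sample.ts, sample.x, sample.y)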

user-dc9cc7 19 August, 2025, 21:56:53

Is there any capacity to adjust the settings of the scene camera? For example, is it possible to increase on-sensor binning (thus reducing spatial resolution) in exchange for a higher frame rate?

nmt 20 August, 2025, 06:58:51

Hi @user-dc9cc7 👋. This isn't possible with Neon's scene camera. But if you don't mind a bit of post-processing, what you can do is map gaze onto a third-party scene camera, e.g. a specialised camera with higher frame rate, using this Alpha Lab guide: https://docs.pupil-labs.com/alpha-lab/egocentric-video-mapper/

user-f43a29 20 August, 2025, 08:34:20

yes. thanks!

user-8e9bb0 20 August, 2025, 12:07:55

Hey community! I'm using the Neon glasses and I read their data stream using the OpenVibe acquisition server. I stream triggers in parallel, and OpenVibe is supposed to merge the triggers with the data. Both streams are visible in the acquisition server, but once visualized in OpenVibe Designer, the triggers don't appear and they are not recorded either. What is strange is that when I replace the data stream from Neon with the data stream from Muse, keeping the same trigger stream, everything works perfectly fine. So it is not a problem with the trigger stream format. Have you ever experienced such a problem?

user-d407c1 20 August, 2025, 14:46:22

Hi @user-8e9bb0 ! Just a quick note, there’s no official support for OpenVibe, so it’s a bit outside what we normally test. Could you explain a bit more about how your setup works or how you connect it to OpenVibe?

It might also be worth trying to stream the data directly via LSL (LabStreamingLayer) instead of routing through OpenVibe first, just to confirm whether the issue is specific to OpenVibe’s handling of the stream.

user-ffef53 20 August, 2025, 14:57:55

Is the blink detection in Neon Player calculated the same way as it is in the real-time API? Watching the video in the player and checking the blink detection bar, I’m noticing some discrepancies (i.e. missing blinks). Luckily this is not throughout the entire recording and most blinks are registered, just noticing some gaps.

user-cdcab0 20 August, 2025, 15:19:50

That depends on what version of the Companion App and Neon Player you're using. With older versions (before the app supported real-time blink detection), Neon Player used an older algorithm that's computed post-hoc. Now that the Companion App supports real-time blink detection, Neon Player simply loads the blinks that are detected while recording (and makes no attempt to detect blinks post-hoc)

user-a84f25 20 August, 2025, 20:23:58

I was wondering if I could get any support on using adb to sync time? Whenever I do adb shell dumpsys time_detector I get mLastAutoTimeClockSet=null

Additionally, running the command from the documentation, adb shell cmd network_time_update_service force_refresh, hangs and returns false

user-d407c1 21 August, 2025, 08:16:08

Hi @user-a84f25 ! 👋 A couple of quick notes that might help:

  • Seeing mLastAutoTimeClockSet=null isn’t necessarily an error; it usually indicates that Android’s automatic time detection hasn’t updated the clock yet. You can dump the full time_detector state with adb shell cmd time_detector dump to confirm whether auto time is enabled and which sources (e.g. network) are active (see the Android Open Source Project documentation).

  • The adb shell cmd network_time_update_service force_refresh command may hang or return false depending on your device or Android version; it isn't guaranteed to succeed in all cases (more info here). If time sync is the aim, the easiest option would be to rely on the Companion Device’s NTP auto-sync, which generally works when the device is online.

Regarding the Python API note:

  • The “no RTP received for 10 seconds: closing” message means the real-time data stream isn’t arriving. This can be due to network issues or the API not being closed properly.
  • Since the time echo works fine, your connection setup is likely functioning, but note that this protocol uses TCP.

Here's what to try next:

  • Restart the device and reconnect to the network.
  • Make sure automatic time update is enabled in Settings so NTP can sync.
  • Force-close the Companion App, clear its cache, and try running one of the basic Python API scripts again.
  • After testing, let us know what the time offset result is; this will help narrow things down further.

If the issue persists, could you share a bit more about your setup (device model, OS, and whether you’re on WiFi or USB tethering)? That would also help

user-a84f25 20 August, 2025, 20:30:26

I'm also sometimes not able to get any data from the gaze/eye/world sensors through the Python API. The API reports that no RTP is received for 10 seconds and closes.

user-a84f25 20 August, 2025, 20:30:41

The time echo estimate always works fine however

user-42cb18 20 August, 2025, 21:29:53

Hi! I have used the Neon glasses for data collection, and I will submit a paper in which I would like to cite Pupil Labs. Is there a relevant publication I can cite?

user-d407c1 21 August, 2025, 07:53:44

Hi @user-42cb18 👋! You can find it here: https://docs.pupil-labs.com/neon/data-collection/publications-and-citation/.

And once your work is published, feel free to share it in 🔬 research-publications , we’d love to see it! 😉

user-a84f25 22 August, 2025, 17:47:43

Miguel: For some reason, sometimes it works with internet and sometimes it doesn't. I'm not sure what changes in how the RTP streams are captured when the phone has internet vs. when it doesn't. I'll need to do more testing. On another question: are there any audio retrieval methods in the real-time API?

user-a84f25 22 August, 2025, 23:10:07

@user-d407c1 Have you ever run into the problem where the eye state, gaze, and video RTSP streams work fine, but the time estimator dies or freezes after a couple of minutes? Do you know what might cause that?

user-d407c1 25 August, 2025, 11:58:40

Hi @user-a84f25 ! Thanks for your patience, I’m currently at a conference so my responses may be a bit delayed.

Could you let me know how many times you’re calling the time offset estimator? We haven’t encountered such issues in our tests, but this detail might help us narrow things down.

Regarding your question, the audio signal is included in the RTSP stream, although currently our Python real-time API client does not have support for it.

user-a84f25 25 August, 2025, 15:49:02

@user-d407c1 Thanks! I'm running it on 20 samples, as often as I can (I'm using the async API)

user-42cb18 25 August, 2025, 16:47:16

Hi, all! Are there any other files besides wearer.json in which the identity of the wearer could be revealed?

user-cdcab0 26 August, 2025, 07:08:15

Hi, @user-42cb18 - in the info.json file, you'll find a field named wearer_id. This value will match the one you'd see in wearer.json

user-fce73e 26 August, 2025, 06:38:13

"Hello, I've confirmed that the neon-player repository is on your GitHub. I was wondering, which repository is for real-time capture with the Neon device, similar to the 'Pupil Capture' software for the Core product?"

user-cdcab0 26 August, 2025, 07:06:42

Hi, @user-fce73e - the development version of Pupil Capture on GitHub has experimental support for Neon on Linux and Mac, but this isn't officially supported. The only recommended way to use Neon is with the Neon Companion app, available for free in the Google Play store.

user-a4aa71 26 August, 2025, 09:29:46

Hello! Is there a way to improve and/or stabilize the detection of surface markers for the surface tracker (I'm processing it via Neon Player)? Because I have some frames where the markers are probably too small and are not detected, so instead of being a rectangle, the surface becomes a weird polygon. The marker size was fine for the Core, but the Neon struggles more to detect it reliably. Thanks!!

user-cdcab0 26 August, 2025, 09:35:40

Hi, @user-a4aa71 - I'm afraid there really isn't anything you can do post-hoc to correct for small markers

user-a84f25 27 August, 2025, 21:25:27

How does the timing between the gaze and the eye videos compare? Should the gaze timestamp refer to a specific frame in the video, or is gaze calculated from multiple frames? Also, for using the TimeOffsetEstimator, how do you go back and recalculate the time of the events? Which estimates go with which time points? Do you do some sort of moving average, assuming the TimeOffsetEstimator is reporting at a consistent rate over the recording?

user-cdcab0 27 August, 2025, 23:49:11

Hey, @user-a84f25 -

  • Eye video frames and gaze estimates are 1-to-1. A gaze timestamp will exactly match the timestamp of the eye video frame from which it was generated.
  • I don't think people generally use time offsets retroactively. Rather, after calculating a new offset, you should assume it for all future events until another offset is calculated.
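
To illustrate that advice, here is a small Python sketch of keeping only the most recent offset (however you obtain it, e.g. from the TimeOffsetEstimator mentioned above) and applying it to every subsequent locally timestamped event. The sign convention below is an assumption; verify it against your estimator's output.

import time

# Hypothetical: offset_ms is the latest estimate of (Companion clock - local clock), in ms.
latest_offset_ms = 0.0

def on_new_offset(offset_ms: float) -> None:
    # Store the most recent estimate; assume it for all future events.
    global latest_offset_ms
    latest_offset_ms = offset_ms

def event_time_on_companion_clock() -> float:
    # Convert "now" on this computer to the Companion device's clock (ms).
    local_ms = time.time() * 1000.0
    return local_ms + latest_offset_ms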

user-a84f25 27 August, 2025, 22:45:06

Additionally, I have tried recording gaze through the Pupil Labs realtime API and through just recording on the phone. I was comparing timestamp_unix_ms from the realtime stream and ts from the recorded stream, and I cannot find time correspondences.

user-cdcab0 27 August, 2025, 23:52:25

The Realtime API streams data over an established data protocol that defines a lower precision for timestamps than what we save in recordings, so your observation is correct: these timestamps will never exactly match up, and the difference will likely appear sporadic/inconsistent. Timestamps from the Realtime API are still correct - just not as precise as the ones in the recording.

user-9a1aed 28 August, 2025, 06:58:05

Hello, I wonder if anyone could help with pyNeon? I am following this tutorial, but it shows this error. https://ncc-brain.github.io/PyNeon/tutorials/surface_mapping.html

Chat image

nmt 28 August, 2025, 13:11:53

Hi @user-9a1aed 👋. I think you'll have to reach out to the authors of that tool - it's a third-party tool that we haven't used before 😅

user-f4b730 28 August, 2025, 12:48:47

Hello, @user-cdcab0 , I am running the same script on two laptops. The script is in PsychoPy 2024.2.4; on one laptop the events are correctly stored, while on the other laptop they are all stored at the end of the recording (i.e., recording.end). Any idea why the same script produces different behaviour? Thanks

user-f4b730 28 August, 2025, 13:53:38

I noticed that the same script with pupil plugin version 0.7.7 has that problem, but if I revert back to 0.7.6 the problem disappears and events are annotated at the right time. May I ask if 0.7.6/0.7.7 work with a specific version of the Neon App/Neon Player?

user-cdcab0 28 August, 2025, 14:59:12

Hi, @user-f4b730 - could you share your script?

user-f4b730 28 August, 2025, 15:30:47

Yes, I can send it via email as it is very long (it combines motion capture, the eye tracker, and a robotic head). Can you remind me of the email?

user-f4b730 28 August, 2025, 15:31:21

Can I share it via email, as it is very long and complex and combines motion capture, Pupil, and a robotic head?

user-a84f25 28 August, 2025, 16:14:21

@user-cdcab0 for the gaze and eye-video timestamps, on the copy recorded to the phone it's one-to-one, but I can't find matches in the streamed copy. If I merge on the nearest time matches, I get as large a time difference as 1 sec.

user-cdcab0 28 August, 2025, 16:32:47

I'm a little unclear on what you're comparing. Are you looking at streamed gaze timestamps versus recorded world timestamps? Because any streamed timestamps aren't going to be comparable to any recorded timestamps. You can only really match streamed timestamps with other streamed timestamps or recorded timestamps against other recorded timestamps.

Also, FWIW, the real-time API has a convenient function for matching scene and gaze timestamps already
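
For reference, here is a short sketch of that matching helper using the simple real-time API. The names are taken from the pupil-labs-realtime-api documentation as I recall them, so double-check them against the version you have installed.

# Sketch: receive a scene frame together with the gaze sample closest to it in time.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
print(frame.timestamp_unix_seconds, gaze.timestamp_unix_seconds)
device.close()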

user-a84f25 28 August, 2025, 16:35:12

@user-cdcab0 I didn't know about that function, that would be very helpful! I am grabbing gaze and eye video through the async Python realtime API, and then comparing the times of the eye video and the times of the gaze. They don't match up exactly, but I can merge on the nearest timestamps.

user-a84f25 28 August, 2025, 17:04:34

Should they match up exactly?

user-cdcab0 29 August, 2025, 12:39:26

You know, I expected they would, but my own quick tests here seem to indicate they don't, actually. I've posed the question to our Android engineering team and will let you know what I learn, but it does seem that you will need to match by finding the closest values.
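
For that closest-value matching, something like pandas' merge_asof works well. A minimal sketch with toy timestamps standing in for the streamed data:

# Sketch: nearest-timestamp matching of streamed gaze and eye-video timestamps.
import pandas as pd

# Toy data in seconds; replace with the timestamps you logged from the streams.
gaze = pd.DataFrame({"ts": [0.000, 0.005, 0.010, 0.016], "gaze_x": [0.40, 0.41, 0.42, 0.43]})
eye = pd.DataFrame({"ts": [0.001, 0.009, 0.017], "frame_idx": [0, 1, 2]})

# Match each gaze sample to the eye-video frame closest in time,
# dropping pairs that are more than 10 ms apart.
matched = pd.merge_asof(
    gaze.sort_values("ts"),
    eye.sort_values("ts"),
    on="ts",
    direction="nearest",
    tolerance=0.010,
)
print(matched)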

user-a4aa71 28 August, 2025, 17:24:36

Hello everyone, I carried out an experiment starting the recordings with the Pupil Core and then I moved to Pupil Neon. Both have their own fixation detection algorithm, and the Neon also includes a saccade detection algorithm, both configurable in Pupil Player/Neon Player. Regarding fixations, are the results comparable? For example, it seems to me that in Pupil Player (with the Core) you can set the minimum and maximum duration as well as the dispersion for fixations, while the Neon Player uses a different type of algorithm and I don’t think it allows parameter adjustments. Should I, for instance, compare the number of fixations in an initial recording (done with the Core) and a final one (done with the Neon), or is there perhaps some repository/scripts that I could use with both datasets to detect saccades and fixations?

Thanks a lot!

nmt 29 August, 2025, 11:42:17

Hi @user-a4aa71 👋. That's a good question. The Core fixation detector does indeed have configurable thresholds. This means you can get different fixation detection results based on which thresholds you set. I found this useful in the past when tuning the filter to a given task. Differently, Neon's fixation detector is more advanced, as it compensates for head movements, and we have tuned the parameters to provide accurate results across different tasks. Now, can these be comparable? I would say, in principle, and in certain more controlled tasks without head movements, it's possible. I think it would be more challenging with dynamic movements. That said, I wouldn't haphazardly take an old dataset recorded with Core and assume it's going to be comparable to Neon. Rather, I would recommend somehow comparing fixation results with Core and Neon to validate things. Unfortunately, we don't have a repo or scripts that you can run on different datasets to do so.

user-fa126e 29 August, 2025, 03:40:10

Hello! My cloud storage was full, so I deleted some recordings, and also then deleted them from the Trash.

However, the other videos are still showing a yellow triangle with an exclamation mark, and when I try to add them to a project it says "Recording is processing, has errors, or blocked. This recording can not be added to the project."

How long does it take for the cloud to register that it has space available and allow me to access the other recordings?

wrp 29 August, 2025, 03:43:06

Hi @user-fa126e 👋 It should be instantaneous. Can you try refreshing the website. You can also check https://cloud.pupil-labs.com/settings/devices to check storage quota.

user-fa126e 29 August, 2025, 05:07:13

Thanks! I had refreshed, and it wasn't instant for some reason. It seemed to take at least 15 minutes (possibly longer, but I was working on other tasks and I'm not sure exactly when it started working).

But it's working now!

user-827269 31 August, 2025, 17:58:14

Hi, using Pupil Neon with the Companion App I accidentally uploaded two recordings to the wrong workspace. Is there any possibility to move them to the right workspace, or to create a project in the cloud using videos from different workspaces, as I want to use the enrichment functions in the cloud? Thanks for the help!

user-f43a29 01 September, 2025, 08:50:55

Hi @user-827269 , we currently do not support transferring recordings across Workspaces. However, you can still use the Enrichment functionalities on those recordings, even though they are in a different Workspace.

End of August archive