πŸ‘“ neon



user-77aaeb 04 January, 2026, 10:11:36

Can the 'Is this thing on' model record and analyse pupil diameter? How can this measurement be extracted?

user-4c21e5 05 January, 2026, 01:49:44

Hi @user-77aaeb! Yes. All Neon models can record pupil diameter. You can read more about the measurements here and how to extract them here
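
For instance, if you export a recording's Timeseries CSV data from Pupil Cloud, pupil diameter lands in 3d_eye_states.csv. A minimal sketch for reading it with pandas (file and column names follow the Cloud export format; verify them against your own download):

    import pandas as pd

    # Pupil Cloud Timeseries export: pupil diameter lives in 3d_eye_states.csv.
    eye_states = pd.read_csv("3d_eye_states.csv")
    print(eye_states[[
        "timestamp [ns]",
        "pupil diameter left [mm]",
        "pupil diameter right [mm]",
    ]].head())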

user-f89b02 05 January, 2026, 06:04:55

Hi - why is the recommended method to align the sensor data from IMUs and gaze with the start of the video stream, via the neon_recording Python API? Is recording.start_time relative to the video stream, or just to when the actual recording was started? Thanks!

user-f43a29 05 January, 2026, 08:28:05

Hi @user-f89b02 , just to clarify, did you mean "what is the recommended way" or "why is the recommended way"?

And, recording.start_time is relative to when the recording started, so the moment that the white button is pushed (or the moment that the device.recording_start() command is received by the device). The sensors always need to warm up and start a bit later, after recording start.
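
A minimal sketch of checking those per-sensor start offsets with pl-neon-recording (the recording path is hypothetical, timestamps are assumed to be nanosecond Unix epoch values, and the .ts stream attribute follows the library's stream convention, which may differ between versions):

    import pupil_labs.neon_recording as nr

    # Load a local Neon recording folder (path is hypothetical).
    recording = nr.load("/path/to/recording")

    # start_time marks the moment recording started: the button press or
    # the device.recording_start() call, in nanoseconds since the Unix epoch.
    t0 = recording.start_time

    # Each sensor warms up and starts slightly later; comparing the first
    # timestamp of each stream against t0 shows that per-sensor offset.
    for name, stream in [("gaze", recording.gaze), ("imu", recording.imu)]:
        offset_s = (stream.ts[0] - t0) * 1e-9
        print(f"{name} starts {offset_s:.3f} s after recording start")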

user-451948 05 January, 2026, 21:41:05

Hello, I just bought the "Just act natural" frames, but as my Neon module stopped working, I would like to purchase another module while awaiting repairs of this one. Is it possible to buy just the module and companion device without the frames or to return the unopened frames I just bought?

user-4c21e5 06 January, 2026, 02:36:58

Hi @user-451948. Yes, it's possible to buy just a module and Companion device, and also to return the frame if that's what you want. With that said, I assume the faulty module you're referring to is the one we discussed in the troubleshooting ticket? Because we haven't yet diagnosed whether the fault lies with the module or the bare metal nest. If it's the nest, e.g. a loose connection, then you might not need a new module.

Since you mention you have a new frame, it would be worth putting your module into it. If everything works as expected, then it's just your bare metal nest which is broken. If the problem follows the module, then we know the module will need fixing.

user-13d297 05 January, 2026, 23:40:55

Hi! I have a recording where no fixations were mapped from the raw gaze data. I was wondering why this is the case and if there might be any solutions to obtain the fixation data?

user-4c21e5 06 January, 2026, 02:42:15

Hi @user-13d297! Could you expand a bit on what you mean by no fixations being mapped? For example, is this about mapping fixations to AOIs via an enrichment, such as Reference Image Mapper? Or something else?

user-9a1aed 06 January, 2026, 07:27:41

Hi team. May I know if there are suggested ways to extract fixations from the gaze data? I am processing the raw recordings using the Python library pl-neon-recording. My current data structure has around 200 gaze points, and I want to extract the fixations from these gaze data points.

user-f43a29 06 January, 2026, 07:57:12

Hi @user-9a1aed , pl-neon-recording has a fixations stream that already contains the data you are looking for. You can simply sample the fixations data at the same timestamps as those 200 gaze points, similar to here.
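
A hedged sketch of that sampling step (the recording path is hypothetical, and .sample() follows pl-neon-recording's stream API, whose exact semantics may vary between versions):

    import pupil_labs.neon_recording as nr

    recording = nr.load("/path/to/recording")  # hypothetical path

    # Timestamps of the gaze points you already have (first 200 here).
    target_ts = recording.gaze.ts[:200]

    # Sample the fixations stream at those same timestamps, pairing each
    # gaze point with the fixation active at that moment.
    for fixation in recording.fixations.sample(target_ts):
        print(fixation)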

user-9a1aed 06 January, 2026, 08:11:15

thx!

user-9a1aed 06 January, 2026, 08:21:15

I am currently using pupil-labs-neon-recording==1.0.3 and it raises: module 'pupil_labs.neon_recording' has no attribute 'match_ts'. May I know if there is another attribute I could use to match the events? Thx! matches = nr.match_ts(target_time, events.start_time, method="backward")

user-f43a29 06 January, 2026, 08:24:03

May I ask if there is a reason to not upgrade to the latest version of pl-neon-recording?

user-9a1aed 06 January, 2026, 08:25:56

I have a running script that is not yet updated since I encountered quite a few errors after updating it

user-f43a29 06 January, 2026, 08:32:43

Ok, in that case, it depends on how exactly you want to match the timestamps. You could try using np.searchsorted from numpy and then check if the matched timestamps are close enough for your purposes. You could also just find the timestamp with the smallest difference from the target timestamp with something like events.start_time[np.argmin(np.abs(events.start_time - target_time))].

Alternatively, you can adapt the match_ts function for version 1.0.3.
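
If you need the backward semantics specifically (the last event at or before each target), a minimal stand-in can be built on np.searchsorted; match_ts_backward below is a hypothetical helper, not part of the 1.0.3 release:

    import numpy as np

    def match_ts_backward(target_ts, source_ts):
        """For each target timestamp, return the index of the closest source
        timestamp at or before it (-1 where no source timestamp precedes it)."""
        source_ts = np.asarray(source_ts)
        return np.searchsorted(source_ts, np.atleast_1d(target_ts), side="right") - 1

    # Events starting at t = 0, 10, 20; targets at t = 5 and 25.
    events_start = np.array([0.0, 10.0, 20.0])
    print(match_ts_backward([5.0, 25.0], events_start))  # -> [0 2]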

user-e544ee 06 January, 2026, 16:56:17

Hello! I am using the Neon glasses in combination with another, high-speed, egocentric camera and I am trying to use the egocentric video mapper code from GitHub. I am not able to use the cloud for IRB reasons, so I am exporting the files and copying them from the Android phone. I then used Neon Player to generate a folder of files. It looks like the egocentric video mapper is looking for a different set of files though (.csv instead of .npy), and it was looking for a world camera video with 0 in the name, which was not output by Neon Player. Is there another step I am missing to generate the proper file format for the egocentric video mapper? Thanks!

user-f43a29 06 January, 2026, 20:01:26

Hi @user-e544ee , after loading the recording into Neon Player, you then need to export it, by either pressing the E key or clicking the button that looks like a download symbol. Make sure also that the World Video Exporter is enabled. To clarify, the .npy files are an intermediate format that are not meant for typical usage. If that does not resolve it, then just let us know.

user-3c26e4 06 January, 2026, 17:11:16

Hi and a healthy new year to everyone! Is there a possibility to share just some recordings in the workspace and not the whole workspace? I know it was never possible, but maybe there is some way... or at least to share some of the recordings as an Editor and others as a Viewer.

user-f43a29 06 January, 2026, 20:03:12

Hi @user-3c26e4 , thanks; wishing the same to you.

While you can share select recordings from a Workspace (right click and choose Share recording), it is not possible to do either of the following:

  • Restrict a member of a Workspace to seeing only some recordings
  • Give a Workspace member a Viewer role for some recordings and an Editor role for others.

If you would like to see either of these features added, be sure to add them to the πŸ’‘ features-requests list.

user-3d4b81 08 January, 2026, 02:29:48

Hi there. I would like to connect Neon to my own computer to receive the data stream (including gaze estimation from NeonNet, eye pose, etc.). I think the best way is to use the C++ real-time API client (https://github.com/pupil-labs/pl-realtime-cpp-client). I am wondering how stable this API is, since I don't see much information or user feedback on it, either on the website or here on Discord. Are there examples of using this C++ real-time API client? Thanks.

user-4c21e5 08 January, 2026, 03:00:43

Hi @user-3d4b81 πŸ‘‹. We generally recommend our Python client as a starting point because it has the most extensive documentation and examples. It is our most widely used entry point for developers. That said, the C++ client is a valid option, but less widely used and discussed. Architecturally, both the Python and C++ clients abstract the API protocols (see under-the-hood). So you can choose the client that best fits your existing stack.
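
For comparison, receiving a first gaze sample through the Python client's simple API takes only a few lines (a sketch based on the pupil-labs-realtime-api package; it assumes a Neon device is reachable on the local network):

    from pupil_labs.realtime_api.simple import discover_one_device

    # Discover the first Neon Companion device on the local network.
    device = discover_one_device()

    # Block until one gaze datum arrives, then release the connection.
    gaze = device.receive_gaze_datum()
    print(gaze.x, gaze.y, gaze.timestamp_unix_seconds)
    device.close()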

user-7413e1 08 January, 2026, 11:16:33

Hi. I have noticed that for one of my participants the Marker Mapper enrichment has not run properly, and I would like to re-run it just for that participant. I am trying to create a new enrichment but I'm not able to i) select the recording ID that I want and ii) locate the 'run enrichment' command. See screenshot attached. Can you please help? Thank you

Chat image

user-f43a29 08 January, 2026, 11:27:23

Hi @user-7413e1 , it seems that this Enrichment was potentially already started? Could you click the red Cancel button and then do the following:

  • Go back to the Enrichments tab.
  • Hard refresh the browser tab -> Ctrl+Shift+R on Windows and Linux; Cmd+Shift+R in Chrome on macOS

Then, either try to re-start that Enrichment or create a new one. Let us know how it goes.

user-3c26e4 08 January, 2026, 16:15:47

@user-4a6a05 Hi, surface tracking in the new Neon Player is not functioning. It resets the surface as soon as I move the sliders or uncheck Edit. When I add another surface, the old one extends to the old boundaries. There is also no heatmap.

user-f43a29 08 January, 2026, 17:28:38

Hi @user-3c26e4 , thanks. My colleague, @user-cdcab0 , is responsible for handling this feedback. Perhaps sharing a screen recording could help determine what is going on?

user-72e42a 08 January, 2026, 19:32:49

Hello! I had purchased iMotions and the Neon glasses with our grant, but was recently notified that there is now a subscription for Pupil Cloud to access our data. Is there a discount for universities, or a package for those who adopted before this format rolled out?

user-f43a29 08 January, 2026, 19:59:13

Hi @user-72e42a , if you want to work within the free 2 hour recording quota, then you do not need an Unlimited Plan. If you simply want to access the data that is currently there, you can analyze & download the 2 hours worth of data, then delete those, and work with the next 2 hour chunk. Just note that deleting recordings on Pupil Cloud will permanently remove them from Pupil Cloud, so be sure to make local backups if you go with this approach.

If you would rather obtain an Unlimited Plan, note that it is not exactly a subscription: you obtain a Plan in yearly units and decide in advance when it should start & end.

And, yes, an academic discount is available for Unlimited Plans.

user-72e42a 08 January, 2026, 20:10:51

Could I get the pricing for a university license ?

user-f43a29 08 January, 2026, 20:12:49

Whichever method is most ideal for your situation.

Could you clarify what you mean though by "the files are corrupted when we analyze"?

If you use the Unlimited Plan calculator, it will show you the academic discount at the bottom left, as it depends on how many devices and years you wish to include in your Plan.

Chat image

user-13078d 09 January, 2026, 09:29:39

Hello. I am using my Pupil Labs Neon and streaming data through LSL. When connected to "Neon Companion_Neon Gaze" I receive a sample of data with 22 entries:

[799.9196166992188, 471.5899658203125, 4.100955009460449, -27.5625, 14.5, -43.21875, 0.20794349908828735, 0.3420678675174713, 0.9163782596588135, 4.54191780090332, 34.78125, 12.34375, -49.40625, -0.08832237869501114, 0.3171752393245697, 0.9442452192306519, -0.63671875, -0.72705078125, 1.0433200597763062, -0.28125, -1.3251953125, 12.412347793579102]

Could you provide information about what each value corresponds to? I have read the documentation (https://github.com/sccn/xdf/wiki/Gaze-Meta-Data) but it doesn't specify which values have been used: I only receive 22 in the data stream, while more than 22 are reported in the documentation.

Thank you!
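
One way to see what those 22 channels are, rather than inferring them from the XDF wiki, is to read the per-channel labels embedded in the stream's own metadata. A hedged sketch with pylsl follows; the stream name is taken from the message above, and whether the Companion app populates per-channel labels depends on its LSL integration version:

    from pylsl import StreamInlet, resolve_byprop

    # Find the Neon Companion gaze outlet on the local network.
    streams = resolve_byprop("name", "Neon Companion_Neon Gaze", timeout=10)
    inlet = StreamInlet(streams[0])

    # Channel descriptions live in the stream's XML metadata.
    info = inlet.info()
    ch = info.desc().child("channels").child("channel")
    for _ in range(info.channel_count()):
        print(ch.child_value("label"), ch.child_value("unit"))
        ch = ch.next_sibling()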

user-f43a29 09 January, 2026, 09:37:38

Hi @user-13078d , may I first briefly ask why you are using LSL for real-time streaming? That is not really its principal design focus.

user-7413e1 09 January, 2026, 10:50:14

Hi, I am experiencing some issues that seem quite concerning. I opened up some events.csv files that I downloaded last July and had not touched since, and I realised they contain different events from the ones I can see on Pupil Cloud. I also checked that the events on Pupil Cloud had been inserted programmatically via the API (and not manually), confirming that the discrepancy does not result from post-hoc edits on Pupil Cloud. Judging by other back-up files of the session plus the video on Pupil Cloud, the events in my July events.csv file are correct, while the events on Pupil Cloud seem to be messed up. Can you help me understand where this discrepancy comes from? So far I have noticed this in two files, but there may be more; I will keep checking.

user-f43a29 09 January, 2026, 10:53:41

Hi @user-7413e1 , is it the naming of the events or the timestamps that are discrepant? Do I understand correctly that you also confirmed the discrepancy against the original code that sent the Events?

user-ffc425 13 January, 2026, 17:59:04

Hi all, I have been working with the open-source local version of the egocentric video mapper and have a few issues with it:

  • What are these 3 colored circles in the output?
  • The world seems to be the wrong color in the Neon output video. For instance, the table, which is brownish-red in real life, appears blueish in the capture.
  • The red circle, which is also mapped onto my alternative camera, seems to be advanced in time in both the Neon video and the alternative video compared to what I know is true. The two videos themselves look well synced in global time, but this red dot seems equally advanced in both of them.

Any thoughts on this?

Chat image Chat image

user-d407c1 13 January, 2026, 21:11:22

Hi @user-ffc425 πŸ‘‹ ! Could you clarify a bit more what you’re running exactly? Which code are you using, and how are you opening or rendering the video?

A few things that might explain what you’re seeing:

  • Color change (red ↔ blue):
    If you’re using OpenCV, note that it works in BGR color order by default, not RGB. So a red overlay drawn in RGB can appear blue if the image isn’t converted first, and your brown table can look blueish. You’d typically want to convert with something like cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) before visualising; see the sketch after this list.

  • Red vs numbered circles:
    The red circle is usually the gaze point, while the numbered circles indicate the fixation scanpath, which appears when you’re using the video renderer (exported in Cloud) rather than the raw scene camera video.

  • Timing / alignment issues:
    If you downloaded a rendered video from Cloud, it may not have the same length as the raw scene video (see the gray-frames explanation here: https://discord.com/channels/285728493612957698/1047111711230009405/1364251433612087416). That could explain the misalignment you’re seeing.
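
A minimal sketch of the BGR/RGB pitfall from the first bullet (file names are hypothetical):

    import cv2

    frame_bgr = cv2.imread("scene_frame.png")  # OpenCV decodes to BGR order

    # (0, 0, 255) is red in BGR; the same tuple interpreted as RGB is blue.
    cv2.circle(frame_bgr, (320, 240), 20, (0, 0, 255), 3)

    # Convert before handing the frame to RGB-based viewers (e.g. matplotlib);
    # otherwise reds and blues swap and a brown table looks blueish.
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    cv2.imwrite("frame_annotated.png", frame_bgr)  # imwrite expects BGR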

If you can share whether you’re using the raw scene video, a Cloud-rendered video, or your own rendering code, we can narrow this down further.

End of January archive