👓 neon



user-b16d4b 01 May, 2025, 04:46:12

Hello. I have a question. I would like to know how to output only a specific range of data (e.g., 3 minutes out of a 10-minute measurement period) uploaded to Pupil Cloud.

nmt 02 May, 2025, 05:14:43

Hi @user-b16d4b! Which data are you looking to output? Are you referring to raw timeseries data, such as gaze, IMU data, etc. Or are you referring to enriched data, such as those generated by Reference Image Mapper, or other enrichments?

user-b16d4b 07 May, 2025, 05:06:56

Hi, @nmt ! Thanks for your reply! I would like to perform analysis targeting only specified ranges of raw time series data such as gaze and IMU. Additionally, I would like to output video from the scene camera for specific ranges.

user-796c70 04 May, 2025, 15:40:41

Hello - I recorded a session and it has been spinning (processing) in pupil lab for more than a week - I have a feeling there is some sort of error in the file? Is there any way to check on it and get it processed? Thanks - Dan

Chat image

user-480f4c 04 May, 2025, 15:47:35

Hi @user-796c70 ! Can you open a ticket in our 🛟 troubleshooting channel and we'll assist you there in a private chat!

user-2b5d07 05 May, 2025, 10:23:46

Hi, I'm trying to calculate the visual angle, and for that, I attempted to compute the azimuth using the definition: ψ = arctan2(x, z) However, I'm a bit confused because the values I get don't match the azimuth values provided in the CSV file from Pupil Labs. Could this be due to an error in my calculation, or is there another explanation? Thanks in advance for your help!

nmt 05 May, 2025, 10:35:24

Hi @user-2b5d07! Your function takes x and z components, which indicates you're inputting Neon's eye pose measurements. Is that correct?

user-2b5d07 05 May, 2025, 12:21:44

Yes, exactly — I’m using the eye pose measurements from Neon (x from the optical axis x and z from the optical axis z). I calculate the azimuth separately for each eye using arctan2(x, z), and in principle, the average of the two should represent the overall gaze azimuth. However, this is not the case.
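
A minimal sketch of that per-eye calculation in Python/pandas, assuming hypothetical column names that should be checked against the actual 3d_eye_states.csv export:

import numpy as np
import pandas as pd
# Column names below are assumptions -- verify them against your own 3d_eye_states.csv.
eye_states = pd.read_csv("3d_eye_states.csv")
for side in ("left", "right"):
    x = eye_states[f"optical axis {side} x"]
    z = eye_states[f"optical axis {side} z"]
    # Azimuth of the optical axis in degrees; arctan2 handles the quadrants correctly.
    eye_states[f"azimuth {side} [deg]"] = np.degrees(np.arctan2(x, z))

As noted in the reply below, these per-eye values are not expected to match the azimuth column of the gaze export, which is derived from the single (cyclopean) gaze direction.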

nmt 05 May, 2025, 12:29:46

Thanks for clarifying. Actually, they are not expected to match. Check out this message for an overview of why https://discord.com/channels/285728493612957698/1047111711230009405/1351378673445634068 🙂

user-2b5d07 05 May, 2025, 13:04:16

Thanks a lot for the detailed clarification — that really helped! In my project, I’m trying to compute the visual angle, so I’m a bit confused about the best way to go about it. Would it make more sense to calculate the visual angle using the gaze azimuth directly, or should I rather compute it separately for each eye based on the optical axis? Thanks again !

nmt 06 May, 2025, 13:49:13

Hi @user-2b5d07! That really depends on the research question/use-case. Could you elaborate a bit more on that and I can try to offer the pros and cons for each approach.

user-a0189b 05 May, 2025, 22:14:08

I would like to know how to use the Marker Mapper data enrichment, and whether there are any recommended specifications (i.e., how far the eye tracker should be from the AprilTags). Furthermore, do I need to download Pupil Core for the Neon if I would like to do manual post-hoc analysis rather than relying only on Pupil Cloud?

nmt 06 May, 2025, 15:30:03

Hi @user-a0189b! A lot of this depends on the specific use-case or research question. What is it exactly you're looking to achieve through the use of Marker Mapper?

user-a0189b 06 May, 2025, 14:21:05

Please let me know..!!!

user-cdcab0 06 May, 2025, 09:16:52

Thanks for the video, @user-9a1aed. Two points:
* Your markers are too small. Doubling their size would be appropriate.
* More markers will help too. When showing the fixation cross you have 10 markers, but when showing your stimulus you only have four. With that screen/environment, the glare/reflection of the light against the display completely obscures some of the markers.

user-9a1aed 06 May, 2025, 09:30:15

Hi Dom. Gotcha. Thanks a lot. I thought the markers were presented only to mark the screen's dimensions. Why would the size and the number of the markers matter? I wanted to minimize them on the screen so that they are not distracting.

user-cdcab0 06 May, 2025, 09:41:14

The markers are presented in known positions on the display. Neon streams its scene video to our PsychoPy plugin, which detects the markers in each frame of the scene video. The corresponding points (specifically, the corners of each marker) between the scene frame and the known marker positions are then used to calculate a homography matrix. This, essentially, tells us the position and orientation of the display relative to Neon's scene camera. We can then use that information to map points from the scene-camera-space to surface-space.

Computing that homography matrix requires known, corresponding points in both spaces (scene-camera-space and surface-space). Our PsychoPy plugin already knows the surface-space points of the markers, but it has to detect them in the scene camera image. More markers = more points = more accurate mapping
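
As an illustration of the homography mapping described above (a sketch of the general technique with made-up coordinates, not the plugin's actual implementation):

import cv2
import numpy as np
# Corners of two detected markers in the scene camera image, in pixels (e.g. from an AprilTag detector).
scene_pts = np.array([[100, 80], [180, 80], [180, 160], [100, 160],
                      [900, 90], [980, 90], [980, 170], [900, 170]], dtype=np.float32)
# The same corners in surface coordinates, i.e. their known positions on the display, in pixels.
surface_pts = np.array([[0, 0], [80, 0], [80, 80], [0, 80],
                        [1840, 0], [1920, 0], [1920, 80], [1840, 80]], dtype=np.float32)
# Homography from scene-camera-space to surface-space; RANSAC discards poor correspondences.
H, _ = cv2.findHomography(scene_pts, surface_pts, cv2.RANSAC)
# Map a gaze point from scene-camera pixels into surface pixels.
gaze_surface = cv2.perspectiveTransform(np.array([[[512.0, 384.0]]], dtype=np.float32), H)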

user-9a1aed 06 May, 2025, 09:50:31

Got it. Thanks a lot for your explanation! For the gaze data, is there a way I can segment each trial? The current CSV files downloaded from Pupil Cloud contain IDs, but they are not specific enough to link a trial's image to the fixation data for that image.

Chat image

user-c541c9 06 May, 2025, 09:48:34

Hi all, Is there a way to ask the app to 'export' a session via API? i.e. I do these manual steps after I stop a recording via API > https://docs.pupil-labs.com/neon/data-collection/transfer-recordings-via-usb/#export-from-neon-companion-app so is there still a programmable way to effect the exporting of the current recording to Documents/Neon Export directory on phone?

user-cdcab0 06 May, 2025, 10:00:26

The Neon Companion app doesn't currently have a network API for transferring recordings, but please submit a 💡 features-requests. We use the feedback there to help us track and meet customer needs

user-cdcab0 06 May, 2025, 09:53:21

You'll want to use the PLEvent Component in PsychoPy builder to add events to your recording (e.g., trial-start, trial-end, etc). These timestamped events will appear in Pupil Cloud and will be available in CSV format as well. You can then compare your data timestamps to the event timestamps to filter the data for your needs
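
A minimal sketch of that timestamp-based filtering, assuming the standard Pupil Cloud timeseries export and hypothetical event names "trial-start"/"trial-end" sent from the PLEvent component (column names should be checked against your own files):

import pandas as pd
gaze = pd.read_csv("gaze.csv")
events = pd.read_csv("events.csv")
# Timestamps of the assumed trial markers, in nanoseconds.
start_ts = events.loc[events["name"] == "trial-start", "timestamp [ns]"].iloc[0]
end_ts = events.loc[events["name"] == "trial-end", "timestamp [ns]"].iloc[0]
# Keep only the gaze samples recorded between the two events.
trial_gaze = gaze[(gaze["timestamp [ns]"] >= start_ts) & (gaze["timestamp [ns]"] < end_ts)]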

user-9a1aed 06 May, 2025, 09:54:30

Thanks!

user-a0189b 06 May, 2025, 16:31:16

In my case, I would like to place a monitor that displays the robot's movement for teleoperation, and then track gaze movement and pupil diameter. So I just attached a marker to the screen and recorded, but I can't define the surface in the enrichment. Furthermore, participants also rely on the 3D environment instead of the screen when watching the robot's movement. For this case, I guess it might be useful to use the reference-image-based enrichment. If you need more information, just let me know.

nmt 06 May, 2025, 19:26:17

Thanks for elaborating. Let's start with the screen mapping. You would need more than one marker - default would be one in each corner of the screen. Using these, it will be possible to generate the surface. I'd recommend trying this next. In terms of marker size, you can see an example of what works in this Alpha Lab article. Let me know if you get that working.

In terms of the 3D environment, could you please share a photo of the setup?

user-60878d 06 May, 2025, 19:28:19

Hi! I'm new to this discord channel so please let me know if I should be posting questions at a different channel but I was wondering how to turn off the audio before recording?

nmt 06 May, 2025, 19:37:30

Hi @user-60878d! This is the right place. You can disable audio capture by toggling the microphone icon in the top left of the Companion app home screen 🙂

user-60878d 06 May, 2025, 19:38:01

Thank you!

user-3ee243 06 May, 2025, 20:24:11

Hi, questions about the Neon real-time API: we are able to stream real-time gaze and pupil diameter following the steps outlined in the documentation using Python, and are looking to expand this to stream fixations, saccades, and blink rate. 1. Can all three metrics be streamed in real time, i.e., line by line as time series data, or does this have to be done post hoc? 2. Is the real-time example code on the website the best place to find out how to do so? At the moment we have tried using the sample code to get the data to stream, but to no avail! Would love some help and insights on this! Thank you!

user-d407c1 06 May, 2025, 22:39:06

Hi @user-3ee243! We’re currently working on updating the documentation for the Realtime API.

That said, fixations, saccades, and blinks are already available starting from Neon Companion App version 2.9.0. If you’re not on that version yet, please update and ensure that these computations are enabled in the app’s settings.

You can then use the following examples to stream eye events:

Let me know if you run into any issues!
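
The linked examples are not reproduced here, but the general streaming pattern with the Python real-time API looks roughly like the sketch below. The gaze call is part of the current simple API; the exact method names for fixation/saccade/blink events depend on your pupil-labs-realtime-api version, so treat that part as an assumption and check the updated documentation:

from pupil_labs.realtime_api.simple import discover_one_device
device = discover_one_device()  # finds the Companion device on the local network
try:
    while True:
        gaze = device.receive_gaze_datum()  # blocks until the next gaze sample arrives
        print(gaze.timestamp_unix_seconds, gaze.x, gaze.y)
        # Eye events (fixations, saccades, blinks) are streamed with an analogous
        # receive call in versions that support them -- see the updated docs.
finally:
    device.close()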

user-60878d 06 May, 2025, 23:01:50

Is there a maximum distance Pupil Lab Neon module's eyetracker can handle? For example, there are other mobile eyetracking glasses from other companies that say they can only calculate up to 4 meters distance

user-f43a29 13 May, 2025, 17:10:37

Hi @user-60878d , the gaze ray that Neon estimates is derived from the eye images, so it is independent of any surface that might be in front of the person. Accuracy for eye tracking is usually reported in degrees of gaze angle; that is, the deviation of the gaze ray estimate from the direction in which the person is actually looking. For Neon, the accuracy is 1.3-1.8 deg.

Technically, the gaze ray is of infinite extent; it simply points in the direction of the wearer’s gaze. Although, what a person can see is of course limited by human optics and the visual system’s capabilities. The ray can be projected/mapped onto arbitrary surfaces, at arbitrary distances, provided you have the necessary 3D info to localize the wearer in the environment. To that end, our Marker Mapper and Reference Image Mapper Enrichments are very powerful tools that do that for you. You can estimate the point/region they are looking at well beyond 4m. You may also want to check out the Tag Aligner tool.

However, please note that when it comes to projecting the gaze ray onto surfaces in the environment, quantifying mapping accuracy in terms of "meters" quickly becomes tricky.

It sounds like the eye tracking systems you refer to may be using vergence angle to estimate the 3D position the person is looking at. So, are you potentially looking to use Neon for XR or AR applications?

user-9a1aed 07 May, 2025, 03:11:46

Hi Team, you guys have helped me with the eyetracker's offset, etc. However, I still found that the eye-tracking results were not ideal. I have tried the demo program provided by PupilLab. Would you please take a look? It is very difficult for participants' fixation to land on the AOIs during eye tracking. If this happens during the experiment, it would be frustrating and tiring for participants. Are there any ways to improve my setup? https://hkustconnect-my.sharepoint.com/:v:/g/personal/yyangib_connect_ust_hk/EcPxM3QX32JJuFagkR_0mzgB0_ryJbWT6WAv6fon7rEjaA?e=BUNsht Thanks a lot in advance!

nmt 07 May, 2025, 03:29:58

Hi @user-9a1aed! Thanks for sharing the screen capture. The gaze measurements look much noisier than expected. I think it would be beneficial for us to look at a raw recording. Please open a ticket in the 🛟 troubleshooting channel and we can coordinate there 🙂

user-9a1aed 07 May, 2025, 03:32:19

Thanks a lot. I have opened a ticket there

user-9a1aed 07 May, 2025, 07:36:48

Hi Team, I wonder if there is any documentation on converting screen-based gaze coordinates defined by AprilTags using scripts? I found this package from previous chats https://github.com/pupil-labs/real-time-screen-gaze but struggled to use it for data processing. I have tried Marker Mapper in Cloud, but the processing was a bit slow, especially if I had a large amount of data to process.

user-cdcab0 07 May, 2025, 08:09:51

Hi, @user-9a1aed - if you want to do surface mapping without using Pupil Cloud, have you considered Neon Player?

user-9a1aed 07 May, 2025, 08:14:02

Thanks! Is there a more detailed workflow? It seems like a software you guys built, too?

user-cdcab0 07 May, 2025, 08:16:28

Yes, this is our software as well.

If you are new to Neon Player, you may find the general overview to be helpful as well. Otherwise, if you have specific questions, feel free to ask here 🙂

user-9a1aed 07 May, 2025, 08:18:41

Thanks a lot! Does it mean that if I want to convert to screen-based/AprilTag-defined surface coordinates, I have to use one of your software tools? I'm worried about processing efficiency if we have hundreds of participants.

user-9a1aed 07 May, 2025, 08:22:01

In the fixations.csv data downloaded from Marker Mapper (https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/marker-mapper/#export-format), the fixation x/y columns are described as "Normalized x and y coordinates from the average of all mapped gaze samples within the fixation." Would you please clarify this a bit? If I want to know the x and y coordinates of a particular fixation on the surface from the top left (the x coordinate from the left and the y coordinate from the top), I need to multiply by the pixel size of the surface, right?

user-cdcab0 07 May, 2025, 08:29:07

It's difficult to compare Cloud-based processing times to offline processing simply because jobs submitted to Pupil Cloud are subject to queuing. Jobs are taken in the order they're received, so if there is a lot of work scheduled before your job, it may take a little while before your job even starts. Once it does start, though, Pupil Cloud is pretty fast.

Running the analysis offline (e.g., using Neon Player, the screen-gaze package you found, or rolling your own solution) will, of course, run immediately without having to wait for anyone else's tasks to complete first.

I have to use one of your softwares? I worried about the processing efficiency if we have hundreds of participants.

We provide the software to help make your analysis easier, but you certainly don't have to use it. We use a pretty standard method of doing marker-based surface mapping - anybody with the right skill could implement their own surface mapping without using any of our software, but it would probably be difficult to be much more efficient without significant effort/skill.

user-9a1aed 07 May, 2025, 08:32:30

Ahh. I see. Thanks a lot! I will try with Cloud and Player and see which one fits the best

user-cdcab0 07 May, 2025, 08:33:14

Normalized x and y coordinates from the average of all mapped gaze samples within the fixation. Would you please clarify this a bit?

A fixation occurs over a period of time. During that time, many gaze samples will have been collected (200 samples per second), and each gaze sample will be at a slightly different position. To produce a single point to describe the fixation, all of the gaze samples that are part of that fixation are averaged together.

If I want to know the x and y coordinates of a particular fixation on the surface from the top left (the x coordinates from the left and y coordinates from the top), I need to multiply by the pixel size of the surface, right?

Yes, assuming you have aligned the bounds (corners) of your surface with the corners of the renderable area of your display.
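
A tiny worked example of that conversion, assuming a 1920x1080 surface aligned with the display (and it is worth double-checking which corner the enrichment treats as the origin):

SURFACE_W, SURFACE_H = 1920, 1080    # surface size in pixels (your display resolution)
fix_x_norm, fix_y_norm = 0.25, 0.10  # example values from fixations.csv
fix_x_px = fix_x_norm * SURFACE_W    # 480 px from the left edge
fix_y_px = fix_y_norm * SURFACE_H    # 108 px; use (1 - fix_y_norm) * SURFACE_H instead if the origin turns out to be bottom-left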

user-9a1aed 07 May, 2025, 08:35:39

Thank you very much, Dom!

user-9a1aed 07 May, 2025, 09:10:52

Hi Dom, if I want to save gaze data into hdf5 files, do I have to add the code? I am using the demo provided here, https://docs.pupil-labs.com/neon/data-collection/psychopy/#data-format, but my local hdf5 files do not save any data. I can download the gaze data from Cloud tho.

user-cdcab0 07 May, 2025, 09:19:05

You need to make sure that the HDF5 data is enabled in your experiment settings and initiate the recording using the PsychoPy Builder ETRecord component

user-9a1aed 07 May, 2025, 13:30:11

Okk. Thanks!

user-ef94b7 07 May, 2025, 19:06:58

Hi Pupil Labs team,

Today I made a recording which resulted in an error. After 30 minutes of recording, the companion device started to vibrate. I didn’t receive any error messages at that moment. I touched the screen, and the recording seemed to continue. When I stopped the recording after 1 hour and 8 minutes, I received two error messages:

1st: Sensor failure – The Neon Module has stopped providing Eye Sensor data! Unplug your Neon device and plug it back in. If this behavior persists, it may indicate a Neon Module or frame issue. Please reach out to our support team for help.

2nd: Sensor failure – The Companion App has stopped recording extimu data! Stop recording, unplug your Neon device and plug it back in. If this behavior persists, it may indicate a hardware issue. Please reach out to our support team for help.

The recording has been uploaded to the cloud, but it is not possible to view it.

The Neon Module and the companion device were already sent in to Pupil Labs in February 2024 after my former colleague Jonas Priedemann and I experienced similar problems with recordings. Is there any possibility to restore today’s recording? What can I do to avoid these kinds of error messages in future recordings?

Thanks in advance, Lara

nmt 07 May, 2025, 19:59:29

Hi @user-ef94b7! These sensor error messages can occur for a variety of reasons, such as transient USB disconnects, or even hardware issues. For us to learn more, we'd need to collect some further details. Please open a ticket in 🛟 troubleshooting, posting the serial number of your module, and we can coordinate there 🙂

user-f7408d 08 May, 2025, 10:11:46

Hi, I am using Pupil Cloud and created an enrichment (image reference map), I uploaded a scan video and added two sessions to the project and processed the enrichment. I had since recorded and have uploaded 2 more videos. Do I need to setup and run a new enrichment to include the additional videos and process all 4 videos together? Or is there a way I can process just the 2 new sessions using the original enrichment? If that makes sense. Sorry if this is explained already in the docs, I haven't seen it.

nmt 08 May, 2025, 16:10:26

Hi @user-f7408d! You can add the recordings to your project and subsequently, you should be able to run them as part of your existing enrichment.

user-a5d1a8 08 May, 2025, 12:37:32

Hi, we are trying to connect Neon with the software on the Companion mobile device, and to grab the data via IP and port from the device so we can store it in our simulator software. But the sim software does not find the device via IP. How can we connect Neon to the Companion device AND the sim software (via a socket connection)?

nmt 08 May, 2025, 16:12:44

Hi @user-a5d1a8! Are you able to successfully use the Neon monitor app on the same computer that runs your sim software?

user-9a1aed 08 May, 2025, 12:58:07

Hi Team, if I want to control the psychopy program using the phone so that the experiment is controlled by the experimenters, how can I achieve that? Do you guys have some docs for this. Thanks!

user-04e026 08 May, 2025, 13:23:54

Good morning. My team has asked me to reach out and ask about Neon and the Inter-Eye-Distance measure.

On the “Measuring the IED” tab on the pupil labs docs website, there is mention that the accuracy of pupil diameter measurements can be enhanced by doing this measurement for each user.

My team was wanting to know if you could provide any more details on what that means? Like, what is changing and if there is some significant range of error that could come from using the default 63 mm?

nmt 08 May, 2025, 16:36:04

Hi @user-04e026! Neon’s pupillometry measurements are provided by NeonNet, which combines machine learning and model-based approaches. For more details, be sure to check out our whitepaper. Entering the wearer’s IED brings the pupillometry measurements closer to the true physical state. In particular, it will enhance robustness to the effects of pupil foreshortening, e.g. caused by different camera angles, such as those resulting from headset slippage. In general, thus, we recommend setting the wearer-specific IED, and this might be especially important if you're doing data collection in an environment conducive to headset slippage.

user-cdcab0 08 May, 2025, 13:25:56

Hi, @user-9a1aed - PsychoPy doesn't have any built-in functionality for remote control. Since it runs on Python though, you can do just about anything you can imagine.

One could, for example, run a web server within your PsychoPy experiment and load the web page in the browser on the companion device. We don't have any examples or documentation for such a workflow, but you may have better luck asking for something like this within the PsychoPy community.

user-9a1aed 09 May, 2025, 07:58:23

I see. Thanks a lot!

user-a5d1a8 08 May, 2025, 17:25:37

@nmt Hmmm. I am not sure - the sim computer with the data-acquisition socket has 10.1.2.3 and the Companion app shows 10.1.2.215 (SMTP is working ;-)). If I try to connect from 10.1.2.3 to 10.1.2.215:8080 it shows me a blank page - same from another computer (not a sim computer, but on the same network). No firewalls, no restrictions on IPs or ports, BUT also NO internet connection. The installation is like an island. BUT if I use the Companion's WLAN hotspot, it is possible to connect to another mobile (via 172.x.x.x and start/stop/close - no problem). So the functionality is OK, but not on the 'normal' network we must use.

nmt 08 May, 2025, 17:37:09

It sounds to me like there's something about your local network that's blocking the connection. Can you try connecting your sim computer to the Companion phone's hotspot?

user-d16dec 09 May, 2025, 12:34:05

Hi, I've reached the 2-hour storage capacity, so I tried to delete some recordings. But my Pupil Cloud page keeps saying "Storage is full. Manage recordings or upgrade your storage with an Add-on." Do I need an extra step to empty the trash bin or otherwise free up the space? And in the meantime, can I still make new recordings to upload to the cloud?

nmt 09 May, 2025, 13:02:01

Hi @user-d16dec! Yes, you can empty the trash and free up space. Recordings uploaded to Pupil Cloud will subsequently be available if they're within the 2 hour free limit.

user-d16dec 09 May, 2025, 13:41:58

Where can I empty the trash? It only says "Recordings in trash will be automatically deleted in the future."

nmt 09 May, 2025, 14:31:00

Select the recordings you want, right-click, and press delete

user-a5d1a8 09 May, 2025, 14:11:47

@nmt thank you for your idea. All the SIM computers have no WLAN/BT. I checked all the connections - no firewalls, no restrictions anywhere (that is the reason why it is a closed system)

nmt 10 May, 2025, 15:01:22

Hi @user-a5d1a8! I think it would be helpful if you could elaborate on your set up a bit. How are you connecting the Neon Companion device to your sim computers if they don't have WLAN?

user-9a1aed 10 May, 2025, 06:20:31

Hi Team, for the visualization method in Cloud, is there a way I can create heatmaps for each image stimulus? In my study, participants will view various images on the computer screen. When I use the Marker Mapper, the computer screen is a surface, thus, the visualization is the cumulative heatmap across all images. I have used plEvent to label the onset of each image stimulus. Thanks!

nmt 10 May, 2025, 15:04:08

Hi @user-9a1aed. In this case, you could create a new enrichment for each stimulus. Use specific events corresponding to the start and end of each image to define each enrichment.

user-9a1aed 10 May, 2025, 08:32:01

Sorry, one additional question about the plEvent label. When I processed the data, I found that some fixations span two events (starting during "recording.begin" and ending after "study1StartplEvent" begins). Why is this the case? And in this case, which event should I assign the fixation to? Thanks!

Fixation 24: fixation period 1746608329.95 - 1746608330.34 seconds, assigned to event "recording.begin" (started at 1746608295.25 seconds)

Fixation 25: fixation period 1746608330.38 - 1746608330.52 seconds, assigned to event "study1StartplEvent" (started at 1746608330.05 seconds)

nmt 10 May, 2025, 15:11:35

It's not really possible to assign a fixation in this way. A fixation can occur at any given moment, depending on when and where the participant is looking. This can happen regardless of any events assigned to the recording. My question is, why is this an issue for your study?

user-d3192b 12 May, 2025, 07:57:51

Hi. We are using the Neon device. My colleague recently used my device and I used his device for some lab recordings. I would now like to transfer the data from his device to my cloud. How can I do this?

user-d407c1 12 May, 2025, 08:18:35

Hi @user-d3192b 👋 You can choose the workspace where your recording will be uploaded directly in the Companion App — just set it before starting the recording.

Once recorded, there are a couple of ways to access the data:

  • Local export: You can transfer recordings via USB. Here’s a step-by-step guide:
    Transfer Recordings via USB

  • Cloud access: You can also be invited as a collaborator to the workspace. The workspace owner just needs to click the toggle next to the workspace name, select “Invite Collaborators,” and add your email and role.

Just a heads-up — transferring recordings across workspaces isn’t currently supported. There’s a feature request open here, feel free to give it an upvote!

Let me know if you need help with any of the steps.

user-89e2c0 12 May, 2025, 15:04:17

Hello! I am currently using Neon. I captured the data on the Companion device and exported the video and files to my computer. Is there any way to process/export the .raw data without downloading Neon Player on my computer?

nmt 12 May, 2025, 15:12:17

Hi @user-89e2c0! Have you looked into using Pupil Cloud ? This is the easiest way to work with recordings as they can be automatically backed up, processed and exported.

user-6c6c81 13 May, 2025, 07:25:25

Hello @user-f43a29 , what do the axes (x, y, z) refer to in the eye tracker camera (not the IMU, not the scene camera)?

user-d407c1 13 May, 2025, 07:58:52

Hi @user-6c6c81 👋 Could you clarify which values you’re referring to and in what context you’re seeing them?

The (x, y, z) values could correspond to different things — such as eyeball centers, optical axis, gyroscope readings, or specific coordinate systems.

You might find these references helpful:

Feel free to share more detail and we’ll help narrow it down!

user-f4b730 13 May, 2025, 07:27:36

Hello, when automatically exporting blinks, gaze positions, and eye state, using neon player, I see that timestamps within the blink time have reasonable values for gaze positions and eye state (i.e., gaze position and eye state timestamps during blinks are saved). May I ask if gaze position and eye state are automatically interpolated by the NeonPlayer between the start and end of a blink? (not sure I expressed myself clearly)

user-d407c1 13 May, 2025, 08:09:02

Hi @user-f4b730 👋 ! NeonNet would always try to infer gaze, eye state, and pupil size from each eye image — even during blinks, squints, or when the glasses aren't worn.

So these values aren’t interpolated; they’re estimated frame by frame. This means you may want to filter out those parameters for your analysis. We do not filter them, in case you have a different definition of blinks than the one from our blink detector.

If you haven’t already, I’d recommend updating to the latest version of the Companion App. It now includes real-time fixations and eyelid aperture data. With that, not only are blink detections more accurate, but you can also decide whether to include full or partial blinks, depending on your criteria.

Let me know if you’d like help accessing or interpreting those parameters!

user-6c6c81 13 May, 2025, 08:05:22

@user-d407c1 It is with respect to the optical axis

user-d407c1 13 May, 2025, 08:10:38

In that case this graph can help you as well. Do you have any concrete question?

Chat image

user-6c6c81 13 May, 2025, 08:05:29

And gaze directions.

user-3bdb49 13 May, 2025, 08:13:52

@nmt the driving simulator is a huge installation with about 25 computers (210° view angle projection, simulated mirrors, driving dynamics configured as a closed system - no internet, all Win10 SP2), a real mockup (without motor, of course). All the computers are connected via LAN, static IPs, and there is a transparent WLAN connection (DHCP address starting at 10.1.2.210 -> .240). the companion device connects to the WLAN correctly. but if I point from a sim-computer to the companion's IP with :8080 there is a blank screen.

user-d407c1 13 May, 2025, 08:27:17

Hi @user-3bdb49 👋 ! I hope you don't mind if I step in. May I ask, does ping to the IP works from your sim computer?

user-3bdb49 13 May, 2025, 08:23:47

Is it possible to send simple messages via UDP/TCP to the Companion device (start of simulation, stop of simulation, passing a POI, etc.) to be integrated as "markers"? Using the Companion's "Marker" button is not precise enough in timing; we need the markers to segment the recording. The Companion's IP is 10.1.2.215. Which protocol/port should I use to send these messages?

user-3bdb49 13 May, 2025, 08:27:53

@user-d407c1 Hi Miguel, of course, you're welcome to! Ping is working.

user-d407c1 13 May, 2025, 08:31:55

Sim Computer Network
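
One way to achieve what @user-3bdb49 asks above, without raw UDP/TCP, is to send timestamped events from the sim computer with the Python real-time API. A minimal sketch, using the IP mentioned in the conversation and made-up event names:

from pupil_labs.realtime_api.simple import Device
# Connect directly to the Companion device by IP; 8080 is the default port of the real-time API.
device = Device(address="10.1.2.215", port="8080")
device.send_event("simulation_start")  # timestamped on arrival by the Companion device
device.send_event("poi_passed")        # e.g. when the driver passes a point of interest
device.close()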

user-652793 13 May, 2025, 21:08:04

Hi everyone, I'm Dario, and I'm new to Pupil Labs Neon. I'm currently working on analyzing pupil data, but I'm a bit confused about how the different .csv files relate to each other.

Specifically, I noticed that the fixation column in gaze.csv contains many of the same values as in fixations.csv, but with some gaps—i.e., some rows in gaze.csv have missing or blank fixation values.

Is this due to the difference in sampling rates between the gaze data and fixation data? Does gaze.csv simply align with fixations by interpolating or marking segments based on timestamps, resulting in some rows without fixation IDs?

Any clarification would be greatly appreciated!

Chat image Chat image

user-cdcab0 13 May, 2025, 22:10:27

Hi, @user-652793 - the fixation data is computed from the gaze data. It's not technically accurate to think about fixations as having a sampling rate - they're more like events whose occurrence comes from the gaze data.

Every fixation consists of a set of gaze points, but not every gaze point will belong to a fixation. That is why you see repeated fixation IDs in the gaze data as well as the gaps.
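
A small sketch of how that relationship shows up in the exports, assuming the standard Cloud timeseries column names (verify them against your own files):

import pandas as pd
gaze = pd.read_csv("gaze.csv")
fixations = pd.read_csv("fixations.csv")
# Rows with an empty "fixation id" are gaze samples that belong to no fixation (saccades, blinks, etc.);
# rows sharing an id are the samples that make up that fixation.
n_outside = gaze["fixation id"].isna().sum()
samples_per_fixation = gaze.dropna(subset=["fixation id"]).groupby("fixation id").size()
print(n_outside, len(fixations), samples_per_fixation.describe())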

user-87d763 13 May, 2025, 22:09:14

Hello! We are running the Matlab/Python integration on macOS. In order to test synchronization, we created a sanity check in which both Neon devices record a TV screen with a 60 Hz refresh rate. The TV shows a clock counting up by milliseconds. We hypothesized that if our two devices were properly synchronized, the simultaneous events recorded on both devices (using the realtime API corrections) would point to the exact same image on the TV screen on both devices. As expected, we found that while the recorded events on both phones match the TV image (e.g., when navigating to event 1 on device 1 and event 1 on device 2, the image on the TV matches within ~20 ms), they do NOT match in the output events.csv file in the downloaded timeseries data (e.g., event 1 on device 1 is >500 ms different than event 1 on device 2).

As such, we dug deeper into the Matlab code and found what we believed to be an error: when we manually calculated the mean of the time_offset_ms.measurements, this gave a different result than the mean calculated by the code provided (time_offset_ms.mean). Thus, when calculating the corrected_time_ns_in_neon_clock, we subtracted the current time from our manually calculated mean. With our correction, the output changed. Now, we found that the event timestamps were matched in the output files downloaded from Pupil Labs, but the events no longer corresponded to the same image on the TV screen. Going further, we looked into the Device.m Matlab file and discovered that the estimate of time offset measurements seemed to be taking the mean of the roundtrip duration instead of the time offset (Line 247). When we edited the Device.m code to calculate the time offset instead, we came back to the original problem where the event timestamps are aligned in our "ground truth" and the events correspond to the same image on the TV screen, but the corrected timestamps in the events.csv files do not match.

user-87d763 13 May, 2025, 22:09:19

Can you please help us better understand this discrepancy and what we can do to best ensure synchronization? Thank you in advance for your help, and please let me know if I can provide any additional clarification.

user-f43a29 14 May, 2025, 07:36:19

Hi @user-87d763 , sure, we can of course help you work through this.

One thing to keep in mind is that the clocks of your two Neon devices are not expected to be synchronized in general, unless you have synchronized them already to an NTP source. The process is specific on modern Android, so may I ask how you did that?

If both devices are not synchronized from the start, this is in principle fine, but then it is expected that Event 1 on Device 1 will not match with Event 1 on Device 2. It would then require also calculating the clock offset between Device 1 and Device 2. One way to synchronize both devices "manually" or post-hoc is to play a simple sound waveform and find the onset of this sound in the audio recording from each device. Cross-correlation can help automate this process. You can then take this as the common "zero" point in both Device's time axes.

Note that you would also want to periodically measure these offsets to account for clock drifts.

To clarify about the MATLAB code, it is not the MATLAB integration that is doing the calculation. Rather, the MATLAB code uses the output of the Time Echo Implementation from our Python Real-time API.

Also, that part of the MATLAB code does not exactly "use" the estimates. Rather, it provides them to you, already including the mean time offset, so that you can easily send offset-corrected Events, as described in our Documentation and as shown in this example.

However, I do see the typo you mention at line 242. It additionally provides all of the "raw" time offset samples, in case you want to calculate the mean yourself, but accidentally provided the roundtrip estimates there. It has now been corrected, so you can update the integration. Fortunately, this "convenience" feature had no impact on the process above, nor does it affect the working & tested example code linked above.
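
For reference, the offset-corrected event pattern discussed here looks roughly like the following in Python (a sketch based on the documented example; worth double-checking the sign convention against the docs for your API version):

import time
from pupil_labs.realtime_api.simple import discover_one_device
device = discover_one_device()
estimate = device.estimate_time_offset()  # runs the Time Echo protocol against the Companion device
clock_offset_ns = round(estimate.time_offset_ms.mean * 1_000_000)
# Express "now" on this computer in the Companion device's clock, so events from several
# devices can all be anchored to the one computer's time base.
device.send_event("sync_check", event_timestamp_unix_ns=time.time_ns() - clock_offset_ns)
device.close()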

user-468437 14 May, 2025, 11:45:57

Hi! I am hoping you could help me solve the issue of the screen not showing the footage in pupil cloud. Today I had it in multiple recordings: sometimes the screen is completely grey, sometimes partly. What could be the reason for this and can it be fixed? Thank you.

Chat image

user-f43a29 14 May, 2025, 11:53:14

Hi @user-468437 , could you open a Support Ticket in the 🛟 troubleshooting channel?

user-9a1aed 14 May, 2025, 12:42:17

Hi Team, if we put AprilTags around the corners of the laptop's screen, we are worried that they might interfere with participants' central vision. I wonder if we are able to print the AprilTags out on paper/board and stick them on the back of the laptop so that they do not appear on the screen. Has anyone tried that to avoid distraction on the computer screen? Thx!

user-d407c1 14 May, 2025, 13:01:23

Hi @user-9a1aed 👋 ! Yes, you can print them and position them on the edge if you wish. For convenience, you can find the markers ready to print here.

user-cdcab0 14 May, 2025, 13:13:42

@user-9a1aed - to add to what @user-d407c1 says, using off-screen AprilTag markers with our PsychoPy plugin requires a little extra effort.

The PsychoPy plugin needs to know the position of the markers in relation to the display. When you use on-screen markers provided by the plugin, then it already knows these positions. With off-screen markers though, it has no way of knowing where the markers are in relation to the screen.

So to make that work, you need to provide the positions of the markers to the plugin. Probably the easiest way to do this is to use the individual AprilTag marker component in PsychoPy Builder (rather than the marker frame component). For each off-screen marker you have, add an on-screen AprilTag marker with the same ID, set its size so that its displayed size matches the size of your physical marker, and then change the position of the marker so that it's in the same "place" (off-screen) as your physical marker.

It's a bit unorthodox, but if you size and position the virtual markers correctly, it will work. Having said that, I think it may be worth a small pilot study to actually determine whether the on-screen markers actually cause an effect versus physical, off-screen markers

user-652793 14 May, 2025, 17:51:32

@user-cdcab0 That helps, thank you so much !

user-658b57 14 May, 2025, 21:25:15

Hey All, Our lab is looking to purchase a Neon eye tracker and we need a quote to send in to finance it. Can someone please help with the same?

user-480f4c 14 May, 2025, 21:31:56

Hi @user-658b57! Please reach out to [email removed] in this regard or simply select the items of interest from our shop and fill out the quote request form: https://pupil-labs.com/products/neon/shop

user-658b57 14 May, 2025, 22:54:49

Hi Nadia,

Thank you for the quick response. We are planning to collect data on how people view images on a screen; is the "Bundle - Ready set go!" the most accurate one for this task in your opinion? Although it is not intended, there will be some head movement. We are upgrading from the Pupil Core, where we used to have major issues with slippage due to head movement. Looking for your opinion on the best device for this use case.

user-38bfe7 15 May, 2025, 00:17:08

Our Android phone upgraded to Android 15 - we are using the OnePlus 10 phone, which from what I have read requires Android 13 - is there a way to roll back the phone to Android 13 for the OnePlus 10?

user-38bfe7 15 May, 2025, 00:21:18

one plus 10 pro that is

user-38bfe7 15 May, 2025, 00:23:20

running neon companion app 2.9.4-prod

user-e0026b 15 May, 2025, 08:43:47

Hi, I want to use the PupilLabs Neon to investigate about the Quiet Eye in Golf. How are fixations defined in the Pupil Labs Neon system in terms of minimum duration and angular deviation thresholds (in degrees), and are these parameters configurable by the user?

user-f43a29 15 May, 2025, 12:36:05

Hi @user-e0026b , you can find a description of the fixation detection algorithm in the associated whitepaper. The parameters have been optimized for a number of use cases, including sports, and account for the effects of head movements, which you can find detailed in our publication.

On Pupil Cloud and in the Neon Companion app, the parameters are not configurable. It might be worth it to first see how the defaults work for you, but you can modify the parameters in the pl-rec-export tool.

user-f43a29 15 May, 2025, 12:53:53

@user-a0189b And, if you are open to hanging static AprilTags in your environment, then you could use the Head Pose Tracker plugin of Neon Player. To intersect the gaze ray with the robot arm would again require knowing the pose of the robot arm over time.

user-a0189b 19 May, 2025, 21:35:19

I tried to use the AprilTags and ran the enrichment, but it was hard to click the QR code during the enrichment. How can I do this more easily?

user-87d763 15 May, 2025, 16:53:03

Hello @user-f43a29 , thank you so much for your response. The two Neon devices and the computer running the code are all synced to the same NTP server (time.android.com).

In terms of calculating the clock offset between device 1 and device 2, I am still struggling to wrap my head around why the events are not matched. Both devices have been corrected to account for the offset to the source computer. shouldn't correcting this offset in theory make the timestamps on both phones identical?

user-f43a29 15 May, 2025, 16:53:51

Hi @user-87d763 , may I first ask how you did the NTP sync process, as in what steps did you follow?

user-f43a29 15 May, 2025, 16:56:20

With respect to the Events being "not matched", they actually are in principle matched and are "happening at the same time", since you accounted for the offset to the source computer and have validated that they correspond to the same physical process in the external world. I ask about the NTP sync steps, because if there was any discrepancy in the NTP process, then the clocks of both Devices will not be in sync and this would propagate down to the Events.

user-d90133 15 May, 2025, 17:42:04

Hello, I’m testing out the Neon for the first time to pilot a new study. I’m noticing that during recordings where I’m walking from one point to another while wearing it, on Player some of the fixations show the green dot drifting away from the yellow fixation circle, when the green circle should probably stay within the yellow. Is this due to potential lag, headset slippage, and/or anything else? Is this cause for concern? Any tips would be appreciated.

user-cdcab0 16 May, 2025, 01:27:32

The fixation circle "glues" itself to the features within the image by calculating the optic flow for the duration of the fixation. So the fixation visualization has a sort of temporal offset that doesn't apply to the gaze point which is instantaneous

user-87d763 15 May, 2025, 21:17:37

@user-f43a29 Every time we make a recording, we reset the date and time on the Android phone and then make sure that "Set time automatically" is toggled on, so that it gets the lowest possible drift from the server. Based on my research, with this setting on, the Android phone automatically connects to the NTP server time.android.com.

user-f43a29 16 May, 2025, 07:33:14

Hi @user-87d763 , could you instead try the following?

On both Android phones:
- First, restart both phones before initiating a sync.
- Next, turn off Set time automatically, then set the time one hour wrong.
- Wait 5 seconds, then re-enable Set time automatically.

On the Mac computer:
- Open System Settings > General > Date & Time
- Click Set next to Source
- Reset the time server to MacOS's default
- Then, disable Set time and date automatically
- Click Set next to Date and time
- Set the time to be one hour incorrect
- Wait 5 seconds
- Re-enable Set time and date automatically
- Wait 5 seconds
- Open a terminal and run sudo sntp -sS time.apple.com

That sntp command is necessary on modern MacOS at least to ensure it has fully synced correctly.

After doing that, could you run a brief test and see how the Events look on both Android devices?

Also, please note that of the three major OSes (MacOS, Windows, and Linux), MacOS has the most trouble with keeping accurate time. At least on modern Macs, its clock can drift more quickly and regularly exhibits sudden step changes in time, on the order of 50 or 80 ms. If you are looking for more stable timing, then you could consider Windows. With Linux (e.g., Ubuntu), testing shows that you will have the best timing results.

In all cases, it would still be advised to periodically measure & update the clock offsets. Some references recommend doing this at least once every 5 seconds.

user-60878d 15 May, 2025, 23:46:09

Hi! I'm curious if there is a noticeable comfort difference between Just Act Natural vs Is this thing on vs Ready set go; similarly I'm curious if there is comfort difference between All fun and Games vs Crawl walk run because I got a feedback that All fun and games was not comfortable to wear for a long period of time and I'm curious if "lightweight" frames like Is this Thing on, Ready set go, Crawl Walk Run would be more comfortable to wear for ~ 1 hour or so. If any have experience with these and have thoughts on wearer comfort please let me know!

user-d407c1 16 May, 2025, 07:42:01

Hi @user-60878d 👋 ! Comfort can be quite subjective — we do our best to accommodate a wide range of facial physiognomies through different frame options, but how the glasses feel can vary from person to person. Some of the lighter-weight options may feel more comfortable, especially for people who aren’t used to wearing glasses, but ultimately it depends on the individual.

We’d really appreciate your feedback! If you could share what specifically made the glasses uncomfortable for your subject, please drop us an email at info@pupil-labs.com and we will forward it to the relevant team.

We're always looking to improve and make things more comfortable for everyone.

user-1391e7 16 May, 2025, 10:30:20

Hello everyone! I have a question regarding recordings in the cloud or recordings in general. When we record outdoors, it isn't always possible not to have short outages of the sensor. Participants jostle the device unintentionally and the USB-C connection briefly cuts out. As far as I know, this then makes it impossible for the cloud service to process the recording.

user-1391e7 16 May, 2025, 10:31:21

Can I do anything after the fact, to make analysis possible? If not for the entire recording, could I split the recording somehow, so the outage isn't part of it?

user-d407c1 16 May, 2025, 11:13:22

Hi @user-1391e7 ! While it's not typical for the USB connection to come loose unless a significant force is applied, both the Companion App and Cloud are designed to handle this scenario.

If the cable gets disconnected during a recording, the signal will drop — but once reconnected, the glasses will resume normal operation without needing to restart anything.

In Pupil Cloud, you'll notice gray frames in the timeline during the disconnection period. Here's a screen recording where I disconnect and reconnect the glasses multiple times, so you can see how it's handled.

user-1391e7 19 May, 2025, 05:57:38

Thank you for the reply! does this also work for recordings done in early december 2024? did you make changes in the recordings themselves, or should that work in any case?

user-60878d 16 May, 2025, 19:56:27

Hi! I noticed that the camera module got hot after 3 minutes of recording -- is that expected or is the module malfunctioning?

user-480f4c 16 May, 2025, 20:06:53

Hi @user-60878d! The module itself might warm up but should not overheat. However, this should not affect the Neon App functionality. Did you get any errors?

Also, which frame are you using? Does it have a heatsink? Our current frames for sale come with an aluminum heatsink.

The heatsink helps dissipate heat from the module to the front of the frames, ensuring the module gets warm but not hot.

See also this relevant message: https://discord.com/channels/285728493612957698/1047111711230009405/1265679380471087246

user-4ef9ea 19 May, 2025, 07:12:05

Hello, my lab recently purchased the Neon eye tracking glasses and even as a freshman undergrad in EE, I am disappointed by the mechanical design of the glasses. If you’d like, I can help design a brand new frame for the glasses that you all can ship as the standard or sell separately.

user-d407c1 19 May, 2025, 07:26:31

Hi @user-4ef9ea 👋 ! Thanks for the feedback — we really appreciate you taking the time to share it.

We designed the frames to accommodate a wide range of facial shapes, securely house the sensor module, and use materials that balance durability with affordability. That said, we're always looking to improve.

We’d love to hear more about what didn’t work well for you. Feel free to email us at info@pupil-labs.com with your suggestions or complaints — we're always open to improvement.

user-4ef9ea 19 May, 2025, 07:45:46

That’s fair. I will email some suggestions. Otherwise, the product is great and really shines with its software.

user-d407c1 19 May, 2025, 07:56:23

Thanks, looking forward to it.

FYI, we have also open-sourced the module, the nest, and some frame geometries on GitHub, so that anyone can design their own. 😉

user-d407c1 19 May, 2025, 08:58:03

It seems I copy-pasted the wrong link for the Neon geometries; my colleague made me aware of it. I have modified the message, but it is also here: https://github.com/pupil-labs/neon-geometry

user-a0189b 19 May, 2025, 22:04:14

I would like to know how big the QR code needs to be, and how close the eye tracker needs to be to the QR code.

user-f43a29 19 May, 2025, 22:53:16

Hi @user-a0189b , a good rule of thumb is that they should be visible to you in the scene camera image. It sounds like perhaps they were too small in this case. If you can share a screenshot, we can provide feedback.

user-a0189b 19 May, 2025, 23:19:19

Chat image

user-f43a29 20 May, 2025, 08:07:53

Hi @user-a0189b , those markers look too small. You can print them out larger and paste them to the edges of your monitor, in case you are looking to save screen real estate.

Also, your scene camera looks potentially blurry. Could you try carefully wiping the scene camera lens with the provided lens cloth and see if that improves it? Let us know, if not.

After both of those steps, you can then submit some images again for review, if you'd like.

user-a0189b 19 May, 2025, 23:20:00

Chat image

user-d407c1 20 May, 2025, 08:31:35

Just a note — the snippet is fully standalone, so if that’s exactly what you need and you have uv installed, you can run it like this:

uv run -s name_of_the_snippet.py path_to_timeseries.zip

That’ll give you the count and average duration of fixations in the specified intervals.

Also it might look a bit more complex than it actually is — most of that is just boilerplate to make it run standalone. The core logic starts around the events_sorted line.

user-eee2af 20 May, 2025, 14:49:05

Hello everyone, I have a quick question. What is the usual delivery time to the Netherlands? I'm doing my thesis using the Ready set go glasses and want to get started as soon as possible.

user-f43a29 20 May, 2025, 16:33:58

Hi @user-eee2af , we also received your email and will follow-up there.

user-bed573 21 May, 2025, 00:12:38

Hi, can the Samsung S25 Ultra be used as a companion device?

nmt 22 May, 2025, 14:02:38

Hi @user-bed573! Thanks for your question. We're going to add this to our list of supported devices as 'experimental'. We believe it should work with Neon, but can't make complete guarantees. If you have one already, it's definitely worth trying. For reference, our supported devices list can be found in this section of the docs.

user-a0189b 21 May, 2025, 08:04:33

I would like to know: if the scene is changing due to head movement without eye movement, will the gaze data stay the same?

user-f43a29 21 May, 2025, 09:54:59

Hi @user-a0189b , if you were to keep your eyes in a specific orientation and turn your head, such that your head turns and your eyes stay fixed in the head (in other words, not eliciting VOR), then yes, gaze x/y coordinates will essentially remain at the same point. This is because gaze data are specified in the coordinate system of the scene camera.

Do you need the gaze data to rather be in a world-referenced coordinate system?

user-ab3403 22 May, 2025, 10:48:49

I used the surface tracker feature and compared the data with the fixations data. In some cases, the fixations on and off the surface combined are fewer than the total number of fixations detected. I would like to know the reasons for that. Is it possible to figure out which fixations were omitted when using the surface tracker?

user-cdcab0 22 May, 2025, 13:26:35

Hi, @user-ab3403 - are you using Neon Player? The number of fixations should be the same in both, but perhaps there are some edge cases. Regardless, the fixations are assigned an ID, so finding missing fixations is simply a matter of looking for missing IDs
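
A quick sketch of that ID check, with hypothetical file and column names that should be adjusted to match the actual Neon Player export:

import pandas as pd
all_fix = pd.read_csv("fixations.csv")               # all detected fixations
surf_fix = pd.read_csv("fixations_on_surface.csv")   # fixations mapped by the surface tracker
missing_ids = sorted(set(all_fix["fixation id"]) - set(surf_fix["fixation id"]))
print(len(missing_ids), "fixations never appear in the surface export:", missing_ids)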

user-87d763 22 May, 2025, 13:45:37

Hello again! I had a quick question regarding Neon terminology. I am using the realtime API synchronization. Are the timestamps that are recorded and collected as part of the timeseries data (e.g., in the gaze data) reflective of the device time (the companion device) or the external clock time? Furthermore, in the world_timestamps.csv output, again can you please clarify whether these timestamps are reflective of the device time or the external clock time? Overall, I guess my question is whether the outputted timestamps are device specific or if they reflect a system time. Thank you so much in advance for your help!

user-f43a29 22 May, 2025, 21:20:25

Hi @user-87d763 , when you say "external clock time", do you mean the clock of the receiving computer running Python/MATLAB or some other clock?

To clarify the other part, the timestamps from Neon's real-time API and in the data files, including CSV, for all sensors, are from Neon's clock. Those timestamps all come from the same clock source. In other words, the time axis of the Neon device. They are also technically a system time. They are in UTC format and count from the Unix epoch.

user-cf5a97 22 May, 2025, 13:51:31

Hi, in January I bought "Better safe than sorry - Neon eye tracking bundle". Unfortunately, the project got discontinued and I would like to sell the glasses with the phone. The glasses have barely been used. I would like to sell it for 5600 Euro. Contact me if you are interested [email removed]

user-d90133 22 May, 2025, 18:37:48

The colors of the circles are

user-87d763 22 May, 2025, 21:29:42

Hi @user-f43a29 - yes, by external clock time I was referring to the receiving computer that is running the script.

As for the timestamps, thank you for clarifying. I was curious as how they might be meaningful and relevant across devices. For example, is a timestamp of 1747875211899127575 in one clock equal to the same timestamp in a separate device? Or, since they are two completely separate devices, we cannot equalize the timestamps without taking into account the offset correction provided by the real-time API code? I hope this line of reasoning makes sense! Thank you again for your help.

user-f43a29 23 May, 2025, 21:46:21

Hi @user-87d763 , since it is not possible to perfectly synchronize any two clocks, such that they tell time exactly the same, one usually defines an acceptable "error" threshold.

With the NTP sync method that was posted above, you can typically expect an estimated difference of 5 to 20 ms between the two clocks on average, at least for ~20 minutes, depending on your OS.

When taking clock offsets into account, it depends on the precision of said offset. There’s always some jitter, but on modern computers and the Companion devices, it’s often negligible for many use cases, including high-precision neuroscience. If it is of concern, you can estimate it for your two devices by periodically collecting their relative offsets over a time window, say 5 minutes. If the error is acceptable, then yes, you can consider the timestamps effectively the same, provided you’ve either recently synced them both via NTP and/or you’ve used the clock offset to convert them to a common time axis.

To be clear, many applications can even accept more error. It all depends on the time scale of the mechanisms in question.

user-f0ea5b 22 May, 2025, 22:14:59

Hello, is there a way to look at the pov camera and the synced eye cameras to check if there are actually blinks? I am doing smooth pursuit tracking during a sports task and I am getting blinks during bat ball contact. I would like to manually check if blinks are being registered because participants are blinking or if it’s due to occlusion of the pupil because they are looking downward. Thanks!

nmt 23 May, 2025, 13:06:19

Hi @user-f0ea5b! You can use Neon Player to do this - it has an eye video overlay plugin.

user-f4b730 23 May, 2025, 06:45:34

Hello, a question about offset calibration. Is the correction applied in pixels or in angles? The distance between the wearer and the target affects the perceived offset (and its correction). However, since Neon does not know the distance between the wearer and the target, I assume the offset is based on pixels, right?

Moreover, is there a distance you recommend to check if calibration is needed?

Finally, I have seen this page: https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433 Is there something similar in NeonPlayer? Thanks

user-cdcab0 23 May, 2025, 07:03:53

You're correct - it's in pixels. In Neon Player, you can adjust the manual offset by going to the Gaze Data settings

Chat image

user-cdcab0 23 May, 2025, 07:04:24

Note that the unit there is technically "norm" units, but that's just multiplied by the resolution. Effectively, it's still pixels
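
As a small illustration of that conversion (the resolution below is the Neon scene camera's 1600 x 1200; the offset values are made up):

# Converting Neon Player's "norm"-unit offset to scene-camera pixels
scene_width, scene_height = 1600, 1200      # Neon scene camera resolution
offset_norm = (0.01, -0.02)                 # example values entered in Neon Player
offset_px = (offset_norm[0] * scene_width, offset_norm[1] * scene_height)
print(offset_px)                            # (16.0, -24.0)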

user-1391e7 23 May, 2025, 07:10:18

the new Tobii glasses look like half a Pupil Neon at twice the price and with most functionality locked away

user-9a1aed 23 May, 2025, 07:39:09

Hi Team, for computer-based tasks, do you have a recommended viewing distance from a laptop? I could not find any article about viewing distance. I understand that most Neon users may not conduct computer-based tasks. I appreciate any help/information!

user-f43a29 23 May, 2025, 22:07:37

Hi @user-9a1aed and @user-1391e7 , if it helps, Neon is designed to be used at any distance, both indoors & outdoors. It works on different principles from remote, desktop-mounted eyetrackers. See this message for more details: https://discord.com/channels/285728493612957698/1047111711230009405/1371897435781337200

Gaze offset correction is also independent of viewing distance. The offset correction is a wearer specific parameter. It is something that, if it is needed, you essentially set one time and it is applicable at all distances, rather than being something you set differently for different viewing distances.

When it comes to screen-based tasks and/or using AprilTags, the following tips can be kept in mind:

  • If the AprilTags are visible to you in the scene camera preview, then they can usually be detected well by Marker Mapper and similar tools. This includes having them far away but printed large enough.
  • With Neon, viewing distance can simply be something that is comfortable for the participant, provided your experiment does not require an explicit viewing distance for all participants.

user-1391e7 23 May, 2025, 07:56:02

I'm just a dev as well, so take this with a grain of salt :). afaik, the model was trained on larger viewing distances, but you can perform an offset correction. viewing distance from a laptop, wouldn't you always be placed ~60-100cm away anyways? just by nature of the screen and the resolution and what you do to interact with it?

user-9a1aed 23 May, 2025, 08:03:38

Thanks a lot! This is my first time using a laptop to display the stimuli, since we might want to carry the device around. I wanted to use the smallest viewing distance so that the visual angle could be larger, because I want to minimize the AprilTags' size. For desktops, it should be around 60-100 cm... but I was thinking 30 cm for laptops. I feel like people sit closer to a laptop's screen

user-aa6efa 23 May, 2025, 09:05:16

Hi! I have a question. I would like to get the x and y coordinates of the gaze, in pixels of the screen that is in front of the person during the experiment. They will perform various tasks on the computer while looking at the screen. I'd like to know if it is possible to get the data from Neon and transform it into pixel coordinates in the plane of the screen in front of the user, all of that at 200Hz? I tried to use the demo available on GitHub with the virtual AprilTags, but my supervisor told me that obstructing the screen during the experiment is an issue. I also had issues with edge detection for finding the screen, because the script didn't detect the screen in every frame, so the data acquisition frequency dropped significantly. Do you have an idea of what the best solution could be for this problem? Thanks a lot!

user-1391e7 23 May, 2025, 09:27:45

you could attach a frame of printed out tags to the screen. that way, the screen is not obstructed, and the tags are still in view of the video

user-9a1aed 23 May, 2025, 09:39:02

Hi Oscar, if you search for the keywords "off-screen AprilTag markers," you can see Dom's reply about the printed-out method. I had a similar issue

user-1391e7 23 May, 2025, 09:25:51

I'm having issues finding the neon device via local network. I am almost 100% sure it is due to a firewall restriction. is there a range of UDP ports I need to have open, on top of 5353, to allow the neon device to be found?

nmt 23 May, 2025, 13:14:18

Hi @user-1391e7! Could you describe the network you are using? Is it an institutional network?

user-535f1a 26 May, 2025, 06:01:32

Hi Pupil Labs team, I am running a study using Pupil Labs Neon where participants are free to explore an exhibition space viewing multiple objects. What is the best approach to create the 3D model using April Tags? I am assuming I need to create multiple RIMs but not sure how many would be sufficient as the exhibition space contains multiple areas, and each participant is free to decide their own route. Many thanks!

user-f43a29 26 May, 2025, 07:18:18

Hi @user-535f1a , it sounds like you aim to do a combination of these two guides?

That should work and then, technically, you can merge all the "tag aligned" recordings into one 3D model of that exhibition space, afterwards.

The process for taking the April Tag recording for Tag Aligner remains the same -> it just needs to be a recording where the April Tag was hanging in the region that was scanned. It does not need to be a recording where a participant did the task, nor does it need to be part of the Scanning Recording for Reference Image Mapper, although that can save a bit of time. It also does not need to be long, but it will help to have some movement during the "Tag Recording".

user-3c26e4 26 May, 2025, 06:19:15

Hi, I have a cloud registration for the Invisible. Can I register in the same cloud with Neon, or if this cannot be done, can I combine the two separate clouds?

user-f43a29 26 May, 2025, 07:10:47

Hi @user-3c26e4 , yes, you can use as many Pupil Invisible and Neon eyetrackers under the same Pupil Cloud account as you like. You can even mix Pupil Invisible and Neon recordings in the same Projects and Enrichments.

user-535f1a 26 May, 2025, 08:15:30

Thanks @user-f43a29 for your prompt reply and info, very helpful, will look into the guides!

user-f43a29 26 May, 2025, 08:18:50

You are welcome!

user-5a90f3 26 May, 2025, 08:51:23

Hello, question about the model of the mobile device. For the OnePlus brand, does it only work with these models: OnePlus 8 and 8T (Android 11), and OnePlus 10 Pro (Android 12 and 13)? Is it okay with OnePlus' latest phones?

user-f43a29 26 May, 2025, 09:40:13

Hi @user-5a90f3 , if you want to use a OnePlus phone, then only those OnePlus devices with those Android versions, as specified in the Documentation, are supported. Using newer OnePlus devices is not supported with Neon.

user-5a90f3 26 May, 2025, 11:06:42

Thanks @user-f43a29 for your prompt reply!

user-9a1aed 26 May, 2025, 15:49:28

Dear Team, the USB-C port recommended by your company is not available in my region. May I check if you have an idea if this one works for data transfer and power? Thanks a lot in advance!

Anker 553 PowerExpand 8-in-1 USB-C PD Hub ($640): 2x USB-C ports, 2x USB-A, 2x HDMI, Ethernet, microSD/SD, 100W PD

https://www.amazon.com/Anker-PowerExpand-Adapter-Delivery-Ethernet/dp/B0874M3KW4?th=1

user-f43a29 26 May, 2025, 15:55:33

Hi @user-9a1aed , we have not tested this hub, so we cannot say one way or the other. If you have success with it, please let us know!

user-5a90f3 27 May, 2025, 03:45:13

Hello, does the new Motorola meet the requirements for mobile devices, such as the Moto Edge 60 Pro? Thanks

user-d407c1 27 May, 2025, 06:46:25

Hi @user-5a90f3 👋! Only the devices listed here have been tested and are officially supported.

user-37a2bd 27 May, 2025, 09:34:34

Hi team. I've opened a support ticket. I'm having trouble with my companion device. Could someone help me out. We are in the middle of a research project

user-f43a29 27 May, 2025, 09:57:17

Hi @user-37a2bd , thanks. We've taken a look. If it is only a security update for Android 13, then you can do that. As listed in our Documentation, Android versions 13 and 14 are supported on the Moto Edge 40 Pro.

Otherwise, it can be helpful to disable Smart Updates to prevent accidentally updating the device to an unsupported version.

user-2d96f8 27 May, 2025, 11:46:19

Dear Team, I’m using Pupil Labs’ Realtime Python API to retrieve data from Neon in my Python code. After upgrading the Realtime API from version 1.4.0 to 1.5.0, the log message "No cached eyes video frames available for matching" started appearing frequently. Why is this happening? Can you reproduce the issue on your end as well? Thank you!

user-f43a29 27 May, 2025, 17:48:54

Hi @user-2d96f8 , could you describe what your code is doing and how you are using Neon when you see that message?

user-87d763 27 May, 2025, 13:27:06

Hello. We have run into a problem in our data, and we were hoping to get some insight into possible ways to salvage it. Our experimental setup is such that we are running two Neon devices at the same time, and we need to ensure their synchronization for data analysis. Unfortunately, our problem is that we miscalculated the device offsets using the realtime API code, and thus we do not have synchronized events in our data. As such, we have been trying to brainstorm ways to post-hoc sync the data and then verify that the correction is correct.

In order to brainstorm ideas for offset correction, we recorded a “test” video that is not affected by the miscalculation bug. The test video involves both eye trackers recording a dual audio/visual event of a physical clap. With this video, here are some things we tried and our observations:

  1. We calculated audio offset using cross correlation. When we arrive at an offset value and apply this correction to the audio data, the audio streams are perfectly synchronized, but this does not extend to the video (the clap does not coincide with the audio time series). It’s possible we are not regenerating the video properly.
  2. We visually inspected the same video using the Neon Player to determine at what frame the claps occurred. This provided a different offset value than the audio cross correlation. And, it appears that in one of the recordings, even before any adjustments, the audio waveform of the clap does not match the video of the clap.
  3. We did force sync to a master clock on both companion devices immediately prior to recording. From the world_timestamp.csv files, we calculated an offset by subtracting the first timestamp on both Neon devices. Then, we found the closest matching timestamps based on this offset. This provided a third value that was different from the previous two values, and still was not sufficient in synchronizing the data (either audio or video).
user-87d763 27 May, 2025, 13:27:16

Based on these tests, we have a few remaining questions. First, how can we be confident about the timestamps and the auditory offset alignment? Is there a way we can see/prove to ourselves that the timestamps are fully synchronized both in the audio and visual mediums? Is there any chance that the audio/visual alignment is not perfect, but we can prove to ourselves that the timestamps are synchronized?

If you have any ideas/feedback on how we can move forward to salvage this data, that would be greatly appreciated. We are more than happy to share our test recordings, if that would help.

user-f43a29 27 May, 2025, 17:38:59

Hi @user-87d763 , if you could share a sample recording from a session with both devices, that could be helpful. You can share it with data@pupil-labs.com via Google Drive, if you'd like.

With respect to "regenerating the video", are you using OpenCV to open and inspect the scene camera video? Since you already have synced audio at item 1, then it might be easiest to first narrow down the issue in that case, before moving on to items 2 and 3.
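
For reference, a minimal sketch of the audio cross-correlation step described in item 1 above, assuming both audio tracks have already been extracted to mono WAV files at the same sample rate (e.g. with ffmpeg -i scene_video.mp4 -ac 1 device_a.wav; the file names here are hypothetical):

import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

rate_a, audio_a = wavfile.read("device_a.wav")
rate_b, audio_b = wavfile.read("device_b.wav")
assert rate_a == rate_b, "Resample first if the sample rates differ"

a = audio_a.astype(np.float64)
b = audio_b.astype(np.float64)

# Cross-correlate the two tracks; the peak gives the lag that best aligns them
xcorr = correlate(a, b, mode="full")
lag_samples = int(np.argmax(xcorr)) - (len(b) - 1)
offset_s = lag_samples / rate_a

# A positive lag means the shared event (e.g. the clap) occurs later in
# device_a.wav than in device_b.wav; verify the sign against the known clap
print(f"Estimated audio offset: {offset_s:.4f} s")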

user-a4e164 27 May, 2025, 17:41:59

Hi there! We're having trouble getting our brand new Neon to work with PsychoPy.

  • We managed to get the Neon connected via ethernet and recognized by the Windows 10 computer by using the recommended tp-link router (itself connected to the recommended Anker hub), so far so good.
  • We installed the pupil-labs toolbox from the "get more" component in PsychoPy; hope this was the right way to do this (we couldn't get the pip install approach to work, but we're probably doing it wrong).
  • We can get a recording to start on the phone from PsychoPy: from pupil_labs.realtime_api.simple import discover_one_device; device = discover_one_device(); device.recording_start()
  • Any attempt to send any other command (e.g. a simple "recording started" label with device.send_event) then throws a WinError 10048: only one use of each socket address is allowed ('', 9036). The traceback shows that this occurs when PsychoPy tries to run start_iohub_process.py.
  • Trying to use the embedded PsychoPy components (such as Eyetracker Record with default parameters) either does not start a recording at all, creates a recording with null duration, or throws the same error.
  • Trying to set things up using the coder example (https://docs.pupil-labs.com/neon/data-collection/psychopy/) leads to similar consequences.

Any help is appreciated! 🤗

user-f43a29 27 May, 2025, 17:55:00

Hi @user-a4e164 , the PsychoPy team no longer recommends Coder for eyetracking experiments. If you want to use code, you can run a normal Python virtual environment with an install of our Real-time API and PsychoPy, and then use the API for interacting with Neon and PsychoPy for displaying the stimuli. This lets you side-step Coder.

With respect to the issue, what version of PsychoPy have you installed?

To install the Pupil Labs plugin, you want to use the Tools > Plugins/Package Manager menu item, rather than Get more. Also, just to note, you don't typically use PsychoPy's self-managed pip to install the plugin. It will do that process for you internally. It might be worth it to re-install the plugin by:

  • First, closing all PsychoPy windows
  • Deleting the psychopy3 folder in %APPDATA%/Roaming (just copy-paste that into Windows Explorer)
  • Then, restarting PsychoPy and following the above plugin installation steps

Otherwise, may I ask, when you hit that error message, was that from the very first time that you tried to run the code or after an initial crash of PsychoPy? Or, did the code use launchHubServer together with discover_one_device?

user-a4e164 27 May, 2025, 18:16:37

Thank you, will try a clean reinstall during opening hours. In the meantime if you have a minimal working example of script available somewhere that would be very helpful.

user-f43a29 27 May, 2025, 18:50:43

Hi @user-a4e164 , if you go the route of doing it all in Python code, then you can change our Coder example to instead directly use our Real-time API. Just take note of what that means in terms of how you save the data. Also, you may find referencing this tutorial helpful.
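
In case a starting point is useful, here is a minimal, hypothetical sketch of that route: PsychoPy purely for stimulus display and the real-time API for recording control and events (no ioHub or Builder eyetracker components). The trial structure, event names, and timings are placeholders:

from psychopy import visual, core
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()           # find Neon on the local network
device.recording_start()                 # start a recording on the Companion device

win = visual.Window(fullscr=False, color="grey", units="height")
stim = visual.TextStim(win, text="+", height=0.1)

for trial in range(3):                   # placeholder trial loop
    stim.draw()
    win.flip()
    device.send_event(f"trial_{trial}_onset")   # timestamped on receipt by the device
    core.wait(2.0)

device.send_event("experiment_end")
device.recording_stop_and_save()
win.close()
device.close()
core.quit()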

user-87d763 27 May, 2025, 21:20:23

Hello @user-f43a29 ! I just sent the data files to [email removed] Please let me know if you have any issues accessing it, or if you have any other follow-up questions. To answer your previous question, we used MATLAB to identify the temporal offset, and then used ffmpeg to stitch the two videos back together with the offset correction accounted for. However, we understand that this might not be the best approach.

user-f43a29 27 May, 2025, 21:22:04

Thanks, @user-87d763 . We will take a look in the morning. ffmpeg is a good tool for such a task, but it has many command-line parameters. It may just be an option that needs to be tweaked.

user-5a90f3 28 May, 2025, 09:11:27

Hello, I forgot to tick Pupil Cloud unlimited storage when I bought a device, and I want to ask how to buy it? Thanks

user-d407c1 28 May, 2025, 09:33:44

Hi @user-5a90f3 👋 ! You can find the different options at the bottom of this page; simply add them to the cart and request a quote.

If you prefer you can simply send an email to sales@pupil-labs.com requesting it.

user-5a90f3 28 May, 2025, 10:58:18

Ok thanks! Why does my download keep failing when downloading time series data from Pupil Cloud? It gets stuck at 14.1 MB and can't complete. I changed the network and asked a friend to help download it, and it was still stuck at 14.1 MB. What is the reason for this situation and how should it be solved? Thank you

user-d407c1 28 May, 2025, 11:13:30

Hi @user-5a90f3 ! Could you please open a ticket on the 🛟 troubleshooting and share the recording ID that is giving you issues to download?

user-60878d 29 May, 2025, 02:25:31

Hi! I had a question about the gaze-offset. It says the gaze-offset is only for that recording. Is there an option to make the gaze-offset universal for that participant? I was thinking something like Meta Aria's personalized calibration process.

user-f43a29 29 May, 2025, 05:01:17

Hi @user-60878d, yes, you set it in the Wearer’s Profile in the Neon Companion app. We recommend making a separate Wearer Profile for every participant who wears the glasses.

Just to clarify, this is different from a calibration. Neon is calibration free (eg, you can take the glasses off and put them back on and you are automatically provided with accurate gaze data), but for a subset of participants, there will be some offset in its gaze estimates. Without offset correction, you can expect a gaze accuracy of ~1.8 degrees over a sample of the population, as covered in our Neon Accuracy Test Report.

In principle, you only need to do this once, if it is deemed necessary. The saved offset correction in the Wearer Profile is automatically applied to all future recordings. You can bring the participant back later, load up their Wearer Profile, and simply start recording.

Also, if you have not yet had your free 30-minute Onboarding, then just send an email to info@pupil-labs.com

user-5a90f3 29 May, 2025, 07:52:25

Hello, how long does it take to process a thirty-minute video with Reference Image Mapper, what does the processing speed depend on, and how to speed it up?

user-d407c1 29 May, 2025, 08:03:07

Hi @user-5a90f3 ! Processing time depends on a few factors — mainly the total duration of the recordings and the current queue of submitted jobs.

You can find more details in this message:
https://discord.com/channels/285728493612957698/633564003846717444/1251121902265827418

To speed things up, you can limit the analysis to specific segments where the reference image is actually visible. This is done using enrichment sections based on events — more on that here:
👉 https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/#enrichment-sections

That approach reduces processing load and can speed up the enrichment!

user-9a1aed 29 May, 2025, 12:08:42

Hello, may I know if I can collect eye movement data for a program that runs on jsPsych? I do not see any plugins for jsPsych. But I guess it is possible to treat the computer monitor as a physical object? Then I can use off-screen AprilTags to help convert the fixations to screen-based coordinates? Or is there a more efficient way to collect eye movements? Thanks a lot in advance!

user-d407c1 29 May, 2025, 12:58:46

Hi @user-9a1aed 👋 ! There’s currently no official JavaScript client for the Network API or a plugin for jsPsych.

Could you share a bit more about what you're aiming to do? Are you looking to use the data in real time, say for example, in a gaze-contingent paradigm?

If so, you can write your own JS client for the Network API, or use our ready-to-use Python libraries like this to get gaze in screen coordinates and then relay it to jsPsych via WebSockets, for instance.

If that sounds a bit too complex, you might want to explore PsychoPy together with our real-time Python client, which already handles that workflow.

Alternatively, if you're mainly looking to start/stop recordings and send events at specific points, that can be done directly in jsPsych using simple HTTP requests. Just note that for high-precision timing, you’ll need to account for the time offset between the device and your local clock.

From there, you can post-hoc analyze the data using the Marker Mapper.
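
If helpful, a minimal sketch of the "start/stop recordings and send events" route, shown here with the Python real-time client rather than raw HTTP from jsPsych; the event names are placeholders, and the offset-corrected timestamp follows the pattern in the real-time API docs:

import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
device.recording_start()

# Option A: let the Companion device timestamp the event on receipt
device.send_event("block_1_start")

# Option B: timestamp the event locally and correct for the clock offset,
# which matters for high-precision alignment with stimulus timing
estimate = device.estimate_time_offset()
clock_offset_ns = round(estimate.time_offset_ms.mean * 1_000_000)
device.send_event(
    "stimulus_onset",
    event_timestamp_unix_ns=time.time_ns() - clock_offset_ns,
)

device.recording_stop_and_save()
device.close()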

user-33134f 29 May, 2025, 13:27:30

Hi, is there any tool or code available to automatically detect the target recommended for the core pupil labs eye tracker in a neon/invisible tracker's scene camera?

In my experiment, I included a 5-point validation, since I needed to understand how accurate the tracking was for a specific recording and the normal offset correction was not possible (due to the experiment timing, multiple recordings with multiple eye trackers at the same time, different eye trackers (Neon, Invisible, other brand), etc.). Now I need to calculate the exact validation error, but doing this manually is very time consuming and our automatic approaches have not worked so far. When speaking with some of the Pupil Labs representatives who joined the project recording last September (in London), they mentioned that there might be some code/solution available to do this automatically, specifically identifying the pixel corresponding to the center of the target in the scene camera frames (which can then be used to calculate the validation error). If there is any automatic solution or code like this available, that would be extremely helpful! Thank you very much.

user-d407c1 29 May, 2025, 13:44:00

Hi @user-33134f 👋 ! Yes, you can leverage Pupil Core's open source code with its Circle Detector for this.

Copy the circle_detector.py to your directory.

Then:

from circle_detector import CircleTracker  # the copied circle_detector.py sits next to this script

tracker = CircleTracker()

...
# Here grab the frame, however you want - could be using pl-neon-recording, could be using Cloud, ...
# Undistort the frame and gaze.
...
for frame in ...:
  markers = tracker.update(img=frame.to_ndarray(format="gray"))  # note that frame.to_ndarray or so would completely depend on how you get the frame
  for marker in markers:
    # each detected marker has a
    # "marker_type" and "img_pos"
    if marker["marker_type"] == "Ref":  # it is a calibration marker, not a stop-calibration marker
      x_marker, y_marker = marker["img_pos"]

user-e33073 30 May, 2025, 14:07:15

Hi! I'm having some trouble with enrichments. In my experiment, subjects look at multiple pictures on a screen, and I need to get data on these pictures individually. I tried using Reference Image Mapper, but it doesn't see the pictures in the recordings. I considered using Marker Mapper (I used markers on the screen during the recordings) and using the temporal selection, since every picture triggers a specific event, but I can't, because the stimuli were randomized, so a picture is followed by a different one every time, which means I don't have anything to select under "to event". Is there anything I can do?

user-f43a29 30 May, 2025, 14:18:57

Hi @user-e33073 , may I ask what you mean by „it doesn’t see the pictures“ and what was your naming scheme for the Events?

user-e33073 30 May, 2025, 14:21:57

I tried using the original image as a reference, but when I run it Pupil Cloud doesn't recognize the image in any of my recordings

user-f43a29 30 May, 2025, 14:24:10

I see. So when Reference Image Mapper is asking for a Reference Image, that means a picture of the surrounding context as well. So for a monitor, the Reference Image should also include the desk, keyboard, etc. Then, you can mark the monitor as an Area of Interest. It works by building a 3D model of the environment and then mapping gaze into a stable 2D Reference Image of that environment, so an image of just the picture will not be enough for it to do its work.

It essentially needs some static stable features that are constant in the environment. And the Reference Image should resemble the scene that was scanned during the Scanning Recording.

user-e33073 30 May, 2025, 14:23:45

The Events are all called after the file titles, which are based on the type of painting it is (they're all paintings). For example, IMR1 is an impressionistic painting

user-f43a29 30 May, 2025, 14:25:01

I see. Did you name them like „IMR1.start“ and „IMR1.end“ to mark the beginning and end of stimulus presentation? If not, there are solutions, but just so I understand.

user-e33073 30 May, 2025, 14:26:19

Unfortunately, no

user-f43a29 30 May, 2025, 14:29:05

That is okay. So, when using either the Reference Image Mapper or the Marker Mapper, the routines also give you the eyetracking data in the coordinate system of the Reference Image. These are found under the Downloads tab with the green icons and the same name as the corresponding Enrichment. So, for your current situation, you can then open these data in Python or Matlab and, using the Events timestamps, filter to only the time window for a specific stimulus. You would essentially find which Event is „IMR1“, let’s say it is Event 7, and then analyze the data up to Event 8. This allows you to side-step the absence of an explicit end Event.

For future experiments, when working with randomized stimuli, you could call them „beach.picture.start“, „castle.picture.start“, etc. and „beach.picture.end“, „castle.picture.end“, and so on. Then, it does not matter what order they were presented and can be used to set Enrichment Sections.
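
A minimal sketch of that filtering step in Python, assuming the enrichment download's gaze.csv and the events.csv both contain a "timestamp [ns]" column and events.csv has a "name" column (verify the headers in your own export; "IMR1" is the example event name from above):

import pandas as pd

gaze = pd.read_csv("gaze.csv")
events = pd.read_csv("events.csv").sort_values("timestamp [ns]").reset_index(drop=True)

# Use the "IMR1" event as the start and the next event as the implicit end
idx = events.index[events["name"] == "IMR1"][0]
t_start = events.loc[idx, "timestamp [ns]"]
t_end = events.loc[idx + 1, "timestamp [ns]"]

imr1_gaze = gaze[(gaze["timestamp [ns]"] >= t_start) & (gaze["timestamp [ns]"] < t_end)]
print(f"{len(imr1_gaze)} gaze samples during IMR1")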

user-e33073 30 May, 2025, 14:32:21

Alright, thank you!!

user-97c46e 30 May, 2025, 15:22:52

Hi @user-f43a29 , I hope you are well. I am analyzing a neon recording and have a question about gaze.csv that is obtained from the pupil cloud analysis of the recording. Specifically, I'd like to clarify if there are conditions in which an entry/row/trace in gaze.csv can have a fixation id and a blink id, i.e., it is declared part of a fixation event and a blink event at the same time. If so, could you say a little bit about what those conditions might be? My basic assumption was that there are three cases 1) fixation 2) saccade 3) blink that are distinct from each other. Therefore if fixation id is populated in gaze.csv, I'm in a fixation (and blink id should not be populated). If a blink id is populated, then fixation id should not be populated. If neither blink id nor fixation id is populated, then those gaze traces correspond to a saccade. The data don't seem consistent with this expectation and I'm trying to figure out how to make sense of rows where blink id and fixation id are both populated.

user-f43a29 30 May, 2025, 15:49:51

Hi @user-97c46e , thanks, I hope the same for you.

The fixation detector and blink detector run as parallel & independent processes. Although rare, there can be moments when they both return a positive classification. Rather than impose a decision, we provide you directly with the raw outputs from these detectors, so that you have maximal flexibility with your data analysis. For example, some groups will simply filter all data during a blink, others will also exclude some data pre and post blink, whereas others might first inspect the eye videos for the edge cases you mention, to see what was it exactly.

May I ask how often you see this classification overlap? If often, what do the eye videos look like in those cases?

Also, if you’d like more details, you can read about the fixation detection algorithm in the respective white paper.

Note that the blink detector in the latest app release takes eye openness into account.

user-97c46e 30 May, 2025, 16:27:21

From the one recording i'm analyzing - I see 37745 instances of fixation and blink overlapping and 132872 instances of saccade and blink overlapping out of 931866 entries.

user-97c46e 30 May, 2025, 16:29:19

4.05% of the time - blink and fixation overlap and 14.2% of the time saccade and blink overlap, for about 18% of the time total.
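
For anyone wanting to reproduce such numbers, a minimal sketch, assuming the time-series gaze.csv contains "fixation id" and "blink id" columns (verify the exact headers in your download):

import pandas as pd

gaze = pd.read_csv("gaze.csv")

in_fixation = gaze["fixation id"].notna()
in_blink = gaze["blink id"].notna()

print(f"fixation & blink overlap:     {100 * (in_fixation & in_blink).mean():.2f}%")
print(f"non-fixation & blink overlap: {100 * (~in_fixation & in_blink).mean():.2f}%")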

user-97c46e 30 May, 2025, 16:31:10

Is it fair to infer loss of eye-tracking when blinks are registered? And if so, does that imply that the gaze coordinates, azimuth, and elevation readings during blink events are some kind of interpolation/extrapolation made by the cloud fixation detector, maybe with something like a Kalman filter running underneath?

user-97c46e 30 May, 2025, 16:37:13

To clarify, I've looked at the algorithm as described in the whitepaper, and I just want to make sure I'm tracking it correctly. Your guidance will help me decide which approach I should take of the three you mentioned above.

user-f43a29 30 May, 2025, 16:40:15

There is no interpolation or filtering of the gaze data. Rather, NeonNet analyzes each eye image and makes a gaze estimate. Similar to before, we offer you the raw output. You then have the freedom to take & reject what is appropriate for your research context. Naturally, the gaze data will be less accurate when the eye is closed, but NeonNet still can provide its best estimate.

For example, many researchers filter out the data during a blink. The type of filtering they do varies from group to group.

Thank you for sharing those numbers. I can check with the team on Monday to be sure I give you the best answer. In the meantime, what do the eye videos look like around the moments that exhibit overlap?

user-97c46e 30 May, 2025, 16:44:24

Sorry to be a bother @user-f43a29 , could you give some insight on "the gaze data will be less accurate when the eye is closed, but NeonNet still can provide its best estimate."

user-f43a29 30 May, 2025, 16:47:34

NeonNet is the deep-learning powered gaze estimation pipeline that enables Neon‘s calibration-free nature. It uses the whole eye image to come to an estimate. Hence, it takes an image and provides a gaze estimate. It is not a dark pupil detection method, if that helps make it clearer.

user-97c46e 30 May, 2025, 16:46:02

If I'm following correctly, NeonNet provides a gaze estimate for each image, but there is no smoothing over time for these raw estimates. The fixation detector in the white paper may do some smoothing on these raw estimates (with the LPF mentioned in the white paper), but that is to delineate the start/stop of each fixation. Am I interpreting this reasonably?

user-f43a29 30 May, 2025, 16:56:46

Correct. The raw gaze data, themselves, are not smoothed.

The low-pass filter is the first stage and in combination with the subsequent stages, you get fixation events and their time windows. Then, the gaze data in that window are used to calculate fixation x/y coordinates, for example

user-97c46e 30 May, 2025, 16:46:58

In that case, if i have a couple of gaze traces at the beginning of a fixation also labeled as blink, then it might just be the blurring introduced by the low pass filters (edge effects). would that be a fair way to interpret what i'm seeing?

user-f43a29 30 May, 2025, 16:57:20

Ah, gaze is not an input to the blink detector.

user-97c46e 30 May, 2025, 16:52:33

Sorry, I am not familiar with dark pupil detection necessarily. But i do follow that NeonNet will give a gaze estimate with whatever part of the eye is visible (even if less of it is visible or if it happens to be closed?). i am assuming that those cases (below a certain threshold) will be marked as blinks?

user-f43a29 30 May, 2025, 16:58:54

The details are more nuanced than that. For example, closing the eye for too long is not a blink. Similarly, closing and opening it slowly is not a blink.

user-7c5b51 30 May, 2025, 16:55:48

Hi, I have a question regarding the real-time data on the Neon - I can see clearly now. Does that include degree angles? In addition, what are the specifications of the standard adult-sized glasses? Are there any Neons that work with different populations?

user-f43a29 30 May, 2025, 17:03:38

Hi @user-7c5b51 , may I ask what you mean by „degree angles“? Do you mean azimuth/elevation of the gaze ray?

With specifications, do you mean what prescription lenses does it come with or rather the size & weight?

There is only one Neon module, which is the eye tracker. It is modular and can be fit in many different frames for different use cases. It is, in all cases, the same eyetracker and gaze estimation & analysis pipelines.

See here for a guide to our frames: https://discord.com/channels/285728493612957698/733230031228370956/1376859240836108299

user-97c46e 30 May, 2025, 16:56:42

But the question I was trying to clarify above is what happens after the NeonNet gaze estimate has been obtained. I assume that the fixation detection algorithm from the whitepaper is run on those estimates and it has a low pass filter (the SG 3rd order polynomial). I'm seeing blink and fixation overlap in many cases on the early traces of a fixation and on the last few traces of a fixation (this is not perfectly true). Could that overlap be attributed to the edge artifacts of the low pass filter that the gaze estimates are put through?

user-f43a29 30 May, 2025, 16:59:53

I can ask the team on Monday, but we do have open source implementations of the detectors that you can experiment with as well.

user-97c46e 30 May, 2025, 16:57:29

it is a little odd that the categorization itself might be blurred because the algorithm is only taking raw data, but i can imagine a setting where it works out that way

user-97c46e 30 May, 2025, 16:57:54

aha... ok. gotcha...

user-97c46e 30 May, 2025, 16:58:03

perfect, i think i am following you now

user-97c46e 30 May, 2025, 16:58:34

blink detection is thresholding the eye openness signal only?

user-f43a29 30 May, 2025, 17:01:01

It’s using the eye openness signal. We will soon put up a description of how eye openness is now taken into account.

user-97c46e 30 May, 2025, 16:59:35

sorry, for the blink i just got directed to a paragraph in the documentation not a whitepaper.

user-f43a29 30 May, 2025, 17:00:16

Apologies, my mistake. The documentation is in the process of being updated for the new blink detection description.

user-97c46e 30 May, 2025, 16:59:46

i got a whitepaper for the fixation detection

user-97c46e 30 May, 2025, 17:04:10

no problem. i think the blink whitepaper you mentioned might be the missing piece for me. I might be able to work through what I should do upon reading it.

user-f43a29 30 May, 2025, 17:14:41

@user-7c5b51 Also, may I ask what populations are you working with? Neon has been designed to work with as many as possible, and has a monocular estimation mode that can be helpful in clinical research.

You can learn more about how we assessed & validated Neon’s gaze estimation accuracy across a sample of the population in the Neon Accuracy Test Report

user-f43a29 30 May, 2025, 19:13:12

Sure, so that is not provided as a base real-time stream, but we give you all the elements needed to compute it in real time. You can:

  • Use the scene camera calibration info to then undistort & unproject the 2D gaze point to the corresponding 3D gaze ray.
  • Then, you can convert it to spherical coordinates and have azimuth/elevation.

Do you have a specific latency requirement?

Also, if you would like to see this added, feel free to open a 💡 features-requests
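
A minimal sketch of those two steps, assuming the calibration object from the real-time API exposes the scene camera matrix and distortion coefficients under the field names used below (check the API docs for your version), and using an arctan2-based spherical convention; verify the signs against the azimuth/elevation columns in a Pupil Cloud export:

import numpy as np
import cv2
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
calibration = device.get_calibration()

# Assumed field names; the calibration is returned as a structured array
K = np.asarray(calibration["scene_camera_matrix"]).reshape(3, 3)
D = np.asarray(calibration["scene_distortion_coefficients"]).ravel()

gaze = device.receive_gaze_datum()  # 2D gaze point in scene-camera pixels

# Undistort & unproject the 2D gaze point to a 3D ray in the camera frame
pt = np.array([[[gaze.x, gaze.y]]], dtype=np.float32)
x_n, y_n = cv2.undistortPoints(pt, K, D).ravel()
ray = np.array([x_n, y_n, 1.0])
ray /= np.linalg.norm(ray)

# Spherical coordinates (+x right, +y down, +z forward in the camera frame)
azimuth = np.degrees(np.arctan2(ray[0], ray[2]))
elevation = np.degrees(np.arctan2(-ray[1], np.hypot(ray[0], ray[2])))
print(f"azimuth: {azimuth:.2f} deg, elevation: {elevation:.2f} deg")

device.close()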

user-7c5b51 03 June, 2025, 18:39:25

Thanks Rob! One more question: is the main difference between the Pupil Core and the Neon that Neon requires no calibration?

user-0001be 30 May, 2025, 20:01:41

Hello there! I'm trying to catalogue my Neon devices and frames; I have multiple frames and some are the same model. I would like to confirm whether they have the same Frame ID? For example, if I have two Just act natural glasses, it shows they are the same. I thought they would have a different ID.

user-d407c1 02 June, 2025, 06:22:09

Hi @user-0001be 👋 ! If you’d like to catalogue your Neons for inventory purposes, the best way is to note the serial number, which is shown in the settings when connected or readable on the matrix view behind the module.

Additionally, each Companion device has a unique IMEI, which can also be useful for tracking inventory on your end.

If you're using LSL and simply need to differentiate between outlets, note you can also change the device name in the settings.

End of May archive