👓 neon



user-9a1aed 01 September, 2025, 03:59:26

Hi team, I want to use the pl-neon-recording Python library to export my native recording data to surface-gaze fixation data. I cannot follow the guide from your website: https://docs.pupil-labs.com/neon/neon-player/surface-tracker/#workflow. Can I use the Surface Tracker with the Python library to export all my recordings in bulk? My recordings are exported locally via USB-C cable from the phone. It looks like I have to export each recording separately from Neon Player?

user-f43a29 01 September, 2025, 09:04:34

Hi @user-9a1aed , we do not have a batch exporting tool for Neon Player's Surface Tracker plugin. If you are using pl-neon-recording, then it would be necessary to do some programming to create such a tool. If using Neon Player, then correct, you export each recording separately.

user-0cc8fa 01 September, 2025, 08:57:45

Hi! I have a question regarding the companion smartphones for the Neon and the Invisible. We have one Invisible and one Neon, which came with their companion smartphones. Can we use the Invisible phone also for the Neon, and the Neon phone for the Invisible?

user-f43a29 01 September, 2025, 09:01:32

Hi @user-0cc8fa , it depends on the phone. You can find the list of supported phones here:

So, Android 11 on the OnePlus 8 and OnePlus 8T are supported for Pupil Invisible and Neon.

user-e3fdf5 02 September, 2025, 16:34:19

Hi! I am inserting some PLEvent components into routines in PsychoPy Builder and am receiving the following error messages: "return computer_time + self._time_offset_estimate.time_offset_ms.mean / 1e3 AttributeError: 'NoneType' object has no attribute 'time_offset_ms'" I have defined the onset/offset of the components, so I am not sure where the offset might be getting lost. Any thoughts on troubleshooting would be helpful. Thanks! -Eric

user-f43a29 03 September, 2025, 08:39:43

Hi @user-e3fdf5 , the time_offset_ms mentioned in that error message is distinct from the onset/offset that you are specifying in the components. The error that you are receiving indicates that a network connection was not established with Neon. What kind of network are you using? Is it a university network?

user-ff60f7 02 September, 2025, 17:56:05

Hi there! We are in the process of ordering the Pupil Labs Neon eye-tracking glasses, and it is my understanding that they ship with a Samsung Galaxy S25 as the companion device. Our goal is to integrate the glasses with a Qualisys motion capture system.

We are considering using a Linksys MR7350 router for this setup. Could you confirm whether this router would be suitable, or if you would recommend alternative models? We'd appreciate your recommendations regarding router compatibility (like the one posted in a previous message: TP-Link TL-WR841N N300 WLAN router) and performance when paired with the Samsung Galaxy S25 companion device.

user-6de021 02 September, 2025, 22:18:21

Hello, I downloaded the Pupil Labs plugin for PsychoPy Builder on a PC. I am using PsychoPy 2025.1.1 and pupil labs neon recording 1.0.3. However, there are no drop-down boxes available to change the IP address and port (see photo). Any idea what is happening here or how to fix it?

Chat image

user-cdcab0 03 September, 2025, 08:46:10

Hi, @user-6de021 - sorry for the trouble. 2025.1.1 had been in beta for quite a while, but it seems that they have taken it out of beta now, which is great! I have been waiting for this to happen, as it does require a small change to our plugin to be compatible. I will make the change, test that it works, and push the update as soon as I can.

user-cdcab0 03 September, 2025, 15:44:34

Hi, @user-6de021 - I have pushed the update. You should uninstall the plugin, reinstall it, and restart PsychoPy

user-f4b730 03 September, 2025, 08:09:31

Hello, can the gaze offset correction in Neon Player have 3 digits of precision?

user-cdcab0 03 September, 2025, 08:21:01

Hi, @user-f4b730 - sure, that's a pretty simple change. Are you able/willing to run from source? I've already pushed the change, but it's maybe too minor to justify a release by itself

user-f4b730 03 September, 2025, 08:25:11

ok, I see, how can I run it from source?

user-cdcab0 03 September, 2025, 08:41:14

There are some instructions in the GitHub README - let me know if you have questions

nmt 03 September, 2025, 08:27:18

Hi, @user-ff60f7 👋. Yes, that's correct. They'll be shipping with an S25. In terms of the router, in principle, the one you mention should work just fine, although we have had a lot of success with the aforementioned TP-Link model, so that's what I recommend.

user-ff60f7 03 September, 2025, 12:38:18

Thanks!

user-9a1aed 04 September, 2025, 04:00:52

Hi team, may I receive any feedback on my recordings? It seems that the recorded gaze has a systematic shift toward the bottom. May I know how I can improve the tracking accuracy? here is my recording: https://hkustconnect-my.sharepoint.com/:v:/g/personal/yyangib_connect_ust_hk/EQ56jrm2_o5Alc4ZrS_bIfMBLlciUd3T55GjQW81WRfKZw?e=c7IDcS

nmt 04 September, 2025, 07:48:58

Hi @user-9a1aed! Thanks for sharing the video. That's helpful! Small systematic offsets can be corrected using the offset correction feature, either in the Companion app or post-hoc in Pupil Cloud. You can also read more about the sources of offsets in this message https://discord.com/channels/285728493612957698/1047111711230009405/1283848621648908424

user-73f4fe 04 September, 2025, 06:45:32

Hi all! I have two questions I was hoping someone could help with:
- Is there a way to shorten an eye tracking video so I can use it as a reference video (I forgot to shoot a proper one)?
- Does anybody have a smart alternative to the Pupil Labs case for the Neon? It is a nice case, but I would like something smaller so I can put it in my handbag.

nmt 04 September, 2025, 07:52:55

Hi @user-73f4fe 👋. Unfortunately, it's not possible to shorten the eye tracking recording to use as a scanning recording, although this feature request is on our radar: https://discord.com/channels/285728493612957698/1212053314527830026 With regards to a case, we don't have a smaller alternative, although perhaps someone from the community will chime in with aftermarket cases they've used 😄

user-9382ce 04 September, 2025, 08:46:16

Good morning! I would like to ask whether head rotation is taken into consideration when the Neon eye tracker generates gaze maps.

nmt 04 September, 2025, 09:03:37

Hi @user-9382ce 👋. Just so I'm sure I understand, could you clarify what you mean by gaze maps? Are you referring to the enrichments like Reference Image Mapper in Pupil Cloud?

user-e0a71c 04 September, 2025, 12:30:21

Hi there, we ran into a problem with the Android app GUI disappearing into the background after launching it. It takes a couple of seconds after opening the application until this happens. We tried restarting the device, connecting/disconnecting Wi-Fi, manually closing the app, etc., but nothing worked. We also recorded the process in a video which I can DM you, if necessary. The app is running on a OnePlus 10T and we did not encounter this behavior in the last couple of months. Do you have any suggestions?

nmt 04 September, 2025, 13:21:53

Hi @user-e0a71c 👋. Please try the following:
- Disconnect Neon from the phone.
- Long-press on the Companion app icon, select app info, and then clear storage and cache.
- Reconnect Neon, open the app and enable all permissions.
Does that solve the issue?

user-5a90f3 05 September, 2025, 08:10:06

Hello @user-f43a29, Neon's Companion app needs to be downloaded through the Google Play Store, but because Google cannot be accessed in my area, I would like to ask if it is possible to provide the corresponding APK file? Thank you.

nmt 05 September, 2025, 08:20:09

Companion app on Google Play Store

user-9382ce 08 September, 2025, 10:02:17

Dear Pupil Labs team,

I am using Pupil Cloud for eye movement analysis and would like to confirm the exact settings of the fixation and saccade detection algorithm. Specifically, I would like to know:

Which velocity and duration thresholds are used to classify a saccade?

Which criteria (e.g., velocity, dispersion, minimum duration) are applied to define a fixation?

Are these parameters fixed in Pupil Cloud, or can they be adjusted by the user?

In some eye-tracking systems, thresholds around 30°/s for fixations and 40°/s for saccades, with minimum fixation durations of 80 ms, are commonly reported. Could you please confirm whether these values are applied in Pupil Cloud, or clarify which exact thresholds your software uses?

Thank you very much for your attention and support.

Best regards, Alejandro

user-d407c1 08 September, 2025, 10:15:35

Hi @user-9382ce 👋 ! The fixation and saccade thresholds, as well as the algorithm details, are reported in our fixation white-paper, which you can find here.

These parameters are not tunable in Cloud; we use the values that we found work best on our dataset.

For convenience, I am attaching the white-paper directly to this message.

Kindly note that you can also get real-time fixations on the device, and the algorithm may have been slightly adjusted to accommodate its usage on the device.

Pupil_Labs_Fixation_Detector.pdf

user-bda2e6 08 September, 2025, 22:52:50

Hi @user-cdcab0, I asked this question a while ago and it wasn't a feature back then. Is real-time blink detection when using Neon and PsychoPy a feature now? Sorry if this was in one of the updates, I haven't been following lately and I just got back to work.

nmt 09 September, 2025, 17:29:46

Hi @user-bda2e6! This hasn't been implemented as yet. But we do have your feature request logged!

user-e3fdf5 09 September, 2025, 21:48:18

Trying to use Neon with PsychoPy Builder. I am receiving RPC_DEVICE_RUNTIME_ERROR in PsychoPy. I am also receiving "ioHub Server Process Completed With Code: 0". I have Neon connected via Wi-Fi, and Neon Monitor can view the camera in the browser. PsychoPy does not seem to recognize or find Neon. I tried lots of things: reinstalling the package, setting the companion address to the IP address (Neon Monitor can be accessed at neon.local:8080). Looking for help with this last piece of the connection.

user-f43a29 10 September, 2025, 08:14:23

Hi @user-e3fdf5 , are you using PsychoPy Builder or PsychoPy Coder?

user-9a1aed 10 September, 2025, 05:17:24

Hi team, may I check: if I use pupil_labs.neon_recording.open to read the local recordings, what are .gaze.ts, .scene.ts, and .event.ts? Particularly for scene.ts and event.ts, are they the onset times? It seems that the onset of gaze is earlier than the onset of the scene.

user-f43a29 10 September, 2025, 08:18:02

Hi @user-9a1aed , the .ts fields always correspond to timestamps. In fact, the same timestamps as you get in the Pupil Cloud Timeseries CSV exports, so you can refer to the Data Format Documentation for details.

The very first timestamp of scene.ts, gaze.ts, and event.ts are the onsets of those respective streams.

For any Neon recording, gaze can start before the scene camera or the scene camera can start before gaze. This is because the sensors run independently (in parallel) and each has a warm up time. Typically, within a second or two, all sensors have started running.

Is your aim to know the timestamps relative to the start of the recording or relative to the start of the scene camera stream?
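
For reference, a minimal sketch of inspecting those onsets, assuming pl-neon-recording exposes the streams and .ts arrays as described in the messages above (the attribute names follow the user's message and may differ between library versions; the path is a placeholder):
```python
import pupil_labs.neon_recording as nr

# Open a locally exported Neon recording (placeholder path).
recording = nr.open("/path/to/recording_folder")

# Each stream carries UTC timestamps in nanoseconds; the first entry is that
# sensor's onset. Sensors start independently, so onsets differ slightly.
gaze_onset = recording.gaze.ts[0]
scene_onset = recording.scene.ts[0]

# Offset of the first scene frame relative to the first gaze sample, in seconds.
print((scene_onset - gaze_onset) / 1e9)
```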

user-9a1aed 10 September, 2025, 05:54:41

Is this the correct way to map the gaze, video frame index, and events using the same timestamp?

Chat image

user-f43a29 10 September, 2025, 08:26:31

Hi @user-9a1aed , this is correct.

Just note that the default sampling method is nearest neighbor, so any gaze data that occur before the first scene camera frame occurs will be sampled as if they happened during that frame. This is also why recording.begin is shown as aligned with the first gaze datum and the first scene camera frame in your DataFrame, which is generally not the case.

For all overlapping data, you don't need to think about this.
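
To make the nearest-neighbour caveat concrete, here is a small sketch of matching each gaze sample to its nearest scene frame with pandas (a generic illustration, not the library's own sampling code; the toy timestamps and column names are made up for the example):
```python
import pandas as pd

# Toy timestamps in nanoseconds: gaze starts well before the first scene frame.
gaze = pd.DataFrame({"ts": [100, 105, 110, 144, 150],
                     "gaze_x": [0.1, 0.2, 0.3, 0.4, 0.5]})
scene = pd.DataFrame({"ts": [140, 150], "frame_idx": [0, 1]})

# merge_asof with direction="nearest" pairs each gaze sample with the closest
# scene frame. Gaze samples earlier than the first frame still get frame 0,
# which is the behaviour described above for non-overlapping data.
matched = pd.merge_asof(gaze.sort_values("ts"), scene.sort_values("ts"),
                        on="ts", direction="nearest")
print(matched)
```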

user-057596 10 September, 2025, 07:30:37

Hi, two questions regarding Neon. Firstly, has the streaming of the World View camera to an iOS system been resolved? And secondly, with our non-prescription lenses, the magnets on one of the lenses have for some strange reason come away, so we can't attach it back on. I'm wondering if it's possible to get a spare lens?

user-480f4c 10 September, 2025, 11:10:37

Hi @user-057596! Thanks again for flagging these issues.

Unfortunately, the monitor app streaming on iOS hasn't been resolved yet. We're still waiting on some iOS features to be implemented so that our web monitor can run more reliably. If we get any updates, I'll make sure to let you know.

For the lens issue, could you please drop us an email at [email removed] with a photo showing the magnet? Our Ops team will then get in touch right away to sort it out for you.

user-057596 10 September, 2025, 13:05:30

Thanks @user-480f4c for getting back to me and I will photo the magnet and send it over.

user-ebd1fe 10 September, 2025, 17:00:39

Hi guys 👋 has anyone worked on obtaining the visual axis using Neon, or knows of any references on how to calculate it? Also, has anyone tried projecting vectors along the optical axis through the eye centers to estimate gaze position data - is this method legit?

user-f43a29 11 September, 2025, 07:23:05

Hi @user-ebd1fe , when you say "estimate gaze position data", do you mean estimate the 3D position in the world that the person is looking at?

user-e9f71a 11 September, 2025, 01:57:48

Hi guys, I have automatic upload to Cloud enabled on the Companion device of the Neon glasses, but I deleted the video from the Cloud workspace by mistake. It's still on the phone; how can I re-upload it to the Cloud?

user-c2d375 11 September, 2025, 06:23:59

Hi @user-e9f71a ๐Ÿ‘‹๐Ÿป Once a recording is permanently deleted from Pupil Cloud, it cannot be re-uploaded from the Neon Companion app. However, when a recording is first removed from a workspace, it isnโ€™t immediately lost but moved to the trash.

Could you please check if itโ€™s still there? You can do this by clicking on the three-dot icon next to "Add filter", then selecting "Show trashed". If you find the recording, just right-click on it and select "Restore Recording" to bring it back to your workspace.

user-ac970e 11 September, 2025, 23:18:42

Hi all! I just downloaded Neon Player and am having trouble finding a way to add or edit AOIs. Is the Neon Player capable of setting AOIs, or is that a function only available in the Pupil Cloud?

user-f43a29 12 September, 2025, 08:41:47

Hi @user-ac970e , with Neon Player, you use the Surface Tracker plugin and draw multiple Surfaces to represent multiple AOIs.

If you need shapes other than quadrilaterals, then in a post-hoc analysis step, you can apply a mask to the surface-mapped gaze coordinates.

user-9a1aed 12 September, 2025, 05:34:06

This means that I cannot have a duration for each gaze data point, right?

Chat image

user-f43a29 12 September, 2025, 07:51:21

Hi @user-9a1aed , what is your end goal?

Gaze is not typically thought of as having a duration. Gaze is rather a continuous process; i.e., the ever-changing position of where you are looking, which is sampled moment-by-moment. In other words, each gaze datum can be thought of as an instantaneous measurement in time.

When your gaze pauses, that is classified as a fixation, and the duration of that pause is the fixation duration. As a result, there are always fewer fixations than gaze points.

If this is unfamiliar, just let us know.

user-763912 12 September, 2025, 16:56:37

Hi! I have a similar question about setting AOIs in the Neon Player - is there any way to set surfaces across multiple recordings? Or do we have to set surfaces for each individual recording (and setting AOIs across multiple recordings is only a feature available in the Pupil Cloud)?

nmt 13 September, 2025, 09:49:37

Hi @user-763912 ๐Ÿ‘‹. Yes, you can reuse surface definitions across multiple recordings. After loading the Surface Tracker plugin and defining your surfaces in an initial recording, hit export. This action saves a surface_definitions file to the neon_player subdirectory of that recording. You can then copy this file and place it into the corresponding subdirectory of any new recording. The surfaces will be applied automatically as long as the same markers are present in the scene video.
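
Purely as an illustration of automating that copy step for a batch of recordings (Python standard library only; the folder names and layout here are placeholders based on the description above):
```python
import shutil
from pathlib import Path

recordings_root = Path("/recordings")        # placeholder root folder
template = recordings_root / "rec_template"  # recording where the surfaces were drawn

# surface_definitions file written by Neon Player after exporting the template recording.
source = template / "neon_player" / "surface_definitions"

# Copy it into the neon_player subdirectory of every other recording folder.
for rec in recordings_root.iterdir():
    if not rec.is_dir() or rec == template:
        continue
    target_dir = rec / "neon_player"
    target_dir.mkdir(exist_ok=True)
    shutil.copy(source, target_dir / "surface_definitions")
```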

user-999fbb 12 September, 2025, 19:08:08

If all available data (full player-compatible package) are downloaded, saved offline, and then deleted from Pupil Cloud, can these data be re-uploaded to Pupil Cloud at a later time (restoring the original capture)? If so, how?

user-999fbb 12 September, 2025, 19:13:09

Separate Question: Is there any common wisdom on how to filter fixations based on the confidence value (0,1) indicated in 'fixation detected on surface' column in the 'fixations_on_surface' output file from Neon Player? And, is there any such confidence that is computed for fixations on AOIs built in Pupil Cloud? In output files for Pupil Cloud, I only find a total list of fixations/AOI, whereas in the output files for Neon Player, I find fixations and associated confidences/surface. Maybe my underlying question is more about the difference between the surface and AOI. Thanks!

nmt 13 September, 2025, 10:12:06

Hi @user-999fbb!

It is not possible to re-upload recordings to Pupil Cloud after they have been downloaded and deleted. Recordings deleted from Cloud can be restored from the trash for 30 days, but once permanently removed, they cannot be uploaded again from any source.

Regarding your second question:
- Surface Fixation "Confidence": The confidence metric from Neon Player is binary (1.0 for "on surface" or 0.0 for "off surface"). This is a filter to indicate if a fixation occurred on the defined surface area. It is not a graded confidence score of the fixation itself. This is equivalent to the 'fixation detected on surface' column in the Marker Mapper export from Pupil Cloud.
- Surfaces vs. AOIs: A "surface" is the entire area you define for tracking, such as a complete computer screen or a poster. An AOI is an additional layer you can define in Pupil Cloud to subdivide a surface into smaller, specific regions, such as an icon on the screen or a face within a poster.
- AOI Data in Pupil Cloud: The export file for an AOI from Pupil Cloud lists only the fixations that fall within that specific, user-defined region. Because the list is already filtered to only include fixations on the AOI (and metrics are also generated), a separate "on/off" column is not necessary.
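
For example, filtering a Neon Player fixations_on_surface export down to on-surface fixations could look like the sketch below (pandas; the column name follows the export mentioned above, but the filename is a placeholder and the stored values may be booleans or 0/1 depending on version, so check your file's header):
```python
import pandas as pd

# Placeholder filename for a Neon Player surface export.
fixations = pd.read_csv("fixations_on_surface_Screen.csv")

# The "confidence" here is binary: the fixation either fell on the surface or it did not.
# Accept common encodings of "on surface" (1, 1.0, True, "True").
mask = fixations["fixation detected on surface"].isin([1, 1.0, True, "True"])
on_surface = fixations[mask]

print(f"{len(on_surface)} of {len(fixations)} fixations were on the surface")
```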

user-9a1aed 15 September, 2025, 01:23:50

Hi Team, when I tried to run the PsychoPy program, it says calibration with the host IP failed. Later on, when I tried it several times, it worked. May I know what is going on? Sometimes, it takes a while to connect the host device to the laptop's program.

user-cdcab0 15 September, 2025, 07:09:17

I haven't seen this particular error before, but it's likely a networking issue. Can you describe your network configuration?

user-9a1aed 15 September, 2025, 11:36:58

Hello, when I am running the PsychoPy program, the host phone does not connect to the program, and the error says it cannot calibrate. After a while, the program can run smoothly, although I did not fix anything. I wonder why this happens sometimes? Thx!

user-cdcab0 15 September, 2025, 13:18:45

If you could provide your PsychoPy log, that would be very helpful

user-cdcab0 16 September, 2025, 09:31:35

The psychopy log is saved to a file in your .psychopy3 folder. It's overwritten every time you launch PsychoPy, so you need to make sure you grab it right after it happens

user-9a1aed 16 September, 2025, 09:36:02

As for one of the recordings, I calibrated to ensure accuracy using the offset function. I also checked the eye view captured during the recording, but it seems that the red circle is a bit off. I want to check if this is normal. Thx

user-cdcab0 16 September, 2025, 09:47:50

Do you mean the red annulus displayed on the phone's screen (the one you apply the offset to), or do you mean the red circle displayed in your PsychoPy program?

user-9a1aed 18 September, 2025, 01:02:54

We have done the offset correctly and with the same viewing distance. Are there other reasons/ways that we can improve the eye tracking accuracy? Thx!

user-9a1aed 16 September, 2025, 12:27:10

For the offset, we create small rectangle shapes on the computer screen, so the target viewing distance should be the same too.

user-c4afaa 16 September, 2025, 15:50:34

Hi all! My name is Nitya and I am a Master's student working on trying to identify blinks versus saccades in our collected data as automatic blink detection data is not available. Upon plotting the pupil size relative to time, I noticed that there seemed to be frequent dips in pupil size. I predict that this could correlate to blinking but wanted to check if anyone had any insights as to what could be causing these dips or if there was any other way to detect blinks. Thank you, Nitya

Chat image

user-f43a29 17 September, 2025, 07:44:15

Hi @user-c4afaa , those dips are usually produced by blinks, but are not as robust or reliable as eyelid openness, which Neon's blink detector uses. May I ask, though, since you have pupil diameter, but not blink data, when was this data collected and are you using Pupil Cloud?

user-bda2e6 17 September, 2025, 15:08:41

Hi @user-f43a29! Thank you for the reply. Nitya is working with me and I have been talking with @user-cdcab0 regarding blink detection. Our data is streamed from PsychoPy, and due to IRB regulations we cannot use Pupil Cloud or the raw eye recordings. The time series data produced by PsychoPy is the only thing we can work with in this particular study. Blink detection from PsychoPy has not been implemented yet, so we have to make our own algorithm to do that with what we already have.

user-cdcab0 17 September, 2025, 15:43:55

Although we aren't streaming blinks to PsychoPy yet, the blink events are recorded in the Neon recording. The best thing you could do is synchronize the data sets (PsychoPy data and recording data).

user-bda2e6 17 September, 2025, 15:11:12

And these pupil diameter drops are the only meaningful blink indicator we have seen in the data so far. So we are wondering if you could share any insight as to what we could do to better detect blinks from the data we have.
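
Purely as an illustration of the dip-based idea discussed above (this is not Pupil Labs' blink detector, just a simple heuristic; the filename and column names are assumptions about the logged PsychoPy time series):
```python
import numpy as np
import pandas as pd

df = pd.read_csv("psychopy_gaze_log.csv")   # placeholder file
t = df["time"].to_numpy()                   # seconds, assumed column name
pupil = df["pupil_diameter"].to_numpy()     # mm, assumed column name

# Smooth with a rolling median so single-sample noise does not trigger detections.
baseline = pd.Series(pupil).rolling(51, center=True, min_periods=1).median().to_numpy()

# Flag samples that drop well below the local baseline.
dip = pupil < baseline - 3 * np.nanstd(pupil - baseline)

# Group consecutive flagged samples into candidate intervals and keep only
# those with a plausible blink duration (roughly 50-500 ms).
candidates, start = [], None
for i, flagged in enumerate(dip):
    if flagged and start is None:
        start = i
    elif not flagged and start is not None:
        duration = t[i - 1] - t[start]
        if 0.05 <= duration <= 0.5:
            candidates.append((t[start], t[i - 1]))
        start = None

print(candidates)
```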

user-882858 17 September, 2025, 18:18:25

Hi @user-cdcab0, We are doing a classroom study and we set up the AprilTags on the corners of the screen. However, they are not being detected. My suspicion is that it will be hard to include these when individuals are further back. Can you provide some guidance on this? (We need to define AOIs for later data analysis)

Chat image

user-cdcab0 17 September, 2025, 19:00:07

Margins and contrast are good, but the markers are too small at that distance.

A subjective judgement here is a reliable measure. If it's hard for you to distinguish the individual squares within a marker, then it's hard for the detector as well.

user-11dbde 18 September, 2025, 09:26:46

Hello. We have exchanged our Pupil Labs Neon eye tracking module for a new one due to a defect. Now, when using Pupil Cloud, the old module is still registered for unlimited space, while the new module cannot be activated for unlimited space. How can we register the new eye tracker module in Pupil Cloud with our old license?

user-f43a29 18 September, 2025, 09:30:24

Hi @user-11dbde , please send an email to info@pupil-labs.com about this and the team will resolve it.

user-11dbde 18 September, 2025, 09:27:15

old: 263231 -> new: 595697

user-11dbde 18 September, 2025, 09:28:17

the new one also indicates that there is 7h of data recorded, but we have no access to this recorded data in another account (I guess the data was recorded by pupil labs, not us)

user-11dbde 18 September, 2025, 09:37:33

Hi @user-f43a29 thank you so much for your quick reply. I will send an email now.

user-9a1aed 18 September, 2025, 13:10:15

Hi team, when we use an extension hub to connect the eye tracker, charger, and mobile phone, after some experiment blocks the following prompt pops up in the Neon app: "Sensor failure. The Companion App has stopped recording extimu data! Stop recording, unplug your Neon device and plug it back in. If this behavior persists, it may indicate a hardware issue. Please reach out to our support team for help."

So we removed the hub, directly connected the eye tracker to the mobile phone, and closed the small window. The recording display continued recording, and we asked the participants to look at the four corners and the center of the screen to ensure that the eye tracking was functioning. When the recording automatically ended after the experiment finished, we found that the recording was only 4 minutes and 58 seconds. But if we do not click into the video, it shows as an 8-minute video.

user-f43a29 18 September, 2025, 13:23:37

Hi @user-9a1aed , could you open a Support Ticket in 🛟 troubleshooting ? Thanks.

user-9a1aed 18 September, 2025, 13:10:44

Chat image

user-9a1aed 18 September, 2025, 13:25:01

thanks!

user-52e548 19 September, 2025, 05:23:51

Hello. Could you explain the distortion correction for Marker Mapper or Surface Tracker? If 2D barcodes are placed in the environment with tilt relative to the Neon module's IMU, how does this affect the definition of the ROI? Please discuss the tilt along each axis: X (pitch), Y (roll), and Z (yaw).

user-f43a29 19 September, 2025, 06:51:50

Hi @user-52e548 , Marker Mapper and Surface Tracker do not use the IMU. Rather, they compute the homography that projects the flat plane containing the AprilTags onto the image plane of the scene camera. This is done by estimating the pose of each of the AprilTags from the scene camera image. These are concepts from projective geometry and more details in the general context of cameras and OpenCV can be found here.

A key part of this process is undistorting the scene camera image.

When you say "please discuss the tilt along each axis", do you mean how does this tilt then relate to the orientation values provided by the IMU? Do you want to compare the surface orientation to IMU orientation?
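
A rough sketch of that pipeline, under stated assumptions: pupil-apriltags for tag detection and OpenCV for undistortion and the homography; the camera matrix, distortion coefficients, tag layout, and gaze point below are placeholders, not Neon's actual intrinsics or data:
```python
import cv2
import numpy as np
from pupil_apriltags import Detector

# Placeholder intrinsics; real values come from the recording's camera calibration.
K = np.array([[900.0, 0.0, 800.0],
              [0.0, 900.0, 600.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

detector = Detector(families="tag36h11")
gray = cv2.cvtColor(cv2.imread("scene_frame.png"), cv2.COLOR_BGR2GRAY)  # placeholder frame

# Where each tag's corners live on the flat surface, in surface coordinates (placeholder layout).
surface_corners = {0: [(0.0, 0.0), (0.1, 0.0), (0.1, 0.1), (0.0, 0.1)]}  # tag_id -> 4 corners

img_pts, srf_pts = [], []
for det in detector.detect(gray):
    if det.tag_id in surface_corners:
        img_pts.extend(det.corners)                 # 4x2 pixel coordinates per tag
        srf_pts.extend(surface_corners[det.tag_id])

# Undistort the detected corners, then estimate the image-to-surface homography.
img_pts = cv2.undistortPoints(np.float32(img_pts).reshape(-1, 1, 2), K, dist, P=K)
H, _ = cv2.findHomography(img_pts.reshape(-1, 2), np.float32(srf_pts))

# Map a gaze point (also undistorted) from scene-image pixels into surface coordinates.
gaze_px = cv2.undistortPoints(np.float32([[800.0, 600.0]]).reshape(-1, 1, 2), K, dist, P=K)
print(cv2.perspectiveTransform(gaze_px, H))
```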

user-69f927 19 September, 2025, 12:34:42

Hello. I am a new PhD student in the field of rehabilitation robotics and am using Neon to detect objects of interest and calculate viewing distance. In your Pupil Labs fixation detector whitepaper, there is the following explanation of using the IMU to calculate optic flow.

Where can I find the corresponding code?

Chat image

user-d407c1 19 September, 2025, 13:00:51

Hi @user-69f927 ! You can find it here, but note that the depth component is not computed, if that's what you are looking for.

user-ac970e 19 September, 2025, 16:38:25

Hi again! I am using the Neon Player to define surfaces and AOIs, and I just noticed that when I load a recording, it seems to automatically begin finding the surface, and progress is shown in this cache. Great! But do I need to open up every recording in Neon Player before the surfaces can be found, and then export each recording? In other words, as I make new recordings with the same experiment (on-screen video with AprilTags), do I still need to individually open each recording in Neon Player in order for the surfaces to be defined?

Chat image

user-f43a29 19 September, 2025, 16:42:54

Hi @user-ac970e , you don't need to manually redo Surface definitions every time, if you re-use the same Surface and AprilTags across recordings, but it is necessary for them to be found in each recording (i.e., the caching progress that you refer to).

Since Neon is head-mounted, there is no a priori fixed relationship between the tags and the wearer, so the code needs to detect them, since there can be head motion, difference in observer height, sitting distance/position, etc.

user-08dfba 19 September, 2025, 19:29:14

Hi guys! I'm having an issue with creating enrichments for a study I'm doing. The Reference Image Mapper enrichments keep erroring out on each of the 10 samples I have. I've tried several different scan recordings and reference images but keep having the same issue. Is anyone able to assist?

nmt 20 September, 2025, 04:03:46

Hi @user-08dfba 👋. If you're able, please invite [email removed] to your workspace. That way we can take a look and provide some concrete feedback 🙂

user-fa126e 20 September, 2025, 05:33:00

I can see this has been asked before but I couldn't see a reply, so apologies for the double question!

When adding a gaze offset in Pupil Cloud, is there a way to do it more precisely? We're getting participants to look at a target, and it can be difficult to tell if the gaze is consistently offset in a particular direction and needs correcting or if the variation of the gaze from the target is just expected variation. It can also be difficult to tell exactly how much the gaze needs correcting, because at some points in time the variation will be less than others - is there a way to calculate the average gaze variation from a particular point in the scene camera over a period of time and then correct the gaze by that amount?

Also, is there a way to correct it other than dragging and dropping with the mouse? It's more accurate than doing it in the app, but it would be nice to have finer control by being able to enter coordinates or even being able to nudge it left/right/up/down using arrow keys once it's placed to make it more precise.

I saw another user mention that it would be great if we could use a target (like a QR code) that the glasses can automatically recognise and calculate the gaze offset from that point over a specified period of time (when the participant is looking at it), but I assume that isn't possible?

user-d407c1 22 September, 2025, 07:44:50

Hi @user-fa126e 👋! What you describe is possible. You can use fiducial markers, or even better, the circular calibration marker from Pupil Core.

Since our detector is open-source, you can use it to detect the marker, compute the average offset over a period of time, and then send that correction via the Cloud API. That said, from my experience the actual gain is usually minimal, if any.

I can put together a small gist to show how this would work if you'd like, though it may need to wait until next week.

One thing to check first though: if what you're seeing looks more like jitter rather than a consistent offset, it might be related to the participant's setup (e.g. wearing their own prescription glasses, or in some cases strabismus). That can introduce variability that won't really be fixed by offset correction.
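
A small sketch of the averaging step (generic illustration; it assumes you already have the target's scene-camera pixel position, e.g. from a marker detector, and a gaze export for the window when the participant fixated it; the column names follow the Cloud Timeseries export but should be checked against your file, and sending the result via the Cloud API is not shown):
```python
import pandas as pd

gaze = pd.read_csv("gaze.csv")  # Cloud Timeseries export, placeholder path

# Window (in ns) when the participant was fixating the target, e.g. bounded by events.
t0, t1 = 1_700_000_000_000_000_000, 1_700_000_002_000_000_000  # placeholder values
window = gaze[(gaze["timestamp [ns]"] >= t0) & (gaze["timestamp [ns]"] <= t1)]

target_px = (800.0, 600.0)  # detected target position in scene-camera pixels, placeholder

# Average offset (in scene-camera pixels) between where gaze landed and the target.
offset_x = (window["gaze x [px]"] - target_px[0]).mean()
offset_y = (window["gaze y [px]"] - target_px[1]).mean()
print(offset_x, offset_y)
```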

user-ac970e 20 September, 2025, 12:26:29

Does anyone have any suggestions for lining up the timestamps in gaze_positions_on_surface.csvs with the frames from the world video?

gaze_positions_on_surface.csv does not start at exactly the same timestamp as world_timestamps.csv (the first value in one is 1755530803833957766 and in the other 1755530802581479680, which is about a second of difference or less, but still problematic to not have things lined up exactly, especially since exports from Neon Player only give gaze positions and not fixations, so fixations have to be manually calculated), so I'm also unsure why there would be a discrepancy there. Also, when I converted those values to Universal Time in R, the time it gave me did not match anything in export_info.csv either.

Knowing the timing of events and fixations in my experiment is really important, but I am struggling to figure out how to know when in the gaze_positions_on_surface.csv an event occurs in the world video. I can imagine this is something someone else probably figured out how to do though?

user-d407c1 22 September, 2025, 07:52:59

Hi @user-ac970e 👋 The timestamps are already in UTC Unix epoch (nanoseconds), and they're already aligned. The gaze_positions_on_surface.csv inherits the timestamp of the underlying gaze sample; it checks for the closest scene camera frame and whether the surface was detected there. Since the gaze data itself carries the same timestamps, and you have the fixation IDs in gaze.csv, matching them should be straightforward.

Regarding your conversion to Universal Time, are you using a timezone-aware library? Could you share how you do it?

Also, could you clarify your note on Neon Player not computing fixations on surfaces? That shouldn't be the case; you should see a fixations_on_surface_<surface_name>.csv (docs) as long as the fixation plugin was enabled. Which versions of the Companion App and Neon Player are you using?
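
For the conversion question, a minimal Python/pandas sketch of turning these nanosecond Unix-epoch timestamps into timezone-aware UTC datetimes and pairing each surface-mapped gaze sample with its nearest scene frame (the filenames follow the exports discussed above, but the surface name is a placeholder and the timestamp columns are assumed to be the first column of each file; the same conversion applies in R with a timezone-aware call):
```python
import pandas as pd

world = pd.read_csv("world_timestamps.csv")
surface_gaze = pd.read_csv("gaze_positions_on_surface_Screen.csv")  # placeholder surface name

ts_world = world.columns[0]        # assumed timestamp column (ns since Unix epoch, UTC)
ts_gaze = surface_gaze.columns[0]  # assumed timestamp column of the surface export

# Convert to timezone-aware UTC datetimes.
world["datetime_utc"] = pd.to_datetime(world[ts_world], unit="ns", utc=True)

# Pair each surface-mapped gaze sample with the nearest scene (world) frame.
world_indexed = world.sort_values(ts_world).reset_index().rename(
    columns={"index": "world_frame", ts_world: "world_ts"})
aligned = pd.merge_asof(surface_gaze.sort_values(ts_gaze), world_indexed,
                        left_on=ts_gaze, right_on="world_ts", direction="nearest")
print(aligned[[ts_gaze, "world_frame", "datetime_utc"]].head())
```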

user-6de021 20 September, 2025, 21:06:36

Hello,

I have two recordings that are each around 30 minutes long, with events already marked. I just realized that for enrichments, the videos cannot be longer than 3 minutes. Is there a way to crop or trim the videos so I can run enrichments only on the specific event segments?

Thank you for your help!

user-fa126e 21 September, 2025, 08:41:29

Hey Allison, I'm not from Pupil Labs but just curious - which enrichment are you using? I've been using the face mapper enrichment on videos that are 5-10 minutes long and haven't had any issues.

However, when I made my enrichments I was able to choose the "Begin event" and "End event" in the menu on the left (Under "Temporal Selection"). I assume all the recordings would have to have the same events in them if you want to process them all at once, otherwise you could probably do them individually and choose the start and end events for each of them separately.

user-f43a29 22 September, 2025, 07:48:25

Hi @user-6de021 , only the Scanning Recording for the Reference Image Mapper Enrichment has the 3 minute limitation. All of the other recordings that are then analyzed (i.e., the ones where participants actually wear Neon and do the task) can be as long as needed.

user-c4afaa 21 September, 2025, 14:08:16

Hi Pupil Lab team! Would someone be able to clarify the units for the gaze in the x, y, z directions as well as the units for and difference between device_time, logged_time, and time? Thank you!

user-f43a29 22 September, 2025, 07:50:53

Hi @user-c4afaa , just to clarify, is this for data collected with PsychoPy?

user-52e548 22 September, 2025, 07:56:53

Let me ask another question while another thread is being processed. The following page states that Yaw convergence takes time, but is there any reference data available? I understand it is significantly influenced by the surrounding environment, but it would be helpful to have data showing the relationship between elapsed time and error rate. https://docs.pupil-labs.com/neon/data-collection/calibrating-the-imu/

user-f43a29 22 September, 2025, 08:06:16

Hi @user-52e548 , we specify the IMU's model number and link to the manufacturer in our Documentation. They have a page with technical details and specifications.

May I ask why it would be significantly influenced by the surrounding environment? Are you referring to when highly magnetic equipment is nearby?

user-480f4c 22 September, 2025, 13:34:27

@user-08dfba thanks for sharing more context. Just to clarify, since the enrichments Test and P1 are identical, and P1 has already been completed successfully, is there anything specific preventing you from using the output of P1?

are there any other enrichments that haven't been completed?

user-08dfba 22 September, 2025, 13:39:12

@user-480f4c so that YardHouse workspace was a past project that I was trying to test things on. The current project is in the second workspace I shared access to, which is where we first experienced the issues. So that new test enrichment was just a troubleshooting test.

user-480f4c 22 September, 2025, 14:43:13

@user-08dfba just to follow up here as well - the fix has been deployed and enrichments are now being re-processed. Since you also have opened a ticket in the 🛟 troubleshooting channel, we can continue the conversation there once re-processing has finished!

user-1423fd 22 September, 2025, 13:49:51

Hi! Is there a way to change the gaze data capture rate post-collection in Pupil Cloud / the Neon Player app? Say I recorded gaze data at 200 Hz and wanted to downsample to 100 Hz, or vice versa.

user-d407c1 23 September, 2025, 09:03:28

Hi @user-1423fd ! That's not possible to change after the fact; the sampling rate is fixed during recording.

Worth noting though: if Cloud detects that the data is below ~185 Hz, it will automatically reprocess the recording to achieve 200 Hz.

user-ac970e 22 September, 2025, 15:42:42

[email removed]! Let's try to break it down

user-cc6fe4 22 September, 2025, 20:40:56

Hello, I am still having some trouble downloading the data from the project in the cloud. I manage to download it eventually, but only after several failed attempts. The speed also continues to fluctuate a lot. I was looking at other options, such as the Pupil Cloud API (https://api.cloud.pupil-labs.com/v2). However, when I downloaded the files, they did not correspond to the ones available in the Timeseries (different files and different columns). Is there any web endpoint (API) that provides access to all the files available through the cloud download? And where is this documented? Thank you

user-f7408d 23 September, 2025, 04:16:58

Hi, I want to do a sanity check. We are developing some cognitive aids for paramedics to use in the field. They will be accessed via a tablet. We want to do some user testing in a simulated real-world environment. Looking through Alpha Lab, I have come across "Map to Dynamic Screen Content", "Gaze on Phones" and "Website AOIs". After reading these, my plan is to record the screen on the tablet at the same time as the eye tracking, create a Reference Image Mapper enrichment, and export the data for offline analysis to generate scanpaths, etc. Just wanted to double check there isn't an updated way to track dynamic content on a phone or tablet?

user-67b98a 23 September, 2025, 05:15:53

Hi, I'm running a video renderer for a project, but I keep getting an error message even after trying multiple times. I've also emailed the cloud support team. Is it possible to get help resolving this?

user-480f4c 23 September, 2025, 06:46:10

Hi @user-67b98a! We've received your email and are working on a fix. We will follow up as soon as possible! Thank you for your understanding 🙏🏼

End of September archive