👓 neon



user-13f7bc 01 February, 2025, 16:45:49

Is it possible to set the sampling rate of the eye trackers to a fixed one, e.g., always 60 Hz?

user-480f4c 03 February, 2025, 07:29:06

Hi @user-13f7bc! The Neon Companion app provides real-time gaze data at up to 200 Hz. However, you can adjust the sampling rate in the app settings (Gaze data rate) to 100 Hz or 33 Hz if you wish.

Eye images are always captured and saved at 200 Hz. This means that once a recording is uploaded to Pupil Cloud, gaze data is automatically re-computed at the full 200 Hz rate. So, even if you selected a lower sampling rate on your phone, the downloaded recording from Cloud will still contain gaze data at 200 Hz.

user-9aec11 03 February, 2025, 10:02:37

Hi! Where can I find more information on the coordinate system of the optical axis vector? The page linked on the GitHub does not work. Thanks!

user-d407c1 03 February, 2025, 10:37:06

Hi @user-9aec11 ! Would you mind pointing out which link is broken? Regarding the coordinate system, does this image help you?

Chat image

user-13f7bc 03 February, 2025, 11:30:23

@user-480f4c Can I follow up on the sampling rate question - what if I retrieve the data directly from the app and import it into Neon Player? Will it also resample?

user-480f4c 03 February, 2025, 11:36:40

@user-13f7bc When data are uploaded to Pupil Cloud, the eye images are used to reprocess the gaze data at the full 200Hz rate. However, Neon Player uses the Native Recording Data format which always provides the data at the sampling rate as defined by the Gaze data rate setting from the app. Therefore, if you set your real-time gaze rate in app to be 100 Hz, then data will come at 100 Hz when exported by Neon Player.

user-08df76 03 February, 2025, 12:17:34

Hello everyone, I'm currently looking for the confidence data of my neon pupil size measurement. On the website it says: "Gaze data inherits the confidence value from the pupil data it was based on." But I can't find it. Can you help me here please?

user-13f7bc 03 February, 2025, 12:18:37

@user-480f4c Brilliant, thank you so much

user-d407c1 03 February, 2025, 12:26:37

Hi @user-08df76 👋 ! That phrase comes from the Pupil Core documentation; Neon does not have a confidence value. See more here https://discord.com/channels/285728493612957698/1047111711230009405/1136218102892343357

user-08df76 03 February, 2025, 12:42:48

Thank you @user-d407c1! Is there anything similar that I can use for my neon data?

user-d407c1 03 February, 2025, 12:45:45

Unfortunately, we do not yet have a comparable confidence metric that can be derived for Neon.

Meanwhile, you can find a high-level description as well as a thorough evaluation of the accuracy and robustness of Neon's pupil-size measurements in our white paper.

user-24f54b 03 February, 2025, 21:10:00

Hi all, it seems the reference glasses frame for the bare metal Neon here (https://github.com/pupil-labs/neon-geometry) fits the bare metal Neon by itself but not with the PCB board attached (this is why we accidentally tore the wires on our original PCB; trying to fit it into the glasses doesn't work). Is there another design that would accommodate both so that it can be used in practice, or would we need to design our own? Please let me know.

user-d407c1 04 February, 2025, 08:37:13

Hi @user-24f54b !
I'm not sure I fully follow. That repository includes 3D models for the PCB board (a.k.a. Bare Metal), the module, and a pair of glasses resembling the Just Act Natural frame.

As you can see in the render, the frame already has space for the PCB board and module. Kindly note that the PCB is glued onto the frame, not screwed; the screws are only used to secure the module.

Out of curiosity, since you're 3D printing the frame, do none of our off-the-shelf frames meet your needs?

Chat image

user-1beb67 03 February, 2025, 22:31:11

hey, so I have some recordings in which the events I made using the real-time API are wrongly placed. I was trying to edit them through the Cloud, but whenever I tried deleting them, an error would come up saying that I can only suppress them rather than delete them, since they're 'recording events'. Any recommendations on how I should proceed? I couldn't find any reference to suppressing events anywhere. I've tried using the Neon Player, but didn't have any success with its Annotation plugin

also, I would need to align some of the timestamps of these post-hoc events in 2 pairs of recordings - how can I do so?

user-d407c1 04 February, 2025, 10:12:04

Hi @user-1beb67 ! πŸ‘‹ We've modified this behavior in Cloud, so you should now be able to remove recording events.

Before proceeding, though, were the recordings you aim to sync recorded at the same time, and was any event sent through the real-time API to both? That would facilitate it.

Alternatively, do you have any visual or audible signal captured in both recordings?

user-37a2bd 04 February, 2025, 08:21:16

Hi everyone, do we still have to use the Python program to get a gaze path from our recordings? Will you guys be implementing any process in the Cloud to generate gaze paths?

user-480f4c 04 February, 2025, 08:31:03

Hi @user-37a2bd! If you want to generate a fixation scanpath overlay over your scene camera video, this is already available using our Video Renderer visualization on Pupil Cloud.

For scanpath overlays over a reference image (after mapping gaze/fixations using our Pupil Cloud enrichments), we have made a very easy to run Google Colab to help you generate static and dynamic scanpaths. You can find it here

user-1beb67 04 February, 2025, 16:43:05

@user-d407c1 would it be possible to also enable renaming those events, in addition to deleting them? Thank you a lot for the quick update!

user-13f7bc 04 February, 2025, 18:12:12

I would love to get feedback from everyone if possible:

a) Has anyone used "ready to go" frames or the one with the headband on top of glasses? We are aiming to use this model with babies that often have eyesight problems.
b) Would higher eyesight prescriptions create problems for the precision of the module, or can it cope with them?
c) The module with headband comes in blue; has anyone been able to change its colour? If so, how did you do it? We would prefer something more neutral to be less distracting for the babies.
d) Has anyone used the eye-tracking glasses with babies with and without disabilities - how did they react to them?

Thank you

user-f389a1 04 February, 2025, 18:24:13

Hello - I am trying to analyse pupil data collected with the Neon glasses. I am in Python, using this guide for loading pupil data - https://pyplr.github.io/cvd_pupillometry/06a_pipr_analysis_pipeline.html - however, I am having issues loading the data, as the data I downloaded (Timeseries) is set up in a different structure. I am wondering which columns I need for the pupil analysis?

user-f43a29 05 February, 2025, 08:45:14

Hi @user-f389a1 , that is a great package. Although PyPLR was originally made for Pupil Core's data format, its code and principles are still relevant and helpful and you can in principle use its routines with Neon data, just like you are planning.

If you want to use it with Neon's data format, then you want to extract the pupil diameter left/right [mm] columns from the 3d_eye_states.csv file.

It can also be helpful to know that the timestamps in the Timeseries CSV files are in nanoseconds (UTC format), so you do not need to deal with Pupil Time, which is something specific to Pupil Core, but you may want to convert them to seconds to be consistent with PyPLR's assumptions.

If you want to know time relative to recording start, then you can subtract the start_time value in info.json from all timestamps for that recording.
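
As a minimal sketch of those steps (assuming the standard Timeseries column names described above, i.e. a timestamp [ns] column and pupil diameter left/right [mm] columns in 3d_eye_states.csv, plus the start_time field in info.json):

import json

import pandas as pd

# Pupil data from the Pupil Cloud Timeseries download
eye_states = pd.read_csv("3d_eye_states.csv")

# Recording start time in nanoseconds (UTC) from info.json
with open("info.json") as f:
    start_time_ns = json.load(f)["start_time"]

# Seconds relative to recording start, closer to what PyPLR-style pipelines expect
eye_states["time [s]"] = (eye_states["timestamp [ns]"] - start_time_ns) / 1e9

# Optionally average the two eyes into a single pupil trace
eye_states["pupil diameter [mm]"] = eye_states[
    ["pupil diameter left [mm]", "pupil diameter right [mm]"]
].mean(axis=1)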

user-24f54b 05 February, 2025, 00:39:05

Chat image

user-9aec11 05 February, 2025, 09:27:07

okay, thanks a lot for your reply! using the API would not avoid this issue, right? Also, I have another problem: in some of my recent recordings, the gaze data starts with a big offset compared to the video. I was used to it taking a few seconds and waited at the beginning of recordings for some seconds, but it is really hard to work with the Neon now if I have no idea when the gaze recording starts and whether I will have some data loss in the beginning. Do you have any idea why this happens? Thank you so much!

user-480f4c 05 February, 2025, 09:32:48

Yes, the same issue with the sampling rate would happen when using the API if you're using an older companion device and have the real-time eye state computation enabled.

As for your second question, can you clarify how long this offset is? In general, the time each sensor starts varies. The difference between sensor start times ranges from 0.2 to 3s on average, so we would recommend starting the recording and waiting a few seconds to ensure all sensors have started properly before any important part of your experiment begins.

user-9aec11 05 February, 2025, 09:35:48

so usually it was just a few seconds (up to ~3 as you said) but now it is between 9 and >20. I usually wait a few seconds, but if I don't know why this got worse, I am scared it might increase even more without my knowing.

user-480f4c 05 February, 2025, 09:38:51

in that case, could you please open a ticket in our 🛟 troubleshooting channel to help us identify the root cause of this issue? We will assist you there in a private chat.

user-13f7bc 05 February, 2025, 12:50:16

Would you be okay with me PM'ing you? I am also going to use the glasses with infants and it would be great to get your thoughts on some things!

user-f43a29 06 February, 2025, 08:26:10

Hi @user-13f7bc , I'm briefly stepping in for my colleague, @user-cdcab0 .

Would you be interested in a Demo and Q&A call? We can show you how the different frames look, how they sit with respect to glasses, discuss dimensions, and also demonstrate Neon's eyetracking in real-time.

You can schedule a meeting at your convenience here.

Also, if you can elaborate a bit on your earlier question (item b; here: https://discord.com/channels/285728493612957698/1047111711230009405/1336398922901491822) about Neon's eyetracking performance with respect to varying prescription strength, then we are happy to discuss that.

user-13f7bc 05 February, 2025, 12:54:15

@user-cdcab0 Is there any guidance anywhere for how to use the module with infants and/or people with glasses? I know there are interchangeable lenses, but those only go up to ±3.

user-cdcab0 05 February, 2025, 15:29:07

Although our child-sized frames are designed for ages 2-8, we've had some customers with good success using them with infants. The challenge here is mostly just the physical size of the frame vs the head.

For the interchangeable lenses, the standard I Can See Clearly Now kit comes in ±3 diopters, but we also sell an extended-range lens kit. It has 20 additional lenses: -6.0, -5.5, -5.0, -4.5, -4.0, -3.5, -1.75, -1.25, -0.75, -0.25, 0.25, 0.75, 1.25, 1.75, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0

user-13f7bc 05 February, 2025, 17:37:33

@user-cdcab0 we would like to be able to use the ready to go with the headband, would it be possible to use it with glasses?

user-cdcab0 05 February, 2025, 19:44:51

No, not really, unfortunately

user-13f7bc 06 February, 2025, 09:02:08

@user-f43a29 I have already sampled all the frames with adults, just not with infants. I am mostly interested in their compatibility with being worn together with glasses, because we do not want to use the glasses frames; we much prefer the ready to go sets, both with headband and with frames. If the glasses sit in front of the eye tracker, this shouldn't be a problem, right? Or is the added distance between the eye tracker and the eyes a problem?

user-f43a29 06 February, 2025, 11:43:35

Hi @user-13f7bc , I see. To be sure I provide the best answer, may I ask if you mean wearing it like this?

Chat image

user-13f7bc 06 February, 2025, 11:45:23

@user-f43a29 yes, but with ready to go

user-f43a29 06 February, 2025, 12:00:02

I see. If you would like to try it with the frame you are considering, then please see my colleague, @nmt , message here for helpful details: https://discord.com/channels/285728493612957698/1337122797868290110/1337151080798359602

user-13f7bc 06 February, 2025, 18:08:37

@user-f43a29 Dear Rob, thank you for this. Unfortunately we are really aiming to avoid glasses because we have had limited success with them with neurodiverse children.

user-f43a29 06 February, 2025, 18:20:28

@rob Dear Rob, thank you for this.

user-24f54b 07 February, 2025, 03:04:40

Hi Dom, we're in touch with the 3D print lab about higher quality prints. Do you have some photos of a 3D printed version of the just act natural frame fitting the PCB and the module as intended so I can see how the grooves work and what should fit where?

user-cdcab0 07 February, 2025, 05:26:47

I'm afraid I don't have any photos, but perhaps this animation will clear things up. You'll notice an overhang that clips the nest PCB in place - this requires some flex in the frame or, if you're feeling adventurous, a pause in the print so the PCB can be inserted before the overhang is printed.

Importantly, please note that the wires are not displayed here - you will need to accommodate them in your own model (or modify the JAN reference accordingly)

user-cdcab0 07 February, 2025, 05:28:37

user-3a111a 07 February, 2025, 11:48:37

sorry ..my bad...

Hi, so in an earlier version of Neon Player, the downloaded CSV files also included world_index. I don't see that now with the updated version of Neon Player, and the old version is also not working anymore. So I basically want the synchronised timestamps, world frame index and gaze value... I don't find this in Neon Player or Cloud either?

user-cdcab0 07 February, 2025, 11:52:30

Hi, @user-15edb3 - I moved your message here to the 👓 neon channel.

A few versions ago, an effort was made to make Neon Player's exports/outputs more closely match those of Pupil Cloud. As you've discovered, this now excludes a world_index.csv file, but not to worry.

With the World Video Exporter plugin enabled, you will see a corresponding world_timestamps.csv file that contains the timestamp for each frame in the video. To synchronize against gaze values, you can compare timestamps to find the closest matching values
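
A minimal sketch of that matching, assuming both exports use a timestamp [ns] column (as described further down in this thread):

import pandas as pd

world = pd.read_csv("world_timestamps.csv")   # one row per scene video frame
gaze = pd.read_csv("gaze.csv")

# Recreate a 0-based world frame index
world = world.reset_index().rename(columns={"index": "world_index"})

# For every gaze sample, pick the scene frame with the nearest timestamp
matched = pd.merge_asof(
    gaze.sort_values("timestamp [ns]"),
    world.sort_values("timestamp [ns]"),
    on="timestamp [ns]",
    direction="nearest",
)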

user-15edb3 07 February, 2025, 11:57:31

Ok, will try that. But do consider including that just like how it was earlier. In fact, I remember the timestamps were in seconds starting from zero. Also, in Cloud I don't see this feature either, so the same has to be done there as well, I guess. Thanks anyway for the timely response.

user-f0ea5b 08 February, 2025, 18:37:04

Hello, I am doing my Master's thesis on smooth pursuit eye movements in athletes. I am trying to combine post-hoc object tracking for baseballs (Python scripts) with the environment camera feed. My lab has two Neons and I've built a prototype object tracker. For the final steps I will need a better refresh rate and resolution on the environment camera. Is there a way to upgrade the environment camera on the Neons or will I have to go the DIY route?

user-f4b730 09 February, 2025, 11:31:48

Hello, I have checked the Neon Player and Cloud system to define an Area of Interest. Based on the screenshots showing the desktop and online tool, it seems that the Cloud tool is more accurate at adjusting for the wide-angle distortion of the image when defining the ROI.

Is this just an impression, or do the two tools slightly differ?

Chat image Chat image

user-cdcab0 09 February, 2025, 16:27:41

Hi, @user-f4b730 - the difference is merely in the visualization of the surface bounding areas. Neon Player and Pupil Cloud both map gaze to an undistorted surface

user-cdcab0 09 February, 2025, 16:25:42

Hi, @user-f0ea5b - we've actually published an AlphaLab article that shows how to map gaze onto an alternative egocentric video using Neon. I think this is perfect for your use case!

user-f0ea5b 09 February, 2025, 17:03:53

That looks like it might solve my issue, thanks!

user-f0ea5b 10 February, 2025, 02:05:34

Hi, @user-cdcab0 - are you aware of any publications using this technique in the literature?

user-cdcab0 10 February, 2025, 05:13:58

We just published that how-to in late November, so it's pretty new still

user-b14b09 10 February, 2025, 10:29:01

My mobile phone got updated, and after the update I can't find the Pupil Labs Companion app. How can I get it back?

user-c2d375 10 February, 2025, 10:38:40

Hi @user-b14b09 👋🏻 Could you please open a ticket in our 🛟 troubleshooting channel, so we can assist you more effectively with this issue? Thank you!

user-b14b09 10 February, 2025, 10:29:52

I tried downloading from

user-b14b09 10 February, 2025, 10:30:17

I tried downloading it from the Google Play Store, but the offset correction is not working

user-613324 10 February, 2025, 14:44:28

Hi Neon team, we are using the Neon eye tracker to collect data for our clinical study. For the recordings that were done after noon on 01/23/2025, I noticed that gaze.csv has 2 times the number of rows of 3d_eye_states.csv (both of which were downloaded from the Pupil Cloud --> Timeseries data). When I scrutinized gaze.csv, I found that every other timestamp differs by about 5 ms, but in between them there's an extra timestamp that lags by only 50 to several hundred ns. I tried to re-download the Timeseries data from the Cloud, but it didn't help. Please help to debug this problem. Thanks!

user-f43a29 10 February, 2025, 17:43:25

Hi @user-613324 , could you open a support ticket in 🛟 troubleshooting about this?

user-a55486 10 February, 2025, 20:10:37

Hi team, we are doing some quite long recordings and we extracted the native format using a USB. We didn't upload to Pupil Cloud but would like to have the same preprocessing pipeline and output folder structure as those downloaded from Pupil Cloud. Is there any macro command to do this in neon player?

user-cdcab0 11 February, 2025, 11:55:13

My colleague, @user-f43a29, reminds me that we do have the pl-rec-export utility. While it can't do everything that Cloud or Neon Player can do, it may suit your needs

user-cdcab0 11 February, 2025, 10:33:41

No, I'm afraid we don't have anything like that at this time

user-613324 11 February, 2025, 03:33:16

Hi Neon team, I'm curious how, mathematically, the gaze is transformed from the 2D image space of the scene camera to the surface coordinate system, and I'm trying to reconstruct the process. I've read your documentation and the implementation of surface.py on GitHub. Based on my understanding, the process seems to be the following: given the coordinate of a gaze point in the 2D scene camera image (px and py, which correspond to the gaze x [px] and gaze y [px] columns in gaze.csv), find the corresponding homography matrix "dist_img_to_surf_trans" in the Neon Player exported surfaces/surf_positions_Screen.csv according to the matched world_timestamp. Let's refer to this matrix as mat, and let vec = [px, py, 1]. Then np.dot(vec, np.transpose(mat)) results in a 1x3 vector [v1, v2, v3], and transformed_x = v1/v3, transformed_y = 1-v2/v3, is that right? I compared the results with surfaces/gaze_positions_on_surface_Screen.csv, but they differ by about 3%.

user-cdcab0 11 February, 2025, 10:32:48

That seems right, but we let OpenCV do a lot of that math via cv2.perspectiveTransform. My first guess for your 3% difference is from camera distortion. Try undistorting your points before transforming (and if that's worse, try undistorting after the transformation instead)

user-37a2bd 11 February, 2025, 09:18:43

Hi Team, for the aoi metrics that we get from the downloaded enrichment data, we have a metric called Time to First Fixation. Am I right in saying that Time to First Fixation is measured w.r.t the start time of the recording itself and not measured against the events that we marked? If so is there a way to change it such that it is measured against the start time of the event rather than the start time of the recording itself?

user-613324 11 February, 2025, 12:52:19

Thanks Dom. But how should I undistort the points? Based on my understanding, gaze x [px] and gaze y [px] columns in gaze.csv are the coordinates of gaze in scene camera image space which includes distortion, is that right?

user-cdcab0 11 February, 2025, 12:58:40

The apriltag marker locations are detected in an undistorted image, and the homography is computed from that. So, I think you'd need to undistort your gaze points before transforming them in order to get them in the same space as the homography
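
A sketch of that suggestion, assuming the intrinsics live in scene_camera.json (the key names below are assumptions, check your own file) and that the undistorted points are paired with the undistorted-image homography from the surface export (e.g. img_to_surf_trans rather than dist_img_to_surf_trans):

import json

import cv2
import numpy as np

# Scene camera intrinsics; key names are assumptions
with open("scene_camera.json") as f:
    intrinsics = json.load(f)
K = np.array(intrinsics["camera_matrix"], dtype=np.float64)
D = np.array(intrinsics["distortion_coefficients"], dtype=np.float64)

def gaze_to_surface(gaze_px, homography):
    """Map one gaze point (pixels, distorted scene image) to surface coordinates."""
    pt = np.asarray(gaze_px, dtype=np.float64).reshape(1, 1, 2)

    # Undistort while staying in pixel coordinates (P=K reprojects onto the image plane)
    undistorted = cv2.undistortPoints(pt, K, D, P=K)

    # Let OpenCV apply the homography and the divide by w
    x, y = cv2.perspectiveTransform(undistorted, np.asarray(homography, dtype=np.float64)).ravel()

    # Flip y to match the bottom-left origin used in the surface export (the 1 - v2/v3 above)
    return x, 1.0 - y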

user-13f7bc 11 February, 2025, 13:37:19

has anyone used a gaze contingency task with babies/children with the Neon glasses? if so, could you share it?

user-871b55 11 February, 2025, 14:59:43

Hello, I'm considering a purchase of several neon pupil lab glasses. I would like to know first if the neon glasses are compatible with gel-based EEG recordings, both in terms of the ability to synchronize the two systems but also in terms of electrical noise. Thanks very much to any of you that might have experience with this issue.

user-480f4c 12 February, 2025, 09:54:25

Hi @user-871b55! Synchronizing Neon with EEG is possible using LSL, provided your EEG system supports it. You can find more details in our documentation: https://docs.pupil-labs.com/neon/data-collection/lab-streaming-layer/

If your EEG system does not support LSL, you can still synchronize data using Neon's Real-Time API: https://docs.pupil-labs.com/neon/real-time-api/tutorials/

There should be no issues using Neon with gel-based EEG setups. In fact, a recent study successfully combined Neon with a gel-based EEG system. You can check it out here: https://doi.org/10.48550/arXiv.2409.12562

If you'd like to discuss this further, I'd be happy to schedule a demo call. Feel free to reach out via [email removed] and I'll send you a link to book a meeting! 🙂

user-13f7bc 11 February, 2025, 16:56:47

@user-cdcab0 Can I ask about this page: https://docs.pupil-labs.com/alpha-lab/gaze-contingency-assistive/ Can I confirm if I did everything correctly - download the GitHub page, run both commands on the command line, this opens the Gaze-controlled Cursor Demo, then open Neon on the Companion phone in the demo window and modify the settings. Click on mouse control and close the window using right click. I did all of this, but there is no connection between the Neon app and the laptop; how is this set up so it knows my gaze coordinates?

user-cdcab0 11 February, 2025, 20:39:19

The Realtime API has the ability to automatically discover companion devices on the network, which is really nice for things like this. In some networks/configurations though, it may not work and the code would need to be adjusted to manually enter an IP address

user-f4b730 11 February, 2025, 16:58:51

Thanks @user-cdcab0 , will future versions of Neon Player visualise the ROI in the same way?

user-cdcab0 11 February, 2025, 20:42:09

That's something I've wanted to do, but we tend to prioritize Neon Player efforts on functionality and you're the first person to ask for this. There are a few upcoming updates to Neon Player, and if that doesn't take too much effort it may get rolled in too

user-037674 11 February, 2025, 17:01:05

Hi neon team and @user-d407c1 , is there any update on the post-editing of events in pupil cloud? would be highly appreciated. thanks a lot!

user-f43a29 12 February, 2025, 16:26:56

Hi @user-037674 and @user-1beb67 , this is now in Pupil Cloud. Refresh your browser tab and give it a try!

user-f43a29 12 February, 2025, 10:45:12

Hi @user-037674 and @user-1beb67 , I'm briefly stepping in for my colleague, @user-d407c1 . I am checking with the team and will update you.

user-613324 11 February, 2025, 17:09:22

Hi Neon team, for the world_index (e.g., the first column of the Neon Player exported surf_positions_XX.csv file), does it start from 0 or 1?

user-cdcab0 11 February, 2025, 21:12:20

0-indexed

user-613324 12 February, 2025, 00:12:52

In the world_timestamps.csv file exported by Neon Player, the current version only has one column, named "timestamp [ns]". I believe this is in the same time system as the timestamps in the gaze.csv file exported from the Cloud, right? But the older version of the world_timestamps.csv file exported by Neon Player has 2 columns, with the first column named # timestamps [seconds]. How do I relate this time to the timestamps in the gaze.csv file?

user-cdcab0 12 February, 2025, 01:50:47

Someone else asked about this recently as well

I believe this is in the same time system as the timestamps in gaze.csv file exported from the Cloud

Yes, the timestamps are measured in nanoseconds since the Unix epoch. Pretty much our "standard" time format now. If you have an old recording, you should be able to open it in newer versions of Neon Player to generate updated output files

user-613324 12 February, 2025, 14:27:54

Thanks for confirming. But my question was how to synchronize against gaze values for the older version of the world_timestamps.csv file exported by Neon Player, that is, when the first column is # timestamps [seconds]. I know that this denotes the seconds elapsed relative to the first camera frame. But where can I find the Unix-epoch timestamp of the first camera frame, so that I can reconstruct the older version of world_timestamps.csv into Unix-epoch time?

user-37a2bd 12 February, 2025, 08:37:32

Hi Team, while running the scanpath Google Colab sheet, on the first step where it asks us to link to Google Drive, it asks for a bunch of permissions. Do I have to grant all of them in order for the program to work? I see an option asking for permission to access contact information, which I feel is not necessary. Please let me know. I am currently getting the following error.

Chat image

user-480f4c 12 February, 2025, 10:57:22

Hi @user-37a2bd! Yes, to use the scanpath Google Colab notebook you need to allow access to your Google Drive. The reason is that you need to get the enrichment folder within your Google Drive for the tool to access the data, and then the scanpath videos/image will be saved within this folder in your Drive.

user-b55ba6 12 February, 2025, 10:17:35

Hi!

I am looking to calculate eye yaw and pitch (NOT HEAD) in degrees. I am looking at the eye state data sheet. I have two questions about this.

The variable "eyeballcenter" is in mm. I am confused about what the optical axis variable represents. The documentation states both variables are in mm (https://docs.pupil-labs.com/neon/data-collection/data-streams/#_3d-eye-states), but the variable "optical axis" goes from -1 to 1. The magnitude is 1, so I guess it is a unit vector. My first question is: is the vector origin the corresponding eyeball center in the scene coordinate system?

My second question is whether there is any other way to get the measure I want that I am not considering. The variables azimuth and elevation are comparable, but their origin is the scene camera (0,0,0).

Thanks, Andrés

nmt 12 February, 2025, 10:30:03

Hi @user-b55ba6!

The optical axis variable describes the orientation of the eye. It is represented as a vector originating from the centre of the eyeball and pointing towards the centre of the pupil, using Cartesian coordinates.

You can easily convert this to spherical coordinates (elevation and azimuth) to obtain the eye's orientation in degrees or radians. This conversion involves using trigonometric and square root functions.

You can see an example here
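
For reference, a small sketch of that conversion (sign conventions assumed: x right, y down, z forward in scene camera coordinates; flip signs if your convention differs):

import numpy as np

def optical_axis_to_spherical(x, y, z):
    """Convert an optical-axis unit vector to azimuth/elevation in degrees."""
    azimuth = np.degrees(np.arctan2(x, z))                   # + when the eye points right
    elevation = np.degrees(np.arctan2(-y, np.hypot(x, z)))   # + when the eye points up
    return azimuth, elevation

# Example: a vector pointing straight ahead
print(optical_axis_to_spherical(0.0, 0.0, 1.0))  # (0.0, 0.0)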

user-b55ba6 12 February, 2025, 10:32:12

Thanks Neil for the quick response. Just to check I understand correctly, the "optical axis" variable is then a unit vector pointing at the center of the pupil. Is this correct?

nmt 12 February, 2025, 10:36:19

Correct

user-b55ba6 12 February, 2025, 10:36:29

Thanks!

user-37a2bd 12 February, 2025, 13:11:22

Hi Nadia, after giving the permissions I have run the blocks but I get an error on selecting the wearers to be included.

Chat image

user-480f4c 12 February, 2025, 13:16:31

Hi @user-37a2bd, can you please open a ticket in our 🛟 troubleshooting channel? I'll assist you there in a private chat

user-cdcab0 12 February, 2025, 14:35:53

If you open the old recording in the latest Neon Player and export the world video, it will generate a new world_timestamps.csv file for you with timestamps in that format.

Alternatively, you can pull the timestamp out yourself from the Neon Scene Camera v1 ps1.time in the native recording data. Something like this:

timestamps = np.fromfile("Neon Scene Camera v1 ps1.time", dtype="<u8").astype(np.float64)
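
Building on that snippet, a sketch of reconstructing absolute timestamps for the old export, assuming (as stated above) that the old column is seconds relative to the first scene frame:

import numpy as np

# Absolute timestamp of every scene frame, in nanoseconds since the Unix epoch
frame_ts_ns = np.fromfile("Neon Scene Camera v1 ps1.time", dtype="<u8").astype(np.int64)
first_frame_ns = frame_ts_ns[0]

# old_seconds: the "# timestamps [seconds]" column from the old export, as a float array
# absolute_ns = first_frame_ns + np.round(old_seconds * 1e9).astype(np.int64)
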
user-613324 12 February, 2025, 14:37:18

Got it! thank you very much! this is super useful!

user-b442f7 12 February, 2025, 20:49:54

Hi team,

I want to connect neon to my computers directly with ethernet. I have the Motorola Edge 40 Pro companion phone and using the Anker hub recommended.

I followed to completion the guides posted here for both: (1) a latest-version Win11 laptop, and (2) an M2 macOS 13.5.1 MacBook, but was unable to connect with either, except a few times it worked on the Win11 laptop but was not reproducible...

(btw my colleague was able to get a reliable connection w/ the same setup on their M1 macOS 14 MacBook)

Any tip on where/how to start troubleshooting this? The win11 laptop is the main one we run experiments on so being able to connect reliably with low latency is important.

Thank you!

user-f43a29 12 February, 2025, 21:04:06

Hi @user-b442f7 , could you give these slightly updated steps for Windows 11 a try?

May I also ask what the research context is?

Note that you can also plug both devices via Ethernet into a router. Configuration & establishing a connection would be significantly easier, and you would still benefit from Ethernet's transmission speed.

Otherwise, establishing a direct Ethernet connection on macOS 13 may be somewhat different from macOS 14 and newer.

user-b442f7 12 February, 2025, 21:37:09

Hi Rob, thanks for the reply,

The linked guide was the one I was already using. I followed it exactly again and still wasn't able to connect. Not sure if this matters but ethernet IP assignment keeps switching to manual instead of automatic (DHCP)

What I did previously that made it work (briefly) was resetting the network and using this weird trick I saw online of creating and then deleting a network bridge. In that case I was able to trigger start/end and send event stamps via PsychoPy just fine. It no longer works.

Another detail that I remembered was that my windows might've updated itself since the last successful attempt.

Re: research context. We're running a task where people need to complete patterns based on images shown simultaneously. We are basically interested in people's fixation points and scan paths as part of their behavior and sync w/ neural data.

Re: router. We have to use the internal network/wifi provided by our institute, so I'm not sure if a router is possible. But I'm also not familiar with setting up routers...

user-f43a29 13 February, 2025, 09:29:47

Hi @user-b442f7 , when you say "sync with neural data", do you mean sync with an EEG device? Have you seen our Lab Streaming Layer (LSL) integration?

To clarify, the router does not need internet access. You would only be connecting Neon and the experiment computer to it via Ethernet cable. It simply acts as a junction point and manages IP addresses. For example, although we have not tested it, a Windows 10 user has reported out-of-the-box success when using this router to connect to Neon via Ethernet. While I cannot make a 100% guarantee about that router, according to them, they just plugged in and it worked. We have also had success with the Archer BE550.

If the router had WiFi, then you could even use it as a local hotspot and simply connect to Neon that way. Then, you can use LSL and Events via the Real-time API to do post-hoc time sync without needing to manage cables. In fact, it would be worth it to send Events for post-hoc sync even when tethered.

With respect to establishing a direct Ethernet connection, please note that those steps are not official instructions. There can be variability in how different Windows systems handle Internet Sharing; e.g., the network capabilities of the underlying hardware might also play a role. If a direct connection is indeed necessary, then it could be worth it to reach out to the technical support of the network card in your computer, but it might be easier to just test with a router first, as you would still have Ethernet-level transmission latency.

user-f389a1 13 February, 2025, 12:01:33

Hi Rob - thanks for your help. I am writing a script which uses different functions but follows the same steps as the PyPLR script. I noticed during the pre-processing steps that there is a mask-confidence step - but I can't find a confidence value in the Neon output - is there a value I can extract and use for this step (or is this step even necessary for Neon pupil analysis)?

user-f43a29 14 February, 2025, 10:58:52

Hi @user-f389a1, Pupil Core's confidence metric is related to its dark pupil detection method.

There is not yet a comparable "confidence" metric for Neon. Since Neon is deep-learning powered and calibration-free, "confidence" takes on a new meaning. Basically, the neural network always uses all information present in both eye images. When it has full view of the eyes, then you can have high confidence in the output.

In a future release, there will be a worn detector, and that could be used to differentiate between "low" and "high" confidence data. But, the basic idea would be that when Neon is worn normally throughout a recording, then you can skip the confidence mask step of PyPLR.

If you deem it necessary, then you could add a validation step to your experiment, as done in our whitepaper that evaluates the accuracy of Neon's pupillometry estimates.

user-edb34b 13 February, 2025, 12:28:24

Hi team, I would like a small precision on the gaze offset. If I apply a gaze offset on a first recording for a wearer, and then I apply another gaze offset for a second recording of the same wearer, will I get two different offsets or does the second one also update the one from the first recording?

user-b442f7 13 February, 2025, 17:27:59

Thanks Rob,

We actually do intracranial recording with a separate proprietary machine/software, so (I think) LSL won't work. Thanks for bringing it up though, didn't know it existed :)))

~~And sounds good, I'll try using a router. To clarify, do I still need to turn on internet sharing or do I just need to plug everything into the router LAN ports?~~ Solved!

user-f43a29 14 February, 2025, 10:54:35

Hi @user-b442f7 , many companies offer LSL integrations for their products, including proprietary devices. If you have interest and don't see it listed on LSL's list, then you might want to check with the manufacturer.

Based on your edit, I assume you figured it out, but just to clarify, with a router, the idea is that you essentially "plug in and go". You do not need to do any step from that guide if using a router, hence making the whole process easier.

user-6952ca 14 February, 2025, 15:56:46

Hello! I was wondering if it is possible to stream Neon data directly to a computer without using a wireless network. Thanks!

user-f43a29 14 February, 2025, 16:00:55

Hi @user-6952ca , yes, please see these discussions:

and this part of our online Documentation. If you need more clarification, just say so!

user-6952ca 14 February, 2025, 16:02:36

Looks like I should have just scrolled upward to have seen the answer! Thank you!

user-f43a29 14 February, 2025, 16:04:32

No worries! And, you are welcome!

user-f43a29 14 February, 2025, 16:05:59

Hi @user-edb34b , you will get two different offsets. This can be confirmed by checking the gaze_offset field of the info.json file for each recording.
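
For example (a minimal check; the exact shape of the stored value may vary):

import json

with open("info.json") as f:
    info = json.load(f)

print(info.get("gaze_offset"))  # the offset stored with this recording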

user-edb34b 14 February, 2025, 16:53:10

Hi Rob, thanks for your answer. However, when accessing this info.json file, all the offsets from my 6 recordings for 1 subject have the value (0, 0), despite an offset being applied (and saved) on each recording and visible on the Cloud. How is this field updated, and how can we be sure that the offset is applied in the CSV data?

user-f43a29 14 February, 2025, 18:46:20

Hi Rob, thanks for your answer. However

user-f2e507 17 February, 2025, 08:34:29

any reliable tutorial for using the Reference Image Mapper? The current interface wasted more than 3 hours of my time. Seriously considering returning my multiple Neons.

nmt 17 February, 2025, 09:16:40

Hi @user-f2e507! Thanks for your message. Reference Image Mapper can be an incredibly powerful enrichment once the key 'ingredients' are there. Could you share some more details about the environment you're trying to use it within? Sharing some images of the environment could help. You can also invite [email removed] to your workspace such that we can take a look and provide concrete feedback!

user-068852 17 February, 2025, 13:20:39

Hi there, I was having an issue with pupil cloud. I was wondering if there was a way to upload AOIs drawn outside of the cloud to enrichments. I see the only 2 ways are to either draw within the cloud, or use the SAM AOI to draw AOIs and have them uploaded to the cloud

nmt 18 February, 2025, 02:11:28

Hi @user-068852! As you mentioned, AOIs can be easily created using the AOI editor tool in Pupil Cloud. We also have the Alpha Lab guide that shows how to generate them with SAM, and then upload them to Cloud. It is technically possible to build on top of the Alpha Lab guide to create your own AOI definitions. However, if I may ask, what would be the purpose of doing this rather than just creating them manually in Cloud?

user-37a2bd 18 February, 2025, 09:28:37

Hi Team, is there any way for us to get the distance shown on the gaze video through Pupil Cloud? Do you think it is a feature that will be added in the future?

user-d407c1 18 February, 2025, 09:48:06

Hi @user-37a2bd ! Are you referring to estimating the depth at which the wearer is gazing?

user-37a2bd 18 February, 2025, 10:57:37

Yes Miguel.

user-d407c1 18 February, 2025, 11:14:31

@user-37a2bd Thanks for following up! Below is an explanation of how to get depth estimation and why it is currently not offered by Cloud.

Achieving robust and reliable depth estimation typically requires an RGB-D camera or LiDAR, which can be used to map gaze onto the depth frame; this requires a specific setup with a specific type of sensor/camera to be synced. Other approaches, such as monocular AI depth estimators, exist but are still computationally expensive and produce varying results. Newer models may arise, and we are always trying to stay on the edge of this, so there might be room for exploration here.

Alternatively, one can try leveraging vergence as a proxy. Vergence-based depth estimation works quite well at short distances (typically within 1 meter) but becomes far less reliable for farther objects due to minimal vergence angle changes, so it's not a great solution for all needs.

You can try these approaches locally by yourself, and they might work depending on the level of precision that you need. However, there is no built-in feature for this in Cloud. Feel free to request it on the 💡 features-requests channel, so that we can keep track of interest there and share any updates (if there are any).
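
If you want to experiment with the vergence idea locally, here is a rough, hedged sketch: it takes the per-eye eyeball centers (mm) and optical-axis unit vectors from the eye-state data (all in scene camera coordinates) and returns the midpoint of the shortest segment between the two gaze rays. Treat it as a starting point, not a validated depth estimator.

import numpy as np

def vergence_depth(c_left, d_left, c_right, d_right):
    """Midpoint between the two gaze rays and its distance (mm) from the scene camera."""
    c_left, d_left = np.asarray(c_left, float), np.asarray(d_left, float)
    c_right, d_right = np.asarray(c_right, float), np.asarray(d_right, float)

    # Closest points between the two rays c + t * d (standard skew-line formula)
    w0 = c_left - c_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    d, e = d_left @ w0, d_right @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:  # (nearly) parallel axes: no usable vergence signal
        return None, np.inf

    t_left = (b * e - c * d) / denom
    t_right = (a * e - b * d) / denom
    midpoint = 0.5 * ((c_left + t_left * d_left) + (c_right + t_right * d_right))
    return midpoint, float(np.linalg.norm(midpoint))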

user-bed573 18 February, 2025, 11:41:45

Hello, I am interested in getting high resolution images from the Neon scene camera.

From the data streams documentation, the scene camera resolution is 1600x1200, but that resolution is too low for our purpose.

Using the USB Camera app, I can see that the scene camera supports up to 3264 x 2448 resolution. https://play.google.com/store/apps/details?id=com.shenyaocn.android.usbcamera

Is it possible to receive higher resolution images, at perhaps a lower frame rate, or alternatively to pause the eye tracking and manually capture a high resolution image through another app?

user-d407c1 18 February, 2025, 12:14:56

Hi @user-bed573 👋 ! May I ask what application you have in mind that requires such a high resolution? Keep in mind that the camera is not HDR, and the sensor size will remain the same, so the benefit of a higher resolution on the image quality might be minimal. Depending on your use case and needs, you might be better off using an external third-party camera, as shown in this tutorial.

Additionally, note that other sensors, such as the eye cameras, the IMU, and the microphones, share the USB cable bandwidth, which is already near its limit; thus, increasing the resolution further may not be feasible.

user-bed573 18 February, 2025, 12:28:01

I am interested in reading distant text that the user is looking at, which can be difficult to distinguish with 1600x1200 resolution. I can see if a separate camera could work. Is there any way in the current system to get higher resolution images?

user-d407c1 18 February, 2025, 12:38:21

Unfortunately no, the current system can't collect data with the scene camera at a higher resolution. That said, there are several workarounds to potentially achieve your goal.

Besides syncing an external camera, as previously mentioned, you can also use the Reference Image Mapper enrichment in Cloud. This enrichment allows you to remap gaze onto a higher-resolution image, such as one taken with the Companion Device.

In a nutshell, you take a scanning recording along with a high-resolution picture of the text and surrounding environment at a distance that works for you, and then the Reference Image Mapper matches features in the scene and remaps the gaze data onto that image. Does this sound like a valid approach? Have you tried it?

user-d71076 18 February, 2025, 13:31:57

Hello!

I do:

scene, eyes, gaze = device.receive_matched_scene_and_eyes_video_frames_and_gaze()
print(gaze)

and get only:

GazeData(x=671.4378662109375, y=767.9850463867188, worn=True, timestamp_unix_seconds=1739884429.259284)

Before I was able to get more data. For example gaze object had these attributes:

gaze.eyeball_center_left_x,
gaze.eyeball_center_left_y,
gaze.eyeball_center_left_z

also I could get pupil diameters and optical axes.

What am I missing?

user-d407c1 18 February, 2025, 13:45:21

Hi @user-d71076 ! May I ask whether you have "Compute eye state" enabled on the Companion Settings?

user-d71076 18 February, 2025, 13:48:51

Thanks! 🙂 Now it works

user-f74204 18 February, 2025, 13:50:47

Hi all!

Is there anything similar to this helper function for the Pupil Neon? Previously I was using the Core, but I just received a Neon and I am trying to write a function that will get me real-time gaze coordinates on a surface.

https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py

user-d407c1 18 February, 2025, 13:53:29

Hi @user-f74204 👋 ! Have a look at this tutorial and its corresponding library, Real Time Screen Gaze; they show how to achieve that.

user-f74204 18 February, 2025, 15:10:11

I am having trouble installing the required pupil-apriltags library. It fails with this error

ERROR: Failed building wheel for pupil-apriltags
Failed to build pupil-apriltags
ERROR: Failed to build installable wheels for some pyproject.toml based projects (pupil-apriltags)

I have more output I can share, but it is around 226 lines. I can't really find a whole lot online about this. Is there a specific version of Python that you guys use or that this library is supported for? Currently, I am using Python 3.12

user-b55ba6 18 February, 2025, 14:01:53

Hi. I am having issues with the Neon Companion app detecting the Neon eye tracker. When I plug it in, the light turns on and the Neon appears as detected in the app, but a minute later it disappears. It has been a couple of months since I last used it, but today I was trying the IMU visualization utility, plimu, when the error started. Who should I contact for advice on this kind of error?

user-480f4c 18 February, 2025, 14:09:22

Hi @user-b55ba6! Can you open a ticket in our 🛟 troubleshooting channel? We can assist you there 🙂

user-6952ca 18 February, 2025, 16:46:16

Hello, all! How well do the Neon glasses record gaze data in low-light and no-light conditions? (Not concerned with scene camera)

nmt 19 February, 2025, 02:20:49

Hey, @user-6952ca! If you're not concerned about the scene camera, you can use Neon even in complete darkness. The eye cameras are equipped with IR illuminators, allowing us to capture eye images even when there's no visible light.

user-068852 19 February, 2025, 07:01:02

From what I could tell, the ones created manually have to be drawn by hand, and that is less precise than if I could use some form of photo editor to make exact shapes and then upload them on top of an image, unless there is some way to make precise shapes like a circle within Pupil Cloud?

user-d407c1 19 February, 2025, 10:06:06

Hi @user-068852 👋 ! Let me step in for my colleague @nmt . Yes, you can submit your own custom AOI masks to Cloud, but note that this feature is undocumented and subject to change without any notice.

With that in mind, you can see how this works in the code of the Alpha Lab SAM tutorial.

Basically, Cloud expects a POST request on this endpoint: /workspaces/{workspace_id}/projects/{project_id}/enrichments/{enrichment_id}/aois

This POST request should contain the API token (api-key) and workspace (workspace_id) in the headers, and a payload like the following.

payload = {
    "color": "<HEX color, as string>",
    "created_at": "<UTC datetime, ISO format, as string>",
    "description": "<string>",
    "enrichment_id": enrichment_id,
    "id": str(uuid.uuid4()),
    "mask_image_data_url": mask,  # image of max size 1024 px, base64-encoded (UTF-8 string), with the mask as non-zero pixels
    "name": "<string>",
    "updated_at": "<UTC datetime, ISO format, as string>",
}

where the mask would be an image encoded in base64.
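
A hedged sketch of such a request with the requests library; the host URL, API version prefix, and exact header spellings below are assumptions, so cross-check them against the SAM tutorial code linked above:

import base64
import uuid
from datetime import datetime, timezone

import requests

API_KEY = "<cloud API token>"
WORKSPACE_ID = "<workspace id>"
PROJECT_ID = "<project id>"
ENRICHMENT_ID = "<enrichment id>"

# Assumed host/prefix; verify against the tutorial code
url = (
    "https://api.cloud.pupil-labs.com/v2"
    f"/workspaces/{WORKSPACE_ID}/projects/{PROJECT_ID}"
    f"/enrichments/{ENRICHMENT_ID}/aois"
)

# Mask image (max 1024 px, non-zero pixels mark the AOI) as a base64 data URL
with open("my_aoi_mask.png", "rb") as f:
    mask = "data:image/png;base64," + base64.b64encode(f.read()).decode("utf-8")

now = datetime.now(timezone.utc).isoformat()
payload = {
    "color": "#ff0000",
    "created_at": now,
    "description": "",
    "enrichment_id": ENRICHMENT_ID,
    "id": str(uuid.uuid4()),
    "mask_image_data_url": mask,
    "name": "my custom AOI",
    "updated_at": now,
}

response = requests.post(
    url,
    json=payload,
    headers={"api-key": API_KEY, "workspace-id": WORKSPACE_ID},  # header names assumed
)
response.raise_for_status()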

user-d407c1 19 February, 2025, 07:20:01

@user-f74204 ! Thanks for following up. I could not replicate it. Could you kindly share some details about your setup? What OS and architecture are you using?

user-f74204 19 February, 2025, 15:26:24

Thanks, I figured it out. I was able to install it via pip on Python 3.7 and 3.8.

user-f74204 19 February, 2025, 15:27:33

Not sure when the library was last updated, but maybe a more recent version of Python broke something

user-d407c1 19 February, 2025, 15:49:05

@user-f74204 I did test it, and the package installs on Mac even on Python 3.14. If possible, it would be helpful if you could share more details about your setup, to figure out what could have gone wrong.

❯ uvx --python="3.14" --with pupil-apriltags python
   Building pupil-apriltags==1.0.4.post10
      Built numpy==2.2.3
      Built pupil-apriltags==1.0.4.post10            
      Built numpy==2.2.3                       
Installed 2 packages in 12ms
Python 3.14.0a5 (main, Feb 12 2025, 15:01:40) [Clang 19.1.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pupil_apriltags
>>> print(pupil_apriltags.Detector)
<class 'pupil_apriltags.bindings.Detector'>
>>>
user-97997c 19 February, 2025, 18:45:32

Hi everybody

I have a Pupil Neon and I am trying to make it work with MATLAB. Some scripts are OK, except for those that call get_calibration. In that case I get the error "Error: Python Error: DeviceError: (404, 'Failed to fetch calibration')". I have seen it in a previous post related to Pupil Invisible, but that is not my case

Can you help?

Thank you!

user-e7102b 19 February, 2025, 20:58:02

Hi all, I just received a Pupil Neon yesterday and I'm also trying to get it to work with MATLAB/Psychtoolbox. I've followed the installation guide and everything seemed to install OK, but I'm running into the following error when I attempt to run the demo script from matlab (demo_pupil_labs.m), which I think just means the pupil labs smartphone is not connecting:

Unable to resolve the name 'device.recording_cancel'.

Error in demo_pupil_labs (line 90)
device.recording_cancel();
^^^^^^^^^^^^^^^^^^^^^^^^^

Some info on my setup: MacBook Pro M2, MATLAB 2024b, Python version 3.9 (I've checked, and my MATLAB pyenv is set to this). I've created a hotspot on my cellphone, and I'm connecting both my MacBook Pro and the smartphone to this hotspot, so they should both be on the same network. I plan to buy a dedicated router for this purpose, but I think this hotspot setup should work for now?

Any advice would be appreciated! Thanks, Tom

user-f43a29 19 February, 2025, 21:34:06

Hi @user-e7102b , it sounds like your setup should be fine, although you might want to double-check that you've installed Python from python.org, as recommended by MathWorks. It sounds like a Python version provided by macOS might instead be getting used?

Could you try first running one of the more basic examples, such as get_diagnostic_info.m? Note that demo_pupil_labs.m also requires Psychtoolbox to be installed.

user-f43a29 19 February, 2025, 21:36:57

Hi everybody

user-e7102b 19 February, 2025, 21:41:53

Hi Rob, when I try running more basic scripts, like get_diagnostic_info.m or basic_recording.m I just get:

Error: Unrecognized function or variable 'Device'. Unable to resolve the name 'device.close'.

Error in get_diagnostic_info (line 63)
device.close();
^^^^^^^^^^^^^^

I'll try installing one of those newer versions of python, as suggested...

user-f43a29 19 February, 2025, 21:43:35

Hi @user-e7102b , then it sounds rather like the matlab folder has not been added to your MATLAB Path. If you do that, then those errors should resolve.

user-97c46e 20 February, 2025, 02:23:05

Hi everyone, I have two questions where I think I know the answer, but I would like to confirm with folks in the product group. 1) Is it possible to put a recording collected on one Companion device (device A) onto a different Companion device (device B) and have it uploaded to Pupil Cloud from device B? 2) Is it possible to upload a recording folder to Pupil Cloud in any other way apart from the Neon Companion app? For instance, might I be able to do this from Neon Player? I think the answer is "no" in both cases, but I just want to confirm that I did not miss anything

user-c2d375 20 February, 2025, 08:03:13

Hi @user-97c46e 👋🏻 You're correct - it's not possible to transfer recordings from one Companion device to another for upload to Pupil Cloud. Similarly, recordings cannot be uploaded to Pupil Cloud from Neon Player; uploads are only supported via the Neon Companion App on the device where the recording was made.

May I ask for further clarification on why you'd like to upload a recording from another device or Neon Player?

user-37a2bd 20 February, 2025, 11:59:28

Hi team, is there any way to increase the opacity of the heatmap generated on the Pupil Cloud?

user-480f4c 21 February, 2025, 09:19:44

Hi @user-37a2bd! You can adjust the scale parameter in the heatmap view on Pupil Cloud. Is that what you were referring to?

user-b6f43d 21 February, 2025, 08:13:15

Heyy Pupil Labs, for this application of integrating a 3rd-party camera with the eye tracker, it is recommended to use these devices (GoPro, Insta360, or DJI action camera).

Are there any recommendations as to which version or model is preferable or compatible for this application?

user-f43a29 21 February, 2025, 09:23:38

Hi @user-b6f43d , those devices are only suggestions, not recommendations. You can use in principle any camera that can be easily mounted with a similar view of the scene as Neon's forward-facing camera. To that end, a camera that can be comfortably head mounted, like the GoPro or Insta360, is an easy option. The goal of that tool is to give you the choice of camera that works best for your situation.

user-2d7dd5 21 February, 2025, 09:22:44

Hello Pupil Labs, Is there a way to synchronize the times of the Neon with a PC? The background is that I want to run a simulation on a computer and need the same time stamps for actions on the screen and the reaction of the test person for the evaluation. Please feel free to ask if you have any questions about the use case. Thanks for the answer in advance.

user-f43a29 21 February, 2025, 13:52:14

Hi @user-2d7dd5 , I would recommend checking our Documentation on Super Precise Time Sync and Neon's Lab Streaming Layer integration.

Do you need to do time synchronization during the experiment, or is post-hoc synchronization also acceptable?

user-37a2bd 21 February, 2025, 12:45:23

Not really, Nadia. I wanted to increase the density. Currently you can see through the generated heatmap to the actual image underneath. Is there a way to make the colors of the heatmap more visible? It doesn't matter if the reference image isn't seen properly. The problem we are having now is that when the heatmap is superimposed on the reference image, it becomes difficult to actually see the heatmap in some cases, no matter which color we use. That's why I was wondering if we can increase or decrease the opacity of the heatmap.

user-480f4c 21 February, 2025, 13:49:59

thanks for clarifying @user-37a2bd! Although it's not possible through Cloud's interface to change the heatmap's opacity, you can download the data and find the heatmap alone in the folder you download.

Also please note that the heatmap implementation is available in our documentation, so you can get the code and create your own overlay if you prefer: https://docs.pupil-labs.com/neon/pupil-cloud/visualizations/heatmap/
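
As a rough, hedged sketch of such a custom overlay (column names assumed to be fixation x [px] / fixation y [px] in reference-image pixels; alpha controls the opacity asked about above):

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.ndimage import gaussian_filter

image = plt.imread("reference_image.jpeg")
fixations = pd.read_csv("fixations.csv")

height, width = image.shape[:2]
hist, _, _ = np.histogram2d(
    fixations["fixation y [px]"],
    fixations["fixation x [px]"],
    bins=(height // 10, width // 10),
    range=[[0, height], [0, width]],
)
heat = gaussian_filter(hist, sigma=2)

plt.imshow(image)
plt.imshow(heat, extent=(0, width, height, 0), cmap="jet", alpha=0.8)  # raise/lower alpha to taste
plt.axis("off")
plt.savefig("heatmap_overlay.png", dpi=200, bbox_inches="tight")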

user-2d7dd5 21 February, 2025, 20:25:24

Hello Rob, Thank you for the links. We have just purchased the Neon, so we still need to get to know the system. (1) I have already tried to set the same server for the system time on PC and smartphone and to synchronize before starting each experiment, but sometimes this did not work. LSL is actually new to me too - that sounds like a good approach for my and my colleagues' use cases. (2) I only need synchronous time for the evaluation, so post-hoc would be fine. However, I have a lot of experiments planned, so I would like to automate the process as much as possible. As a last resort, I've already thought about using the video from the glasses to record the PC's timestamp and calculate the difference in time. But I would like to avoid that. I'll get back to you as soon as I've tested your suggestions. Thank you very much and have a nice weekend!

user-f43a29 21 February, 2025, 20:27:18

Hi @user-2d7dd5 , you can also schedule your free 30 minute Onboarding call, if you have not yet had it. Just send an email to info@pupil-labs.com with the original Order ID.

Then, we can discuss this over video, as well as how to use Neon optimally in your experiment.

And there should be no need to record the PC's display of the clock. There are automatic and accurate methods that are easier. LSL is of course one of them, as well as estimating the clock offset of both devices yourself, and applying that either in real-time or post-hoc.
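
One minimal pattern for the event-based route, assuming both clocks are reasonably in sync (or that you correct for the measured offset as described in the time-sync documentation); the event name here is just a placeholder:

import time

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# Timestamp taken from the PC clock at the moment something happens on screen;
# the event then appears on the recording timeline for post-hoc alignment
device.send_event("stimulus_onset", event_timestamp_unix_ns=time.time_ns())

device.close()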

user-56256e 22 February, 2025, 00:27:15

I'm curious... what kind of 3d visualization tool is used for this?

Chat image

user-cdcab0 24 February, 2025, 04:42:20

This comes from an internal tool. It's built on Python and OpenGL

user-bda2e6 24 February, 2025, 04:13:29

Hi! I am using Neon in PsychoPy, and in the HDF5 file I get at the end of experiments there are both monocular and binocular data. Is there a description of what each column is in both, i.e., what it measures and in what units?

user-cdcab0 24 February, 2025, 05:01:07

Hi, @user-bda2e6 - PsychoPy's documentation is a bit weak regarding their HDF5 data files. Some of the less-obvious fields we have documented on our page.

Regarding velocity, there are at least 3 different interpretations - angular velocity, velocity in screen-space, and velocity in scene-camera space. I'd recommend calculating the desired measure yourself post-hoc.

user-bda2e6 24 February, 2025, 04:15:41

And in the screen-based monocular data, there are columns for velocity_x and velocity_y but they are all 0. Is there a way to automatically get a screen-based velocity?

user-bda2e6 24 February, 2025, 04:15:44

Thank you!

user-d6a352 24 February, 2025, 04:32:55

Context: I have a Neon (is this thing on

user-bda2e6 24 February, 2025, 05:05:32

@user-cdcab0 Thank you for the quick reply! For calculating velocity post-hoc, do you have an open-source, readily available algorithm? I can do it myself, but I'm asking just in case

user-bda2e6 24 February, 2025, 05:06:31

For example, one of the difficulties in getting angular velocity is that I need the viewer's distance from the monitor. But I don't have this information

user-cdcab0 24 February, 2025, 05:09:03

Rather than working backwards to calculate the angle: if you have eye-state computation enabled in the Companion app, the optical axis vectors will be saved in your Neon recording, streamed to PsychoPy, and stored in the BinocularEyeSample events.
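
To illustrate the idea, angular velocity could then be derived from consecutive optical-axis samples roughly like this (array names are placeholders for however you read the samples out):

```python
# Sketch: angular velocity (deg/s) from consecutive optical-axis vectors.
# `axes` is an (N, 3) array of optical-axis vectors and `t` an (N,) array of
# timestamps in seconds - placeholders for however you load the samples.
import numpy as np

def angular_velocity(axes: np.ndarray, t: np.ndarray) -> np.ndarray:
    v = axes / np.linalg.norm(axes, axis=1, keepdims=True)   # ensure unit length
    dots = np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0)
    angles = np.degrees(np.arccos(dots))                     # angle between samples
    return angles / np.diff(t)                               # degrees per second
```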

user-bda2e6 24 February, 2025, 05:53:26

I see, this makes sense. Thank you very much!

user-613324 24 February, 2025, 13:04:28

Hi Neon team, I was using Neon Player 4.1.6 to process a recording that was made in April 2024. I re-downloaded the native recording data from Cloud today and loaded it into Neon Player. However, it crashed at the beginning. Please see below for the error message:

user-613324 24 February, 2025, 13:04:47

Chat image

user-cdcab0 24 February, 2025, 13:29:17

Hi, @user-613324 - do you mean 4.6.1 (the latest version)?

I'm not able to replicate this issue, so it's likely a problem with either your settings or something specific about that recording. Can you try resetting your Neon Player settings back to default and then loading the recording? To reset the settings to default, you can either use the button in the general settings of Neon Player, or delete or rename the neon_player_settings folder in your home directory.

user-613324 24 February, 2025, 13:34:22

yes, I meant 4.6.1

user-613324 24 February, 2025, 13:39:50

I deleted the neon_player_settings folder and re-loaded the freshly unzipped native recording data into Neon Player 4.6.1. The same error occurred.

user-cdcab0 24 February, 2025, 13:48:03

We'll have to take a deeper look then. Do you mind opening a ticket in πŸ›Ÿ troubleshooting?

user-817cfd 25 February, 2025, 14:41:16

I have noticed differences between the fixations detected in Neon Player and Pupil Cloud. It seems the algorithm used by Pupil Cloud detects more fixations in the same recording (for example, I have a recording which shows 180 fixations in Cloud, and only 159 in Neon Player). This makes it hard (and, to be honest, a bit annoying) to work in Neon Player using the fixations.csv file from Pupil Cloud. Investigating this in more detail, I noticed that Pupil Cloud is stricter, more often showing two shorter fixations where Neon Player shows one longer one.

Additionally, I saw another strange thing in Pupil Cloud: in an Excel aoi_metrics file I saw average fixation durations of 30, 45 and 55 ms. This seems strange to me, as according to the "Pupil Labs fixation detector whitepaper", the minimum fixation duration is set to 70 ms. Could this be related to the same thing?

I am relatively new to programming, so I am very thankful for the software you provide. I am just wondering which one I can rely on more. What I am doing at the moment is analysing the overlaid world video of mountain-bike runs. I do this fixation by fixation (to see whether it concerns look-ahead fixations or guiding fixations), but I also watch frame by frame to visually estimate the time between fixation onset and the passing of the front wheel at that location. I really like the Gaze History of the Vis Polyline for this. Nevertheless, this requires a lot of scrolling forwards and backwards, for which it would be useful if the fixation IDs in the CSV file coincided with those in Neon Player. Is there any advice you could give me on this, besides, obviously, using multiple screens? Thanks in advance.
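
For anyone wanting to compare the two exports side by side, a quick sketch with pandas could look like this (the column names are assumptions; check them against the headers of your own CSV files):

```python
# Sketch: compare fixation counts and durations between a Pupil Cloud export
# and a Neon Player export. Column names are assumptions - check them against
# the headers in your own CSV files.
import pandas as pd

cloud = pd.read_csv("cloud/fixations.csv")
player = pd.read_csv("neon_player/exports/000/fixations.csv")

print("Cloud fixations:      ", len(cloud))
print("Neon Player fixations:", len(player))
print("Cloud min duration [ms]:      ", cloud["duration [ms]"].min())
print("Neon Player min duration:     ", player["duration"].min())
```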

user-d407c1 26 February, 2025, 11:46:58

Hi @user-817cfd ! Could you create a ticket on πŸ›Ÿ troubleshooting and follow up with a recording ID such that we can investigate this?

user-80c70d 25 February, 2025, 15:27:08

Hi Neon team,

We wanted to get back to you because we had a couple more questions regarding the saccade detection algorithm implemented in your Cloud. Specifically, we would like to know whether and how smooth pursuit interferes with saccade detection. Your documentation states that saccades are detected and defined as starting after the previous fixation and ending with the following fixation. We are interested in using the identified saccade onsets as time-locking events for the analysis of concurrent EEG data. In this case, it would be important for us to be able to identify the onset of saccades that precede specific fixations (i.e., the beginning of directed eye movements towards a point of fixation). Do you see any issues in the way the saccade identification algorithm is implemented for our use case?

Thanks.

user-480f4c 26 February, 2025, 10:30:12

Hi @user-80c70d! Could you maybe share a few more details about your setup (e.g., does your design trigger smooth pursuit movements)? We detect saccades based on the fixation results, considering the gaps between fixations to be saccades. Note that this assumption only holds in the absence of smooth pursuit eye movements.
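
To make that assumption concrete, deriving saccades from the gaps between fixations could be sketched roughly like this (column names follow the Cloud fixations.csv export, but verify them against your own file):

```python
# Sketch: derive saccade onsets/offsets as the gaps between consecutive
# fixations. Column names follow the Cloud fixations.csv export - verify
# against your own file. Any smooth pursuit within a gap would be
# (incorrectly) included in the resulting "saccade".
import pandas as pd

fixations = pd.read_csv("fixations.csv").sort_values("start timestamp [ns]")

saccades = pd.DataFrame({
    "onset [ns]": fixations["end timestamp [ns]"].iloc[:-1].to_numpy(),
    "offset [ns]": fixations["start timestamp [ns]"].iloc[1:].to_numpy(),
})
saccades["duration [ms]"] = (saccades["offset [ns]"] - saccades["onset [ns]"]) / 1e6
```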

user-75d5ea 26 February, 2025, 09:48:01

Hi, we use the Neon glasses to record eye movements while watching images and videos on a screen. For the images we use the Reference Image Mapper to map the fixations onto the image (it's super helpful). Is there a similarly simple way to map the fixations onto the screen for the videos? Thanks for your help.

user-f43a29 26 February, 2025, 10:35:33

Hi @user-75d5ea , you might want to check out our Map Gaze Onto Dynamic Screen Content Alpha Lab guide. It sounds like what you are looking for. Let us know if not!

user-23177e 26 February, 2025, 10:42:06

Hi, I did search but could not directly find an answer, so I'm asking here. Do the scene camera exposure presets have fixed fps settings? I know that with manual mode you need to stay under 330, but how does it work for the presets? Can we rely on a fixed 30 fps here, or are you using a variable frame rate depending on the exposure?

user-f43a29 26 February, 2025, 15:56:33

Hi @user-23177e , Neon's scene camera records at a steady 30 FPS in all automatic exposure modes. The presets determine which intensity bands are given priority when the Neon Companion app automatically adjusts exposure.

This can be confirmed by using tools like ffprobe or VLC on the Scene Camera video in a recording folder.
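
For example, a rough way to check the average frame rate from Python via ffprobe might look like this (the video file name is a placeholder for the scene video in your recording folder):

```python
# Sketch: read the average frame rate of the scene video with ffprobe
# (requires ffprobe on your PATH). The file name is a placeholder for the
# scene video inside your Neon recording folder.
import subprocess

result = subprocess.run(
    [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=avg_frame_rate",
        "-of", "default=noprint_wrappers=1:nokey=1",
        "scene_video.mp4",
    ],
    capture_output=True, text=True, check=True,
)
print("avg_frame_rate:", result.stdout.strip())  # e.g. "30/1" for 30 FPS
```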

May I ask if you have seen discrepancies in recordings?

user-51f934 26 February, 2025, 17:44:00

Hi @user-f43a29 @user-480f4c, please, I would like to know the default time (in milliseconds) set on my Neon eye tracker to detect a fixation. Is this something I could change? I understand that the time to achieve fixation might vary across different tasks. Thank you.

user-f43a29 27 February, 2025, 12:00:37

Hi @user-51f934 , you can find the details of the fixation detector algorithm here. I believe you are asking for the minimum duration of a fixation in the context of Neon's algorithm, which is 70ms. The other parameters can be found in that report and you can also find a more detailed explanation for why they were chosen in the associated published article.

May I ask why you need to change the minimum duration parameter? If you indeed need to do so, then you could modify the code of pl-rec-export, which runs the fixation detection algorithm as described in the report linked above.
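
As a lighter-weight alternative to modifying pl-rec-export, you could also apply a stricter minimum duration post-hoc by filtering the exported fixations; note this can only raise, not lower, the built-in 70 ms threshold. A rough sketch (column name assumed from the Cloud export):

```python
# Sketch: apply a stricter minimum fixation duration post-hoc by filtering
# the exported fixations, rather than modifying pl-rec-export itself.
# This can only *raise* the effective threshold above the built-in 70 ms
# minimum. Column name is assumed - verify against your own fixations.csv.
import pandas as pd

MIN_DURATION_MS = 100  # your task-specific threshold

fixations = pd.read_csv("fixations.csv")
fixations = fixations[fixations["duration [ms]"] >= MIN_DURATION_MS]
fixations.to_csv("fixations_filtered.csv", index=False)
```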

user-e7102b 27 February, 2025, 00:09:32

Hi, is there a power pack solution you'd recommend if we want to extend the recording life of the smartphone (Moto Edge 40 Pro)? We often run multiple sessions back-to-back and might not have a chance to charge the smartphone sufficiently between sessions. I tried purchasing this magnetic wireless charging power pack (https://www.amazon.com/dp/B099284SRR?ref=ppx_yo2ov_dt_b_fed_asin_title&th=1) with the intention of sticking it to the back of the smartphone if the battery started to run low, but it doesn't adhere well to the back of the Moto Edge, and it also seems to cause the phone to crash and restart, so overall not a good option! Thanks!

user-d407c1 27 February, 2025, 08:41:40

Hi @user-e7102b πŸ‘‹ ! The Moto Edge 40 Pro supports wireless charging up to 15W, but it does not have built-in magnets for attachment. If magnetic attachment is essential for you, I’d recommend looking into third-party cases that include magnets to hold the phone in place.

For charging, you can use something like this wireless charger to maximize the 15W capability.

However, keep in mind that wireless charging can generate additional heat, which might lead to overheating and preventive shutdowns on the Companion Device. If you experience this, a battery pack with active cooling, like this one, could help keep the temperatures down.

I have tested the latter and it works, although it is a bit noisy; I generally use the fast charger.

nmt 27 February, 2025, 08:51:23

@user-e7102b, is a USB hub an option for you? These can also be used to keep the phone charged during data collection: https://docs.pupil-labs.com/neon/hardware/using-a-usb-hub/#using-a-usb-hub. How long are you planning to make recordings for btw?

user-e9e92c 27 February, 2025, 21:42:58

Hi Pupil Labs Team, thanks for the nice app and hardware. I have the Neon glasses and they work well. However, I ran into a problem today. My setup is the Motorola Edge 40 Pro + Neon glasses with the recommended Anker hub (phone charging via this hub) and a direct Ethernet connection to a Ubuntu computer. This computer runs ROS and some scripts using the Python Simple API to publish data on topics at 30 Hz (eye gaze, scene images, eye images, IMU). I was recording data in parallel with the Companion app and all was going well for about an hour before the app became unresponsive and stopped recording. I clicked "don't close", and then a message popped up asking me to unplug and replug the glasses. I did so and it started working again. I saved the data at this point and uploaded it to Pupil Cloud. Is recording within the app and streaming at the same time too taxing? It seemed to work for an hour, so I'm not sure what happened. Are there logs somewhere to diagnose the problem? Thanks!

user-f43a29 28 February, 2025, 11:47:44

Hi @user-e9e92c , simultaneous recording and streaming is not too taxing for the device. It is actually a typical way to use Neon.

Could you open a ticket in πŸ›Ÿ troubleshooting? I'll follow-up there.

user-b55ba6 28 February, 2025, 13:14:06

Hi !

I have two mutually related questions.

1) Is there documentation on how the eyeball center is calculated? And any notion of its accuracy?

2) I need to locate an object at a certain angle with respect to the eyeballs (3D coordinate system). Reading the documentation, I see the origin of this measure is a coordinate system centered on the scene camera. Can you say whether (0,0,0) is located on the surface of the scene camera, or somewhere deeper inside the frame? Again, I was looking for a measure of precision to know whether I could use this coordinate system with confidence. This is of course related to question 1 and how everything is calculated.

Thanks! AndrΓ©s

nmt 03 March, 2025, 09:08:52

Hi @user-b55ba6! Thanks for your question. Before I answer, I want to be sure I understand your goal. Can you elaborate more on what you mean by locating an object at a certain angle with respect to the eyeballs? Are you trying to compute, e.g. viewing depth based on the eye state measurements?

End of February archive