👓 neon



user-bc8e66 03 March, 2025, 14:20:42

Hi, how did you do the normalization for the x (gaze x [px]) and y (gaze y [px]) coordinates of gaze in "gaze.csv"? I need to know the max and min values of these pixels to cut the plane into grids.

user-f43a29 03 March, 2025, 14:25:01

Hi @user-a97d77 , if you are referring to gaze.csv from Pupil Cloud, then those coordinates are not normalized, but are rather in pixel units. They refer to positions in the scene camera image. For Neon, the scene camera image is 1600 pixels wide by 1200 pixels high.
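
For reference, a small sketch (not from the thread) of normalizing and gridding those pixel coordinates, assuming the Pupil Cloud column names quoted above and an example 4x3 grid:

import numpy as np
import pandas as pd

WIDTH, HEIGHT = 1600, 1200   # Neon scene camera resolution
N_COLS, N_ROWS = 4, 3        # example grid size (choose your own)

gaze = pd.read_csv("gaze.csv")
x = gaze["gaze x [px]"]
y = gaze["gaze y [px]"]

# Optional normalization to [0, 1]
gaze["x_norm"] = x / WIDTH
gaze["y_norm"] = y / HEIGHT

# Assign each gaze sample to a grid cell (clip handles samples on the image edge)
gaze["grid_col"] = np.clip((x // (WIDTH / N_COLS)).astype(int), 0, N_COLS - 1)
gaze["grid_row"] = np.clip((y // (HEIGHT / N_ROWS)).astype(int), 0, N_ROWS - 1)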

user-937ec6 03 March, 2025, 21:22:07

I am receiving the Lab Streaming Layer from the Neon companion application. As expected and per the gaze data fields, I am receiving 16 channels (fields) per sample. Can someone confirm for me the order of the fields being output over LSL? I checked the docs (https://docs.pupil-labs.com/neon/data-collection/lab-streaming-layer/) but they appear out of date. I took a stab at the order as follows:

"pixels_x","pixels_y","pupil_diameter_left","eyeball_center_left_x","eyeball_center_left_y","eyeballs_center_left_z","optical_axis_left_x","optical_axis_left_y","optical_axis_left_z","pupil_diameter_right","eyeball_center_right_x","eyeball_center_right_y","eyeball_center_right_z","optical_axis_right_x","optical_axis_right_y","optical_axis_right_z"

Thank you!

user-cdcab0 04 March, 2025, 10:13:11

What are you using to receive the LSL data stream? The order and format of the data come in the stream header, and the software you're using to receive the data should receive and parse that.
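
In case it helps, a small sketch of reading those channel labels with pylsl; the stream type "Gaze" used to resolve the outlet is an assumption, so adjust it to whatever the Companion app actually advertises:

from pylsl import StreamInlet, resolve_byprop

# Find the Neon gaze outlet (assumed to be advertised with type "Gaze")
streams = resolve_byprop("type", "Gaze", timeout=10)
inlet = StreamInlet(streams[0])

# Walk the <channels> element of the stream header and collect the labels
info = inlet.info()
channel = info.desc().child("channels").child("channel")
labels = []
for _ in range(info.channel_count()):
    labels.append(channel.child_value("label"))
    channel = channel.next_sibling()

print(labels)  # channel order as declared by the outlet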

user-edb34b 04 March, 2025, 15:26:09

Hi, I have four projects (=subjects) where the 3d_eye_states.csv file does not contain any values but the headers. This occurred for all the recordings from these 4 consecutive subjects, but not for other recordings collected before or after. Do you have any idea on how to fix that?

user-f43a29 05 March, 2025, 09:44:58

Hi @user-edb34b , could you open a ticket in 🛟 troubleshooting?

user-04e026 04 March, 2025, 16:22:48

Hello, I was wondering if there is any advice for helping the Neon better recognize the AprilTags? (Using Neon Player)

The lighting in our workspace is my main issue. Our setup requires the lights to be off, and the tags aren't able to be placed on the screen.

Currently I've rigged up "light boxes" to backlight the tags, but I'm getting really inconsistent results with that.

Is there some distance/size ratio at which the AprilTags are more likely to be unreadable?

Edit: I should add, we've tried leaving the lights on or having lamps on in the room to partially light things. However, these results are inconsistent too, as it seems the light is washing out the tags.

user-cdcab0 04 March, 2025, 20:28:00

If you can share your scene camera video (even just a single frame), I can better help you diagnose. Typically though, poor marker detection is due to markers being too small or not having enough margin around them. I also recommend adding more markers - many people just use 4, but if one fails to be detected, the mapping can be quite poor.

Also, in case you haven't seen it, the Surface Tracker plugin in Neon Player has image adjustments (brightness and contrast) that are applied under the hood before scanning each frame for tags. These are done "under the hood" to prevent the adjustments from affecting the exported video - the only problem is that it can be hard to know what effect your changes are having. So, there's also an "Image Adjustment" plugin where you can adjust the scene video itself.

So start with the "Image Adjustment" plugin and tweak the values until YOU can see the tags clearly. Then apply those same values to the Surface Tracker plugin's brightness/contrast settings.

Please keep in mind that these are fairly crude adjustments, though, and really the best thing you can do is start with tags that are highly visible.
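
If it helps to sanity-check individual frames outside Neon Player, here is a rough sketch using the pupil-apriltags package (a separate offline check, not what Neon Player does internally); the brightness/contrast values are placeholders to experiment with, and tag36h11 is assumed as the marker family:

import cv2
from pupil_apriltags import Detector

# Load a single exported scene frame as grayscale
frame = cv2.imread("scene_frame.png", cv2.IMREAD_GRAYSCALE)

# Crude brightness/contrast tweak: alpha scales contrast, beta shifts brightness
adjusted = cv2.convertScaleAbs(frame, alpha=1.5, beta=30)

detector = Detector(families="tag36h11")  # assumed marker family
detections = detector.detect(adjusted)
print(f"Detected {len(detections)} tags:", [d.tag_id for d in detections])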

user-057596 04 March, 2025, 19:54:41

Is it possible yet to use the Neon glasses as if they were a Pupil Core headset with a MOS, rather than with MacOS and Linux? There was a mention a while back of developing this facility, and our eye-tracking analysis software is developed for MOS, so it would save us the hassle and expense of needing one device to capture the data and then connecting to another device to analyse the data with our software.

nmt 05 March, 2025, 03:59:18

Hi @user-057596! Thank you for your question! Currently, using Neon glasses like a Pupil Core headset with MOS isn't supported or on our roadmap. Our development efforts have focused on other priorities to enhance the native Neon experience. Linux and MacOS should continue to work.

user-56256e 05 March, 2025, 00:06:57

Can we set the scene camera exposure using pupil_labs.realtime_api?

nmt 05 March, 2025, 04:04:17

Hi @user-56256e! No, this is not exposed via the real-time API

user-56256e 05 March, 2025, 00:17:49

My understanding is that we have two kinds of gaze angle outputs from Neon: (1) a cyclopean gaze point in 2D scene camera coordinates [px] and (2) binocular 3D gaze unit vectors (optical axes for the left and right eyes). I'm trying to convert the first cyclopean gaze data to a 3D gaze unit vector. Do you have a conversion from pixel to angular coordinates for the first data? Alternatively, is it possible to generate a cyclopean gaze vector from the second data (is this how Neon calculates the cyclopean gaze in pixel coordinates for (1))?

nmt 05 March, 2025, 04:18:23

Your understanding is almost correct. I'll provide some clarifications below:

1. Neon's gaze measurements are given as:
a) 2D coordinates in pixel space
b) The angle of a gaze ray in relation to the scene camera, in degrees

You can easily convert the latter (b) to a vector by transforming from spherical to Cartesian coordinates, as demonstrated in this example.

2. Neon's eye state measurements provide optical axis vectors, which point from the eyeball centre to the pupil centre. These measurements are independent of gaze, meaning that it's not really practical to transform eye state measurements into gaze measurements. Although this can technically be done with some calibration routine, I don't think it's relevant for your question.

I hope that makes sense!
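
For anyone looking for the spherical-to-Cartesian step mentioned in (b), a minimal sketch follows; the axis and sign convention (x right, y down, z forward in scene-camera coordinates, elevation positive upward) is an assumption to verify against the coordinate-system documentation:

import numpy as np

def gaze_ray_to_vector(azimuth_deg, elevation_deg):
    """Convert a gaze ray's azimuth/elevation [deg] to a unit direction vector."""
    az = np.deg2rad(azimuth_deg)
    el = np.deg2rad(elevation_deg)
    return np.array([
        np.cos(el) * np.sin(az),   # x: right in the scene camera image
        -np.sin(el),               # y: down in the scene camera image
        np.cos(el) * np.cos(az),   # z: forward along the camera's optical axis
    ])

print(gaze_ray_to_vector(0.0, 0.0))  # looking straight ahead -> [0, 0, 1]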

user-37a2bd 05 March, 2025, 06:05:16

Hi team, we are getting this message on our Cloud workspace login for the owner. I was under the impression that we would have unlimited storage for all of our devices? We currently have two Cores and four frames. Two frames were the latest additions.

user-480f4c 05 March, 2025, 07:20:33

Hi @user-37a2bd! Thanks for bringing this to our attention. I'll liaise with the relevant team and follow up asap

nmt 06 March, 2025, 01:55:00

Third-party eye camera

user-057596 06 March, 2025, 17:48:39

I've just updated the Neon Companion app, and now in the world view in the app the gaze circle stays in the same position as I move my head and gaze around, although it works as it should in the recording. Furthermore, with Neon Monitor, when the page loads you are presented with a warning that the video preview may not be available in iOS; the video is also frozen, and the gaze circle moves in the opposite direction to the movement of your eyes and head.

user-d407c1 07 March, 2025, 07:33:30

Hi @user-057596 👋! Would you mind trying to clear the cache of the Companion App?

If that doesn't resolve the issue, please follow up by opening a 🛟 troubleshooting ticket with relevant details, including your Companion Device model and Companion App version.

user-057596 07 March, 2025, 08:04:26

Hi @user-d407c1 how do you clear the cache again?

user-d407c1 07 March, 2025, 08:25:44

Press and hold the Companion App icon in the app launcher (home screen), then select:

App Info → Storage & cache → Clear cache

You can also clear storage.

Note that the naming may vary depending on your Companion Device model and Android version.

user-d54741 07 March, 2025, 15:31:10

Hello, sorry, I have an issue when I want to delete a recorded file from the Trash area. I want to free some space in Pupil Cloud, so I deleted some recorded videos, and they were moved to the Trash area. When I then tried to delete them permanently from Trash by right-clicking on them and selecting Delete, it gives me an "Internal Server Error". What should I do about this? Thanks in advance.

user-d407c1 07 March, 2025, 15:46:16

Hi @user-d54741! Could you create a ticket at 🛟 troubleshooting with the recording ID?

user-a4aa71 07 March, 2025, 18:36:14

Hello everyone, I need just a clarification (maybe it's a silly question). By buying the "bundle - I can see clearly now", is it possible to detach the Neon module and put it in the frame mount for the Pico or Quest 3? It's not clear to me whether, when buying the frame mount for the Quest 3, for example, the NEST PCB is integrated (I suppose yes), so that I could just take the Neon module mounted on one frame and mount it on the other frame. Thank you very much in advance!

nmt 10 March, 2025, 11:26:06

Hi @user-a4aa71! You're correct. You can swap the module between frames. This video shows you how: https://docs.pupil-labs.com/neon/hardware/swapping-frames/#swapping-frames. So it's possible to purchase one ICSCN bundle, and just the pico or quest 3 frame.

user-b03a4c 10 March, 2025, 00:51:02

Hi, I used the Pupil Neon device to record about 10 situations. I converted these recordings into gaze-overlaid MP4 files using Neon Player (using 'Fixation detector' and 'World video exporter').

But a few files cannot be converted into gaze-overlaid MP4s, although I can see the gaze-overlaid movie on my local PC. I can confirm that 'Fixation detector' worked well, but the Neon Player app crashed during 'World video exporter'.

Could you check my recording files if I send them to you?

nmt 10 March, 2025, 11:27:03

Hi @user-b03a4c! Yes, we can take a look for you. Can you please open a ticket in 🛟 troubleshooting and we can coordinate there 🙂

user-3bcaa4 10 March, 2025, 11:56:23

Good morning! I am working with the Pupil Neon glasses. I was wondering if you have instructions for how to include a picture-in-picture with the eye video and blinks for display in Pupil Cloud.

nmt 10 March, 2025, 12:05:03

Hi @user-3bcaa4! Thanks for your question. The eye videos are not playable in Cloud. Blinks are visualised when the red gaze circle turns grey, and also in the timeline of the Cloud player window. If you want to watch the eye videos alongside the main recording, you could use Neon Player, our free desktop software. You'd need to enable the eye overlay plugin: https://docs.pupil-labs.com/neon/neon-player/#neon-player

user-80c70d 10 March, 2025, 15:46:48

Hi team, I have another question about our recordings. We have one recording in which we noticed that sometimes the image stays completely the same from one video frame to the next while the fixation moves. We think this should not be the case (e.g., in the background, people are walking and suddenly freeze for two frames before their movements continue as if nothing has happened). We have a screen recording of the effect that we could share.

user-480f4c 10 March, 2025, 18:33:08

Hi @user-80c70d! Do you mind opening a ticket in our 🛟 troubleshooting channel and sharing the recording ID, along with a screen recording showing the issue you report? We can continue the conversation there in a private chat.

user-796c70 10 March, 2025, 17:23:45

Hello - Recorded a session and received this message when I tried to add it to a project: "1 recording is processing, has errors, or blocked. This recording can not be added to the project." I can view the entire recording on the cloud and it seems OK - is there anything I can do to be able to add it to a project for enrichments? Thanks in advance!

user-480f4c 10 March, 2025, 18:27:54

Hi @user-796c70! Can you please open a ticket in our 🛟 troubleshooting channel? We'll assist you there in a private chat.

user-d086cf 10 March, 2025, 17:59:16

Hi guys. I was chatting with some of my collaborators, and they were interested in the Neon eye trackers, but apparently have some on-site restrictions on wifi/bluetooth devices, so they'd not be able to use the companion android device with the Neon. Is there any work being done to have a version of the recording software on the PC compatible with the Neon? Or would they be better off using the Core version?

user-480f4c 10 March, 2025, 18:31:40

Hi @user-d086cf! Could you share more details about your colleagues' setup?

Note that WiFi is not required to collect data with Neon. An internet connection is only needed for the initial setup (e.g., downloading the app from the Google Play Store), see these instructions on setup. Once everything is set up, you can record data without being connected to WiFi.

Recordings are first saved locally on the phone. After data collection, you can connect the phone to WiFi to upload the recordings to Pupil Cloud.

user-bda2e6 11 March, 2025, 15:41:28

In both the monocular and binocular HDF5 output, there are always three time fields: device_time, logged_time, and time. I know that logged_time should be PsychoPy's experiment clock. The problem is that logged_time always has many duplicate timestamps.

Chat image

user-cdcab0 12 March, 2025, 08:23:30

Hi, @user-bda2e6 - there have been some updates to the PsychoPy plugin recently that deal with clocks/synchronization. Could you try uninstalling and then re-installing the plugin so that you have the latest version? Let me know if there are still uncertainties after that

user-bda2e6 11 March, 2025, 15:45:55

This is not a big problem since I can interpolate the timestamps myself. However, when there are duplicate timestamps in adjacent rows, which row corresponds to the actual moment that timestamp occurs, the first appearance or the last?

user-bda2e6 12 March, 2025, 13:10:50

This question: "when there are duplicate timestamps in adjacent rows, which row corresponds to the actual moment that timestamp occurs, the first appearance or the last?"

Again thank you so much for the help

user-12efb7 12 March, 2025, 15:03:14

Hi, I'm facing an issue with empty CSV files after using the Reference Image Mapper and haven't found a solution through search here. Here's what I have: a recording (with special events) that lasts 1.27 min; the enrichment successfully identified fixation points on the reference image (visually), but 1) I can't visualize the heatmap (it seems there are no points, so I see only the reference image) and 2) all CSV files are empty. Where could the problem be?

user-480f4c 12 March, 2025, 15:05:01

Hi @user-12efb7! Could you please open a ticket in our 🛟 troubleshooting channel and we'll assist you there in a private chat?

user-cdcab0 12 March, 2025, 15:08:33

This question: "when there are duplicate

user-cc6fe4 12 March, 2025, 16:26:59

Hi, I am planning a new experiment using neon pupil labs and I am considering using it outside in the dark. Do the glasses work in the dark? Thank you

user-f43a29 13 March, 2025, 10:52:12

Hi @user-cc6fe4 , NeonNet works in complete darkness, thanks to the IR illuminators in the module. The scene camera's exposure can be adjusted to optimize it for dark conditions. It might be easiest to do a test or two and then check in with us again with how it works out.

user-a09a75 12 March, 2025, 19:27:22

Hello, I am using Neon during an experiment where participants interact with a collaborative robot in the construction of components. I need to log an event every time a participant looks at the robot (e.g., "look.robot.start" and "look.robot.stop"). Ideally, I would like to generate this event programmatically via the API. However, I am currently unable to reliably track this event, even on the Cloud.

I have tried creating a Reference Image Mapper of the scene and drawing an AOI on the robot, but I have encountered two main issues: 1) I am unsure whether this approach will work, as my AOI is dynamic rather than static. 2) I cannot verify this approach, since I can only see the AOI on the reference image and not on the video. I have also attempted to attach several AprilTags to the robot and create a Marker Mapper. However, in this case, I would need to create different surfaces for different parts of the robot (since, as the robot moves, some AprilTags may disappear while others appear), and in any case I do not see a way to create multiple surfaces within a single enrichment.

Would you be able to suggest a reliable solution for my problem? Specifically, I need to: 1) Accurately track when participants look at the cobot and log an event each time this occurs 2) (Ideally) Generate these events in real-time via API Thank you very much in advance for your help!

user-f43a29 13 March, 2025, 11:19:33

Hi @user-a09a75 , do you know the position of the robot and the articulation/pose of its joints over time?

user-3ee243 13 March, 2025, 05:28:59

Hi general question: after we finish recording on the Neon for a study, what would be the best recommended method in terms of sharing the data - including the video, and all metrics collected - to other collaborators who might not have an account for pupil cloud? Any simple and accessible way people have used?

user-f43a29 13 March, 2025, 10:49:24

Hi @user-3ee243 , all of your colleagues are allowed to make free accounts on Pupil Cloud. Has this not worked for them? Then, you can invite them as Collaborators to that Workspace. That would be the easiest way and you would all have access to the same interface, analysis tools, and metrics.

user-b57ada 13 March, 2025, 11:48:13

Hello! Could somebody tell me whether it is possible to export the recorded scene camera video with the gaze overlay (the red circle) from the cloud? Whenever I choose any version of 'Download', the only video I get is the scene camera video without the gaze overlay. Thank you! 🙏

user-f43a29 13 March, 2025, 11:49:36

Hi @user-b57ada , sure, you want to give the Video Renderer Visualization a try. Let us know if you need more info.

user-97997c 17 March, 2025, 10:41:05

Hi Pupil Labs, I am purchasing the frame "Every move you make" (ID: 20250311035037)

Is the CAD design available? Or other design specs? Do you have an estimate of the accuracy of the physical tool? Or even (if it is possible to export/import it) the rigid body definition from the OptiTrack software?

Thank you,

Agostino

nmt 18 March, 2025, 02:36:11

Hi @user-97997c! We don't really have this information – the marker cluster geometries and their pose with respect to Neon's scene camera will need to be established by the user in motion capture coordinates. That said, could you share more details about your proposed use case with the Every Move You Make frame and OptiTrack?

user-cbf227 17 March, 2025, 11:36:28

Hello.

I'm parsing data from neon companion + pupil cloud.

I want to obtain the average position of each fixation for each eye separately. To do this, I am analyzing the eye data for each fixation, obtained from 3d_eye_states.csv. This file provides the position of each eye separately (eye) and its unit gaze vector (v).

With this, I can make a projection, for example, at a distance of 1000mm and obtain a distant point in the gaze direction of each eye. It would be something like:

objectPoints = eye + 1000 * v

I then project this point back to screen coordinates using:

cv::projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, imagePoints);

For rvec and tvec, I am using null matrices since I understand that the eye positions are already given in camera coordinates, so no translation or rotation is needed.

I do this for all data lines corresponding to the fixation. In the end, I compute the average of the obtained pixel points.

This gives me the average fixation point for each eye, in pixels.

However, these points are not symmetric with respect to the fixation coordinates stored in fixations.csv. I expected them to be symmetric. They are close, but not symmetric.

Am I missing something in the projection, or have I misunderstood something?

Additionally, if I do something similar using the azimuth and elevation of the fixation from fixations.csv, calculating a distant point from the camera using polar coordinates and projecting it with cv::projectPoints, I do obtain the coordinates indicated as the fixation center in fixations.csv.

What is happening here? Am I doing something incorrectly?

Thank you

nmt 18 March, 2025, 02:16:23

Hi @user-cbf227! Thanks for your question. A clarification is needed which should help clear things up.

The eye state measurements you're referring to characterise the optical axis of each eye, represented as a vector originating at the centre of the eyeball and pointing towards the centre of the pupil, essentially characterising eyeball orientation. This is not the same as gaze.

Neon's gaze measurements are a separate data stream, represented as a 2D gaze point in scene camera coordinates, along with the elevation and azimuth of a gaze ray. It's important to note that the origin of the gaze ray is the scene camera, not the centre of each eyeball.

While both measurements (gaze and eye state) are provided by NeonNet, they are not directly comparable as they characterise slightly different properties of the visual system, and therefore, not symmetric with respect to the fixation coordinates.

If you want dual monocular fixations, you essentially have some options:
1. Perform a person-specific calibration routine that generates dual monocular gaze, i.e., transform the optical axes to gaze/visual axes.
2. Wait for NeonNet to provide dual monocular gaze and binocular gaze estimates concurrently. We have it on the roadmap for NeonNet to output left eye, right eye, and binocular gaze at the same time. These will be available in the raw data, though I can't share a concrete timeline for the release.
3. It's also currently possible in the Companion app to toggle between left, right, and binocular gaze, although I appreciate it only allows you to save one at a time.

I hope this helps! Let me know if you have follow-up questions.
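
For completeness, a sketch of the azimuth/elevation route mentioned above, i.e. treating the gaze ray as originating at the scene camera and projecting a distant point along it back to pixels; the axis/sign convention is an assumption, and the intrinsics are assumed to come from the scene camera calibration shipped with the recording:

import cv2
import numpy as np

def fixation_to_pixels(azimuth_deg, elevation_deg, camera_matrix, dist_coeffs):
    az, el = np.deg2rad([azimuth_deg, elevation_deg])
    # Unit direction in scene-camera coordinates (x right, y down, z forward; assumed)
    direction = np.array([
        np.cos(el) * np.sin(az),
        -np.sin(el),
        np.cos(el) * np.cos(az),
    ])
    object_point = (1000.0 * direction).reshape(1, 1, 3)  # point 1 m along the ray
    rvec = tvec = np.zeros(3)  # ray is already expressed in camera coordinates
    image_points, _ = cv2.projectPoints(
        object_point, rvec, tvec, camera_matrix, dist_coeffs
    )
    return image_points.reshape(2)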

user-6952ca 17 March, 2025, 17:01:43

Hello! Is gaze data recorded by Neon saved onto a micro SD card on the companion device?

nmt 18 March, 2025, 02:21:30

Hi @user-6952ca! Recordings are saved onto the Companion device's internal filesystem. Not an SD card.

user-796c70 18 March, 2025, 17:14:51

Hello - a question ... Is it possible to view a recording immediately after completion on a connected laptop device, before it is uploaded to the pupil cloud? We'd like to allow a pitcher to view a windup and delivery immediately after the pitch is completed without a wait - thanks

nmt 19 March, 2025, 09:33:24

Hi @user-796c70! It's of course possible to play back recordings on the Companion device immediately for review. Have you already tried that / is the screen big enough? If not, there are a couple of options. Firstly, to answer your last question: yes, you can use the Monitor App to stream the eye tracking footage in real time. You can read more about that in this section of the docs. Secondly, it's possible to transfer your recordings to a laptop running Neon Player, our free desktop software, and load the recording there. This isn't immediate, as you need to transfer the recordings, e.g. via USB, but it would be relatively quick and easy.

user-796c70 18 March, 2025, 17:16:00

Also, if it's not possible to view it immediately - is it possible to send the live stream to another app via the neon.local:8080 port?

user-688acf 19 March, 2025, 08:02:56

good morning, does anybody happen to know if and where I can set the detection threshold for fixations in neon player? thx in advance

user-cdcab0 19 March, 2025, 08:51:16

Hi, @user-688acf - sorry, but it's not configurable. However, the current version of Neon Player uses pl-rec-export under the hood for fixation calculation. This tool doesn't directly expose any timing thresholds as configurable parameters, but it is open source, so one could tune it to their specific needs.

user-a7588f 19 March, 2025, 08:31:44

Hi 🙂

I'm working with the Pupil Labs Neon glasses and the Pupil Labs Asynchronous API for streaming data. There's an issue with the framerate that isn't entirely clear from the API guide. While the gaze data is captured at 200Hz, the scene video (which we use to detect the screen in real time) is only recorded at 30Hz. The problem arises when combining these two streams: the recommended implementation syncs the gaze data to the scene video's framerate, meaning you lose the high-frequency gaze data.

I've been experimenting with a workaround to decouple the two streams (still a work in progress), but I'm wondering if there's a more elegant solution. Specifically, is it possible, by design, to overlay the gaze data onto the scene video while maintaining the full 200Hz framerate of the gaze data? Of course, I could have missed something in the documentation, please let me know if that is the case. Thank you!

user-cdcab0 19 March, 2025, 09:10:04

Hi, @user-a7588f - I suppose you're using the receive_matched_scene_video_frame_and_gaze function?

Instead, you can use device.receive_scene_video_frame() and device.receive_gaze_datum() to pull the data separately.

If called with no arguments, these functions will block until the requested data is ready, but you can specify a timeout_seconds argument here to control how long the call will block waiting for data before giving up. Using a value of 0 will cause the function call to return immediately - either with the data (if it was available) or None if it was not.

So something like this would get you 200Hz gaze with 30Hz scene video (depending on factors like the speed of your network, workload of the pc, etc):

import time

import cv2

# ... snip ... (device set up beforehand, e.g. via discover_one_device())
gaze = None
scene_frame = None

while True:
    # Non-blocking reads (timeout of 0): keep the latest datum seen so far
    gaze = device.receive_gaze_datum(0) or gaze
    scene_frame = device.receive_scene_video_frame(0) or scene_frame

    # Wait until at least one gaze datum and one scene frame have arrived
    if None in (gaze, scene_frame):
        time.sleep(0.5)
        continue

    # Draw the most recent gaze point onto the most recent scene frame
    frame = scene_frame.bgr_pixels
    cv2.circle(
        frame,
        (int(gaze.x), int(gaze.y)),
        radius=80,
        color=(0, 0, 255),
        thickness=15,
    )

    cv2.imshow("Scene camera with gaze overlay", frame)
    if cv2.pollKey() & 0xFF == 27:  # Esc to quit
        break

user-796c70 19 March, 2025, 10:35:10

Thanks Neil - I'm looking to replay the captured video pretty much immediately, in slow motion, on a screen that is larger than the device running the Companion app. Dealing with athletes that are warmed up and can't wait for a USB transfer - they can wait 20-30 seconds, but not longer, before getting back to activity. I'd like to review their last play with them immediately after completion and before the next one.

user-21cddf 19 March, 2025, 18:14:51

Hello, I am wondering if anyone has run into a problem when trying to permanently delete videos from the drive - they don't delete?

nmt 20 March, 2025, 04:10:43

Hi @user-21cddf! If the recordings you're trying to delete are currently part of an enrichment, that might be why you cannot delete them. This is a failsafe.

user-21cddf 19 March, 2025, 18:59:09

I am also wondering if the scanning recording for the Reference Image Mapper has to be from the same device as the videos, or from the same day? I've been having issues where an identical scanning recording does not create a good match for the model, even if the context, lighting, etc. are identical. I would also appreciate any tips on using the Reference Image Mapper in general :)

nmt 20 March, 2025, 04:11:48

It should not matter. But so that we can provide more concrete feedback, would you be able to invite [email removed] to your workspace so we can take a look?

user-a4e164 19 March, 2025, 22:13:41

Hi there!

We're hesitating between upgrading our older Pupil Core (with the original eye camera on a single eye) to the newer binocular Pupil Core, or switching to a Neon. Could you please help me clarify a few things ?

1/ Is it fair to say that compared to the Core, the Neon is actually a downgrade in terms of accuracy (reported as 1.3° with offset correction for the Neon, and 0.60° after calibration for the Core)? Is it the case that better resistance to slippage for the Neon somehow offsets the loss in accuracy in actual everyday use?

2/ Is it the case that the "Just act natural" frame would be the most suitable for in-lab studies in terms of generally fitting well to all head shapes? I think I understand the "better safe than sorry" frame fits more tightly and could be less suitable for some participants. What about the "Is this thing on" and "Nothing to see here" options - is there a meaningful difference in fit or in accuracy?

3/ Is it correct that I can buy the Neon with a "Just act natural" frame, add an empty additional "All fun and games" frame for cheap, and easily swap between the two?

4/ Is it correct that you have to use the Neon Companion app on a smartphone and cannot connect the Neon directly to a computer? I'm a bit confused about this. Our current setup with the Core is that we use PsychoPy to display auditory stimuli, and at key moments in the task we have PsychoPy send a notification to Pupil Remote which we then use to synchronize timestamps. How exactly could we make this work if we can't actually connect the Neon to the computer? (Note that participants in this study don't actually look at the computer so we can't even use the world camera to synchronize the two.)

5/ After ordering, what is the approximate delay to receive a Core vs. a Neon ?

Thanks for all your help ! :)

nmt 20 March, 2025, 04:34:16

Hi @user-a4e164! Certainly, I'll respond to your points below:

1. No, not really. It is true that Core can get you very accurate data, but this requires very controlled conditions, e.g., head very stable, 2D calibration pipeline (not robust to slippage), short recordings and repeated calibrations, constant lighting, etc. In practical everyday use, Neon provides equal to or better accuracy than Core and it is very robust to slippage. It's just much easier to obtain high-quality data over the course of testing. It also provides rich datastreams.
2. This really depends on the types of studies. Because Neon is modular, it can be fitted into different frames best suited to a specific use case. The frame closest to Core is called 'Is This Thing On'. It has no peripheral visual field occlusion and works well for many lab studies. However, 'Better Safe Than Sorry' is also an option. Both of these frames are more snug fitting than Just Act Natural. There is no difference in accuracy across frames - the same module is used.
3. Correct!
4. Also correct. We actually have a Neon-PsychoPy integration. Neon connects to PsychoPy over a local network; you can use a tethered connection via a USB hub and Ethernet for low-latency communication. But if you just want to send events to Neon, this is possible via our real-time API (see the sketch below).
5. Our target fulfilment is within 10 days + 3 days shipping for both Core and Neon. However, we can provide a more concrete estimate when generating a quote.

If you'd like, we can also arrange a demo and Q&A session via video call. Just let me know, and I can send a link!
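
A minimal sketch of sending an event from the stimulus PC over the real-time API mentioned in point 4; it assumes the phone and the PC are on the same local network and uses the simple Python client:

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()       # finds the Neon Companion phone on the network
device.send_event("stimulus_onset")  # timestamped on arrival and saved in the recording
device.close()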

user-f43a29 20 March, 2025, 11:29:54

Hi @user-a09a75 , glad to hear it worked out with the gaze overlay!

All of the datastreams from Neon are timestamped from the same high-precision clock, so all of Neon's data are synchronized by default. The Native Recording Data and Pupil Cloud's Timeseries Data are in principle the same, so the approach is equally valid for both formats. Pupil Cloud re-processes gaze at 200Hz and computes some additional streams.

The pl-neon-recording library used in @user-d407c1's YOLO example is a Python interface to work with the open data saved on the phone. Internally, it uses pyav and interpolates when sampling streams.

Regarding the LSL synchronization, when Stream to LSL is enabled and you send an Event, it is automatically saved in the Neon recording timeline, as well as LSL's. It sounds like you are already doing this.

Note that converting to different timezones does not change approaches to synchronization. It can even obscure the process. Timezone conversion only changes how the timestamps are visually represented for display purposes. For accuracy, it is easier to stick with the default formats and save the timezone conversion for a final step.

Since your analysis is already done post-hoc, I'd personally stick with doing it like that. I'd assume it is easier than migrating to a real-time implementation.

If you want to apply YOLO or similar real-time models to your situation, I'd recommend checking out one of our Support Packages. Then, we can dedicate resources to assist with code & implementation. Briefly, whenever gaze is on an object that YOLO has detected, you can send an Event (a rough sketch follows below).
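
To make that last idea concrete, here is a rough sketch; detect_robot_bbox is a hypothetical stand-in for whatever detector (e.g. YOLO) you run on the scene frames, and the event names follow the ones mentioned earlier in the thread:

from pupil_labs.realtime_api.simple import discover_one_device

def detect_robot_bbox(bgr_image):
    # Hypothetical placeholder: replace with your actual detector (e.g. YOLO).
    # Returns a bounding box in scene-camera pixel coordinates.
    return 600, 400, 1000, 800

device = discover_one_device()
looking = False

while True:
    # Matched scene frame and gaze sample from the Companion app
    frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
    x0, y0, x1, y1 = detect_robot_bbox(frame.bgr_pixels)

    on_robot = x0 <= gaze.x <= x1 and y0 <= gaze.y <= y1
    if on_robot and not looking:
        device.send_event("look.robot.start")
    elif not on_robot and looking:
        device.send_event("look.robot.stop")
    looking = on_robot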

user-a09a75 20 March, 2025, 12:09:29

Ok, everything is clear! I just want to stress a little bit the issue of post-hoc event alignment. Right now, the procedure I'm following consists of:

1) Sending events to Neon with "Stream to LSL" enabled, so they are recorded both in the Neon timeline and in the LSL timeline. These include default events like record begin, record end, and experimental session start and end.
- When downloaded from the Neon Cloud, these events have timestamps in UTC.
- When recorded via LSL, the timestamps are in the LSL time domain, which (if I understand correctly) are relative to when LabRecorder was started.

2) Synchronizing post-hoc the other events (gaze on cobot or not) in the LSL timeline. This is the tricky part, because these events only reference the Neon timeline and their timestamps are only in UTC.

So, why I converted Neon timestamps from UTC to CET: since the LSL XDF files include metadata that provides the recording start time in CET, I can use this as a reference and align all timestamps in the recording relative to that start timestamp. This is why, for convenience, I also converted Neon timestamps from UTC to CET, allowing me to compare and align LSL and Neon timestamps more easily.

I hope this explanation clarifies my approach! If you have any suggestions for a better or more direct method, I'd love to hear them! 😊
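
As a generic alternative to the CET conversion (this is not Pupil Labs code, and the numeric values are placeholders): if the same event appears in both timelines, a single offset maps every Neon UTC timestamp onto the LSL clock directly, and averaging that offset over several shared events makes it more robust:

NS_PER_S = 1e9

# The same event (e.g. recording begin) as it appears in both timelines
event_utc_ns = 1_741_617_000_123_456_789  # from events.csv, "timestamp [ns]" (UTC epoch)
event_lsl_s = 1234.567                    # the matching marker in the XDF recording

offset_s = event_lsl_s - event_utc_ns / NS_PER_S

def neon_to_lsl(utc_ns):
    """Map a Neon UTC timestamp in nanoseconds onto the LSL clock in seconds."""
    return utc_ns / NS_PER_S + offset_s

# e.g. a post-hoc "look.robot.start" event taken from the Neon timeline
print(neon_to_lsl(1_741_617_010_000_000_000))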

nmt 21 March, 2025, 00:04:26

Motion Capture w. Neon

nmt 21 March, 2025, 00:28:30

Reference Image Mapper

user-2d96f8 21 March, 2025, 08:23:05

Hello! I'm using Pupil Neon for the first time, and I'm encountering an issue where there's an offset between where I'm actually looking and where my gaze is being detected. I've performed manual offset correction and inter-eye distance measurement in the app, but there's still some offset remaining in certain areas.

https://www.youtube.com/watch?v=08z9t3BZ7OI The video is a screen capture of the recording from the app. I'm actually looking at the position of the white circle. As you can see in the last part, there's still a significant offset remaining in the left edge area of the screen.

Is there any way to address this issue? Thank you!

user-d407c1 21 March, 2025, 08:29:14

Hi @user-2d96f8 ๐Ÿ‘‹ ! May I ask, are you wearing your own spectacles under Neon? Can you share a screenshot of your eye images so we can better understand what's happening.

Feel free to open a ticket in ๐Ÿ›Ÿ troubleshootingto share that image.

user-e13d09 21 March, 2025, 17:51:00

Hi! I was wondering if any of you could point me towards where I can find the script to convert raw Pupil Labs data into CSV files.

user-d407c1 21 March, 2025, 18:00:00

Hi @user-e13d09 👋! To convert from Native Recording Format to CSV files locally, you can use pl-rec-export. Let me know if you need help with it.

user-afb0c1 24 March, 2025, 06:56:08

I'm testing the same video clip with 5 people - is it possible to aggregate the gaze points of each subject in the SAME frame of analysis?

Because I would love to see the distribution of attention at a certain frame.

Please kindly advise.

user-f43a29 24 March, 2025, 08:37:47

Hi @user-afb0c1 , are you using Pupil Cloud?

user-afb0c1 24 March, 2025, 06:56:23

👓 neon

user-2d96f8 24 March, 2025, 07:05:50

Hi, It would be really helpful if we could develop with the realtime API without needing physical devices. Are there any mocks or simulations available to support this kind of development?

user-f43a29 24 March, 2025, 08:34:53

Hi @user-2d96f8 , we currently do not offer something like that, but perhaps members of the community have created it.

You could approach this by taking the data from a Neon recording and streaming it over the network from a local server. You would need to simulate at least some of the functions described in the Under the Hood section of the Real-time API docs. You would probably also want to simulate the sampling rates of the different datastreams.

I'm not sure, though, whether it would be the most robust way to develop, since there is always a chance of discrepancy from the official version running in the Neon Companion app, and any time the app is updated, the mock may similarly need to be updated.

May I ask what your use case is?

user-fea107 25 March, 2025, 13:06:30

Hello! I have been attempting to run Face Mapper data overnight, but Pupil Cloud has been stuck in the "Processing" stage for over 10 hours. I wanted to check if the website is currently experiencing issues. Is there anything I can do to resolve this and get it working again?

Thank you for your assistance.

user-edb34b 25 March, 2025, 15:37:23

Hello, same for me with both the Face Mapper and the Marker Mapper - the enrichments have been "Processing" for 24 hours now. Thanks for your assistance.

user-a09f5d 25 March, 2025, 15:49:59

I can confirm that I am also experiencing the same issue with pupil cloud as @user-fea107 and @user-edb34b. I have been trying to run a RIM enrichment and it has also been stuck on the processing stage for almost 24h.

user-d407c1 25 March, 2025, 16:26:21

Hi @user-a09f5d , @user-fea107 , @user-edb34b 👋!

Cloud is currently under an exceptionally high load, which might result in longer enrichment processing times.

First of all apologies, as you may experience some delays until results are available. The team is actively working to speed things up and prevent this from happening in the future.

We appreciate your patience!

user-665dc2 26 March, 2025, 12:08:33

Hi, I am working with AOI gaze data to determine what percentage of gaze data is not detected by the Pupil Cloud enrichment and the Neon glasses. Is there a way to calculate that? I have tried looking at the data where the gaze position on surface x and y are empty, but that was not the case in any raw data. Is there a better way to detect the gaze coverage percentage of the eye-tracking system?

user-f43a29 26 March, 2025, 12:16:50

Hi @user-665dc2 , since you mention on surface, you are looking at the gaze.csv file from the Marker Mapper Enrichment, correct?

To be sure I can provide the best answer, may I ask for more clarification about what is meant by gaze coverage percentage and gaze not detected?

user-665dc2 26 March, 2025, 13:09:50

Is there a way to tell when the system failed to detect gaze - for example, due to low light, failure to detect the AprilTag markers, or other external/internal disturbances?

user-41d16e 28 March, 2025, 05:00:38

Hi! I'm looking into the timestamps for the eye tracker's world video. It seems like world_timestamps.csv has a different length compared to the timestamps obtained from neon_recording.scene.ts (world_timestamps is longer for most of my recordings); upon closer inspection, it also seems like their values can differ substantially when referring to the same frame in the video. I was wondering if this is expected or if I'm missing something? Any help is appreciated, thanks in advance!

user-d407c1 28 March, 2025, 09:00:21

Hi @user-41d16e 👋! Some sensors can have a slight startup delay. Rather than discarding that early data, in Cloud we fill the scene camera feed with gray/empty frames — this ensures no data is lost. pl-neon-recording will not generate those empty frames by default, and timestamps start at the beginning of the scene camera feed.

Is this what you are observing? Or are the timestamps completely different? If you share some example data (through 🛟 troubleshooting), we can have a look.

user-afb0c1 28 March, 2025, 13:05:29

I don't think that's what I want to do.

Imagine that we have 4 people who see the same video clip of 1 min 30 sec (separately, one by one).

Then I would love to see the distribution of fixations of all 4 people at a certain time.

Could we do that with Pupil Labs?

user-f43a29 28 March, 2025, 17:11:02

Hi @user-afb0c1 , I see. So, something like a beeswarm visualization? While that is currently not possible on Pupil Cloud, you could build on the code in the Scanpath Alpha Lab guide to do this.

Otherwise, feel free to make a 💡 features-requests!

user-afb0c1 28 March, 2025, 13:05:45

I meant, on Pupil Cloud

user-afb0c1 28 March, 2025, 13:05:52

@user-f43a29

user-98b2a9 28 March, 2025, 13:39:38

Hi, we are having an issue with our recordings. We are recording two 30-minute sessions. While everything works perfectly fine for some recordings, others (and we could not find an error on our end) do not have any events in the recordings. This is most often, but not always, the first of the two 30-minute recordings. Specifically, we only get the events 'recording.start' and 'recording.end', but not any of the synchronization triggers sent via LSL, and we also do not get any of these.

user-f43a29 28 March, 2025, 17:11:22

Hi @user-98b2a9 , could you open a Support Ticket in 🛟 troubleshooting? We will follow up there.

user-98b2a9 28 March, 2025, 17:11:50

Sure, Thanks!

user-24f54b 28 March, 2025, 20:00:04

Hi all, for the IMU in the neon, are the X, Y and Z directions what you would typically expect? X is subject's left-right / medial-distal, Y is up and down towards the ceiling or floor / superior-inferior, Z is forward-back / anterior-posterior?

user-f43a29 28 March, 2025, 20:08:59

Hi @user-24f54b , the IMU's axes are documented & diagrammed here. You might also be interested in our IMU Transformations Alpha Lab guide.

user-4a0c14 31 March, 2025, 22:11:47

Hey there, was wondering if anyone had undertaken a project that involved gaze-contingent displays/stimuli that respond to the eyetracking data in real time. I know the latency of such a system would depend on one's own network, since the coordinates are transmitted wirelessly, but I was hoping somebody could share what latency they were able to achieve with this type of setup?

also if this is not the best channel to send this question, would much appreciate being redirected, thanks!

user-d407c1 01 April, 2025, 06:49:03

Hi @user-4a0c14 👋! Have you seen our Build Gaze-Contingent Assistive Applications Alpha Lab article? Not only can you see an example there, but the code is open source, so you can run it locally and test what latency you see in your own setup.

End of March archive