I am using the Neon player. When comparing the fixations export with the fixations on surface export, the timestamps differ a little bit in the last 3 places. The duration of the fixations is the same, indicating that the IDs are correctly assigned. What is interesting is that all of the missing fixations are in one block at the end. This means that the last fixations are always missing, e.g. 3000 fixations are found in the fixations export and fixations 1-2800 are found in the fixations on surface export. There are no cases in which a fixation is missing in between. This seems really odd to me. Additionally, in most cases where fixations are missing, about 10% of fixations are lost, but in some cases less than 30% of the total fixations are listed in the fixations on surface export.
Hi, @user-ab3403 - would you be able to share a recording with us? If it's on Pupil Cloud, you can invite [email removed]. If it's not on Pupil Cloud, you can upload it to a file sharing service of your choice and share it with data@pupil-labs.com
Hi, @user-ab3403 - thanks for sending the recording. I manually examined the first 10 missing fixation IDs (49, 82, 155, 158, 163, 165, 167, 168, 171, 175) and found that all of these occur during segments of the recording where the surface is not detected (see attached image). This is typically due to motion blur. That's always going to occur when the head is moving, although larger markers are a little more robust in this regard.
Looking at your data, I think this case applies to 102 of your fixations. However, starting from fixation id #1580 to the end (1,901 fixations), a different problem occurs where the fixation event is reported without a position. This will require more investigation
Dear Pupil team, we had this problem for the second time now. What could be the problem?
Hi @user-9a8fd8 ! Could you kindly create a ticket in the 🛟 troubleshooting channel so we can follow up with some troubleshooting steps? Please include the serial number of your device and, if the recording is uploaded to Cloud, the recording ID as well.
I've seen the Neon Companion app is also supported by the Samsung Galaxy S25? If I had a Samsung Galaxy S24, could it work (worse, but work?) as well? Or is that a question that quickly devolves into every dev asking for their mobile phone and whether it works? 🙂
Hi @user-1391e7 , the Neon Companion app is not supported on the S24. Only the devices listed in this part of the Documentation are supported.
thank you!
Frame IDs correspond to frame names; these are written on the frame, so you can't change them.
Hi everyone, I have observed an issue with the world camera timestamps and was hoping you can help me understand what is happening. My understanding is that the file world_timestamps.csv contains an entry/row with a timestamp [ns] corresponding to each world camera video frame (Neon Scene Camera v1 ps1.mp4). However, when comparing the length of the “world_timestamps.csv” with the length of the “Neon Scene Camera v1 ps1.time” timestamps, the world_timestamps.csv always has more entries/rows. I discovered this when exporting the video frames from the mp4 file (using the method recommended in the tutorial 07_frame_extraction.ipynb). Interestingly, the number of extracted video frames matches the length of the “Neon Scene Camera v1 ps1.time” file, but there are always fewer frames/timestamps than entries/rows in the corresponding world_timestamps.csv files. The offset is different for each recording I checked (see screenshot below for more examples). Any idea what is going on? I was planning to use the world_timestamps.csv timestamps to match the gaze entries corresponding to each world camera frame, but since the world_timestamps.csv does not match the world camera video frames, I am unsure how to proceed.
Hi @user-33134f 👋
I think you might be mixing a few things together from different resources 😅
First — the 07_frame_extraction tutorial you're referencing is for 👁 core data, not 👓 neon.
Regarding the mismatch between Native and Cloud data: this has come up a few times before — you can check out past discussions here:
- https://discord.com/channels/285728493612957698/1047111711230009405/1355104213067370566
- https://discord.com/channels/285728493612957698/633564003846717444/1227150589352218687
- https://discord.com/channels/285728493612957698/1047111711230009405/1364251433612087416
In short: sensor data (like gaze or IMU) can be available before the scene video starts. Instead of discarding that early data, Cloud fills the video timeline with gray frames — you can see those in the Cloud player.
The world_timestamps.csv file is usually generated in Cloud and includes timestamps for all frames — including the gray fillers.
On the other hand, the native format doesn't contain those gray frames, which is why the timestamps don't align.
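If you ever need to line the Cloud timestamps up with the native scene video, here is a rough sketch of one way to do it (this assumes the gray filler frames only occur at the start of the Cloud video, and that the .time file holds one 64-bit nanosecond timestamp per native frame; both are worth verifying on your own recordings):

import numpy as np
import pandas as pd

# Native scene video timestamps: assumed to be one int64 nanosecond timestamp per frame.
native_ts = np.fromfile("Neon Scene Camera v1 ps1.time", dtype="<i8")

# Cloud export: timestamps for all frames, including the gray fillers.
cloud_ts = pd.read_csv("world_timestamps.csv")

# Assumption: the surplus rows are the prepended fillers, so drop them from the top.
n_fillers = len(cloud_ts) - len(native_ts)
aligned_ts = cloud_ts.iloc[n_fillers:].reset_index(drop=True)
print(f"Dropped {n_fillers} filler rows; {len(aligned_ts)} timestamps remain for {len(native_ts)} native frames.")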
Hope that helps clear things up! If you share whether you prefer to work with the Native data or the time series, I can follow up with some frame extraction examples
ah ok, thanks! I was using the native mp4 file, since some of the data is a bit older, so the time series scene video still had the gaze cursor enhancement. But this makes a lot of sense and I will just use the time series scene video instead. Sorry about that, I totally missed this explanation. Just to double check, if I wanted to synchronize the pupil camera recording (Neon Sensor Module mp4) from the native data as well, should each frame of the sensor correspond to the gaze.csv entries/rows? Or do I need to use a different video file here as well?
Hi everyone, i am very new to all of the NEON features and trying to wrap my head around it. I am utilising it for my research project on eye tracking of participants whilst completing a simulated surgical procedure.
I am wondering if anyone might be able to give me a general idea on how best to begin setting it up. I mainly need to track fixations, saccades, blinks, focus on key areas, pupil dilation, and gaze patterns. I had a read through the docs, and ideally heatmaps for the key areas would be awesome, as well as being able to tabulate my results.
i have no background in coding or using similar software so any help would be very much appreciated
Hi @user-96cc18! Setting up Neon to start acquiring eye tracking data is actually very straightforward. I’m not sure how far you’ve got already, but in principle, it’s really just a matter of connecting it to the Companion device, putting the system on a wearer, and hitting record—fixations, saccades, blinks, and gaze patterns are all recorded by default.
It’s what comes after, in terms of analysis and how you use these data streams, that requires more careful thought. A good place to start would be to clarify what you hope to achieve with this data in the context of surgical simulations. You mentioned heatmaps; these can be powerful tools for visualising which areas of an environment attracted attention. Perhaps you could elaborate on your research question, or is this more of an exploratory study? Either way, the more details you can share, the better we’ll be able to assist.
Moreover, could you describe your surgical simulation environment in more detail? What does the set-up look like, which areas are you interested in mapping gaze to, and so on? A photo would be extremely helpful!
My set up is quite simple: it will be to track a rectangular block of gelatin on a bench, and the operator's eye movements whilst completing a venipuncture procedure. There will also be an ultrasound system set up to help with the placement and correct insertion of the needle.
there have been some studies that have shown a correlation between the operator's fixations, saccades, and blinks and their expertise (i.e. experts fixate less on unnecessary tools and environmental distractions, shown by longer fixations on the key task at hand, faster saccades, etc.)
the key areas i would like to map gaze to would be the gelatin mould in the middle, as well as the computer once that has been set up properly
Thanks for describing your project in more detail and for sharing the images. That's very helpful. One more question: do you plan to use Pupil Cloud for your analysis?
Hello Neon team. We are batch processing several recordings for enrichment, each with 20 events. Is it possible to have a table with all the events (annotations in Pupil Core) and their corresponding timestamps, not just between two events?
Hi @user-b3b1d3 👋 ! Just to make sure I’m following — are you looking for a table with events and their corresponding timestamps? That is available in the events.csv when you download the Timeseries.
You can download it for all your recordings in a project, by going to the downloads tab of the project.
Hi! Is the main difference between the Pupil Core and the Neon the "no need" for calibration?
Hi @user-7c5b51 , apologies for letting that slip off the radar. To answer your question:
Neon's calibration-free nature is a big difference that simplifies many things, but there are also other main differences, some of which derive from that calibration-free nature:
@user-7c5b51 May I ask what your research goals are?
There are also other reasons, but these are a few that come to mind.
Our research is sound localization in space. With the eye trackers, we are hoping to understand how well patients with hearing loss are able to identify different sounds in space. The eye tracker would be useful in helping us map accuracy.
I see. This is another area where Neon can be helpful.
Neon has microphones, whereas Pupil Core does not. The audio streams, when enabled, are part of a standard Neon recording.
With Neon, you don't need to give any instructions, which can be helpful if the participants have hearing loss. You simply put it on and you are eyetracking. With Pupil Core, you have to instruct the participant on how to do the calibration (i.e., "first, roll your eyes around in a circle").
I could also imagine Neon's IMU being helpful here, since you can study how head & eye movements interact when localizing sound.
Thank you @user-f43a29
Hi, I would like to download the recorded eyetracking video including the gaze overlay + fixations from the cloud. Somehow I cannot manage. I have already tried the video renderer visualization but it says 0/19 recordings included and I am stuck. Thank you for the help in advance.
Hi @user-f4d2b3 👋🏻 May I ask which events you selected in the Temporal Selection section of the Video Renderer visualization? Since 0 out of your 19 recordings were included, it seems they may not contain the selected events. Could you please double-check this?
I selected recording.beginning and recording.end. Now I first created a Reference Image Mapper as an enrichment and now I think it looks more promising. But from the docs it wasn't clear to me that I need to make an enrichment first before using the renderer. Maybe I also just misunderstood 🙂
You do not need to create an enrichment to use the Video Renderer visualization. Once you've added your recordings to a project, the Video Renderer will automatically generate a video with gaze and fixation overlays for each recording included.
Since you selected the default events recording.begin and recording.end, all recordings added to the project should be included in the visualization, as these events are always present.
If you're still seeing 0 out of 19 recordings included despite selecting the default events, I’d like to take a closer look to better understand the issue. Could you please open a ticket in https://discord.com/channels/285728493612957698/1203979608563650650 ? I'll be happy to continue the conversation there.
hi. I found this video IMU visualisation interesting and useful. However, in the code that you have provided, i am only able to generate the animation path on the side. Is there any version that allows us to visualise the gaze overlay with the animation beside it, as shown in this link: https://youtu.be/3OdkHo3ThAE
Hi @user-15edb3 , to make that video, the ffmpeg command-line tool was used to horizontally stack them side-by-side.
Code like this is essentially what was used, but you may need to modify it if your videos have different width/height:
ffmpeg -i video1.mp4 -i video2.mp4 \
-filter_complex "[0:v][1:v]hstack=inputs=2" \
-c:v libx264 -crf 23 output.mp4
You could also use any other video editing tool to achieve the same effect.
The gaze + eye overlay was based on code from these pl-neon-recording examples:
https://drive.google.com/file/d/1OhyfLiLJMVQIwEZReFjdzjekFWcA41L0/view?usp=sharing here are the world video and IMU visualisation video. They don't seem to match, I believe. For example, when the wearer moved their head up and down, I don't see the sky-earth part being captured correctly. However, the imu.csv file has recorded the data correctly.
Hi @user-15edb3 , I double checked. Apologies, as it has been some time since I have looked at that code.
Would you be able to share the data from that recording with us? You can put it on Google Drive and share it with [email removed]
For now, you can change the line that defines relative_demo_video_ts to the code below and that should include an additional 10 seconds of data in your case:
relative_demo_video_ts = np.arange(
world["relative ts [s]"].iloc[0], world["relative ts [s]"].iloc[-1], 1/30
)
That line had been added when making the visualisation for the Alpha Lab guide; only a subsection of the recording was used to make the animation, for demonstration & pedagogical purposes.
Hello! I have a question for the staff and other researchers here - is it possible to extract pupillometry measurements, and if so how reliable are they? Did anyone look at pupillometry data of infants? Thanks:)
Hi @user-21cddf 👋 ! Yes, pupillometry is available with Neon — both in real time and post-hoc. It's reported in millimeters and accounts for gaze angle. For details on accuracy and robustness, I’d recommend checking out our whitepaper, where we replicate several traditional experiments. If you're working with infants, we recommend measuring their IPD and entering it in the wearer’s profile to improve the accuracy.
Hello,I would like to know if wearing contact lenses will affect the measurement of various eye tracking data, such as pupil diameter, etc. Thanks!
Hi @user-5a90f3! Neon works with contact lenses. Contact lenses do not affect the performance of eye tracking data, such as gaze measurements or 3D eye poses. There may be a small effect on the absolute measurement of pupil diameters (the effect on relative changes is negligible). Therefore, I'd recommend that the wearing of contact lenses (or not) should be consistent across trials when conducting pupillometry studies.
And why do I feel so hot when I wear it this time? I didn't feel this way before, and I only used it for six or seven minutes this time. What is the reason for this, is it normal, and how do I adjust it if not?
The module itself might warm up but should not overheat. This should not affect Neon's functionality.
May I ask which frame you're using?
See this message for reference: https://discord.com/channels/285728493612957698/1047111711230009405/1265679380471087246
Thank you very much. I used it for a long time, so the warming should be normal. I also have a question: I changed the wearer, and now it can't see both eyeballs at the same time. Adjusting the IED value doesn't work either. Is there any other workaround?
Hi @user-5a90f3 , I'm briefly stepping in for @nmt .
Could you send an image of what you mean by both eyes not being seen? You can send it via DM if you don't want it to be public.
Greetings! I have been working with the Pupil Labs Realtime API to retrieve data from the Neon. Is there a way to get eye openness data from the API? For example, I am keenly interested in obtaining the eye aspect ratio. Thanks!
Hi @user-937ec6 👋 ! If you are on the latest version of the app and have "Compute eye state" enabled, eye openness can be accessed from the gaze datum.
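For example, a minimal sketch with the simple Python client looks like this (the exact field names depend on your client version, so it's easiest to print the datum and inspect what's available):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
gaze = device.receive_gaze_datum()   # blocks until the next sample arrives
print(type(gaze).__name__)           # e.g. EyestateEyelidGazeData when eye state is enabled
print(gaze)                          # inspect the fields, including the eyelid/openness values
device.close()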
Thank you. I see the eyelid data and that is a big help. Am I correct in my assessment that the eyelid aperture height is available while the width is not?
Do you mean from the medial canthus or caruncula to the lateral canthus (i.e., eye width)?
If so, no, that's not measured.
Eye openness is defined here
Thanks for the response. I appreciate the need to be precise about defining eyelid width and height, but only to a point, since with wearable eye trackers the cameras move with the subject and thus have a relatively fixed frame of reference with respect to the face. That was my thinking anyway.
Hi team, in my experiment, I have two blocks of trials; the first block has 6 trials and the second block has 12 trials. I wonder if there is a way to mark the trials uniquely using plEvent event names, so that my event CSV data, after enrichment, can tell me which images each trial is associated with? Currently, the event CSV looks like the one below. Thanks a lot in advance!
Hi @user-9a1aed 👋 ! How are you annotating the events — using the Monitor app, or through the real-time Python client? Note that you can define unique names there. Regardless, every enrichment file also has the corresponding unique recording ID and a unique section ID that you can use to split them.
Sorry, I haven't mentioned my annotation. I am using a PsychoPy program to define the event name. The component is called plEvent.
Hi, @user-9a1aed - would it satisfy your needs if you could add the current block/trial number to your event name?
Good morning! I've been using the Mapping Correction on Pupil Cloud for a few days now, but every time the fixations end up in a different place from where I clicked. I have to correct the same fixation 2 or 3 times before it's in the right spot. Does anyone know if there's some way to fix that?
Hi @user-e33073 ! Could you create a ticket on 🛟 troubleshooting and share the following?
This would help us to investigate the issue, and open a direct line in case we need further information.
I added the plEvent in Psychopy, but it does not seem to allow me to add a block or trial number that's associated with event CSV data. My event names are like this: 'study1StartplEvent'
'study2StartplEvent'
'study3FormalStartplEvent'
Are you wanting to modify your experiment so that this info is included in the event name, or have you already collected your data and need to work with the data that you have?
Good morning everyone, We recently purchased a Neon (All Fun and Games), but I’m experiencing a few issues. At first, the device was frequently showing the error you can see in the attached screenshot. After searching here on Discord, I found that others recommended updating the "Neon Companion" app to address this problem. I updated the app, and this issue seems to no longer occur.
However, a new problem has come up: when I start a recording, sometimes it stops after a while and the device is no longer recognized, even though it is still physically connected.
Has anyone else experienced this? Could it be a defect with my device? Thank you very much for your help!
Hi @user-3c0808! Could you please open a ticket in our 🛟 troubleshooting channel? We'll assist you there in a private chat!
Hi Pupil Labs team, I have a few questions about the accuracy report (https://zenodo.org/records/10420388) that I hope you can help me answer. Did you include eye height and head rotation/IMU for the angular error calculation? Or is the code and accuracy calculation pipeline open source and accessible somewhere? In my validation pipeline calculations, I noticed some variance in my results that can probably be attributed to my error formula assuming a constant distance from the calibration targets, while in actuality the participants had slight variations in their head position, thus violating the assumptions of my formula. So I was wondering how you approached this issue in your "wild" condition of the accuracy pipeline, or did participants also assume a normalized head position similar to the head rest setup in the lab condition?
Hi, I am experiencing very strong overexposure of the road scenery with the Neon when driving a car, compared to Pupil Invisible. I tried Balanced and Highlights. What should I choose to have a clear picture of the road ahead and of the speedometer when I look at it?
Hi Pupil, I am using the Neon with a Win10 PC and the ANKER hub. To work with Matlab, I use WiFi from my phone for both the PC and the Moto phone. From neon.local:8080 I can see the live stream
BUT
When I run anything from Matlab, it gets stuck! Specifically, it gets stuck in the constructor (Device) at line 60, where the common streams are initialized. I tried to put a breakpoint there and initialize some other stream first, but it did not work.
Any suggestions? Thank you,
Agostino
Hi @user-97997c , to clarify, are you using the Ethernet via the Anker hub and a hotspot from the phone at the same time to connect to the same machine? Or, do you mean that you previously used the Anker hub and are now testing how a hotspot works in your situation?
Hi @user-3c26e4 , have you tried the scene camera's Manual Exposure setting?
[email removed] (Pupil Labs) I didn't want to use it, because I need the exposure to change dynamically when I fixate outside and inside of the car. I always used the automatic exposure with the Invisible and it worked great. I guess manual mode would work for the road scenery but not for the elements on the dashboard.
Hello team, I often encounter this problem when downloading time series data on the Cloud. I can't download it, but I can download other things normally. Changing the network and browser doesn't solve this problem. Currently, I'm using the Edge browser. Is there a solution?
Hi @user-5a90f3! Would you be able to elaborate on what you mean by can't download it? Do you see an error message? Or does the download not complete? Anything to help us better understand the issue would be great.
Hello, I am using the Crawl Walk Run module with children. The module warms up after a couple of minutes of use. Using the eye tracker for a longer period also leaves a red spot (due to the heat) on the child's forehead. Even though it is not too hot and the red spot goes away after a while, this is still a concern for parents. Is there any suggestion on how to solve it, like sticking something between the module and the child's skin? Thank you in advance
Hi @user-5b6ea3! The module is expected to warm to about 10 deg over ambient temperature. The Crawl Walk Run frame has a built-in heatsink to dissipate the warmth. That said, it's also possible to use a foam pad. Please contact [email removed] in this regard and we can help! Alternatively, if you DM me your email we can reach out directly 🙂
Validation
Hello team, I'm new to Neon and I have a very basic question. I'm trying to download recordings of more than 50 subjects from Pupil Cloud using the API, and I can get the raw data. However, I'm primarily interested in the enriched .csv outputs (e.g., saccades.csv, fixations.csv, blinks.csv, etc.). These files are included when I download recordings manually from the Pupil Cloud web interface, but I haven't found a way to access them programmatically through the API. Am I missing something very obvious here? Thank you!
Hi @user-f43a29
I was using both. I removed the WiFi and left the hub, and now it is working... thanks for the help!
Hi @user-97997c , you are welcome. That combination may have caused a conflict. If you try using WiFi/hotspot without the hub connected, then let us know how that works out.
@user-d407c1 I have been testing this today, and I am indeed receiving EyestateEyelidGazeData within Python as expected. However, I have observed that the LSL output from the application lacks these extra fields despite the following being enabled in the Neon app: Stream over LSL and Compute eye state. I assume this is a bug, as the expected intuitive behavior would be to see the additional EyestateEyelidGazeData fields output over Lab Streaming Layer if the aforementioned options are active in the Neon mobile application.
Hi @user-937ec6 👋 ! We do plan to update the LSL output to incorporate the new eyelid data; that said, I can't currently share an estimate of when it will be available. Feel free to create a 💡 features-requests post to track it or get notified.
Hello, I had a quick question where I was setting up the neon glasses for the first time while also using the pupil cloud. However, when I try to record with the glasses, they don't send any data to the cloud. I was wondering if I have to manage my email to allow it to connect to the cloud or anything like that?
Hi @user-7e49f7 👋 ! Quick question — is the “Upload to Cloud” toggle enabled in your settings?
Since you already created a ticket, let’s keep the discussion there so we can streamline support. Thanks!
Hi @user-3c26e4 👋
Neither Pupil Invisible nor Neon scene cameras are HDR, so in situations like driving on a sunny day or moving through a tunnel, parts of the video can become overexposed.
While the scene camera built into the module can’t be replaced, there’s a nice workaround if you need HDR capabilities. This tutorial shows how you can pair the system with an action camera and map gaze onto that footage.
Action cameras usually offer a wider field of view, higher frame rates, and HDR support. The trade-off is that they’re a bit bulkier, but you can mount them on the head rather than in front of the eyes, which gives you more flexibility. Let us know if you have any questions about it.
Hi @user-d407c1 With Invisible when I look ahead it corrects the exposure itself, and when I fixate at the speedometer it changes the exposure, making it brighter and the road ahead darker. When I gaze at the road again, the exposure changes. This is not the case with Neon. Which settings should I use to reach the desired exposure as with Invisible? I don't want to use an external action camera.
Both Neon and Pupil Invisible use auto-exposure settings. Pupil Invisible might appear to adapt faster because of its smaller field of view — it captures a narrower portion of the scene, which can make exposure shifts look more responsive.
That said, the two devices use different sensors, so their exposure behavior isn’t directly comparable.
Unfortunately, there’s no software setting that can properly expose both the inside of a car and the bright exterior at the same time, due to the lack of HDR support.
For completeness — and since you're not planning to use an external camera — if your focus is primarily on the outside view, you might try setting a fixed manual exposure. That would help maintain a clear image of the exterior, though interior visibility would be limited.
OK, thank you @user-d407c1
Hello Hello! I have been using the new option to compute Eye State and Fixations and am streaming these as well in a little test environment
I have noticed that, if I take off the glasses for a few seconds and put them on again,
blink detection stops entirely
I can see that the eye aperture is still there, but the act of taking them off seems to have put a stop to the detection after putting them on again
has anyone else seen this behavior as well? the only way I can currently "reignite" the blink detection is to stop and start a new recording
hm. might just be me, or actually something for neon-xr. sorry for the inconvenience
ok no, I was right after all. I just wasn't right about the time it took
could it be that whatever is detecting blinks unsubscribes or deactivates at some point, if no eyes were detected for some period?
Hi @user-1391e7 , could you open a Support Ticket in 🛟 troubleshooting about this?
thank you, will do!
Hi, our client has asked for the attached metrics. How do we get these metrics from the Pupil Neon glasses?
Hi @user-67b98a , a number of these metrics are already automatically provided for you by the AOI Heatmap tool in Pupil Cloud. It does batch processing of recordings in a Project by default, providing aggregate statistics. Revisits can be calculated from the resulting aoi_fixations.csv file.
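For example, a rough sketch of counting revisits with pandas could look like the below (the column names are written from memory, so double-check them against your export; note that this treats fixations outside all AOIs as not interrupting a visit, so adjust it to your own definition of a revisit):

import pandas as pd

df = pd.read_csv("aoi_fixations.csv")
df = df.sort_values(["recording id", "start timestamp [ns]"])  # column names assumed

# A fixation starts a new visit when the previous fixation in the same recording
# was on a different AOI.
new_visit = df.groupby("recording id")["aoi name"].shift() != df["aoi name"]
visits = df[new_visit].groupby(["recording id", "aoi name"]).size()
revisits = (visits - 1).clip(lower=0)
print(revisits)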
thank you,
Hello, I have a few recordings in my enrichments in which Pupil Cloud can only see one marker. Is there a way to manually indicate where the markers are or something so that I don't have to manually map all the fixations?
Hi @user-e33073 , the markers need to be detected by Marker Mapper to be usable. Manually indicating the markers would in principle be more time consuming than manually mapping all the fixations, as you would need to do it for N-1 markers per scene camera frame for robustness.
Could you share a screenshot, so that we can provide some feedback? You could also invite us as Collaborators on your Workspace to take a look.
Yes of course, I didn't mean frame by frame but I suppose that would be the only way
You could technically write custom code that allows you to manually mark them on a subset of frames and then interpolate their positions in the scene camera images, but this would not be as robust. Let's see first what can be achieved with the standard tools.
Most of my recordings are fine, but a few have some lighting problems and it didn't detect any fixations
Ah, yes, so it looks like the markers are not illuminated well enough. As a tip, if you cannot easily see the white-black of the tags in the image, such as here, then the algorithm will also have a harder time detecting them.
At the moment, you could load them into Neon Player and try its Image Processing plugin. Adjusting the recording's brightness/contrast might improve the markers' detectability and then you could use its Surface Tracker plugin.
For the future, you can also present the AprilTags as images on the display, rather than printed on paper, which can help mitigate low lighting conditions.
alright, thank you!
Hi, I have a recording that I need to be able to see and jump through the millisecond timestamps from the eye cameras. I've tried opening it in the Neon Player, but I can only get it to jump in the scene camera frame rate (30Hz). Is there a way to go through the video using the eye camera frame rate (200Hz)?
Hi @user-0e3d8b , is the goal to overlay the eye camera playback on the scene camera frames or simply to playback the eye camera video by itself?
Just playback the eye camera video by itself
Ok, to better help you, may I ask what the end goal is?
If you'd like, you can use a video player to open the Neon Sensor Module v1 psX.mp4 files in the recording directory. However, not all video players might properly play them back at 200 Hz.
If you would like to ensure 200Hz playback or be able to display some data alongside the playback, then you can follow the examples for our pl-neon-recording Python library and display eye camera frames via OpenCV, matplotlib, or a similar library.
Okay, thank you. I'll try to look at the pl-neon-recording. I have an LED light in the frame of the eye camera and I need to know the exact time the LED turns on. Eventually the goal will be to get rid of the LED and use LSL instead
I see! Just note that it should be an LED that emits in the IR range to be visible in the eye camera videos.
Using pl-neon-recording to load the images will make your investigation easier, since it will also provide Neon's timestamps for each eye camera frame. This will be more precise than inspecting in a typical video player.
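For reference, a rough sketch of finding the LED onset directly from the native files, using only OpenCV and NumPy, might look like this (the threshold is a placeholder you'd tune by inspecting the brightness trace, and the .time file is assumed to contain one 64-bit nanosecond timestamp per frame, matching the native recording format):

import cv2
import numpy as np

video = cv2.VideoCapture("Neon Sensor Module v1 ps1.mp4")
timestamps = np.fromfile("Neon Sensor Module v1 ps1.time", dtype="<i8")

mean_brightness = []
while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mean_brightness.append(gray.mean())
video.release()

mean_brightness = np.array(mean_brightness)
threshold = mean_brightness.mean() + 3 * mean_brightness.std()  # placeholder heuristic
led_on = np.flatnonzero(mean_brightness > threshold)
if led_on.size:
    # Frame counts can differ slightly from the .time length, so index defensively.
    idx = min(led_on[0], len(timestamps) - 1)
    print("First LED-on frame:", led_on[0], "timestamp [ns]:", timestamps[idx])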
And, you are using this for synchronization purposes? If so, you can also reference our Time Sync guide. Using the Real-time API, you can send clock offset-corrected Events for syncing with other devices, which you can validate against your LED method.
Hello, I am running a study where I use the Pupil Labs API to connect a computer to the eye-tracking phone over a WiFi connection to send events (and also to stream the eye videos to the computer to ensure proper alignment). This has been working fine for most of the last year, but recently it stopped working. The devices initially seem to connect fine, but then our software crashes because some connection issue happens when it goes to stream the eye videos. Any insight on what might be happening, and why it started happening recently? Could it be some update on the phone or computer, or in the WiFi security hindering the connection?
Hi @user-a85b4a! Would you be able to share which version of the Companion App you're currently running?
Hi Team, I am trying these AprilTag images. I want to check what it means that they are automatically detected and separated into each enrichment? Thanks a lot! If I use the same AprilTags for all the trials, is it not going to work properly? Thanks!
@user-9a1aed It means that when you create an enrichment, you select four or more markers to define a surface. That enrichment will then track those specific markers across the entire recording or across an event-based timeline, if events are defined.
If you use the same AprilTags for the whole session, it will still work, but you’ll end up with just one surface image for the entire session.
That’s why I recommend changing the markers whenever the stimulus changes, this way, you can generate a separate image (and enrichment) for each stimulus, which is often more useful for analysis.
https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/marker-mapper/
Hello Pupil Labs team, I recently used the Mapping Correction tool to manually adjust missing fixations from some enrichments I ran using Marker Mapper. I followed the instructions on your website and, after completing the process, clicked the "Done" button. In the Mapping Correction panel, I can see the additional fixations I accepted, so the changes seem to have been saved. However, when I download the bundle of CSV files and compare the gaze.csv and fixations.csv files, I notice that the number of TRUE/FALSE entries under "gaze detected on surface" and "fixation detected on surface" hasn't changed. I expected to see a higher number of detected fixations, given the manual corrections I made. Is this behaviour expected? Should the manual corrections be reflected in the exported CSV files, or is there another step required to update them? Thank you in advance for your help!
Hi, @user-e77b70! Would it be feasible for you to invite [email removed] to your workspace so we can take a closer look?
Hi! I'm looking into the different neon bundles to purchase for my lab, and I'm considering purchasing a standalone neon module for participants who might not know their prescription offhand or who might prefer to wear their own glasses. Is it possible to attach the neon module to everyday glasses frames?
Hi, @user-83c78b! The recommended workflow for users who require vision correction is to use our 'I Can See Clearly Now' frame, which comes with magnetically swappable lenses. Many groups have had a lot of success in their research with this set up. Contact lenses also work just fine! We don't have a mount for Neon that attaches to third-party glasses. In general, we don't recommend using Neon's standard frames with other glasses either. It can sometimes work, but due to the variety of sizes and shapes, it can be awkward and often Neon's cameras become occluded.
I had used a green LED, so I'll need to change that, but in the eye camera videos it still picks up the LED turning on and off. Shouldn't it not pick up green?
Yes, you'll likely be able to see it in the eye videos 🙂
Hi Neil, we use a device manager to keep the applications on the phone up to date. We are currently working with (I believe) the most recent version, 2.9.8. We first encountered the error on May 29th, but we hadn't used this particular device/phone in about a month. Our last successful use of this device was on April 23rd. I'm not sure of the exact version of the Companion App at that time, but it should have been up to date then. Some additional info: we have several devices being used at different sites, and another site recently used their device with no problem, so I'm not sure why this particular one is having an issue.
Hi @user-a85b4a! Yes, that's correct. As of today, 2.9.8-prod is the most recent version. It is unusual that streaming eye videos via the realtime api has suddenly started to cause an issue with your software. It could be something simple that's causing this, e.g. a network-specific problem. But I'd like to look into a few things. Could you please open a ticket in 🛟 troubleshooting and share the module serial number?
Hi Neil, Yes, it's feasible. I just sent an invitation! Thanks
Thanks, @user-e77b70. We'll look into it today
Mapping Correction issue
Thanks a lot mgg! it is super helpful! Do you know if I could change AprilTags for different trials in PsychoPy's program? Thanks!
Yes! Let's assume you are following the Example Coder Experiment in our documentation.
The AprilTagFrameStim class accepts marker_ids as an argument, so pass a list of IDs like [0, 1, 2, 3]. Just ensure the IDs match the number of markers 😉
Thanks! I am not familiar with python, so I am following the gaze contingent psychopy example. In the program, the tagFrame does not have an ID. Should I add IDs into the Marker IDs using 1,2,3,4 etc? https://github.com/pupil-labs/psychopy-gaze-contingent-demo Also, I have one loop to show 12 study trials with different stimuli, which means these study trials will have the same tags? When processing the data with enrichment, the marked image will be the same across these study trials, right? Is there a way to get around this? thx!
btw, during my experiments, i noticed that some participants' glasses kept falling down, and the recording shows that the eyes are not captured in the center. Is there a way to avoid this?
Let me break this down:
I have one loop to show 12 study trials with different stimuli, which means these study trials will have the same tags? When processing the data with enrichment, the marked image will be the same across these study trials, right? Is there a way to get around this? thx!
Yes — unless you change what's presented during each loop iteration (see PsychoPy’s Method of Constants), the same AprilTags will be shown across all trials. That means Marker Mapper will treat it as a single surface. That’s why I was suggesting that you present different markers for each trial/stimulus. This way, you’d end up with one enrichment per stimulus, and you can run your enrichment across the full session — the system will automatically detect the markers and only include the trials that contain them in the corresponding enrichment image.
To do this, you’ll notice that marker_id is also exposed in Builder, so you can set the markers there. But to assign 4 different markers per trial, you’d need to create a conditions file that updates them across the loop.
Setting up the loop, routine, and conditions file is more of a PsychoPy-specific task, so I’d recommend checking their forums if you need help on that side.
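For illustration, a hypothetical conditions file for such a loop could look something like this (the stimulus and marker columns are just example names; they need to match whatever variables your routine and tag component actually read, and the component must allow the field to be set every repeat):

stimulus,marker_ids
stim_A.png,"[0, 1, 2, 3]"
stim_B.png,"[4, 5, 6, 7]"
stim_C.png,"[8, 9, 10, 11]"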
If all of this sounds too complex, you can also keep the same tags across all trials and just:
- Create one enrichment per stimulus in Marker Mapper
- Restrict each enrichment based on events instead of marker IDs
This would mean sending events like stimulus_A_onset and stimulus_A_offset for each trial, and then selecting those timeframes when creating the enrichments.
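If you go the events route from Python, a minimal sketch with the real-time API could look like this (the event names are just examples):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# ...at stimulus onset in your trial loop:
device.send_event("stimulus_A_onset")

# ...at stimulus offset:
device.send_event("stimulus_A_offset")

device.close()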
Hope that clears it up!
Hi mgg, I set up the loop and routine. But I realize that I couldn't change the IDs in the tagFrame plugin because it has to be constant; I cannot change it to 'set every repeat'.
btw, during my experiments, i noticed that some participants' glasses kept falling down, and the recording shows that the eyes are not captured in the center. Is there a way to avoid this?
May I ask what frame you are using? While our frames are designed to fit as many facial physiognomies as possible, it is not possible to have a frame that fits everyone. That said, you can typically overcome this by using a head strap to secure the glasses on the face.
fantastic! i know how to assign different markers per trial using a loop, routine, and conditions files. May I clarify whether the four tags for the same trial could be the same, or should they be four different IDs included in a list like [1,2,3,4]?
For each trial/stimulus, you would want a list of IDs; if you show the same stimulus in various trials, you would use the same IDs for those.
I am using the Neon glasses. I think some people have small noses, so they fall down pretty easily. I am thinking of using a head strap. Do you guys have such a product, or are there any similar items that could work?
Neon is the whole product, but there are multiple frames for it that can be exchanged (see https://pupil-labs.com/products/neon/shop)
If you have the Just Act Natural frame, we do offer a head-strap here.
Got it. Thanks!
Hi everyone, I am using Pupil for the first time. I am trying to run the Data + Timeseries Video on SOMA reality's software to measure Cognitive Load. But it doesn't detect any biomarkers or gaze videos. Can someone please help me out?
Hi @user-015ff7 , I've moved your post to the 👓 neon channel.
The Soma Reality team is better positioned to answer this question, but in the meantime, may I ask if you have also tried loading the Native Recording Data into their software?
If you'd like, you can also share the recording data with us at [email removed] for inspection.
is it possible that you guys might have like a demo psychopy program with changing IDs for each trial? thx!
Hi Team, for a jsPsych program I have written, I have set up different AprilTags across trials. But the enrichment does not seem to update the stimulus-enriched image for each trial automatically. Could you please give me some instructions on how I could achieve that? Sometimes, the AprilTags are not properly detected in Cloud (at 0:03), as shown in the video here: https://hkustconnect-my.sharepoint.com/:v:/g/personal/yyangib_connect_ust_hk/EcEmkTTC1fZPjI0rtMmtAIABmzXCq4wrULQeADW5erG0Ig?e=YB6cEL.
currently, i am following this guide: https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/marker-mapper/
Hi @user-9a1aed 👋 ! I don’t currently have an example script that demonstrates this exact routine in PsychoPy, but I’ll check if any of my colleagues have something alike to share.
That said, building custom scripts is something we can support through our paid consultancy services. Let me know if you’d like to hear more about that option!
Regarding the Marker Mapper, from the video you shared, it looks like the markers are being recognised, but they haven’t been selected when defining the surface. To do this, when creating an enrichment, navigate to a frame where all four markers are clearly visible, click each one to turn it green (if it isn’t already), and then select Define Surface.
Just to confirm: are the same four markers defined in both recordings for that enrichment?
Psychopy and Apriltags frame & Marker Mapper.
Hello! I'm trying to estimate the distance of a fixated object using the 'eye ball center' and 'optical axis' information with the Pupil Labs Neon. I have two questions about this:
1) From the Neon Player, how can I obtain the 3d_eye_states.csv file? So far, I've only been able to get it by downloading the timeSeries folder from the cloud.
2) I tried calculating the vergence angle, but the values I got were incorrect. I thought it might be an accuracy issue... So instead, I attempted to find the intersection of the optical axes and then calculate the distance between the intersection point and the midpoint between the eyes to estimate the distance of the fixated object. However, the result was much smaller than expected. What am I doing wrong? Should I be using different data? Is this an accuracy problem?
Thank you very much!
Hi @user-a4aa71 , I'll answer point-by-point:
You can use Neon Player to export those data. You need to enable the Eyestate timeline plugin.
Vergence angle and intersection of optical axes are not really a robust way to calculate distance to the gazed object. They are generally unreliable over about 1m distance. Have you considered using Tag Aligner?
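That said, if you'd like to sanity-check the geometry you described, below is a rough sketch of the closest point between the two optical axes, treated as skew lines. The 3d_eye_states.csv column names and units (millimeters, scene camera coordinates) are written from memory here, so verify them against your export before trusting the numbers:

import numpy as np
import pandas as pd

df = pd.read_csv("3d_eye_states.csv")
row = df.iloc[0]  # single sample; in practice, average over a stable fixation

c_l = row[["eyeball center left x [mm]", "eyeball center left y [mm]", "eyeball center left z [mm]"]].to_numpy(float)
c_r = row[["eyeball center right x [mm]", "eyeball center right y [mm]", "eyeball center right z [mm]"]].to_numpy(float)
d_l = row[["optical axis left x", "optical axis left y", "optical axis left z"]].to_numpy(float)
d_r = row[["optical axis right x", "optical axis right y", "optical axis right z"]].to_numpy(float)
d_l, d_r = d_l / np.linalg.norm(d_l), d_r / np.linalg.norm(d_r)

# Closest points on the two lines c + t*d (standard skew-line formula).
w0 = c_l - c_r
aa, bb, cc = d_l @ d_l, d_l @ d_r, d_r @ d_r
dd, ee = d_l @ w0, d_r @ w0
denom = aa * cc - bb * bb
t_l = (bb * ee - cc * dd) / denom
t_r = (aa * ee - bb * dd) / denom
intersection = 0.5 * ((c_l + t_l * d_l) + (c_r + t_r * d_r))

distance_mm = np.linalg.norm(intersection - 0.5 * (c_l + c_r))
print("Estimated distance to gazed point [mm]:", distance_mm)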
Hi @user-a4aa71 ! You might wanna try https://github.com/apple/ml-depth-pro
Hi team, I failed to install the Pupil Labs package in PsychoPy. It says "no such file or directory". I tried to reinstall PsychoPy and Python on the Windows computer, but it did not work out.
Hi @user-9a1aed ! Were you trying to install the wheel I sent you or the official package? And how were you installing it, do you have any additional logs?
Could you share the version of Psychopy and the logs that are displayed when installing the package?
Hi mgg, thx! The version is 2024.2.0. Although the image shows that the package is installed, the effects are not shown in PsychoPy. I could not find the AprilTags component.
From the output, it looks like it is using Python 3.8, as in the Compatibility installer. Could you try the Python 3.10 PsychoPy installer? https://www.psychopy.org/download.html#do-i-need-the-compatibility-installer
Ahh. Ok.
Hello, i used Neon Player because i don't have any storage left on my Cloud. I planned on downloading the data from the Cloud and modifying it in Neon Player. However, i am confused about Neon Player. I can add plugins and export data using "e" (then the export directory appears), but, for example, the "blinks.csv" is missing. Generally, can i do the same in Neon Player as in the Cloud (e.g. a simple download of data with the same .csv files)?
Hi @user-5ab4f5 , have you made sure to activate the Blinks plugin before pressing E?
Generally i get this information. This only means the storage for Cloud regarding Neon data is full, right? Not the general storage capacity on the Companion Device for Neon? I found this information right on the Cloud Website.
Yes, that only refers to storage on Pupil Cloud. The storage remaining on the Companion Device is found by pressing the info button ((i)) at the top middle of the Neon Companion app main screen.
For the Neon eye tracker, where can I extract the pupillometry measurements, like pupil diameter? It looks like it has recorded the data, based on the guide here. https://docs.pupil-labs.com/neon/data-collection/data-streams/
Hi @user-9a1aed ! I had a look at the recording, the markers seem a bit small in the field of view. I’d recommend trying slightly larger ones. Also, make sure the margin around each marker is at least twice the width of the smallest white box inside the AprilTag.
Regarding pupil diameter: if you’re using the Timeseries format (CSV), you can find that data in the 3d_eye_states.csv file, as described here.
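For example, a quick way to read it in Python (the column names are from memory, so check the header of your file):

import pandas as pd

eye_states = pd.read_csv("3d_eye_states.csv")
print(eye_states[["timestamp [ns]", "pupil diameter left [mm]", "pupil diameter right [mm]"]].head())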
I encountered an issue with the detection of tags. I understand that they should be large enough and in good lighting to be detected on Cloud. For this recording, it looks like the tags are pretty clear, but they could not be detected. May I know how I should adjust the program? https://hkustconnect-my.sharepoint.com/:v:/g/personal/yyangib_connect_ust_hk/Ed5CQJox35xPrtkwoV8FTL0B8-neFS703hFtK2-5MipeIw?e=q5nRBP
@user-f43a29 Yes, i did activate it. Which is odd to me. In the screenshot you can see how it looks. It seems as if there are no blinks, but on Cloud there are.
By the way, if i delete something from the Cloud, can i re-upload it again later from the device to the Cloud? I tried to find the option but did not find it.
Hi @user-5ab4f5 , thanks for sending the images. May I ask what version of Neon Player and the Neon Companion app you are using?
And aside from this, does Neon Player have the same functions (in theory) as Cloud?
Neon Player has overlapping functions with Pupil Cloud, but Pupil Cloud is more full-featured, offering tools like the Reference Image Mapper, Face Mapper, and useful data logistics, such as Workspaces and Collaborators.
@user-f43a29 I use Neon Player v5.0.4 and Neon Companion vers: 2.9.8-prod
Thanks for the clarification. With v2.9.0-prod and later, you want to have Compute eye state enabled in the Neon Companion app settings. Then, you will have blinks in the Native Recording Data that is loaded into Neon Player. Otherwise, they will be computed for you on Pupil Cloud, as you have noted.
If you had Compute eye state enabled and still there is no blink data, then please open a Support Ticket in 🛟 troubleshooting.
The problem is we only have the current Neon device, and we collect data first and later want to see how to process it. Maybe we need Cloud then, so we can't really delete any data if we cannot re-upload it to Cloud again. If it's possible to re-upload recordings from the device at a later point, this would resolve the issue. It just seems Neon Player might not be enough for our plans, as they are still not fully thought through. We are running a more exploratory study right now.
I see and sorry for missing that question in your earlier post.
It is not possible to re-upload recordings to Pupil Cloud. So, if you need to delete a recording from Pupil Cloud, then consider having a local backup first.
If you need more analysis space on Pupil Cloud, then I recommend looking into the Unlimited Add-on.
I see, but if i disable uploading and enable it later, will new recordings upload as usual?
Yes, the initial upload can happen at any time. It is rather re-uploading that is not a supported workflow.
Thank you for your response. Compute eye state was indeed disabled
Hi. The GPS integration with the Neon app is what i have been waiting for for a long time. But presently, the app, when used on the smartphone, does not record data or behave as expected. For example, i don't see any red icon. Also, after the recording, the csv file is empty.
Hi @user-15edb3 , glad you can benefit from it! Could you share a screenshot of the app as it looks on the phone, both before and after clicking the white start button? Note that you might need to wait about 10 seconds for the GPS sensor to fully initialise and start collecting data.
Concern about the quaternion data documentation. The documentation is very clear that the scalar part comes first (w, x, y, z). However, after running into some problems during real-time streaming with the Python API, I found that treating the scalar as last is correct, by printing the Euler angles with Rot.from_quat(quat, scalar_first=False).as_euler('ZXY') and rotating my head in each direction and seeing the trends in the numbers. I also dug into the code and found this: which suggests that the scalar is indeed last. However, if I download some video data from the cloud, the order is scalar first. Any thoughts or clarification?
Hi @user-eebf39 , you are correct. The Real-time API provides the IMU's quaternion data with scalar last, as documented here.
To have it be scalar first, you can also index it as:
import numpy as np

quat_scalar_first = np.array([
    quat_from_rt_api.w,
    quat_from_rt_api.x,
    quat_from_rt_api.y,
    quat_from_rt_api.z,
])
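And if you then want Euler angles with SciPy, as in your snippet (assuming a SciPy version where from_quat accepts scalar_first):

from scipy.spatial.transform import Rotation as Rot

euler_deg = Rot.from_quat(quat_scalar_first, scalar_first=True).as_euler("ZXY", degrees=True)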
Hi Team, this is part of my gaze contingent program. I noticed that, when participants fixated on the cross, the gaze's red dot illustrated in the program is not at the center of the red circle that represents their gaze in the phone's view. It seems that the gaze dot in the program is off to the side of the circle; not sure what that means? thx!
Do you have any offset applied?
This video captured a PsychoPy program. I built it from the gaze-contigent demo program.
Also, I have an issue with the detection of the Tags. I have increased the size and also the width of the margin, but it still fails to detect. Thx!
For the psychopy program also. thx
Hi @user-9a1aed ! That's odd, they are easily recognizable on the first one.
│ 171 (241, 414)
│ 327 (221, 161)
│ 348 (736, 153)
│ 524 (726, 400)
And in the second one, the markers are detected, but the bottom left is not selected.
Are you sure you selected the right markers during the enrichment creation?
Ahh, I see! That's correct. It seems that I did not select the bottom left tag for the second recording.
However, for the first one, I checked again that the tags were not selectable in the enrichment.
@user-9a1aed for the first one, I would recommend checking if you have any latency issues, and otherwise opening a ticket with the version of PsychoPy and the Companion app.
As for the non-selectable marker, is it not selectable in any frame?
Yes, that's correct. It is not selectable in any frame. However, that program is not in PsychoPy. It is a jsPsych program, and I added the tags around the corners as you suggested instead of using printed physical tags. thx! I do not observe any latency issues with the eye tracker.
I have a short question about the Reference Image Mapper. I understand i do a video and a picture for mapping. In the end, however, my team thought about implementing something that signifies whether the participant looked at a piano keyboard while playing or not. If i make a video of the piano and its keys, can i simply indicate false/true for e.g. white and black keys or the whole piano? What does it look like in the data? And, for example, if the whole piano is cut off and only specific parts (keys) are seen, would it still map to "fixating the piano" or not?
Hi @user-5ab4f5! You can run the Reference Image Mapper enrichment using the full piano keyboard as the reference image, and then, using our AOI Editor tool, you can identify smaller areas of interest within the reference image (e.g., black keys vs. white keys).
Please note that mapping accuracy might be affected by the occlusions caused by the person playing the piano. Please refer to this section for more details on the effects of occlusions.
That said, even if you identify fixations that are not mapped correctly, you can still manually correct those fixations using our Mapping Correction Tool
Regarding the second recording, I can select the four tags now, but the surface image looks very off. I am not sure why the top edge is red. I wonder if this means that the surface is not detected properly with the tags? Or should the fixation csv files be accurate even though the surface image might be off?
Is it possible to modify the Pupil Player code to implement fixation detection directly during recording, in order to better filter for Quiet Eye durations?
Hi @user-e0026b , are you using Neon or Pupil Core/Invisible?
Hi @user-f43a29, I am investigating the Quiet Eye in golf. I have already recorded several participants using the Neon eye tracker. Some show a stable fixation, where the final fixation is easy to analyze. Others display a number of very short fixations, raising the question of whether these can be interpreted as one long fixation according to the Quiet Eye criteria (100 ms duration and within 3 degrees of visual angle). Thank you for your help.
Hi @user-e0026b , then if I understand correctly, it is not so much that you necessarily need to run a different fixation detector in real-time, so long as you have a custom fixation detector for your use case, be that also post-hoc? In principle, a detector with different parameters?
Dear all! Hope this is the right spot to ask this, and if not I'd appreciate it if you direct me to the right space. We just bought a new Neon system. I'm in the process of evaluating ocular safety (ANSI Z136.1). For an accurate calculation of corneal irradiance, I need the following specifications for the IR illuminators: optical power output per IR LED (in mW), beam divergence or spot diameter at typical eye-cornea distance (~20–30 mm), and confirmation of how many IR LEDs are emitting concurrently per eye. This info will help me compare it with the MPE to show to my IRB. Thank you very much for your help in advance. Meryem
Hi @user-90e380 , can you send an email to info@pupil-labs.com about this request?
@user-f43a29 thanks will do!
@user-f43a29 What’s important for me is that the fixation detection works in such a way that I can still reliably identify the exact moment of the final fixation before movement initiation — even when using adjusted parameters tailored to Quiet Eye criteria. If this can be achieved post-hoc, that would be totally fine. I just want to make sure that the detection method allows for accurate timing of that critical final fixation. Thanks again for your support!
@user-e0026b , thanks for the explanation. Let me confer with my colleagues on Monday and update you then.
In the meantime, if you could share a Neon recording that illustrates what you mean, then that would be helpful. If possible, you can share it with data@pupil-labs.com
Hello team, I would like to ask why I run into this situation so many times when collecting data; it often interrupts my experimental process. What is the reason for this? Does it mean data cannot be collected normally, and how do I solve this problem? I don't want data collection in my subsequent experiments to come down to luck. Thank you for your understanding.
Hi @user-5a90f3! May I please ask what version of the Companion App you're running?
Hi @nmt ,I'm running version 2.9.8-prod
Thanks for confirming! In that case, please open a ticket in 🛟 troubleshooting and we can collect some further information to look into why this is happening 🙂
Okay, I did
@user-f43a29 Just sent an Email with the NativeRecording-Data, thanks for the support
Hi @user-e0026b , thanks, we responded.
@user-480f4c Thank you! And does this also work for Invisible, or only for Neon? E.g. i have recordings with Neon and Invisible. Do i need to record the video for the mapper with Invisible or Neon (or does that not matter)?
The enrichments on Pupil Cloud can be applied both on Neon and Pupil Invisible recordings.
thanks!