Hello. I have a question. I would like to know how to output only a specific range of data (e.g., 3 minutes out of a 10-minute measurement period) uploaded to Pupil Cloud.
Hi @user-b16d4b! Which data are you looking to output? Are you referring to raw timeseries data, such as gaze, IMU data, etc. Or are you referring to enriched data, such as those generated by Reference Image Mapper, or other enrichments?
Hi, @nmt ! Thanks for your reply! I would like to perform analysis targeting only specified ranges of raw time series data such as gaze and IMU. Additionally, I would like to output video from the scene camera for specific ranges.
Hello - I recorded a session and it has been spinning (processing) in pupil lab for more than a week - I have a feeling there is some sort of error in the file? Is there any way to check on it and get it processed? Thanks - Dan
Hi @user-796c70 ! Can you open a ticket in our 🛟 troubleshooting channel and we'll assist you there in a private chat!
Hi, I'm trying to calculate the visual angle, and for that, I attempted to compute the azimuth using the definition: ψ = arctan2(x, z) However, I'm a bit confused because the values I get don't match the azimuth values provided in the CSV file from Pupil Labs. Could this be due to an error in my calculation, or is there another explanation? Thanks in advance for your help!
Hi @user-2b5d07! Your function takes x and z components, which indicates you're inputting Neon's eye pose measurements. Is that correct?
Yes, exactly — I’m using the eye pose measurements from Neon (x from the optical axis x and z from the optical axis z). I calculate the azimuth separately for each eye using arctan2(x, z), and in principle, the average of the two should represent the overall gaze azimuth. However, this is not the case
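In case it's useful, here is roughly what I'm computing (just a sketch; I'm assuming the optical axis column names from Neon's 3d_eye_states.csv export, so they may need adjusting):
```python
import numpy as np
import pandas as pd

eye_states = pd.read_csv("3d_eye_states.csv")

# Per-eye azimuth from the optical axis components (column names assumed)
azimuth_left = np.degrees(
    np.arctan2(eye_states["optical axis left x"], eye_states["optical axis left z"])
)
azimuth_right = np.degrees(
    np.arctan2(eye_states["optical axis right x"], eye_states["optical axis right z"])
)

# Average of the two per-eye optical-axis azimuths, which I then compare
# against the azimuth values in gaze.csv
mean_optical_azimuth = (azimuth_left + azimuth_right) / 2
```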
Thanks for clarifying. Actually, they are not expected to match. Check out this message for an overview of why https://discord.com/channels/285728493612957698/1047111711230009405/1351378673445634068 🙂
Thanks a lot for the detailed clarification — that really helped! In my project, I’m trying to compute the visual angle, so I’m a bit confused about the best way to go about it. Would it make more sense to calculate the visual angle using the gaze azimuth directly, or should I rather compute it separately for each eye based on the optical axis? Thanks again !
Hi @user-2b5d07! That really depends on the research question/use-case. Could you elaborate a bit more on that and I can try to offer the pros and cons for each approach.
I would like to know how to use the Marker Mapper data enrichment, and whether there are any recommended specifications (e.g., how far the eye tracker should be from the AprilTags). Furthermore, do I need to download Pupil Core for the Neon if I would like to do manual post-hoc analysis, not just relying on Pupil Cloud?
Hi @user-a0189b! A lot of this depends on the specific use-case or research question. What is it exactly you're looking to achieve through the use of Marker Mapper?
Please let me know..!!!
Thanks for the video, @user-9a1aed. Two points:
* Your markers are too small. Doubling their size would be appropriate
* More markers will help too. When showing the fixation cross you have 10 markers, but when showing your stimulus you only have four. With that screen/environment, the glare/reflection of the light against the display completely obscures some of the markers
Hi Dom. Gotcha. Thanks a lot. I thought the markers were presented only to mark the screen's dimensions. Why would the size and the number of the markers matter? I wanted to minimize them on the screen so that they are not distracting.
The markers are presented in known positions on the display. Neon streams its scene video to our PsychoPy plugin, which detects the markers in each frame of the scene video. The corresponding points (specifically, the corners of each marker) between the scene frame and the known marker positions are then used to calculate a homography matrix. This, essentially, tells us the position and orientation of the display relative to Neon's scene camera. We can then use that information to map points from the scene-camera-space to surface-space.
Computing that homography matrix requires known, corresponding points in both spaces (scene-camera-space and surface-space). Our PsychoPy plugin already knows the surface-space points of the markers, but it has to detect them in the scene camera image. More markers = more points = more accurate mapping
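To make that concrete, here's a rough illustration of the general technique (not the plugin's actual code; the corner coordinates are made-up placeholders):
```python
import cv2
import numpy as np

# Corresponding points: marker corners detected in the scene camera image (pixels)
# and the same corners in surface coordinates. All values here are placeholders.
scene_pts = np.array([[105, 80], [610, 95], [600, 420], [98, 410]], dtype=np.float32)
surface_pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float32)

# Homography mapping scene-camera-space -> surface-space
H, _ = cv2.findHomography(scene_pts, surface_pts)

# Map a gaze point (scene camera pixels) onto the surface
gaze_scene = np.array([[[350.0, 250.0]]], dtype=np.float32)
gaze_on_surface = cv2.perspectiveTransform(gaze_scene, H)
print(gaze_on_surface)  # gaze in normalized surface coordinates
```
More detected markers means more corner correspondences feeding into the homography estimate, which is why the mapping becomes more robust.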
Got it. Thanks a lot for your explanation! For the gaze data, is there a way I can segment each trial? The current csv files downloaded from pupil cloud contain IDs but they are not specified enough to link the trial's image to the fixation data of that image.
Hi all, Is there a way to ask the app to 'export' a session via API? i.e. I do these manual steps after I stop a recording via API > https://docs.pupil-labs.com/neon/data-collection/transfer-recordings-via-usb/#export-from-neon-companion-app so is there still a programmable way to effect the exporting of the current recording to the Documents/Neon Export directory on the phone?
The Neon Companion app doesn't currently have a network API for transferring recordings, but please submit a 💡 features-requests. We use the feedback there to help us track and meet customer needs
You'll want to use the PLEvent Component in PsychoPy builder to add events to your recording (e.g., trial-start, trial-end, etc). These timestamped events will appear in Pupil Cloud and will be available in CSV format as well. You can then compare your data timestamps to the event timestamps to filter the data for your needs
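As a minimal sketch of that filtering step (assuming the standard column names in Pupil Cloud's gaze.csv and events.csv; the event names here are hypothetical):
```python
import pandas as pd

gaze = pd.read_csv("gaze.csv")
events = pd.read_csv("events.csv")

# Timestamps of the events bracketing one trial
start_ns = events.loc[events["name"] == "trial-1-start", "timestamp [ns]"].iloc[0]
end_ns = events.loc[events["name"] == "trial-1-end", "timestamp [ns]"].iloc[0]

# Keep only the gaze samples recorded between those two events
trial_gaze = gaze[
    (gaze["timestamp [ns]"] >= start_ns) & (gaze["timestamp [ns]"] <= end_ns)
]
print(len(trial_gaze), "gaze samples in trial 1")
```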
Thanks!
For my case, I would like to place the monitor that will display the robotic movement for the teleoperation. Then I would like to track the gaze movement and pupil diameter. So, I just attached the marker on the screen and recorded. But I can't define the surface in the enrichment. Furthermore, participants also rely on the 3D environment instead of screen by seeing the robotic movement. For this case, I guess it might be useful to use the reference-image based enrichment. If you need more information to be clarified, just let me know.
Thanks for elaborating. Let's start with the screen mapping. You would need more than one marker - default would be one in each corner of the screen. Using these, it will be possible to generate the surface. I'd recommend trying this next. In terms of marker size, you can see an example of what works in this Alpha Lab article. Let me know if you get that working.
In terms of the 3D environment, could you please share a photo of the setup?
Hi! I'm new to this discord channel so please let me know if I should be posting questions at a different channel but I was wondering how to turn off the audio before recording?
Hi @user-60878d! This is the right place. You can disable audio capture by toggling the microphone icon in the top left of the Companion app home screen 🙂
Thank you!
Hi, questions for the Neon real-time API: we are able to stream real-time gaze and pupil diameter following the steps outlined in the documentation using Python, and we are looking to expand this to stream fixations, saccades, and blink rate. 1. Can all three metrics be streamed in real time, i.e. sample by sample as time series data, or does this have to be done post-hoc? 2. Is the real-time example code on the website the best place to find out how to do so? At the moment we tried using the sample code to get the data to stream, but to no avail! Would love some help and insights on this! Thank you!
Hi @user-3ee243! We’re currently working on updating the documentation for the Realtime API.
That said, fixations, saccades, and blinks are already available starting from Neon Companion App version 2.9.0. If you’re not on that version yet, please update and ensure that these computations are enabled in the app’s settings.
You can then use the following examples to stream eye events:
Let me know if you run into any issues!
Is there a maximum distance Pupil Lab Neon module's eyetracker can handle? For example, there are other mobile eyetracking glasses from other companies that say they can only calculate up to 4 meters distance
Hi @user-60878d , the gaze ray that Neon estimates is derived from the eye images, so it is independent of any surface that might be in front of the person. Accuracy for eye tracking is usually reported in degrees of gaze angle; that is, the deviation of the gaze ray estimate from the direction in which the person is actually looking. For Neon, the accuracy is 1.3-1.8 deg.
Technically, the gaze ray is of infinite extent; it simply points in the direction of the wearer's gaze. Of course, what a person can see is limited by human optics and the visual system's capabilities. The ray can be projected/mapped onto arbitrary surfaces, at arbitrary distances, provided you have the necessary 3D info to localize the wearer in the environment. To that end, our Marker Mapper and Reference Image Mapper Enrichments are very powerful tools that do that for you. You can estimate the point/region they are looking at well beyond 4 m. You may also want to check out the Tag Aligner tool.
However, please note that when it comes to projecting the gaze ray onto surfaces in the environment, quantifying mapping accuracy in terms of "meters" quickly becomes tricky.
It sounds like the eye tracking systems you refer to may be using vergence angle to estimate the 3D position the person is looking at. So, are you potentially looking to use Neon for XR or AR applications?
Hi Team, you guys have helped me with the eyetracker's offset, etc. However, I still found that the eye-tracking results were not ideal. I have tried the demo program provided by PupilLab. Would you please take a look? It is very difficult for participants' fixation to land on the AOIs during eye tracking. If this happens during the experiment, it would be frustrating and tiring for participants. Are there any ways to improve my setup? https://hkustconnect-my.sharepoint.com/:v:/g/personal/yyangib_connect_ust_hk/EcPxM3QX32JJuFagkR_0mzgB0_ryJbWT6WAv6fon7rEjaA?e=BUNsht Thanks a lot in advance!
Hi @user-9a1aed! Thanks for sharing the screen capture. The gaze measurements look much noisier than expected. I think it would be beneficial for us to look at a raw recording. Please open a ticket in the 🛟 troubleshooting channel and we can coordinate there 🙂
Thanks a lot. I have opened a ticket there
Hi Team, I wonder if there is any documentation on converting gaze coordinates to screen-based coordinates defined by AprilTags using scripts? I found this package from previous chats https://github.com/pupil-labs/real-time-screen-gaze but struggled to use it for data processing. I have tried Marker Mapper in Cloud, but the processing was a bit slow, especially since I have a large amount of data to process.
Hi, @user-9a1aed - if you want to do surface mapping without using Pupil Cloud, have you considered Neon Player?
Thanks! Is there a more detailed workflow? It seems like a software you guys built, too?
Yes, this is our software as well.
If you are new to Neon Player, you may find the general overview to be helpful as well. Otherwise, if you have specific questions, feel free to ask here 🙂
Thanks a lot! Does it mean that if I want to convert to screen-based/AprilTag-defined surface coordinates, I have to use one of your software tools? I'm worried about the processing efficiency if we have hundreds of participants.
In the fixations.csv data downloaded from Marker Mapper (https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/marker-mapper/#export-format), it says: "Normalized x and y coordinates from the average of all mapped gaze samples within the fixation." Would you please clarify this a bit? If I want to know the x and y coordinates of a particular fixation on the surface from the top left (the x coordinates from the left and y coordinates from the top), I need to multiply by the pixel size of the surface, right?
It's difficult to compare Cloud-based processing times to offline processing simply because jobs submitted on Pupil Cloud are subject to queuing. Jobs are taken in the order they're received, so if there is a lot of work scheduled before yours, it may take a little while before your job even starts. Once it does start, though, Pupil Cloud is pretty fast.
Running the analysis offline (e.g., using Neon Player, the screen-gaze package you found, or rolling your own solution) will, of course, run immediately without having to wait for anyone else's tasks to complete first.
> I have to use one of your software tools? I'm worried about the processing efficiency if we have hundreds of participants.
We provide the software to help make your analysis easier, but you certainly don't have to use it. We use a pretty standard method of doing marker-based surface mapping - anybody with the right skill could implement their own surface mapping without using any of our software, but it would probably be difficult to be much more efficient without significant effort/skill.
Ahh. I see. Thanks a lot! I will try with Cloud and Player and see which one fits the best
> Normalized x and y coordinates from the average of all mapped gaze samples within the fixation. Would you please clarify this a bit?
A fixation occurs over a period of time. During that time, many gaze samples will have been collected (200 samples per second), and each gaze sample will be at a slightly different position. To produce a single point to describe the fixation, all of the gaze samples that are part of that fixation are averaged together.
> If I want to know the x and y coordinates of a particular fixation on the surface from the top left (the x coordinates from the left and y coordinates from the top), I need to multiply by the pixel size of the surface, right?
Yes, assuming you have aligned the bounds (corners) of your surface with the corners of the renderable area of your display.
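For example, something like this (a sketch assuming a 1920x1080 surface and the Marker Mapper fixations.csv column names; double-check whether your surface origin is top-left or bottom-left):
```python
import pandas as pd

SURFACE_WIDTH_PX, SURFACE_HEIGHT_PX = 1920, 1080  # your surface/display size

fixations = pd.read_csv("fixations.csv")

# Normalized surface coordinates -> pixels
fixations["x_px"] = fixations["fixation x [normalized]"] * SURFACE_WIDTH_PX
fixations["y_px"] = fixations["fixation y [normalized]"] * SURFACE_HEIGHT_PX
# If your surface origin turns out to be bottom-left and you want y measured
# from the top, flip it first: (1 - fixations["fixation y [normalized]"])
```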
Thank you very much, Dom!
Hi Dom, if I want to save gaze data into hdf5 files, do I have to add the code? I am using the demo provided here, https://docs.pupil-labs.com/neon/data-collection/psychopy/#data-format, but my local hdf5 files do not save any data. I can download the gaze data from Cloud tho.
You need to make sure that the HDF5 data is enabled in your experiment settings and initiate the recording using the PsychoPy Builder ETRecord component
Okk. Thanks!
Hi Pupil Labs team,
Today I made a recording which resulted in an error. After 30 minutes of recording, the companion device started to vibrate. I didn’t receive any error messages at that moment. I touched the screen, and the recording seemed to continue. When I stopped the recording after 1 hour and 8 minutes, I received two error messages:
1st: Sensor failure – The Neon Module has stopped providing Eye Sensor data! Unplug your Neon device and plug it back in. If this behavior persists, it may indicate a Neon Module or frame issue. Please reach out to our support team for help.
2nd: Sensor failure – The Companion App has stopped recording extimu data! Stop recording, unplug your Neon device and plug it back in. If this behavior persists, it may indicate a hardware issue. Please reach out to our support team for help.
The recording has been uploaded to the cloud, but it is not possible to view it.
The Neon Module and the companion device were already sent in to Pupil Labs in February 2024 after my former colleague Jonas Priedemann and I experienced similar problems with recordings. Is there any possibility to restore today’s recording? What can I do to avoid these kinds of error messages in future recordings?
Thanks in advance, Lara
Hi @user-ef94b7! These sensor error messages can occur for a variety of reasons, such as transient USB disconnects, or even hardware issues. For us to learn more, we'd need to collect some further details. Please open a ticket in 🛟 troubleshooting, posting the serial number of your module, and we can coordinate there 🙂
Hi, I am using Pupil Cloud and created an enrichment (image reference map), I uploaded a scan video and added two sessions to the project and processed the enrichment. I had since recorded and have uploaded 2 more videos. Do I need to setup and run a new enrichment to include the additional videos and process all 4 videos together? Or is there a way I can process just the 2 new sessions using the original enrichment? If that makes sense. Sorry if this is explained already in the docs, I haven't seen it.
Hi @user-f7408d! You can add the recordings to your project and subsequently, you should be able to run them as part of your existing enrichment.
Hi, we are trying to connect the NEON with the software on the Companion mobile, and to grab the data via IP and port from the device to store it in a simulator software - but the sim software does not find the device via IP. How can we connect the NEON to the Companion device AND the sim software (via a socket connection)?
Hi @user-a5d1a8! Are you able to successfully use the Neon monitor app on the same computer that runs your sim software?
Hi Team, if I want to control the psychopy program using the phone so that the experiment is controlled by the experimenters, how can I achieve that? Do you guys have some docs for this. Thanks!
Good morning. My team has asked me to reach out and ask about Neon and the Inter-Eye-Distance measure.
On the “Measuring the IED” tab on the pupil labs docs website, there is mention that the accuracy of pupil diameter measurements can be enhanced by doing this measurement for each user.
My team was wanting to know if you could provide any more details on what that means? Like, what is changing and if there is some significant range of error that could come from using the default 63 mm?
Hi @user-04e026! Neon's pupillometry measurements are provided by NeonNet, which combines machine learning and model-based approaches. For more details, be sure to check out our whitepaper. Entering the wearer's IED brings the pupillometry measurements closer to the true physical state. In particular, it will enhance robustness to the effects of pupil foreshortening, e.g. caused by different camera angles, such as those resulting from headset slippage. We therefore generally recommend setting the wearer-specific IED; this might be especially important if you're collecting data in an environment conducive to headset slippage.
Hi, @user-9a1aed - PsychoPy doesn't have any built-in functionality for remote control. Since it runs on Python though, you can do just about anything you can imagine.
One could, for example, run a web server within your PsychoPy experiment and load the web page in the browser on the companion device. We don't have any examples or documentation for such a workflow, but you may have better luck asking for something like this within the PsychoPy community.
I see. Thanks a lot!
@nmt Hmmm. I am not sure - the sim computer with the data acquisition socket has 10.1.2.3 and the companion app shows 10.1.2.215 (SMTP is working ;-)). If I try to connect from 10.1.2.3 to 10.1.2.215:8080 it shows me a blank page - same from another computer (not a sim computer, but on the same network). No firewalls, no restrictions on IPs or ports, BUT also NO internet connection. The installation is like an island. BUT if I use the companion's WLAN hotspot, it is possible to connect from another mobile (via 172.x.x.x and start/stop/close - no problem). So the functionality is OK, but not on the 'normal' network we must use.
It sounds to me like there's something about your local network that's blocking the connection. Can you try connecting your sim computer to the Companion phone's hotspot?
Hi, I've reached 2hours storage capacity so I tried to delete some recordings. But on my Pupil Cloud page, it says all the time that "Storage is full. Manage recordings or upgrade your storage with an Add-on." Do I need extra step to empty trash bin or somehow to free the space? And by this moment can I still make new recordings to upload to the cloud?
Hi @user-d16dec! Yes, you can empty the trash and free up space. Recordings uploaded to Pupil Cloud will subsequently be available if they're within the 2 hour free limit.
Where can I empty the trash? It only says "Recordings in trash will be automatically deleted in the future."
Select the recordings you want, right-click, and press delete
@nmt thank you for your idea. All the SIM computers have no WLAN/BT. I checked all the connections - no firewalls, no restrictions anywhere (that is the reason why it is a closed system)
Hi @user-a5d1a8! I think it would be helpful if you could elaborate on your set up a bit. How are you connecting the Neon Companion device to your sim computers if they don't have WLAN?
Hi Team, for the visualization method in Cloud, is there a way I can create heatmaps for each image stimulus? In my study, participants will view various images on the computer screen. When I use the Marker Mapper, the computer screen is a surface, thus, the visualization is the cumulative heatmap across all images. I have used plEvent to label the onset of each image stimulus. Thanks!
Hi @user-9a1aed. In this case, you could create a new enrichment for each stimulus. Use specific events corresponding to the start and end of each image to define each enrichment.
Sorry. One additional question about the plEvent label. When I processed the data, I found that some fixations span across two events (starting during "recording.begin" and ending after "study1StartplEvent" begins). Why is this the case? And in this case, which event should I assign such a fixation to? Thanks!
Fixation 24: Fixation period: 1746608329.95 - 1746608330.34 seconds Assigned to event "recording.begin" (started at 1746608295.25 seconds)
Fixation 25: Fixation period: 1746608330.38 - 1746608330.52 seconds Assigned to event "study1StartplEvent" (started at 1746608330.05 seconds)
It's not really possible to assign a fixation in this way. A fixation can occur at any given moment, depending on when and where the participant is looking. This can happen regardless of any events assigned to the recording. My question is, why is this an issue for your study?
Hi. We are using the Neon device. My colleague recently used my device and I used his device for some lab recordings. I would now like to transfer the data from his device to my cloud. How can I do this?
Hi @user-d3192b 👋 You can choose the workspace where your recording will be uploaded directly in the Companion App — just set it before starting the recording.
Once recorded, there are a couple of ways to access the data:
Local export: You can transfer recordings via USB. Here’s a step-by-step guide:
Transfer Recordings via USB
Cloud access: You can also be invited as a collaborator to the workspace. The workspace owner just needs to click the toggle next to the workspace name, select “Invite Collaborators,” and add your email and role.
Just a heads-up — transferring recordings across workspaces isn’t currently supported. There’s a feature request open here, feel free to give it an upvote!
Let me know if you need help with any of the steps.
Hello! I am currently using Neon. I captured the data on the Companion device and exported the video and files to my computer. Is there any way to process/export the .raw data without downloading Neon Player on my computer?
Hi @user-89e2c0! Have you looked into using Pupil Cloud ? This is the easiest way to work with recordings as they can be automatically backed up, processed and exported.
Hello @user-f43a29 , what do the axes (x, y, z) refer to in the eye tracker cameras (not the IMU, not the scene camera)?
Hi @user-6c6c81 👋 Could you clarify which values you’re referring to and in what context you’re seeing them?
The (x, y, z) values could correspond to different things — such as eyeball centers, optical axis, gyroscope readings, or specific coordinate systems.
You might find these references helpful:
Feel free to share more detail and we’ll help narrow it down!
Hello, when automatically exporting blinks, gaze positions, and eye state, using neon player, I see that timestamps within the blink time have reasonable values for gaze positions and eye state (i.e., gaze position and eye state timestamps during blinks are saved). May I ask if gaze position and eye state are automatically interpolated by the NeonPlayer between the start and end of a blink? (not sure I expressed myself clearly)
Hi @user-f4b730 👋 ! NeonNet always tries to infer gaze, eye state, and pupil size from each eye image — even during blinks, squints, or when the glasses aren't worn.
So these values aren’t interpolated; they’re estimated frame by frame. This means you may want to filter out those parameters for your analysis. We do not filter them, in case you have a different definition of blinks than the one from our blink detector.
If you haven’t already, I’d recommend updating to the latest version of the Companion App. It now includes real-time fixations and eyelid aperture data. With that, not only are blink detections more accurate, but you can also decide whether to include full or partial blinks, depending on your criteria.
Let me know if you’d like help accessing or interpreting those parameters!
@user-d407c1 it is with respect to the optical axis
In that case, this graph can help you as well. Do you have a concrete question?
And gaze directions.
@nmt the driving simulator is a huge installation with about 25 computers (210° viewing angle projection, simulated mirrors, driving dynamics), configured as a closed system - no internet, all Win10 SP2 - plus a real mockup (without a motor, of course). All the computers are connected via LAN with static IPs, and there is a transparent WLAN connection (DHCP addresses starting at 10.1.2.210 -> .240). The companion device connects to the WLAN correctly, but if I point a sim computer to the companion's IP with :8080 there is a blank screen.
Hi @user-3bdb49 👋 ! I hope you don't mind if I step in. May I ask, does a ping to the IP work from your sim computer?
Is it possible to send simple messages via UDP/TCP to the companion device (start of simulation, stop of simulation, passing a POI, etc.) to be integrated as "Markers"? Using the companion's "Marker" button is not precise enough in timing; we need the markers to segment the recording. The companion IP is 10.1.2.215. Which protocol/port should I use to send these messages?
@user-d407c1 Hi Miguel, of course, you're welcome to step in! ping is working
Sim Computer Network
Hi everyone, I'm Dario, and I'm new to Pupil Labs Neon. I'm currently working on analyzing pupil data, but I'm a bit confused about how the different .csv files relate to each other.
Specifically, I noticed that the fixation column in gaze.csv contains many of the same values as in fixations.csv, but with some gaps—i.e., some rows in gaze.csv have missing or blank fixation values.
Is this due to the difference in sampling rates between the gaze data and fixation data? Does gaze.csv simply align with fixations by interpolating or marking segments based on timestamps, resulting in some rows without fixation IDs?
Any clarification would be greatly appreciated!
Hi, @user-652793 - the fixation data is computed from the gaze data. It's not technically accurate to think about fixations as having a sampling rate - they're more like events whose occurrence comes from the gaze data.
Every fixation consists of a set of gaze points, but not every gaze point will belong to a fixation. That is why you see repeated fixation IDs in the gaze data as well as the gaps.
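If it helps to see that directly in the data, here's a minimal sketch (assuming gaze.csv uses the fixation id column as in the standard Cloud export):
```python
import pandas as pd

gaze = pd.read_csv("gaze.csv")

# Rows with a blank "fixation id" are gaze samples that don't belong to any fixation
in_fixation = gaze["fixation id"].notna()
print(f"{in_fixation.sum()} of {len(gaze)} gaze samples belong to a fixation")

# Number of gaze samples that make up each fixation
print(gaze.groupby("fixation id").size().head())
```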
Hello! We are running the Matlab/Python integration on macOS. In order to test synchronization, we created a sanity check in which both Neon devices are recording a TV screen with a 60 Hz refresh rate. The TV shows a clock counting up by milliseconds. We hypothesized that if our two devices were properly synchronized, the simultaneous events recorded on both devices (using the realtime API corrections) would point to the exact same image on the TV screen on both devices. As expected, we found that while the recorded events on both phones match the TV image (e.g., when navigating to event 1 on device 1 and event 1 on device 2, the image on the TV matches within ~20ms), they do NOT match in the output events.csv file in the downloaded timeseries data (e.g., event 1 on device 1 is >500ms different than event 1 on device 2).
As such, we dug deeper into the Matlab code and found what we believed to be an error: when we manually calculated the mean of the time_offset_ms.measurements, this provided a different calculation than the mean calculated by the code provided (time_offset_ms.mean). Thus, when calculating the corrected_time_ns_in_neon_clock, we subtracted the current time from our manually calculated mean. With our correction, the output changed. Now, we found that the event timestamps were matched in the output files downloaded from pupil labs, but the events no longer corresponded to the same image on the TV screen. Going further, we looked into the Device.m Matlab file and discovered that the estimate of time offset measurements seemed to be taking the mean of the roundtrip duration instead of the time offset (Line 247). When we edited the Device.m code to calculate for time offset instead, we came back to the original problem where the event timestamp is aligned in our “ground truth” and the events correspond to the same image on the TV screen, but the corrected timestamps in the events.csv files do not match.
Can you please help us better understand this discrepancy and what we can do to best ensure synchronization? Thank you in advance for your help, and please let me know if I can provide any additional clarification.
Hi @user-87d763 , sure, we can of course help you work through this.
One thing to keep in mind is that the clocks of your two Neon devices are not expected to be synchronized in general, unless you have synchronized them already to an NTP source. The process is specific on modern Android, so may I ask how you did that?
If both devices are not synchronized from the start, this is in principle fine, but then it is expected that Event 1 on Device 1 will not match with Event 1 on Device 2. It would then require also calculating the clock offset between Device 1 and Device 2. One way to synchronize both devices "manually" or post-hoc is to play a simple sound waveform and find the onset of this sound in the audio recording from each device. Cross-correlation can help automate this process. You can then take this as the common "zero" point in both Device's time axes.
Note that you would also want to periodically measure these offsets to account for clock drifts.
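As a rough sketch of that cross-correlation approach (assuming you've already extracted mono audio from both recordings at the same sample rate, e.g. with ffmpeg; the filenames are placeholders):
```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

# Mono audio extracted from each device's recording
rate1, audio1 = wavfile.read("device1_audio.wav")
rate2, audio2 = wavfile.read("device2_audio.wav")
assert rate1 == rate2, "Resample so both files share one sample rate"

a1 = audio1.astype(float)
a2 = audio2.astype(float)

# Lag (in samples) at which the two tracks line up best; the sign tells you
# which recording leads the other
corr = correlate(a1, a2, mode="full")
lag_samples = int(np.argmax(corr)) - (len(a2) - 1)
print(f"Estimated offset: {lag_samples / rate1:.3f} s")
```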
To clarify about the MATLAB code, it is not the MATLAB integration that is doing the calculation. Rather, the MATLAB code uses the output of the Time Echo Implementation from our Python Real-time API.
Also, that part of the MATLAB code does not exactly "use" the estimates. Rather, it provides them to you, already including the mean time offset, so that you can easily send offset-corrected Events, as described in our Documentation and as shown in this example.
However, I do see the typo you mention at line 242. It additionally provides all of the "raw" time offset samples, if you wanted to calculate the mean yourself, but accidentally provided the roundtrip estimates there. It is now corrected, so you can update the integration. Fortunately, this "convenience" feature had no impact on the process above, nor does it affect the working & tested example code linked above.
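For reference, the offset-corrected Event workflow with the Python client looks roughly like this (the event name is just a placeholder):
```python
import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# Estimate the offset between this computer's clock and the Companion device's clock
estimate = device.estimate_time_offset()
clock_offset_ns = round(estimate.time_offset_ms.mean * 1_000_000)

# Timestamp the event locally, then shift it into the device's clock before sending
event_time_device_ns = time.time_ns() - clock_offset_ns
device.send_event("stimulus-onset", event_timestamp_unix_ns=event_time_device_ns)

device.close()
```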
Hi! I am hoping you could help me solve the issue of the screen not showing the footage in pupil cloud. Today I had it in multiple recordings: sometimes the screen is completely grey, sometimes partly. What could be the reason for this and can it be fixed? Thank you.
Hi @user-468437 , could you open a Support Ticket in the 🛟 troubleshooting channel?
Hi Team, if we put AprilTag around the corners of the laptop's screen, we are worried that it might interfere with their central vision. I wonder if we are able to print the AprilTag out on paper/board ,and we can stick it on the back of the laptop so that they do not appear on the screen. Has anyone tried that to avoid distraction of the computer screen? Thx!
Hi @user-9a1aed 👋 ! Yes, you can print them and position them along the edges of the screen if you wish. For convenience, you can find the markers ready to print here.
@user-9a1aed - to add to what @user-d407c1 says, using off-screen AprilTag markers with our PsychoPy plugin requires a little extra effort.
The PsychoPy plugin needs to know the position of the markers in relation to the display. When you use on-screen markers provided by the plugin, then it already knows these positions. With off-screen markers though, it has no way of knowing where the markers are in relation to the screen.
So to make that work, you need to provide the positions of the markers to the plugin. Probably the easiest way to do this is to use the individual AprilTag marker component in PsychoPy Builder (rather than the marker frame component). For each off-screen marker you have, add an on-screen AprilTag marker with the same ID, set its size so that its displayed size matches the size of your physical marker, and then change the position of the marker so that it's in the same "place" (off-screen) as your physical marker.
It's a bit unorthodox, but if you size and position the virtual markers correctly, it will work. Having said that, I think it may be worth a small pilot study to actually determine whether the on-screen markers actually cause an effect versus physical, off-screen markers
@user-cdcab0 That helps, thank you so much !
Hey All, Our lab is looking to purchase a Neon eye tracker and we need a quote to send in to finance it. Can someone please help with the same?
Hi @user-658b57! Please reach out to [email removed] in this regard or simply select the items of interest from our shop and fill out the quote request form: https://pupil-labs.com/products/neon/shop
Hi Nadia,
Thank you for the quick response. We are planning to collect data on how people view images on a screen. Is the "Bundle - Ready set go!" the most accurate one for this task in your opinion? Although it is not intended, there will be some head movement. We are upgrading from the Pupil Core, where we used to have major issues with slippage due to head movement. Looking for your opinion on the best device for this use case.
Our Android phone upgraded to Android 15. We are using the OnePlus 10 phone, which from what I have read requires Android 13 - is there a way to roll back the phone to Android 13 on the OnePlus 10?
one plus 10 pro that is
running neon companion app 2.9.4-prod
Hi, I want to use the PupilLabs Neon to investigate about the Quiet Eye in Golf. How are fixations defined in the Pupil Labs Neon system in terms of minimum duration and angular deviation thresholds (in degrees), and are these parameters configurable by the user?
Hi @user-e0026b , you can find a description of the fixation detection algorithm in the associated whitepaper. The parameters have been optimized for a number of use cases, including sports, and accounts for the effects of head movements, which you can find detailed in our publication.
On Pupil Cloud and in the Neon Companion app, the parameters are not configurable. It might be worth it to first see how the defaults work for you, but you can modify the parameters in the pl-rec-export tool.
@user-a0189b And, if you are open to hanging static AprilTags in your environment, then you could use the Head Pose Tracker plugin of Neon Player. To intersect the gaze ray with the robot arm would again require knowing the pose of the robot arm over time.
I tried to use the AprilTag and ran the enrichment, but it was hard to click on the QR code (marker) during the enrichment. How can I do this more easily?
Hello @user-f43a29 , thank you so much for your response. the two neon devices and the computer running the code are all synched to the same NTP server (time.android.com).
In terms of calculating the clock offset between device 1 and device 2, I am still struggling to wrap my head around why the events are not matched. Both devices have been corrected to account for the offset to the source computer. shouldn't correcting this offset in theory make the timestamps on both phones identical?
Hi @user-87d763 , may I first ask how you did the NTP sync process, as in what steps did you follow?
With respect to the Events being "not matched", they actually are in principle matched and are "happening at the same time", since you accounted for the offset to the source computer and have validated that they correspond to the same physical process in the external world. I ask about the NTP sync steps, because if there was any discrepancy in the NTP process, then the clocks of both Devices will not be in sync and this would propagate down to the Events.
Hello, I’m testing out the Neon for the first time to pilot a new study. I’m noticing that during recordings where I’m walking from one point to another while wearing it, on Player some of the fixations show the green dot drifting away from the yellow fixation circle, when the green circle should probably stay within the yellow. Is this due to potential lag, headset slippage, and/or anything else? Is this cause for concern? Any tips would be appreciated.
The fixation circle "glues" itself to the features within the image by calculating the optic flow for the duration of the fixation. So the fixation visualization has a sort of temporal offset that doesn't apply to the gaze point which is instantaneous
@user-f43a29 every time we make a recording, we reset the date and time on the android phone and then make sure that the "set time automatically" is toggled on so that it is getting the lowest possible drift from the server. based on my research, with this function on, the android phone automatically connects to the NTP server time.android.com
Hi @user-87d763 , could you instead try the following?
On both Android phones:
- First, restart both phones before initiating a sync.
- Next, turn off Set time automatically and set the time one hour wrong.
- Wait 5 seconds, then re-enable Set time automatically

On the Mac computer:
- Open System Settings > General > Date & Time
- Click Set next to Source
- Reset the time server to MacOS's default
- Then, disable Set time and date automatically
- Click Set next to Date and time
- Set the time to be one hour incorrect
- Wait 5 seconds
- Re-enable Set time and date automatically
- Wait 5 seconds
- Open a terminal and run sudo sntp -sS time.apple.com

That sntp command is necessary on modern MacOS, at least, to ensure it has fully synced correctly.
After doing that, could you run a brief test and see how the Events look on both Android devices?
Also, please note that of the three major OSes (MacOS, Windows, and Linux), MacOS has the most trouble with keeping accurate time. At least on modern Macs, its clock can drift more quickly and regularly exhibits sudden step changes in time, on the order of 50 or 80 ms. If you are looking for more stable timing, then you could consider Windows. With Linux (e.g., Ubuntu), testing shows that you will have the best timing results.
In all cases, it would still be advised to periodically measure & update the clock offsets. Some references recommend doing this at least once every 5 seconds.
Hi! I'm curious if there is a noticeable comfort difference between Just Act Natural vs Is This Thing On vs Ready Set Go; similarly, I'm curious if there is a comfort difference between All Fun and Games vs Crawl Walk Run, because I got feedback that All Fun and Games was not comfortable to wear for a long period of time, and I'm curious whether "lightweight" frames like Is This Thing On, Ready Set Go, or Crawl Walk Run would be more comfortable to wear for ~1 hour or so. If anyone has experience with these and has thoughts on wearer comfort, please let me know!
Hi @user-60878d 👋 ! Comfort can be quite subjective — we do our best to accommodate a wide range of facial physiognomies through different frame options, but how the glasses feel can vary from person to person. Some of the lighter-weight options may feel more comfortable, especially for people who aren't used to wearing glasses, but ultimately it depends on the individual.
We’d really appreciate your feedback! If you could share what specifically made the glasses uncomfortable for your subject, please drop us an email at info@pupil-labs.com and we will forward it to the relevant team.
We're always looking to improve and make things more comfortable for everyone.
Hello everyone! I have a question regarding recordings in the cloud or recordings in general. When we record outdoors, it isn't always possible not to have short outages of the sensor. Participants jostle the device unintentionally and the USB-C connection briefly cuts out. As far as I know, this then makes it impossible for the cloud service to process the recording.
Can I do anything after the fact, to make analysis possible? If not for the entire recording, could I split the recording somehow, so the outage isn't part of it?
Hi @user-1391e7 ! While it's not typical for the USB connection to come loose unless a significant force is applied, both the Companion App and Cloud are designed to handle this scenario.
If the cable gets disconnected during a recording, the signal will drop — but once reconnected, the glasses will resume normal operation without needing to restart anything.
In Pupil Cloud, you'll notice gray frames in the timeline during the disconnection period. Here's a screen recording where I disconnect and reconnect the glasses multiple times, so you can see how it's handled.
Thank you for the reply! Does this also work for recordings done in early December 2024? Did you make changes to the recordings themselves, or should that work in any case?
Hi! I noticed that the camera module got hot after 3 minutes of recording -- is that expected or is the module malfunctioning?
Hi @user-60878d! The module itself might warm up but should not overheat. However, this should not affect the Neon App functionality. Did you get any errors?
Also, which frame are you using? Does it have a heatsink? Our current frames for sale come with an aluminum heatsink.
The heatsink helps dissipate heat from the module to the front of the frames, ensuring the module gets warm but not hot.
See also this relevant message: https://discord.com/channels/285728493612957698/1047111711230009405/1265679380471087246
Hello, my lab recently purchased the Neon eye tracking glasses and even as a freshman undergrad in EE, I am disappointed by the mechanical design of the glasses. If you’d like, I can help design a brand new frame for the glasses that you all can ship as the standard or sell separately.
Hi @user-4ef9ea 👋 ! Thanks for the feedback — we really appreciate you taking the time to share it.
We designed the frames to accommodate a wide range of facial shapes, securely house the sensor module, and use materials that balance durability with affordability. That said, we're always looking to improve.
We'd love to hear more about what didn't work well for you. Feel free to email us at info@pupil-labs.com with your suggestions or complaints — we're always open to improvement.
That’s fair. I will email some suggestions. Otherwise, the product is great and really shines with its software.
Thanks, looking forward to it.
FYI, we have also open-sourced the module, nest, and some frame geometries on GitHub, so that anyone can design their own. 😉
It seems I copy pasted wrongly the neon geometries link, my colleague made me aware. I have modified the message, but it is also here (https://github.com/pupil-labs/neon-geometry)
I would like to know how big the QR code (AprilTag) needs to be, and how close the eye tracker needs to be to it.
Hi @user-a0189b , a good rule of thumb is that they should be visible to you in the scene camera image. It sounds like perhaps they were too small in this case. If you can share a screenshot, we can provide feedback.
Hi @user-a0189b , those markers look too small. You can print them out larger and paste them to the edges of your monitor, in case you are looking to save screen real estate.
Also, your scene camera looks potentially blurry. Could you try carefully wiping the scene camera lens with the provided lens cloth and see if that improves it? Let us know, if not.
After both of those steps, you can then submit some images again for review, if you'd like.
Just a note — the snippet is fully standalone, so if that's exactly what you need and you have uv installed, you can run it like this:
uv run -s name_of_the_snippet.py path_to_timeseries.zip
That’ll give you the count and average duration of fixations in the specified intervals.
Also, it might look a bit more complex than it actually is — most of that is just boilerplate to make it run standalone. The core logic starts around the events_sorted line.
Hello everyone, I have a quick question. What is the usual delivery time for delivering to the netherlands? I'm doing my thesis using the ready set go glasses and want to get started as soon as possible
Hi @user-eee2af , we also received your email and will follow-up there.
Hi, can the Samsung S25 Ultra be used as a companion device?
Hi @user-bed573! Thanks for your question. We're going to add this to our list of supported devices as 'experimental'. We believe it should work with Neon, but can't make complete guarantees. If you have one already, it's definitely worth trying. For reference, our supported devices list can be found in this section of the docs.
I would like to know: if the scene is changing due to head movement without eye movement, will the gaze data stay the same?
Hi @user-a0189b , if you were to keep your eyes in a specific orientation and turn your head, such that your head turns and your eyes stay fixed in the head (in other words, not eliciting VOR), then yes, gaze x/y coordinates will essentially remain at the same point. This is because gaze data are specified in the coordinate system of the scene camera.
Do you need the gaze data to rather be in a world-referenced coordinate system?
I used the surface tracker feature and compared the data with the fixations data. In some cases, the fixations on and off the surface combined are fewer than the total fixations detected. I would like to know the reasons for that. Is it possible to figure out which of the fixations were omitted when using the surface tracker?
Hi, @user-ab3403 - are you using Neon Player? The number of fixations should be the same in both, but perhaps there are some edge cases. Regardless, the fixations are assigned an ID, so finding missing fixations is simply a matter of looking for missing IDs
Hello again! I had a quick question regarding Neon terminology. I am using the realtime API synchronization. Are the timestamps that are recorded and collected as part of the timeseries data (e.g., in the gaze data) reflective of the device time (the companion device) or the external clock time? Furthermore, in the world_timestamps.csv output, again can you please clarify whether these timestamps are reflective of the device time or the external clock time? Overall, I guess my question is whether the outputted timestamps are device specific or if they reflect a system time. Thank you so much in advance for your help!
Hi @user-87d763 , when you say „external clock time“, do you mean the clock of the receiving computer running Python/MATLAB or some other clock?
To clarify the other part, the timestamps from Neon‘s real-time API and in the data files, including CSV, for all sensors, are from Neon‘s clock. Those timestamps all come from the same clock source. In other words, the time axis of the Neon device. They are also technically a system time. They are in UTC format and count from the Unix epoch.
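For example, you can convert such a nanosecond timestamp to a human-readable UTC datetime like this:
```python
from datetime import datetime, timezone

ts_ns = 1747875211899127575  # example nanosecond timestamp from a CSV export
print(datetime.fromtimestamp(ts_ns / 1e9, tz=timezone.utc))
```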
Hi, in January I bought "Better safe than sorry - Neon eye tracking bundle". Unfortunately, the project got discontinued and I would like to sell the glasses with the phone. The glasses have barely been used. I would like to sell it for 5600 Euro. Contact me if you are interested [email removed]
The colors of the circles are
Hi @user-f43a29 - yes, by external clock time I was referring to the receiving computer that is running the script.
As for the timestamps, thank you for clarifying. I was curious as to how they might be meaningful and relevant across devices. For example, is a timestamp of 1747875211899127575 on one clock equal to the same timestamp on a separate device? Or, since they are two completely separate devices, can we not equate the timestamps without taking into account the offset correction provided by the real-time API code? I hope this line of reasoning makes sense! Thank you again for your help.
Hi @user-87d763 , since it is not possible to perfectly synchronize any two clocks, such that they tell time exactly the same, one usually defines an acceptable „error“ threshold.
With the NTP sync method that was posted above, you can typically expect an estimated difference of 5 to 20 ms between the two clocks on average, at least for ~20 minutes, depending on your OS.
When taking clock offsets into account, it depends on the precision of said offset. There’s always some jitter, but on modern computers and the Companion devices, it’s often negligible for many use cases, including high-precision neuroscience. If it is of concern, you can estimate it for your two devices by periodically collecting their relative offsets over a time window, say 5 minutes. If the error is acceptable, then yes, you can consider the timestamps effectively the same, provided you’ve either recently synced them both via NTP and/or you’ve used the clock offset to convert them to a common time axis.
To be clear, many applications can even accept more error. It all depends on the time scale of the mechanisms in question.
Hello, is there a way to look at the pov camera and the synced eye cameras to check if there are actually blinks? I am doing smooth pursuit tracking during a sports task and I am getting blinks during bat ball contact. I would like to manually check if blinks are being registered because participants are blinking or if it’s due to occlusion of the pupil because they are looking downward. Thanks!
Hi @user-f0ea5b! You can use Neon Player to do this - it has an eye video overlay plugin.
Hello, a question about offset calibration. Is the correction applied in pixels or in angles? The distance between the wearer and the target affects the perceived offset (and its correction). However, since Pupil does not read the distance between the wearer and the target, I assume the offset is based on pixels, right?
Moreover, is there a distance you recommend to check if calibration is needed?
Finally, I have seen this page: https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433 Is there something similar in NeonPlayer? Thanks
You're correct - it's in pixels. In Neon Player, you can adjust the manual offset by going to the Gaze Data settings
Note that the unit there is technically "norm" units, but that's just multiplied by the resolution. Effectively, it's still pixels
the new Tobii glasses look like half a Pupil Neon at twice the price and with most functionality locked away
Hi Team, for computer-based tasks, do u guys have a recommended viewing distance from a laptop? I could not find any article about the viewing distance. I understand that most Neon users may not conduct computer-based tasks. I appreciate any help/information!
Hi @user-9a1aed and @user-1391e7 , if it helps, Neon is designed to be used at any distance, both indoors & outdoors. It works on different principles from remote, desktop-mounted eyetrackers. See this message for more details: https://discord.com/channels/285728493612957698/1047111711230009405/1371897435781337200
Gaze offset correction is also independent of viewing distance. The offset correction is a wearer specific parameter. It is something that, if it is needed, you essentially set one time and it is applicable at all distances, rather than being something you set differently for different viewing distances.
When it comes to screen-based tasks and/or using AprilTags, the following tips can be kept in mind:
I'm just a dev as well, so take this with a grain of salt :). AFAIK, the model was trained on larger viewing distances, but you can perform an offset correction. As for viewing distance from a laptop, wouldn't you always be placed ~60-100 cm away anyway, just by the nature of the screen, the resolution, and how you interact with it?
Thx a lot! This is my first time using a laptop to display the stimuli, since we might want to carry the device around. I wanted to use the smallest viewing distance so that the visual angle could be larger, because I want to minimize the AprilTag's size. For desktops, it should be around 60-100 cm... but I was thinking 30 cm for laptops. I feel like people sit closer to a laptop's screen.
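For what it's worth, I've been estimating the marker's visual angle with the standard formula (the numbers here are just examples):
```python
import math

marker_size_cm = 3.0        # printed/displayed marker edge length
viewing_distance_cm = 30.0  # e.g., sitting close to a laptop

visual_angle_deg = 2 * math.degrees(
    math.atan((marker_size_cm / 2) / viewing_distance_cm)
)
print(f"{visual_angle_deg:.1f} deg")  # ~5.7 deg for these numbers
```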
Hi! I have a question. I would like to get the x and y coordinates of the gaze, in pixels, on the screen that is in front of the person during the experiment. They will perform various tasks on the computer while looking at the screen. Is it possible to get the data from Neon and transform it into pixel coordinates in the plane of the screen in front of the user, all at 200 Hz? I tried the demo available on GitHub with the virtual AprilTags, but my supervisor told me that obstructing the screen during the experiment is an issue. I also had issues with edge detection for finding the screen, because the script didn't detect the screen in every frame, so the data acquisition frequency dropped significantly. Do you have an idea of what the best solution could be for this problem? Thanks a lot!
you could attach a frame of printed out tags to the screen. that way, the screen is not obstructed, and the tags are still in view of the video
Hi Oscar, if u search for keywords "off-screen AprilTag markers," you can see Dom's reply to the printed-out method. I had a similar issue
I'm having issues finding the neon device via local network. I am almost 100% sure it is due to a firewall restriction. is there a range of UDP ports I need to have open, on top of 5353, to allow the neon device to be found?
Hi @user-1391e7! Could you describe the network are you using? Is it an institutional network?
Hi Pupil Labs team, I am running a study using Pupil Labs Neon where participants are free to explore an exhibition space viewing multiple objects. What is the best approach to create the 3D model using April Tags? I am assuming I need to create multiple RIMs but not sure how many would be sufficient as the exhibition space contains multiple areas, and each participant is free to decide their own route. Many thanks!
Hi @user-535f1a , it sounds like you aim to do a combination of these two guides?
That should work and then, technically, you can merge all the "tag aligned" recordings into one 3D model of that exhibition space, afterwards.
The process for taking the AprilTag recording for Tag Aligner remains the same -> it just needs to be a recording where the AprilTag was hanging in the region that was scanned. It does not need to be a recording where a participant did the task, nor does it need to be part of the Scanning Recording for Reference Image Mapper, although that can save a bit of time. It also does not need to be long, but it will help to have some movement during the "Tag Recording".
Hi, I have a cloud registration for the Invisible. Can I register in the same cloud with Neon, or if this cannot be done, can I combine the two separate clouds?
Hi @user-3c26e4 , yes, you can use as many Pupil Invisible and Neon eyetrackers under the same Pupil Cloud account as you like. You can even mix Pupil Invisible and Neon recordings in the same Projects and Enrichments.
Thanks @user-f43a29 for your prompt reply and info, very helpful, will look into the guides!
You are welcome!
Hello, a question about the model of the mobile device. Does the OnePlus brand only work with these two models: Android 11 on the OnePlus 8 and 8T, and Android 12/13 on the OnePlus 10 Pro? Is it okay to use OnePlus' latest phones?
Hi @user-5a90f3 , if you want to use a OnePlus phone, then only those OnePlus devices with those Android versions, as specified in the Documentation. Using newer OnePlus devices is not supported with Neon.
Thanks @user-f43a29 for your prompt reply!
Dear Team, the USB-C port recommended by your company is not available in my region. May I check if you have an idea if this one works for data transfer and power? Thanks a lot in advance!
Anker 553 PowerExpand 8-in-1 USB-C PD Hub $640 x2 USB-C Port, 2 USB-A, 2 HDMI, Ethernet, Micro SD/SD 100W PD
https://www.amazon.com/Anker-PowerExpand-Adapter-Delivery-Ethernet/dp/B0874M3KW4?th=1
Hi @user-9a1aed , we have not tested this hub, so we cannot say one way or the other. If you have success with it, please let us know!
Hello, Does the new Motorola meet the requirements for mobile devices, such as the Moto edge 60 pro?Thanks
Hi @user-5a90f3 👋! Only the devices listed here have been tested and are officially supported.
Hi team. I've opened a support ticket. I'm having trouble with my companion device. Could someone help me out? We are in the middle of a research project.
Hi @user-37a2bd , thanks. We've taken a look. If it is only a security update for Android 13, then you can do that. As listed in our Documentation, Android versions 13 and 14 are supported on the Moto Edge 40 Pro.
Otherwise, it can be helpful to disable Smart Updates to prevent accidentally updating the device to an unsupported version.
Dear Team, I’m using Pupil Labs’ Realtime Python API to retrieve data from Neon in my Python code. After upgrading the Realtime API from version 1.4.0 to 1.5.0, the log message "No cached eyes video frames available for matching" started appearing frequently. Why is this happening? Can you reproduce the issue on your end as well? Thank you!
Hi @user-2d96f8 , could you describe what your code is doing and how you are using Neon when you see that message?
Hello. We have run into a problem in our data, and we were hoping to get some insight into possible ways to salvage it. Our experimental setup is such that we are running two Neon devices at the same time, and we need to ensure their synchronization for data analysis. Unfortunately, our problem is that we miscalculated the device offsets using the realtime API code, and thus we do not have synchronized events in our data. As such, we have been trying to brainstorm ways to post-hoc sync the data and then verify that the correction is correct.
In order to brainstorm ideas for offset correction, we recorded a “test” video that is not affected by the miscalculation bug. The test video involves both eye trackers recording a dual audio/visual event of a physical clap. With this video, here are some things we tried and our observations:
Based on these tests, we have a few remaining questions. First, how can we be confident about the timestamps and the auditory offset alignment? Is there a way we can see/prove to ourselves that the timestamps are fully synchronized both in the audio and visual mediums? Is there any chance that the audio/visual alignment is not perfect, but we can prove to ourselves that the timestamps are synchronized?
If you have any ideas/feedback on how we can move forward to salvage this data, that would be greatly appreciated. We are more than happy to share our test recordings, if that would help.
Hi @user-87d763 , if you could share a sample recording from a session with both devices, that could be helpful. You can share it with data@pupil-labs.com via Google Drive, if you'd like.
With respect to "regenerating the video", are you using OpenCV to open and inspect the scene camera video? Since you already have synced audio at item 1, then it might be easiest to first narrow down the issue in that case, before moving on to items 2 and 3.
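In the meantime, if useful, here is a rough sketch (under stated assumptions, not a definitive method) of estimating the inter-device offset by cross-correlating the two audio tracks around the clap. It assumes you have already extracted mono WAV files at the same sample rate (e.g. with ffmpeg); the file names are placeholders:

import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate, correlation_lags

fs_a, audio_a = wavfile.read("device_a.wav")
fs_b, audio_b = wavfile.read("device_b.wav")
assert fs_a == fs_b, "resample to a common sample rate first"

# Cross-correlate the full tracks; the peak lag gives the relative shift
corr = correlate(audio_a.astype(float), audio_b.astype(float), mode="full")
lags = correlation_lags(len(audio_a), len(audio_b), mode="full")
offset_s = lags[np.argmax(corr)] / fs_a

# Positive offset should mean the clap occurs later in device_a's audio; the sign
# convention is easy to get backwards, so verify it against the clap in both videos.
print(f"Estimated offset: {offset_s * 1000:.1f} ms")

Comparing that audio-derived offset with the difference between the scene-video frame timestamps of the clap would give you an independent check of the synchronization.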
Hi there! We're having trouble getting our brand new Neon to work with psychopy.
- We managed to get the Neon connected via ethernet and recognized by the Windows 10 computer, by using the recommended tp-link router (itself connected to the recommended Anker hub), so far so good.
- We installed the pupil-labs toolbox from the "get more" component in Psychopy, hope this was the right way to do this (couldn't get the pip -install thing to work but we're probably doing it wrong).
- We can get a recording to start on the phone from PsychoPy:
from pupil_labs.realtime_api.simple import discover_one_device
device = discover_one_device()
device.recording_start()
- Any attempt to send any other command (e.g. a simple "recording started" label with device.send_event) then throws a WinError 10048: only one use of each socket address is allowed ('', 9036). The traceback shows that this occurs when psychopy tries to run start_iohub_process.py.
- Trying to use the embedded PsychoPy components (such as Eyetracker Record with default parameters) either does not start a recording at all, or creates a recording with null duration, or throws the same error.
- Trying to setup things using the coder example (https://docs.pupil-labs.com/neon/data-collection/psychopy/) leads to similar consequences.
Any help is appreciated ! 🤗
Hi @user-a4e164 , the PsychoPy team no longer recommends Coder for eyetracking experiments. If you want to use code, then you can run a normal Python virtual environment with an install of our Real-time API and PsychoPy, and then use the API for interacting with Neon and PsychoPy for displaying the stimuli. This lets you side-step Coder.
With respect to the issue, what version of PsychoPy have you installed?
To install the Pupil Labs plugin, you want to use the Tools > Plugins/Package Manager menu item, rather than "Get more". Also, just to note, you don't typically use PsychoPy's self-managed pip to install the plugin; it will do that process for you internally. It might be worth re-installing the plugin after removing the psychopy3 folder in %APPDATA%/Roaming (just copy-paste that path into Windows Explorer). Otherwise, may I ask, when you hit that error message, was it the very first time you tried to run the code, or after an initial crash of PsychoPy? Or did the code use launchHubServer together with discover_one_device?
Thank you, will try a clean reinstall during opening hours. In the meantime if you have a minimal working example of script available somewhere that would be very helpful.
Hi @user-a4e164 , if you go the route of doing it all in Python code, then you can change our Coder example to instead directly use our Real-time API. Just take note of what that means in terms of how you save the data. Also, you may find referencing this tutorial helpful.
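In the meantime, here is a minimal sketch of that code-only route, combining the Real-time API with PsychoPy directly (i.e. without Coder/ioHub). The stimulus, event names, and timings are placeholders, so treat it as a starting point rather than a finished script:

from psychopy import core, visual
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()            # finds Neon on the local network
if device is None:
    raise SystemExit("No Neon device found on the network")

device.recording_start()
device.send_event("recording.begin")

win = visual.Window(fullscr=False, color="grey")
stim = visual.TextStim(win, text="Hello Neon")

for trial in range(3):
    device.send_event(f"trial_{trial}.start")
    stim.draw()
    win.flip()
    core.wait(2.0)                        # placeholder stimulus duration
    device.send_event(f"trial_{trial}.end")

device.send_event("recording.end")
device.recording_stop_and_save()
device.close()
win.close()
core.quit()

Saving your behavioural data is then up to your own script, and the events you send let you line everything up with the Neon recording afterwards.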
Hello @user-f43a29 ! I just sent the data files to [email removed] Please let me know if you have any issues accessing it, or if you have any other follow-up questions. To answer your previous question, we used MATLAB to identify the temporal offset, and then used ffmpeg to stitch the two videos back together with the offset correction accounted for. However, we understand that this might not be the best approach.
Thanks, @user-87d763 . We will take a look in the morning. ffmpeg is a good tool for such a task, but it has many command-line parameters. It may just be an option that needs to be tweaked.
Hello, I forgot to tick Pupil Cloud unlimited storage when I bought a device, and I want to ask how to buy it? Thanks
Hi @user-5a90f3 👋 ! You can find the different options at the bottom of this page, simply add them to the cart and request a quote.
If you prefer you can simply send an email to sales@pupil-labs.com requesting it.
Ok thanks! Also, why does my download keep failing when downloading Timeseries Data from Pupil Cloud? It gets stuck at 14.1 MB and never completes. I changed networks and asked a friend to help download it, and it still got stuck at 14.1 MB. What could cause this, and how can it be solved? Thank you
Hi @user-5a90f3 ! Could you please open a ticket on the 🛟 troubleshooting and share the recording ID that is giving you issues to download?
Hi! I had a question about the gaze-offset. It says the gaze-offset is only for that recording. Is there an option to make the gaze-offset universal for that participant? I was thinking something like Meta Aria's personalized calibration process.
Hi @user-60878d, yes, you set it in the Wearer’s Profile in the Neon Companion app. We recommend making a separate Wearer Profile for every participant who wears the glasses.
Just to clarify, this is different from a calibration. Neon is calibration free (eg, you can take the glasses off and put them back on and you are automatically provided with accurate gaze data), but for a subset of participants, there will be some offset in its gaze estimates. Without offset correction, you can expect a gaze accuracy of ~1.8 degrees over a sample of the population, as covered in our Neon Accuracy Test Report.
In principle, you only need to do this once, if it is deemed necessary. The saved offset correction in the Wearer Profile is automatically applied to all future recordings. You can bring the participant back later, load up their Wearer Profile, and simply start recording.
Also, if you have not yet had your free 30-minute Onboarding, then just send an email to info@pupil-labs.com
Hello, how long does it take to process a thirty-minute video with Reference Image Mapper, what does the processing speed depend on, and how to speed it up?
Hi @user-5a90f3 ! Processing time depends on a few factors — mainly the total duration of the recordings and the current queue of submitted jobs.
You can find more details in this message:
https://discord.com/channels/285728493612957698/633564003846717444/1251121902265827418
To speed things up, you can limit the analysis to specific segments where the reference image is actually visible. This is done using enrichment sections based on events — more on that here:
👉 https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/#enrichment-sections
That approach reduces processing load and can speed up the enrichment!
Hello, may I know if I can collect eye movement data for a program that runs on jsPsych? I do not see any plugins for jsPsych. But I guess it is possible to treat the computer monitor as a physical object? Then I can use off-screen AprilTags to help convert the fixations to screen coordinates? Or is there a more efficient way to collect eye movements? Thanks a lot in advance!
Hi @user-9a1aed 👋 !
There’s currently no official JavaScript client for the Network API or a plugin for jsPsych.
Could you share a bit more about what you're aiming to do? Are you looking to use the data in real time, say for example, in a gaze-contingent paradigm?
If so, you can write your own JS client for the Network API, or use our ready-to-use Python libraries like this to get gaze-on-screen coordinates and then relay it to jsPsych via WebSockets, for instance.
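For example, here is a rough sketch of such a relay (not an official client). It uses our simple Python API plus the websockets package (a recent version that accepts single-argument handlers); the port and JSON message format are arbitrary choices:

import asyncio
import json

import websockets
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

async def stream_gaze(websocket):
    while True:
        # receive_gaze_datum() blocks, so run it in a worker thread
        gaze = await asyncio.to_thread(device.receive_gaze_datum)
        await websocket.send(json.dumps({
            "x": gaze.x,                          # scene-camera pixels
            "y": gaze.y,
            "worn": gaze.worn,
            "ts": gaze.timestamp_unix_seconds,
        }))

async def main():
    async with websockets.serve(stream_gaze, "localhost", 8765):
        await asyncio.Future()                    # serve until interrupted

asyncio.run(main())

On the jsPsych side, a few lines of JavaScript opening new WebSocket("ws://localhost:8765") would then receive these gaze messages.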
If that sounds a bit too complex, you might want to explore PsychoPy together with our real-time Python client, which already handles that workflow.
Alternatively, if you're mainly looking to start/stop recordings and send events at specific points, that can be done directly in jsPsych using simple HTTP requests. Just note that for high-precision timing, you’ll need to account for the time offset between the device and your local clock.
From there, you can post-hoc analyze the data using the Marker Mapper.
Hi, is there any tool or code available to automatically detect the calibration target recommended for Pupil Core in a Neon/Invisible scene camera recording?
In my experiment, I included a 5-point validation since I needed to understand how accurate the tracking was for a specific recording, and the normal offset correction was not possible (due to the experiment timing, multiple recordings with multiple eye trackers at the same time, different eye trackers (Neon, Invisible, another brand), etc.). Now I need to calculate the exact validation error, but doing this manually is very time consuming and our automatic approaches have not worked so far. When speaking with some of the Pupil Labs representatives who joined the project recording last September (in London), they mentioned that there might be some code/solution available to do this automatically, specifically identifying the pixel corresponding to the center of the target in the scene camera frames (which can then be used to calculate the validation error). Is there any automatic solution or code like this available? It would be extremely helpful! Thank you very much.
Hi @user-33134f 👋 ! Yes, you can leverage Pupil Core's open source code with its Circle Detector for this.
Copy the circle_detector.py to your directory.
Then:
from circle_detector import CircleTracker  # the file you just copied next to your script

tracker = CircleTracker()

# Grab the frames however you want: could be using pl-neon-recording, could be using Cloud, ...
# Undistort the frame and gaze first.
for frame in ...:  # your frame source
    # note that frame.to_ndarray or so would completely depend on how you get the frame
    markers = tracker.update(img=frame.to_ndarray(format="gray"))
    for marker in markers:
        # each marker detected has a "marker_type" and an "img_pos"
        if marker["marker_type"] == "Ref":  # a calibration marker, not a stop-calibration marker
            x_marker, y_marker = marker["img_pos"]
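From there, a rough sketch (variable names are illustrative) of turning the detected target center and the gaze point, both in undistorted scene-camera pixels, into an angular validation error, given the scene camera matrix K (e.g. from scene_camera.json):

import numpy as np

def pixel_to_ray(point_xy, K):
    # Unproject an undistorted pixel into a unit-length 3D ray in camera coordinates
    x = (point_xy[0] - K[0, 2]) / K[0, 0]
    y = (point_xy[1] - K[1, 2]) / K[1, 1]
    ray = np.array([x, y, 1.0])
    return ray / np.linalg.norm(ray)

def angular_error_deg(marker_xy, gaze_xy, K):
    # Angle between the target ray and the gaze ray, in degrees
    cos_angle = np.clip(np.dot(pixel_to_ray(marker_xy, K), pixel_to_ray(gaze_xy, K)), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

Averaging that error over the frames of each validation target gives a per-target accuracy estimate.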
Hi! I'm having some trouble with enrichments. In my experiment, subjects look at multiple pictures on a screen, and I need to get data on these pictures individually. I tried using Reference Image Mapper, but it doesn't see the pictures in the recordings. I considered using marker mapper (I have used markers on the screen during the recordings) and use the temporal selection since every picture triggers a specific event, but I can't because the stimuli were randomized, so a picture is followed by a different one every time, which means I don't have anything to select under "to event". Is there anything I can do?
Hi @user-e33073 , may I ask what you mean by „it doesn’t see the pictures“ and what was your naming scheme for the Events?
I tried using the original image as a reference, but when I run it Pupil Cloud doesn't recognize the image in any of my recordings
I see. So when Reference Image Mapper is asking for a Reference Image, that means a picture of the surrounding context as well. So for a monitor, the Reference Image should also include the desk, keyboard, etc. Then, you can mark the monitor as an Area of Interest. It works by building a 3D model of the environment and then mapping gaze into a stable 2D Reference Image of that environment, so an image of just the picture will not be enough for it to do its work.
It essentially needs some static stable features that are constant in the environment. And the Reference Image should resemble the scene that was scanned during the Scanning Recording.
The Events are all called after the file titles, which are based on the type of painting it is (they're all paintings). For example, IMR1 is an impressionistic painting
I see. Did you name them like „IMR1.start“ and „IMR1.end“ to mark the beginning and end of stimulus presentation? If not, there are solutions, but just so I understand.
Unfortunately, no
That is okay. So, when using either the Reference Image Mapper or the Marker Mapper, the routines also give you the eyetracking data in the coordinate system of the Reference Image. These are found under the Downloads tab with the green icons and the same name as the corresponding Enrichment. So, for your current situation, you can then open these data in Python or Matlab and, using the Events timestamps, filter to only the time window for a specific stimulus. You would essentially find which Event is „IMR1“, let’s say it is Event 7, and then analyze the data up to Event 8. This allows you to side-step the absence of an explicit end Event.
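If it helps, here is a rough sketch of that filtering step in Python/pandas. The file and column names follow the Pupil Cloud exports ("events.csv", "timestamp [ns]"), "IMR1" is just your example event name, and a single recording is assumed:

import pandas as pd

events = pd.read_csv("events.csv")
gaze = pd.read_csv("gaze.csv")   # or the Marker Mapper / Reference Image Mapper gaze export

# Timestamp of the "IMR1" event and of whatever event follows it
t_start = events.loc[events["name"] == "IMR1", "timestamp [ns]"].iloc[0]
t_end = events.loc[events["timestamp [ns]"] > t_start, "timestamp [ns]"].min()

# Keep only the gaze samples recorded while IMR1 was on screen
imr1_gaze = gaze[(gaze["timestamp [ns]"] >= t_start) & (gaze["timestamp [ns]"] < t_end)]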
For future experiments, when working with randomized stimuli, you could call them „beach.picture.start“, „castle.picture.start“, etc. and „beach.picture.end“, „castle.picture.end“, and so on. Then, it does not matter what order they were presented and can be used to set Enrichment Sections.
Alright, thank you!!
Hi @user-f43a29 , I hope you are well. I am analyzing a Neon recording and have a question about the gaze.csv obtained from the Pupil Cloud analysis of the recording. Specifically, I'd like to clarify whether there are conditions in which an entry/row/trace in gaze.csv can have both a fixation id and a blink id, i.e., it is declared part of a fixation event and a blink event at the same time. If so, could you say a little bit about what those conditions might be? My basic assumption was that there are three cases, 1) fixation, 2) saccade, 3) blink, that are distinct from each other. Therefore, if fixation id is populated in gaze.csv, I'm in a fixation (and blink id should not be populated). If a blink id is populated, then fixation id should not be populated. If neither blink id nor fixation id is populated, then those gaze traces correspond to a saccade. The data don't seem consistent with this expectation, and I'm trying to figure out how to make sense of rows where blink id and fixation id are both populated.
Hi @user-97c46e , thanks, I hope the same for you.
The fixation detector and blink detector run as parallel & independent processes. Although rare, there can be moments when they both return a positive classification. Rather than impose a decision, we provide you directly with the raw outputs from these detectors, so that you have maximal flexibility with your data analysis. For example, some groups will simply filter all data during a blink, others will also exclude some data pre- and post-blink, whereas others might first inspect the eye videos for the edge cases you mention, to see what it was exactly.
May I ask how often you see this classification overlap? If often, what do the eye videos look like in those cases?
Also, if you’d like more details, you can read about the fixation detection algorithm in the respective white paper:
Note that the Blink detector in the latest app release takes into account eye openness
From the one recording I'm analyzing, I see 37745 instances of fixation and blink overlapping and 132872 instances of saccade and blink overlapping, out of 931866 entries.
Blink and fixation overlap 4.05% of the time, and saccade and blink overlap 14.2% of the time, so about 18% of the time in total.
Is it fair to infer loss of eye tracking when blinks are registered? And if so, does that imply that the gaze coordinates, azimuth, and elevation readings during the blink events are some kind of interpolation/extrapolation made by the cloud fixation detector, maybe with something like a Kalman filter running underneath?
To clarify, I've looked at the algorithm as described in the whitepaper, and I just want to make sure I'm tracking it correctly. Your guidance will help me decide which of the three approaches you mentioned above I should take.
There is no interpolation or filtering of the gaze data. Rather, NeonNet analyzes each eye image and makes a gaze estimate. Similar to before, we offer you the raw output. You then have the freedom to take & reject what is appropriate for your research context. Naturally, the gaze data will be less accurate when the eye is closed, but NeonNet still can provide its best estimate.
For example, many researchers filter out the data during a blink. The type of filtering they do varies from group to group.
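As a minimal example of that kind of filtering, a pandas sketch (column names as in the Pupil Cloud gaze.csv export) that drops every sample assigned a blink id:

import pandas as pd

gaze = pd.read_csv("gaze.csv")
gaze_without_blinks = gaze[gaze["blink id"].isna()]   # keep only samples not assigned to a blink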
Thank you for sharing those numbers. I can check with the team on Monday to be sure I give you the best answer. In the meantime, what do the eye videos look like around the moments that exhibit overlap?
Sorry to be a bother @user-f43a29 , could you give some insight on "the gaze data will be less accurate when the eye is closed, but NeonNet still can provide its best estimate."
NeonNet is the deep-learning powered gaze estimation pipeline that enables Neon‘s calibration-free nature. It uses the whole eye image to come to an estimate. Hence, it takes an image and provides a gaze estimate. It is not a dark pupil detection method, if that helps make it clearer.
If I'm following correctly, NeonNet provides a gaze estimate for each image, but there is no smoothing over time of these raw estimates. The fixation detector in the white paper may do some smoothing on these raw estimates (with the LPF mentioned in the white paper), but that is to delineate the start/stop of each fixation. Am I interpreting reasonably?
Correct. The raw gaze data, themselves, are not smoothed.
The low-pass filter is the first stage and in combination with the subsequent stages, you get fixation events and their time windows. Then, the gaze data in that window are used to calculate fixation x/y coordinates, for example
In that case, if I have a couple of gaze traces at the beginning of a fixation also labeled as blink, then it might just be the blurring introduced by the low-pass filter (edge effects). Would that be a fair way to interpret what I'm seeing?
Ah, gaze is not an input to the blink detector.
Sorry, I am not necessarily familiar with dark pupil detection. But I do follow that NeonNet will give a gaze estimate with whatever part of the eye is visible (even if less of it is visible or if it happens to be closed?). I am assuming that those cases (below a certain threshold) will be marked as blinks?
The details are more nuanced than that. For example, closing the eye for too long is not a blink. Similarly, closing and opening it slowly is not a blink.
Hi, I have a question regarding the real-time data on the Neon "I can see clearly now". Does that include degree angles? In addition, what are the specifications of the standard adult-sized glasses? Are there any Neons that work with different populations?
Hi @user-7c5b51 , may I ask what you mean by „degree angles“? Do you mean azimuth/elevation of the gaze ray?
With specifications, do you mean what prescription lenses does it come with or rather the size & weight?
There is only one Neon module, which is the eye tracker. It is modular and can be fit in many different frames for different use cases. It is, in all cases, the same eyetracker and gaze estimation & analysis pipelines.
See here for a guide to our frames: https://discord.com/channels/285728493612957698/733230031228370956/1376859240836108299
But the question I was trying to clarify above is what happens after the Neon gaze estimate has been obtained. I assume that the fixation detection algorithm from the whitepaper is run on those estimates and that it has a low-pass filter (the SG 3rd-order polynomial). I'm seeing blink and fixation overlap in many cases on the early traces of a fixation and on the last few traces of a fixation (though not perfectly consistently). Could that overlap be attributed to the edge artifacts of the low-pass filter that the eye gaze estimates are put through?
I can ask the team on Monday, but we do have open source implementations of the detectors that you can experiment with as well.
It is a little odd that the categorization itself might be blurred, because the algorithm is only taking raw data, but I can imagine a setting where it works out that way.
aha... ok. gotcha...
perfect, i think i am following you now
Blink detection is thresholding the eye openness signal only?
It’s using the eye openness signal. We will soon put up a description of how eye openness is now taken into account.
Sorry, for the blink I just got directed to a paragraph in the documentation, not a whitepaper.
Apologies, my mistake. The documentation is in the process of being updated for the new blink detection description.
I got a whitepaper for the fixation detection.
No problem. I think the blink whitepaper you mentioned might be the missing piece for me. I might be able to work through what I should do upon reading it.
@user-7c5b51 Also, may I ask what populations are you working with? Neon has been designed to work with as many as possible, and has a monocular estimation mode that can be helpful in clinical research.
You can learn more about how we assessed & validated Neon’s gaze estimation accuracy across a sample of the population in the Neon Accuracy Test Report
Sure, so that is not provided as a base real-time stream, but we give you all the elements needed to compute it in real time: you can subscribe to the real-time gaze stream, fetch the scene camera calibration, and compute the azimuth/elevation of the gaze ray yourself.
Do you have a specific latency requirement?
Also, if you would like to see this added, feel free to open a 💡 features-requests
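For reference, here is a rough sketch of one way to do that with the simple Python API. The calibration field names ("scene_camera_matrix", "scene_distortion_coefficients") are assumptions about the structure returned by get_calibration(), so double-check them against your installed version:

import cv2
import numpy as np
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
calib = device.get_calibration()
# Assumed field names; adjust if your version exposes them differently
K = np.asarray(calib.scene_camera_matrix, dtype=float).reshape(3, 3)
D = np.asarray(calib.scene_distortion_coefficients, dtype=float).ravel()

while True:
    gaze = device.receive_gaze_datum()
    # Undistort and unproject the 2D gaze point to a normalized ray (z = 1)
    pt = np.array([[[gaze.x, gaze.y]]], dtype=np.float32)
    x_n, y_n = cv2.undistortPoints(pt, K, D)[0, 0]
    azimuth = np.degrees(np.arctan2(x_n, 1.0))
    elevation = np.degrees(np.arctan2(-y_n, np.sqrt(x_n**2 + 1.0)))  # up is positive
    print(f"azimuth {azimuth:+.2f} deg, elevation {elevation:+.2f} deg")

The azimuth convention here matches arctan2(x, z) as discussed earlier in the channel; with the unprojected ray, z is simply 1.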
Thanks Rob! One more question: is the main difference between Pupil Core and Neon the "no need" for calibration?
Hello there! I'm trying to catalogue my Neon devices and frames; I have multiple frames and some are the same model. I would like to confirm whether they have the same Frame ID? For example, if I have two Just act natural glasses, it shows they are the same. I thought they would have different IDs.
Hi @user-0001be 👋 ! If you’d like to catalogue your Neons for inventory purposes, the best way is to note the serial number, which is shown in the settings when connected or readable on the matrix view behind the module.
Additionally, each Companion device has a unique IMEI, which can also be useful for tracking inventory on your end.
If you're using LSL and simply need to differentiate between outlets, note you can also change the device name in the settings.