Hello, so I have been using the GPS feature launched recently, but I don't see a way to download the map video. Can it only be played online? Screen recording causes synchronisation issues.
Hi @user-15edb3 , you mean you want to make a video like the one shown in the GPS Alpha Lab guide?
yes.
Ok. We simply used the built-in screen recording functionality of Ubuntu and macOS to record the map playback for that video. If that does not work for you, then you could also try a tool like OBS Studio.
If that also results in synchronization issues on your computer, then you could:
Ok but pls consider adding a download option.
Hi @user-15edb3 , I have noted the request, but just to clarify, the Alpha Lab tools are essentially provided as-is, showcasing how to go further with Neon, to spark ideas.
They serve as a basis for users to see what is possible and to build their own tools. If an issue is found in an Alpha Lab tool, we do investigate that to fix it, but further development to add new features to published Alpha Labs is not usually planned.
If you would like a customized version, then that could potentially fall under a Support Package, such as a Custom Consultancy.
Hello! Our team was wondering if locking the phone while recording with the Neon glasses can cause any errors with recording over time
While testing stuff out, we had the phone locked while recording for several minutes.
After roughly ten minutes, the glasses began flashing a red light and the phone vibrated. We unlocked the phone and it displayed a sensor error. Our data stream seemed to still be receiving data from Neon.
Is locking the phone the potential cause of this? Or might there be something else?
This was on the Motorola Edge 40 Pro
Hi @user-04e026 ! Locking the phone screen shouldn't cause any issues; in fact, it's the recommended workflow to help reduce battery drain and avoid increasing the phone's temperature.
The sensor error you encountered likely stems from something else. Do you recall the exact error message that appeared on the phone? And is the recording saved? That info could help us pinpoint the issue.
In the meantime, I’d recommend double-checking that the app and phone have all the necessary permissions enabled. If the issue happens again and you catch the exact error, or you have the recording stored, feel free to open a ticket in 🛟 troubleshooting so we can dig deeper.
Sadly we didn’t save any of the recordings nor did we write down the exact error message. However, there is likely to be more testing like that this week so I’ll reach back out if we are able to record any of that.
Thanks!
Hello, is it possible to download audio data from Pupil Cloud? I can now see the audio trace displayed in the Cloud.
Hi @user-3e88a5 , yes. The audio track is in the scene camera MP4 video that you can download via either the Timeseries + Scene Video option or the Native Recording Data option.
You can also use the scene camera MP4 video exported from the phone.
If using the Native Recording Data or the raw data exported from the phone, then you may be interested in our pl-neon-recording Python library, which makes it easier to extract the audio stream and work with it. See this script for an example.
@user-3e88a5 You can also see how we worked with the audio from the MP4 in the Timeseries Data download in our Audio-Based Event Detection With Pupil Cloud and OpenAI's Whisper Alpha Lab guide.
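For anyone who just wants the audio out of the scene MP4 without diving into pl-neon-recording, here is a minimal sketch using PyAV as a generic alternative (not the script linked above); the file name is a placeholder for your own downloaded/exported video.

# Minimal sketch: extract the audio track from Neon's scene camera MP4 with PyAV.
# The file path is a placeholder - point it at the MP4 from your own download or export.
import av
import numpy as np

video_path = "scene_camera.mp4"  # hypothetical file name

container = av.open(video_path)
# If the recording was made without a microphone enabled, there may be no audio stream.
audio_stream = container.streams.audio[0]
sample_rate = audio_stream.rate

# Decode all audio frames and concatenate them into one array of samples.
chunks = [frame.to_ndarray() for frame in container.decode(audio_stream)]
samples = np.concatenate(chunks, axis=-1)

print(f"Decoded {samples.shape[-1]} samples at {sample_rate} Hz")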
Surface-mapped fixation validation
Good morning, if I used the monocular option while recording with Pupil Neon, is there any possibility to change it from left to right or vice versa in the Cloud or in Pupil Player?
Hi @user-3e88a5 , it needs to be set at the time of recording. Changing it afterwards in Pupil Cloud or Neon Player is not a supported workflow.
Thank you very much. Sorry, another question: with Neon Player, is there any way to analyse and export multiple recordings at the same time, something like a batch export? Or do I need to download the source code from GitHub and make some modifications myself?
Hi @user-3e88a5 , no problem, feel free to ask.
Neon Player does not support batch export, but you can achieve this, in the same Data Format, at the command line with this script from the pl-neon-recording Python package.
Perfect thank you very much!
Hi, I'm looking at the Neon product. I would like to know if there is any intrinsic and extrinsic information for the two eye cameras near the nose pads.
Hi @user-802786 , do you mean that you would like the intrinsic & extrinsic camera parameters of each eye camera? In other words, values related to the camera calibration and the camera's position relative to the eyes?
Hi @user-f43a29 . Kind of. Not all of them. I'm more interested in the two eye cameras' optical axes (vectors), or where they are looking. We would like to try different positions, but we want to make sure both pupils are clearly visible in both cameras (not clipping).
Hi @user-802786 , I can interpret this in a few ways, so I'll give an answer for each possibility:
We can also provide example intrinsics for a Neon eye camera, including the Field of View (FoV), if you want.
Since you mention "optical axis of where it is looking", there is also the possibility that you mean the optical axis vector of each eye with respect to each eye camera? If so, then yes, you can also measure this with Neon. It's deep-learning powered pipeline, NeonNet, provides the 3D Eye Poses, so the extrinsics of each eye in scene camera coordinates, and again knowing the 3D geometry of the Neon module and the intrinsics of an eye camera, you can transform that to the coordinate system of each eye camera.
Since you mention "clipping", then you could also be referring to the intensity of pixels in each image? Neon works equally well in pure darkness to bright sunny beaches. I refer again to this page showing examples (make sure to watch the videos at the bottom to the end). The eye cameras have auto-exposure and handle such wide variations in dynamic range fine, so you don't need to be concerned about "clipping" of pixel intensity values, if that was being asked.
Hi @nmt When they are combined, there are still gaps between the temples and the frame, and if you 3D print them in nylon, they will still fall apart. I would like to ask whether the original file of the glasses itself is made up of three parts (one frame and two temples) that are not connected or integrated.
Correct. You'll need to fuse them in your CAD software.
We can also provide dummy frames for prototyping - reach out to info@pupil-labs.com if that's of interest 🙂
Okay, thank you very much!
I set some event buttons in my Neon Monitor but if I open the monitor on another device connected to the same network they do not appear. How can I reuse these?
Hi @user-ffef53 , please feel free to post this as a 💡 features-requests so we can pass it onto the team and keep track of it.
Hi @user-ffef53 👋 Just to add to what @user-f43a29 mentioned, events created in the Monitor App are saved locally in your browser's localStorage. That's why they don't sync across devices (PCs). It's definitely worth posting a feature request in 💡 features-requests if you'd like to see that change!
That said, there is a quick (and a bit hacky 😅) workaround if you want to copy events over manually:
1. Press F12 (or Cmd+Option+I on Mac / Ctrl+Shift+I on Windows) to open Developer Tools. Note: this might change depending on your web browser.
2. In the Console, run:
const myEvents = [
    "Event1",
    "Event2",
    "Event3"
];
localStorage.setItem('plConstMonitor_presetEvents', JSON.stringify(myEvents));
Thanks guys!
Hi, I am a PhD student looking for an eye tracker for our user study. The project requires an eye tracker to return the 3d eye pose and pupil diameter of users in real time. I'm having trouble deciding between the neon and core trackers. Could I have some help understanding the differences between neon and core?
Hi @user-ff00f7 , briefly, Pupil Core is our original eyetracker, first introduced over 10 years ago. While Pupil Core is powerful, Neon takes everything we have learned from Pupil Core and improves on it in every way, while being easier to use, modular, and truly mobile.
Neon's deep-learning powered & calibration-free pipeline helps to enable this. With Pupil Core, you must calibrate every time you put on the eyetracker, as well as whenever it has slipped. With Neon, you don't need to think about this. You simply put it on and you are accurately eye tracking.
For more details, please see these two messages for a comparison of both eyetrackers:
Hi I'm a PhD student working with the Neon device, I was wondering if there's a way to capture the data from a laptop rather than the companion device? Like the pupil capture. Thank you!
Hi @user-14fea1! To make use of Neon's native recording format, it must be tethered to the Companion phone for operation. But you can easily stream all the data to your computer in real-time using Neon's real-time API and/or Monitor app, if that's of interest!
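To make that concrete, here is a minimal sketch using the pupil-labs-realtime-api Python package (simple interface) to pull matched scene frames and gaze onto a laptop over the local network; it follows that package's documented simple API, but treat it as a starting point rather than a complete recipe.

# Minimal sketch: stream matched scene frames and gaze from Neon to a computer
# using the pupil-labs-realtime-api Python package (simple interface).
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # finds the Companion phone on the local network
print(f"Connected to {device.phone_name}")

for _ in range(100):
    # Scene frame plus the gaze sample closest in time to it
    frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
    print(f"gaze at ({gaze.x:.1f}, {gaze.y:.1f}) px, frame ts {frame.timestamp_unix_seconds:.3f}")

device.close()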
Hi team! If my phone storage is low and each recording has a cloud sign with a tick in the middle, is it safe to say it's been uploaded to the cloud and I can delete the local phone recordings? This won't also sync and delete the recordings in the cloud?
Hi @user-0001be 👋 Yes, that means they’re backed up in Cloud.
And no, deleting them from the phone won’t remove them from Cloud. If you’d like, you can also transfer them locally first as an extra backup before removing them from the device.
Hey guys. Apologies if there ends up being multiple messages from me in a row, my last ones weren't showing up after pressing send.
We are having an intermittent problem with the neon companion app on the Motorola edge 40 pro where the app freezes when you press stop recording. When this occurs, the android OS will pop up with a notification that the app has stopped responding, and you can choose wait/stop. We've tried waiting several minutes for it to respond, but it seems the only way to recover is to hit stop or switch out of the app. When we reopen it, the recording is lost...
I've tried searching the Documents/Neon directory on the phone to see if any of the data was saved, but I couldn't find anything matching the timestamps of the recording. Any idea if there's a way to recover this data? If not, is there any way to prevent this?
Hi @user-d086cf 👋. Thanks for your message. Sorry to hear you're encountering app crashes.
Please try the following:
Press and hold the Companion App icon in the app launcher (home screen), then select:
App Info → Storage & cache → Clear cache
You can also clear storage.
Does that solve the issue?
Here's the log for the crash...
(Note that this won't delete any recordings)
Hello, I am an instructor and researcher at a pilot training & research facility.
Due to strict information security policies, our environment cannot connect to the external internet.
I am looking for an eye tracker that can be operated completely offline (standalone).
My questions:
1. Can Pupil Neon be used entirely offline for recording and analysis (no internet connection at all)?
2. Is a Pupil Cloud account required for data capture and/or analysis?
3. Are there any other Pupil Labs products that can operate in a fully standalone/offline mode?
Background:
- I previously used Pupil Neon during my graduate studies (internet connection available at that time).
- Now I work in a facility where external internet access is not possible.
- I was initially considering Tobii Pro Glasses 3 (wireless version) + Pro Lab perpetual license package,
but Glasses 3 will be discontinued within a few years.
- My use case is to measure the gaze of pilots during flight simulator sessions (e.g., GL-4000, AIR-FOX motion platforms).
- Desired features: live gaze monitoring from a nearby control room during sessions,
post-flight debriefing with gaze video overlay, and offline analysis later for cognitive/psychological research.
If anyone knows whether this is possible with Pupil Neon (or other Pupil Labs devices),
or can recommend a suitable configuration for offline use, I would greatly appreciate it.
Hi @user-f578a4 👋. Great to hear you're considering Neon for this work! Since you're familiar already with Neon, you'll know about the Companion phone. You'll need to create and log into a Pupil Cloud account once when you set up the Companion app. But after that, you can use Neon for recording, live viewing, and data analysis without ever connecting to the internet. The workflow is straightforward:
1. Recording: Data is captured using Neon and recorded onto the Companion phone. This process is self-contained and does not require an internet connection. You can disable Cloud uploads from within the Companion app.
2. Data Transfer: After a session, transfer the recording from the Companion phone to a computer via USB.
3. Analysis and Playback: For offline analysis, we provide Neon Player, a free desktop application. With Neon Player, you can load and visualize recordings, play back the video with the gaze overlay for debriefing, and export raw data, including fixations and saccades, for more detailed research.
4. Live Gaze Monitoring: For live monitoring from a control room, you can connect the Neon Companion device to a local network (no internet required) and use Neon's Monitor app to see live gaze data.
If you'd like to discuss options in more detail, we can also schedule a demo and Q&A session via video. Just reach out to [email removed]
New user: I recorded a pitcher throwing, but the target only shows prior to the throw.
How do I get the tracking for the actual throw?
Hi @user-2d1b95! Thanks for your message. I'm not sure I understand what you mean. Could you elaborate?
Hi again! Ran into an issue with my blink files. We ran an experiment in the morning and in the afternoon. The morning recordings look fine and nothing is missing. The afternoon files I exported contain empty blink files. I tried to re-export from the Companion device, but I am still getting a zero-byte file. Any idea what the problem could be?
Hi @user-ffef53 ! Could you kindly create a ticket at 🛟 troubleshooting so we can have a look?
Hi, I am using the Pupil Labs Neon for a research project. I just noticed there is a limit of 2 hours. I've deleted some old recordings but the used minutes don't go down. Are there any tips to expand this without the upgrade?
Hi @user-e0026b ! Yes, you can manually delete older recordings from Pupil Cloud to free up space. Just make sure to also empty the trash folder to permanently remove them. Once space is available, any newer recordings that were previously blocked will automatically become accessible. Please note, however, that recordings cannot be re-uploaded to Pupil Cloud once deleted
Alternatively, you can work completely offline, transferring your recordings from the Companion Device and using Neon Player or pl-neon-recording to analyse them.
Hi, in Pupil Cloud, I can see the graph for pupil diameter and the blink graph. How can I get a certain part of the graph with proper scales on the x and y axes? Is downloading the CSV file the only way? I'm not a programmer, btw, but I need to analyse the graph around certain events that I marked.
Hey @user-c704c9 👋 On Cloud, the graphs show two things:
- The current value at the playhead position
- The value at wherever you hover
They’re plotted with mean (right/left) values on the y-axis and time on the x-axis.
If you want more granular control over the plot, it's often easier to build it yourself; you don't need to be a programmer for that (see also the sketch after the steps below if you'd rather use Python).
Are you familiar with Excel? If so, you can:
1. Download the TimeSeries data from Cloud
2. Open the 3d_eye_states.csv file in Excel
3. Plot the values against the timestamps
That way, you can zoom in, filter, or format the graph exactly how you like.
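If Excel becomes slow with many recordings, a short pandas/matplotlib script does the same job. This is a hedged sketch; the column names are assumptions based on the Timeseries export format, so verify them against your own 3d_eye_states.csv header.

# Minimal sketch: plot pupil diameter over time from a Neon Timeseries export.
# Column names are assumptions - check your own 3d_eye_states.csv header first.
import pandas as pd
import matplotlib.pyplot as plt

eye_states = pd.read_csv("3d_eye_states.csv")

# Convert nanosecond UTC timestamps to seconds relative to the first sample
t = (eye_states["timestamp [ns]"] - eye_states["timestamp [ns]"].iloc[0]) / 1e9

plt.plot(t, eye_states["pupil diameter left [mm]"], label="left")
plt.plot(t, eye_states["pupil diameter right [mm]"], label="right")
plt.xlabel("time [s]")
plt.ylabel("pupil diameter [mm]")
plt.legend()
plt.show()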
So there is no other way than using the CSV to plot. I thought there might be an easier way to plot faster, because I have to convert the data into plottable parts, and sometimes it becomes slow as I add sheets with different recordings. Thanks for the reply.
Right, Pupil Cloud doesn’t have turnkey tools for fine-tuning plots. It’s not designed to replace dedicated charting or full suite analysis tools.
That said, if having more flexible plotting options directly in Cloud would be useful for your workflow, it’s worth adding it as a suggestion in 💡 features-requests so the team can consider it for future updates.
Thank you. I'll add it as a suggestion. It would be a lot of help to plot things faster, and also for presentations. Also, how can I record the video with fixations and those graphs underneath? I tried recording with OBS while the Pupil Cloud site was open in the browser, but the fps drops and the fixation points and video don't match.
Hi @user-c704c9 , apologies, I did not see that you also want the graphs in the video export. That is currently not supported. It would be recommended to make a custom visualization for that purpose.
To export the videos as they appear on Pupil Cloud, make sure to use the Video Renderer Visualization.
Also, if you have not yet had your free 30-minute Onboarding call, be sure to send an email to info@pupil-labs.com with the original Order ID to organize that.
Yes, I want to show the video as the play head is moving to exactly show what is happening in certain parts. I'll just have to use Video renderer just like you said and record the graph from the cloud and stitch them together I guess.
I don't know about onboarding call because the owner is my boss and I'm just assigned to do the work. Thanks for reply @user-f43a29 @user-d407c1 .
You are welcome. Your boss is free to organize the Onboarding and they can invite the whole team to the call, if they like.
I’ve downloaded the Pupil Core Bundle onto a Microsoft Surface Pro, but neither the World View camera nor the eye camera feeds are coming through.
Hi @user-057596 , the Pupil Core Bundle is not built for ARM processors, which are found in many models of the Microsoft Surface. You could try running it from source (see this message: https://discord.com/channels/285728493612957698/285728493612957698/1354350692688592947), but may I ask, are you trying to use that software with Neon?
Hello, I have a Pupil Neon. I am trying to get gaze fixation in real time on a marked surface. However, I don't see any way to define the surface in the Companion app. I know how to do it post-hoc, but that is not what I want.
Also, I tried to connect Neon to the Pupil Capture software on Windows, but it fails to detect the hardware.
Is there any way I can get real-time fixation data on a surface from Neon?
Hi @user-4ba9c4 , you want to look into our real-time-screen-gaze package for Neon.
Hi, I would like to get some recommendations:
Our team is using the eye tracker for a user study of prolonged duration, roughly 2 hours or more. I am wondering if anyone can recommend a way to make sure that we are not running out of battery. We have been thinking of using a wireless powerbank that sticks to the back of the phone, but when we tried that, the phone got really hot. So I am wondering if anyone has a recommendation on the method/powerbank brand/cable splitter for powerbank + tracker that works. That would be helpful. Thanks!
Although you can record for longer than that with Neon, if you want extended battery life, then here are some suggestions:
I see. Thanks for the recommendation. Also, since I previously had an error with the sensor, could it be due to the phone overheating from the wireless charging?
@user-f43a29 Hi Rob, no, we are using it with Capture this time in a 2-day Mastery Learning course for regional anaesthesia, as our analytical software is as yet only designed to connect to the Core pipeline, and we are holding a world's-first clinical skills competition based on qualitative data, so we just wanted to have the new Surface Pro as a backup. But if it's not compatible, then that's fine. Thanks for getting back so promptly.💪🏻
Has anyone else faced the issue of not being able to install the Neon Companion App?
Hi @user-613332 , what phone are you using?
Samsung S25
Did you initialize Android with a basic Gmail account?
I did, I set up the phone with a Gmail and Samsung account. Interestingly, as soon as you messaged me, it started downloading. Which is great! Will keep you posted.
It worked! How do I schedule the 30 minute demo?
Hi, I'm working with Mr. Ax's parabolic flight group - we conducted some research using the Neon device. We're aware that it is possible to elucidate blink completion metrics from the data and wondered how to compute these?
Hi @user-b98aae 👋 ! Neon has captured eyelid aperture since April, on the latest app version.
If you had "Compute eye state" enabled, that metric should already be there; recordings made with an older app version won't have it.
Hi Miguel! I see, the recordings were made in 2023/2024, during the parabolic flight campaign, so eyelid aperture was not automatically selected. Is there a way to re-run the analysis to retroactively see eyelid aperture in our data?
Is there a way to estimate how much time a recorded video will take to process in the background? Especially for videos that have the anonymization add on.
Hi @user-613332 , there is no set time. It depends not only on the length of the recording, but also on where it sits in the processing queue, which is variable.
Hi, I would like to get some suggestions about preparing AprilTags for the Marker Mapper.
I'm using the AprilTags distributed on the Pupil Labs website for surface detection, but the tags are not detected properly when I run the Marker Mapper, for some reason. There are multiple monitors in the data collection environment, and these monitors are right next to each other and very dense. I am trying to detect each monitor surface with 4 tags per monitor. The user stands 2–2.5 m from the monitors, so the tags become small in the scene camera. I have some questions and would like to ask for some tag recommendations.
What I have done so far:
- Changed the scene camera exposure mode (helps a bit, but didn't change that much)
- Adjusted the tag size slightly (but they cannot be made much bigger due to the monitor restriction)
Thanks for your support!
Hi, @user-c529ed - tag size is a very important factor in tag detection and localization, and from the sounds of it, this is definitely what you should focus on first.
You mention that the tags can't be made much bigger because of the monitor restriction, but would the configuration in this image work for you (where the pink squares are AprilTag markers)?
Hi! I was attempting to get blink data but after viewing the data in the cloud, there is no blink data for my last two trials. A trial I ran a couple days earlier has blink data. The only difference is that the trial with blink data starts recording after the glasses are put on, while the ones without that data start recording a few minutes before the glasses are put on. I'm wondering if this is the cause of the issue, and if there's any way to rerun the blink analysis but starting from later in the recording i.e. when the glasses were worn.
Hi @user-ad7ce5! Please open a ticket in 🛟 troubleshooting and share the recording ID. We can look into it there!
Hi Pupil Labs, we were working with the IMU data and found that the number of samples recorded each second is not consistent. Shouldn't there be 110 rows (or thereabouts) for each second in the dataframes?
I also observed this. According to Neon (as I'm keeping 120 Hz), I should receive 120 data frames per second, but over a whole 10 seconds I'm receiving only around 500–600. I assumed it is omitting the non-tracked data (like eyes closed, pupils not found, etc.).
Hi @user-ccf2f6 👋 ! Could you also kindly open a ticket at 🛟 troubleshooting so we can further investigate it?
Is the automate_custom_events package still working? I am getting a "ValueError: invalid literal for int() with base 10" error no matter what prompts I use. Thanks!
@user-d407c1 I did not get your last question. I am using the Motorola 5G that came with Neon. And I'm using Neon XR for Unity, tracking and recording real-time time series data (pupil diameter, pupil centers, gaze point, gaze origin, gaze vectors, eyelid angles, etc.) and saving it into a CSV file.
@user-6c6c81 Just to clarify, the sampling rate you’re seeing is based on the received data from NeonXR, not the recorded data, correct?
If so, this is expected because the current library uses WebSockets and batches packets before sending (buffering). There’s a new beta version that uses UDP directly, which lets you send and receive the full sampling rate. You can try it here: UDP branch.
Since this is specific to 🤿 neon-xr and to avoid mixing with other topics, let’s continue the discussion there if you have more questions.
Ok thank you
Hi team, if the phone cannot connect to the WiFi, is it possible that the phone's settings have some problem? I have three phones, two of which cannot connect to the WiFi.
Hi @user-9a1aed , were these two phones provided by us?
Also, if I want to record the session, do I have to store the recording in the cloud to analyze the data? Which would mean I have to purchase the add-on service, right?
Hi @user-9a1aed , while you can still use the 2 hour free space on Pupil Cloud, it is not necessary to use Pupil Cloud to collect or analyze data. You don’t even need an Internet connection. All the raw data collected during a recording are saved on the phone. You can then export them via USB cable and load them into Neon Player or our pl-neon-recording Python library.
However, our most powerful analysis tools, such as Reference Image Mapper are only available on Pupil Cloud. If using Pupil Cloud, you also benefit from the data logistics it provides, such as Workspaces and sharing. It is designed to streamline the process from data collection, to qualitative review, to quantitative analysis & visualization.
Is there any capacity to adjust the settings of the scene camera? For example, is it possible to increase on-sensor binning (thus reducing spatial resolution) in exchange for a higher frame rate?
Hi @user-dc9cc7 👋. This isn't possible with Neon's scene camera. But if you don't mind a bit of post-processing, what you can do is map gaze onto a third-party scene camera, e.g. a specialised camera with higher frame rate, using this Alpha Lab guide: https://docs.pupil-labs.com/alpha-lab/egocentric-video-mapper/
yes. thanks!
Hey community! I'm using Neon glasses and I read their data stream using the OpenViBE acquisition server. I stream triggers in parallel, and OpenViBE is supposed to merge the triggers with the data. Both streams are visible in the acquisition server, but once visualized in the OpenViBE designer the triggers don't appear and they are not recorded either. What is strange is that when I replace the data stream from Neon with a data stream from Muse, keeping the same trigger stream, everything works perfectly fine. So it is not a problem with the trigger stream format. Have you ever experienced such a problem?
Hi @user-8e9bb0 ! Just a quick note, there’s no official support for OpenVibe, so it’s a bit outside what we normally test. Could you explain a bit more about how your setup works or how you connect it to OpenVibe?
It might also be worth trying to stream the data directly via LSL (LabStreamingLayer) instead of routing through OpenVibe first, just to confirm whether the issue is specific to OpenVibe’s handling of the stream.
Is the blink detection in neon player calculated the same way as it is in the real time API? Watching the video in the player and checking the blink detection bar, I’m noticing some discrepancies (ie missing blinks). Luckily this is not throughout the entire recording and most blinks are registered, just noticing some gaps
That depends on what version of the Companion App and Neon Player you're using. With older versions (before the app supported real-time blink detection), Neon Player used an older algorithm that's computed post-hoc. Now that the Companion App supports real-time blink detection, Neon Player simply loads the blinks that are detected while recording (and makes no attempt to detect blinks post-hoc)
I was wondering if I could get any support on using adb to sync time? Whenever I do adb shell dumpsys time_detector I get mLastAutoTimeClockSet=null
Additionally doing the command in the documentation adb shell cmd network_time_update_service force_refresh hangs and returns false
Hi @user-a84f25! 👋 A couple of quick notes that might help:
Seeing mLastAutoTimeClockSet=null isn't necessarily an error; it usually indicates that Android's automatic time detection hasn't updated the clock yet. You can dump the full time_detector state with adb shell cmd time_detector dump to confirm whether auto time is enabled and which sources (e.g. network) are active.
The adb shell cmd network_time_update_service force_refresh command may hang or return false depending on your device or Android version; it isn't guaranteed to succeed in all cases. More info here. If time sync is the aim, the easiest option would be to rely on the Companion Device's NTP auto-sync, which generally works when the device is online.
Regarding the Python API note:
- The "no RTP received for 10 seconds: closing" message means the real-time data stream isn't arriving; this can be due to network issues or the API not being closed properly.
- Since the time echo works fine, your connection setup is likely functioning, but note that this protocol uses TCP.
Here's what to try next:
- Restart the device and reconnect to the network.
- Make sure automatic time update is enabled in Settings so NTP can sync.
- Force-close the Companion App, clear its cache, and try running one of the basic Python API scripts again.
- After testing, let us know what the time offset result is; this will help narrow things down further.
If the issue persists, could you share a bit more about your setup (device model, OS, and whether you're on WiFi or USB tethering)? That would also help.
I'm also sometimes not able to get any data from gaze/eye/world sensors through the python API. The API claims that no RTP is received for 10 seconds: closing
The time echo estimate always works fine however
Hi! I have used the Neon glasses for data collection, and I will submit a paper in which I would like to cite Pupil Labs. Is there a relevant publication I can cite?
Hi @user-42cb18 👋! You can find it here: https://docs.pupil-labs.com/neon/data-collection/publications-and-citation/.
And once your work is published, feel free to share it in 🔬 research-publications , we’d love to see it! 😉
Miguel: For some reason, sometimes it works with internet and sometimes it doesn't. I'm not sure what changes about how the RTP streams are captured when the phone has internet vs when it doesn't. I'll need to do more testing. On another question: are there any audio retrieval methods in the real-time API?
@user-d407c1 Have you ever ran into the problem that the eye state, gaze, and video RTSP streams work fine, but the time estimator dies or freezes after a couple of minutes? Do you know what might cause that?
Hi @user-a84f25 ! Thanks for your patience, I’m currently at a conference so my responses may be a bit delayed.
Could you let me know how many times you’re calling the time offset estimator? We haven’t encountered such issues in our tests, but this detail might help us narrow things down.
Regarding your question, the audio signal is included in the RTSP stream, although currently our Python real-time API client does not have support for it.
@user-d407c1 Thanks! I'm running it on 20 samples, as often as I can (I'm using the async API)
Hi, all! Are there any other files besides wearer.json in which the identity of the wearer could be revealed?
Hi, @user-42cb18 - in the info.json file, you'll find a field named wearer_id. This value will match the one you'd see in wearer.json
"Hello, I've confirmed that the neon-player repository is on your GitHub. I was wondering, which repository is for real-time capture with the Neon device, similar to the 'Pupil Capture' software for the Core product?"
Hi, @user-fce73e - the development version of Pupil Capture on GitHub has experimental support for Neon on Linux and Mac, but this isn't officially supported. The only recommended way to use Neon is with the Neon Companion app, available for free in the Google Play store.
Hello! Is there a way to improve and/or stabilize the detection of surface markers for the surface tracker (I'm processing it via Neon Player)? Because I have some frames where the markers are probably too small and are not detected, so instead of being a rectangle, the surface becomes a weird polygon. The marker size was fine for the Core, but the Neon struggles more to detect it reliably. Thanks!!
Hi, @user-a4aa71 - I'm afraid there really isn't anything you can do post-hoc to correct for small markers
How does the timing between the gaze and the eye videos compare? Should the gaze timestamp refer to a specific frame in the video, or is gaze calculated from multiple frames? Also for using the TimeOffsetEstimator, how do you go back and recalculate the time of the events? Which estimates go with which time points? Do you do some sort of moving average assuming the timeoffsetestimator is reporting at a consistent rate over the recording?
Hey, @user-a84f25 -
- Eye video frames and gaze estimates are 1-to-1. A gaze timestamp will exactly match the timestamp of the eye video frame from which it was generated.
- I don't think people generally use time offsets retroactively. Rather, after calculating a new offset, you should assume it holds for all future events until another offset is calculated.
Additionally, I have tried recording gaze through the pupil labs realtime api and through just recording on the phone. I was comparing timestamp_unix_ms from the realtime stream and ts from the recorded stream and I cannot find time correspondences?
The Realtime API streams data over an established data protocol which defines a lower precision for timestamps than what we save in recordings, so your observation is correct: these timestamps will never exactly match up, and the difference will likely appear sporadic/inconsistent. Timestamps from the Realtime API are still correct - just not as precise as the ones in the recording.
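For reference, here is a small, hedged sketch of measuring the phone-to-computer clock offset with the simple real-time API and applying it to a streamed gaze timestamp. It assumes the package's estimate_time_offset() helper, and the sign convention should be verified on your own setup.

# Minimal sketch: estimate the Companion-phone clock offset and use it to express
# a streamed gaze timestamp in the computer's clock. Sign convention is an assumption.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# Runs the time-echo protocol and returns statistics about the offset (in ms)
estimate = device.estimate_time_offset()
offset_ms = estimate.time_offset_ms.mean
print(f"phone clock offset relative to this computer: {offset_ms:.2f} ms")

gaze = device.receive_gaze_datum()
# Streamed timestamps are in the phone's clock; subtract the offset to map them
# onto the computer's clock (verify the sign on your own setup).
gaze_time_local = gaze.timestamp_unix_seconds - offset_ms / 1000.0
print(f"gaze sample at {gaze_time_local:.3f} s (computer clock)")

device.close()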
Hello, I wonder if anyone could help with pyNeon? I am following this tutorial, but it shows this error. https://ncc-brain.github.io/PyNeon/tutorials/surface_mapping.html
Hi @user-9a1aed 👋. I think you'll have to reach out to the authors of that tool - it's a third-party tool that we haven't used before 😅
Hello, @user-cdcab0 , I am running the same script on two laptops; the script is in PsychoPy 2024.2.4. On one laptop the events are correctly stored; on the other laptop they are all stored at the end of the recording (i.e., at recording.end). Any idea why the same script produces different behaviour? Thanks
I noticed that the same script with pupil plugin version 0.7.7 has that problem, but if I revert back to 0.7.6 the problem disappears and events are annotated at the right time. May I ask if 0.7.6/7 work with a specific version of the Neon App/Neon Player?
Hi, @user-f4b730 - could you share your script?
Yes, I can send it via email as it is very long (it combines motion capture, the eye tracker, and a robotic head). Can you remind me of the email?
Can I share it via email, as it is very long and complex and combines motion capture, Pupil, and a robotic head?
@user-cdcab0 for the gaze and eye-video timestamps, on the copy recorded to the phone it's one-to-one, but I can't find matches in the streamed copy. If I merge on the nearest time matches, I get time differences as large as 1 sec.
I'm a little unclear on what you're comparing. Are you looking at streamed gaze timestamps versus recorded world timestamps? Because any streamed timestamps aren't going to be comparable to any recorded timestamps. You can only really match streamed timestamps with other streamed timestamps or recorded timestamps against other recorded timestamps.
Also, FWIW, the real-time API has a convenient function for matching scene and gaze timestamps already
@user-cdcab0 I didn't know about that function, that would be very helpful! I am grabbing gaze and eye-video through the async python realtime api, and then comparing the times of the eye-video and the times of the gaze. They don't match up exactly, but I can merge on the nearest timestamps
Should they match up exactly?
You know, I expected they would, but my own quick tests here seem to indicate they don't actually. I've posed the question to our Android engineering team and will let you know what I learn, but it does seem that you will need to match by finding the closest values.
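In the meantime, here is a small generic sketch (not specific to our API) of nearest-timestamp matching with NumPy; the arrays are placeholders for your own streamed gaze and eye-video timestamps.

# Minimal sketch: match each gaze timestamp to the nearest eye-video frame timestamp.
# gaze_ts and frame_ts are placeholder 1-D arrays of timestamps in the same clock/unit.
import numpy as np

def nearest_indices(gaze_ts: np.ndarray, frame_ts: np.ndarray) -> np.ndarray:
    """For every value in gaze_ts, return the index of the closest value in frame_ts."""
    frame_ts = np.sort(frame_ts)
    idx = np.searchsorted(frame_ts, gaze_ts)       # insertion points
    idx = np.clip(idx, 1, len(frame_ts) - 1)
    left, right = frame_ts[idx - 1], frame_ts[idx]
    # step back one index wherever the left neighbour is closer
    idx -= (gaze_ts - left) < (right - gaze_ts)
    return idx

gaze_ts = np.array([0.0, 0.013, 0.026])
frame_ts = np.array([0.0, 0.005, 0.010, 0.015, 0.020, 0.025, 0.030])
print(nearest_indices(gaze_ts, frame_ts))  # -> [0 3 5]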
Hello everyone, I carried out an experiment starting the recordings with the Pupil Core and then I moved to Pupil Neon. Both have their own fixation detection algorithm, and the Neon also includes a saccade detection algorithm, both configurable in Pupil Player/Neon Player. Regarding fixations, are the results comparable? For example, it seems to me that in Pupil Player (with the Core) you can set the minimum and maximum duration as well as the dispersion for fixations, while the Neon Player uses a different type of algorithm and I don’t think it allows parameter adjustments. Should I, for instance, compare the number of fixations in an initial recording (done with the Core) and a final one (done with the Neon), or is there perhaps some repository/scripts that I could use with both datasets to detect saccades and fixations?
Thanks a lot!
Hi @user-a4aa71 👋. That's a good question. The Core fixation detector does indeed have configurable thresholds. This means you can get different fixation detection results based on which thresholds you set. I found this to be useful in the past when tuning the filter to a given task. Differently, Neon's fixation detector is more advanced, as it compensates for head movements, and we have tuned the parameters to provide accurate results across different tasks. Now, can these be comparable? I would say, in principle, and in certain more controlled tasks without head movements, it's possible. I think it would be more challenging with dynamic movements. That said, I wouldn't haphazardly take an old dataset recorded with Core and assume it's going to be comparable to Neon. Rather, I would recommend somehow comparing fixation results with Core and Neon to validate things. Unfortunately, we don't have a repo or scripts that you can run on different datasets to do so.
Hello! My cloud storage was full, so I deleted some recordings, and also then deleted them from the Trash.
However, the other videos are still showing a yellow triangle with an exclamation mark, and when I try to add them to a project it says "Recording is processing, has errors, or blocked. This recording can not be added to the project."
How long does it take for the cloud to register that it has space available and allow me to access the other recordings?
Hi @user-fa126e 👋 It should be instantaneous. Can you try refreshing the website? You can also check https://cloud.pupil-labs.com/settings/devices to see your storage quota.
Thanks! I had refreshed, and it wasn't instant for some reason. It seemed to take at least 15 minutes (possibly longer, but I was working on other tasks and I'm not sure exactly when it started working).
But it's working now!
Hi, using Pupil Neon with the Companion App, I accidentally uploaded two recordings to the wrong workspace. Is there any possibility to move them to the right workspace, or to create a project in the cloud using videos from different workspaces, as I want to use the enrichment functions in the cloud? Thanks for the help!
Hi @user-827269 , we currently do not support transferring recordings across Workspaces. However, you can still use the Enrichment functionalities on those recordings, even though they are in a different Workspace.