Hi all, why don't I see "Eyelid aperture" and "Pupil diameter" in Pupil Cloud?
Hello! I wanted to ask what type of battery Neon uses and its capacity. I'm preparing technical documentation for an experiment and can't find that info on the website. Thank you
Hi @user-77aaeb , Neon uses the battery in the attached Companion device, so it depends on which phone you have.
So there's no inbuilt battery? And the Companion device could be any phone/tablet attached by cord?
There is no battery in the module. It simply contains the sensors and receives power via USB cable from the attached Companion device. You can see an explanation & technical overview here.
We only support specific devices and specific Android versions. We maintain an up-to-date list here.
Thank you!
Quick question about uploading recordings from Pupil Neon into Pupil Cloud. We hadn't realized we were out of storage, so we are getting this in Pupil Cloud:
Hi @user-5ef6c0. The recordings you made on Friday are safely stored in Cloud, but it indeed looks like they are over the free quota. To derestrict them, you can either purchase a Cloud plan or, alternatively, delete recordings from the free 2-hour quota to free up space. Note that to permanently delete recordings from Cloud, you'll also need to remove them from the trash. Be sure to have them backed up offline if they're important, because Cloud re-uploads are not possible. We also have offline tools to work with your recordings - check out this message for reference: https://discord.com/channels/285728493612957698/1047111711230009405/1431119082001793095
We just deleted lots of older recordings. 1) Can we expect the recordings we made on Friday and today (the ones with the ! icon) to be uploaded? Or are they lost?
Hello, I am wondering why I don't see Eyelid aperture, pupil diameter, audio and blinks anymore. Please help. Yesterday everything was OK.
Can I ask for the MSDS of the 'is this thing on' model?
Hi @user-3c26e4! I have a few follow-up questions to help understand why you're not seeing these data streams:
1. Are you working with recordings loaded directly into Neon Player, or in Pupil Cloud?
2. Did you have real-time eye state enabled in the Neon Companion app at the time of recording? This is in the app settings.
3. Similarly, did you have audio capture enabled in the Companion app at the time of recording?
Hi @user-4c21e5 , 1. Pupil Cloud 2. Yes 3. Yes
However, I tried the Alpha Lab "Dynamic AOI Tracking With Neon and SAM2" tutorial yesterday and it didn't work. Could it have spoiled these channels? As I told you before, all channels could previously be seen.
Here is a screenshot
Hi @user-3c26e4! Nadia here, stepping in for Neil. I'd like to request some additional information:
Just to clarify: any issue you might have experienced when trying to run the Alpha Lab tutorials does not affect the data streams in your Neon recordings.
OK, then it should be some kind of server problem on your side. I will share the ID of one recording, but this applies to all recordings I have with Neon. Where should I share it? Can you open a ticket?
By the way, do I have defective Neon glasses? I always have to set the exposure manually, because "balanced" or the other automatic exposure modes lead to this. It is impossible to see the scenery. If I set it manually, it doesn't react to different lighting conditions or times of day when I am driving for a long time. Pupil Invisible was way better regarding automatic exposure. Is my Neon defective?
Yes, feel free to open a ticket in our troubleshooting channel.
I shared an ID.
Your Neon is not defective. In high-contrast environments, it's expected that you may need to manually adjust the exposure for optimal visibility.
Also, just to let you know, we recently released a new feature on Pupil Cloud that lets you adjust playback brightness directly within the platform. The settings are saved per recording, making it easier to fine-tune visibility during review. You can find the summary of this feature here: https://discord.com/channels/285728493612957698/733230031228370956/1433650414041043025
But how come Pupil Invisible changed the exposure automatically so well? If I make it darker, one still can't see anything. This is at 0.2:
For very bright illumination, like sunlight coming in through a car windscreen, if you want visibility out of the windscreen, you should try the 'highlights' autoexposure mode. It will optimise for this environment.
But one should consider that a recording on a real road can take 3-4 or more hours, so you can't keep adjusting the exposure for the whole duration of the recording. What can be done about that?
If the scene is already overexposed, then it's unlikely post-hoc brightness settings will help. It's important to choose the right exposure mode at the time of recording.
'Highlights' is an autoexposure mode, meaning it will adapt to optimise for the view outside the windscreen. That said, if there is a significant change, e.g. it goes dark outside during the drive, you might need to switch to another mode.
Can I do this without stopping the recording?
Hi all, do you know why I don't see "Eyelid aperture" and "Pupil diameter" in Pupil Cloud anymore?
Same with me!
Hi @user-af95e6 @user-3c26e4! We are aware of this issue affecting the plot visualizations. We are currently working on it and will let you know as soon as it is resolved.
Hi @user-937ec6! May I ask what kind of computer you're using? From our internal tests, the delay is typically minimal, but there are a few factors that can influence it. Could you check that your audio drivers are up to date and let us know what kind of audio interface or sound card you're using, as well as your CPU utilization during playback? That'll help us get a clearer picture.
As I mentioned before, achieving zero latency isn't currently possible in real time due to the sampling rate of the 8 kHz AAC stream (1024 samples per frame): capturing one AudioFrame takes about 128 ms. With the scene camera running at around 30 FPS (~33 ms per frame), you're effectively getting almost four video frames per audio frame.
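As a quick back-of-the-envelope check of those numbers (all values taken from this thread):
```python
# Quick check of the timing figures quoted above
samples_per_aac_frame = 1024
audio_rate_hz = 8000
video_fps = 30

audio_frame_ms = samples_per_aac_frame / audio_rate_hz * 1000   # 128.0 ms per AudioFrame
video_frame_ms = 1000 / video_fps                                # ~33.3 ms per scene frame
print(audio_frame_ms / video_frame_ms)                           # ~3.8 video frames per audio frame
```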
That's why I suggested relying on the recorded data directly on the device, which is captured at 48 kHz and already synchronized, giving you clean alignment without the need to delay or mux the streams post hoc.
Keep in mind that delaying and storing the data, even when threaded, can increase CPU load and potentially become a bottleneck for real-time playback.
By the way, I noticed you offset the scene video frame by 0.3 s in your code, but then you loop, matching and discarding audio frames until one is captured after the first video frame, which is contradictory.
Instead of manually offsetting and using incremental counters, you should keep the time_base for each stream, pick a zero time, and calculate the PTS relative to that time, using the timestamp in unix seconds already provided with the frames.
E.g. AudioFrame has an av_frame and timestamp_unix_seconds, so the PTS for muxing the audio stream should be int((audioframe.timestamp_unix_seconds - start_time) * 8000).
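To illustrate, here is a minimal sketch of that calculation, assuming each received frame carries timestamp_unix_seconds as described above; the container/stream setup for the actual muxing is omitted:
```python
# Minimal sketch: derive PTS from capture timestamps rather than counters.
from fractions import Fraction

AUDIO_TIME_BASE = Fraction(1, 8000)    # audio PTS counted in samples of the 8 kHz stream
VIDEO_TIME_BASE = Fraction(1, 90000)   # any fixed video time base works; 1/90000 is common

def pts_from_timestamp(ts_unix: float, start_time: float, time_base: Fraction) -> int:
    """Convert an absolute capture timestamp into a PTS in the given time base."""
    return int(round((ts_unix - start_time) / time_base))

# Example: a frame captured 0.256 s after the chosen zero time
start = 1_700_000_000.0
print(pts_from_timestamp(start + 0.256, start, AUDIO_TIME_BASE))   # 2048 (samples)
print(pts_from_timestamp(start + 0.256, start, VIDEO_TIME_BASE))   # 23040 (ticks)
```
The same helper works for both streams; only the time base differs.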
I am using a computer that I built myself, of no particular make or model, running Windows 11.
- CPU: Intel 12700K
- RAM: 32 GB
- GPU: RTX 3080
Note that the live audio delay seems the same whether I output audio to the graphics card or onboard sound.
CPU utilization: I want to be clear and elucidate the conditions under which I am running.
CPU utilization is nominal and in the 4-15% range. When the video encoding is more dynamic, the utilization increases as one would expect. When the scene is relatively static, CPU utilization is less than 5%.
Drivers: I do believe the drivers to be up to date for both the onboard sound and the graphics card.
Latency: My main concern isn't even the live audio delay, although less delay would, of course, be nice! Rather, it's synchronization in the saved video file.
Relying on device recordings: I have a bunch of custom software, built atop the API, that does various processing including video overlays, data logging, and LSL input/output. Thus, and as I have said in prior posts, I cannot rely on device recordings for synchronized audio.
Observed audio delay: Even after matching the underlying timestamps of the audio and video frames, I still see a readily visible audio delay when playing back the saved video. Hence offsetting (delaying) the first video frame.
Calculating presentation timestamps: Yes, one could calculate the presentation timestamp from the unix timestamp. I agree that this is mildly superior to incrementing a counter, since it's possible that the encoder could fall behind. Thank you for the suggestion; I adjusted my code accordingly.
Hi, I just got a Neon, can I ask a really basic question? If I want to monitor and edit the stream on a PC, is the only way to do that by streaming the data from the phone / Neon Companion app to the PC over WiFi?
I've worked with a Core before; we had that connected directly to a PC and were able to set up recording directly and edit events.
In the real-time Async API Audio examples on Github there is the following text:
- On the simple API examples you can also find how to use the audio for Speech-to-Text using the whisper library.
When I browse to the Simple API Audio examples on Github, I cannot find the example referenced above. Can someone kindly refer me to the right place for the Whisper example?
Hi @user-937ec6! Thanks for following up. That's a more than capable computer; I highly doubt your CPU is the issue here.
Let's focus on synchronization after receiving the streams live, although this is a workflow I would typically not recommend, as you're also subject to network jitter. I would recommend buffering at least 250 ms so there are no frame drops (but note that if you use the same buffer size for preview, there will be more latency on playback).
The key step is to derive presentation timestamps (PTS) from the capture timestamps, not from incremental counters. This isn't just slightly better; it's the only correct way to maintain sync, as it ties timing to when each frame was actually recorded, not when it arrived. Did that already solve the delay you observed?
For reference, here's the Whisper example you mentioned: examples/simple/audio_transcript.py. Note that this is not optimized at all; it's just meant to show the possibilities.
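In case it's useful, here is a rough standalone sketch of the same idea using the whisper package directly (this is not the repository example; the input path is illustrative, and whisper decodes it via ffmpeg):
```python
# Rough sketch of offline transcription with the `whisper` package
import whisper

model = whisper.load_model("base")                      # larger models: slower but more accurate
result = model.transcribe("my_recording/scene.mp4", language="en")

# print each transcribed segment with its start/end time in seconds
for segment in result["segments"]:
    print(f"[{segment['start']:7.2f}s - {segment['end']:7.2f}s]{segment['text']}")
```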
Hi @user-8fe6ae! Your understanding is correct. Neon always needs to be tethered to the smartphone (Companion Device), which runs the Neon Companion App. The Neon Companion App allows you to collect data and stream to other devices. For streaming, you can use one of the following:
1) the Neon Monitor App - a web app allowing you to monitor your data collection in real time and remote-control all your Neons from another device.
2) the Real-Time API - offers flexible approaches to access data streams in real time, remote control your device, send/receive trigger events and annotations, and even synchronise with other devices.
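For example, here is a minimal sketch using the Real-Time API's simple interface from a PC on the same network as the Companion phone (the event name is just an example):
```python
# Minimal sketch of remote control over the network with the Real-Time API
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()            # finds the Companion phone on the local network
print(device.phone_name, device.battery_level_percent)

device.recording_start()                  # start a recording on the phone
device.send_event("stimulus onset")       # timestamped event saved with the recording
device.recording_stop_and_save()
device.close()
```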
Note that for every Neon purchase we offer a 30-minute Onboarding Workshop to help you set up and walk you through our software ecosystem. If you haven't already booked yours, feel free to reach out at info@pupil-labs.com and we can schedule a meeting ๐
Ok, thanks for the message. We have restrictions on what we can do on our network at the university. Ideally we want to just have the eye tracker unit directly connected to a PC that we can start/stop recordings from and send events into. Is this only possible with the Core system?
@user-af95e6 @user-3c26e4 The timelines visualisation should now be restored; please refresh the page and let us know if it isn't.
Hi all! We're conducting a banner ad study using Neon glasses. I'm getting an average time to first fixation of 21 seconds for one AOI. Does this mean that, for about 20 seconds, the respondents didn't look at anything within that AOI? Please help - TIA!
Hi @user-67b98a , that is in principle correct. They did not fixate the AOI for the first ~20 seconds of the recording on average. This means that some respondents looked a bit earlier and some a bit later, depending on the amount of variability in your dataset. You can look at the data on a per respondent basis, if you need to know more than the average.
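If you want to compute this per respondent, a rough sketch along these lines may help; the file and column names here are hypothetical, so adapt them to the fields in your actual export:
```python
# Hedged sketch: time to first fixation on an AOI, per respondent/recording.
# Column and file names below are hypothetical placeholders.
import pandas as pd

fixations = pd.read_csv("fixations_with_aoi.csv")       # one row per fixation, all recordings

on_aoi = fixations[fixations["aoi_name"] == "banner_ad"]

first_fix_on_aoi = on_aoi.groupby("recording_id")["start_timestamp_s"].min()
recording_start = fixations.groupby("recording_id")["start_timestamp_s"].min()  # proxy for t = 0

ttff = first_fix_on_aoi - recording_start     # seconds until each respondent first hit the AOI
print(ttff.describe())                        # spread around the ~21 s average
```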