Hi all! I hope you are well. I am writing to you regarding an issue we are facing.
My team is using the Neon Glasses with PsychoPy on Windows 11.
We are using the following versions: PsychoPy 2025.1.1, Plugin 0.8.1, Python 13.3.2.
But the experiment does not work for us: it works with MouseGaze, but not with the Neon Glasses. We also tried removing PsychoPy, the Pupil Labs files, etc. from the laptop, which didn't fix the issue. We also tried resetting the laptop, which still didn't fix it. We have tried the gaze-contingent demo provided by the Pupil Labs team and also some of our own experiments, but none of them work.
It was working in the past with PsychoPy 2024.2.4 and Plugin version 0.7.7, but then it stopped working, and now when we install the older version of PsychoPy we don't even see the eye-tracking components like April Tags and Record in PsychoPy.
Also, when selecting the eye tracker in the "Experiment settings" we get multiple options like "Pupil Labs", "Pupil Labs (Neon)", "Pupil Labs (iohub)", etc. Sometimes they are duplicated, with two options sharing the same title.
I wonder which versions work together the best. And has anyone encountered this issue before?
Hi @user-f4f9e1 , thanks. Since we already started communication in your Support Ticket in the troubleshooting channel, is it alright if we continue there for now?
Hi everyone, I'm trying to use Pupil Labs Neon with LSL streaming in a university environment, and I'm running into network issues. The Neon app shows "Stream over LSL" as active, but my Windows PC cannot detect any LSL streams (LabRecorder). Both devices are connected to the university WiFi, and they get IPs in the same range, but the PC never sees the Neon streams. Our IT says the firewall isn't blocking anything, but I suspect the WiFi network is blocking local peer-to-peer traffic, multicast, or UDP port 16570, which LSL requires.
Has anyone dealt with LSL not working on enterprise/university WiFi networks? Any advice or experience would be super helpful - I'm trying to run experiments and need to solve this. Thanks!
Hi @user-20283e. University WiFi networks often don't work for this. Your best bet is to use a dedicated local router. See this message for reference: https://discord.com/channels/285728493612957698/1047111711230009405/1431416744962031627
Hi, I am trying to use a wired connection. I have both the phone and the headset connected via a USB adapter, which seems to be connected OK. This adapter is then connected via a network cable to the PC (both set to be on the same network), but when I run the discovery functions the PC can't find the eye tracker. Does anyone know how to troubleshoot this? Thanks.
Hi @user-8fe6ae! Have you already shared your connection from the PC to the phone? If on Windows (Internet Connection Sharing):
- Open Control Panel > Network and Internet > Network Connections.
- Locate your active internet source (e.g., "Wi-Fi").
- Right-click it and select Properties.
- Go to the Sharing tab.
- Check the box: "Allow other network users to connect through this computer's Internet connection".
- In the dropdown menu below that, select the Ethernet adapter that your phone is plugged into.
- Click OK. Your PC will now act as a router for your phone.
Note that exact instructions could vary across versions of Windows.
On Your Phone:
- Turn off Wi-Fi and Mobile Data to verify the connection works.
- You should see an Ethernet icon (looks like <...>) in the status bar. The phone will treat this connection just like high-speed Wi-Fi.
- Enter the Companion App, navigate to the network tab (top-right), and check whether there is an IP shown there.
You can use that IP to connect directly https://pupil-labs.github.io/pl-realtime-api/dev/methods/simple/connect-to-a-device/#connect-directly-via-ip-address
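For reference, here is a minimal Python sketch of that direct-by-IP connection using the Real-time API's simple interface (the IP address below is a placeholder; substitute the one shown in the Companion app's network tab):
from pupil_labs.realtime_api.simple import Device
# Placeholder address: replace with the IP shown in the Companion app's network tab
ip = "192.168.137.2"
device = Device(address=ip, port="8080")
print(f"Connected to {device.phone_name}")
device.close()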
Hi @user-8fe6ae , just building on my colleague's post, you can find detailed steps for Windows 11 here. Please note that Windows 10 does not have the necessary features to support a direct connection, but it will work with a router.
Also, please note that Windows does not always maintain a stable connection when you plug the Ethernet cable directly from Neon into the PC. It sometimes randomly re-assigns the IP address to the phone, breaking the connection. Forcing Windows to give a static IP to the phone should resolve that.
Overall, it is easier and recommended to use a wired connection to a local router:
This will work for all versions of Windows.
You should not incur any appreciable transmission latency, and auto-discovery will work. Windows users have reported success with this router out-of-the-box for wired connections.
Hi, I want to know what the recommended observation range is for the Neon and Core eye trackers, and whether there is such a range. Thanks.
Hi @user-bd5142 , may I ask for more clarification about "observation range"?
I want to know the effective tracking distance of Pupil Core head mounted eye tracker. Assuming the subject maintains a fixed position without moving, within what range of target objects (in meters) can the best quality eye movement data be obtained? I also hope to understand whether it is affected by the resolution of the scene camera, lighting conditions, or target size, as well as the changes in tracking accuracy at different distances.
Hi @user-bd5142 , the viewing distance is in principle unrelated to eye tracking quality. For Pupil Core, it will ultimately depend on the quality of calibration that is done for each participant. It is usually better to calibrate at the distance that they will be viewing, although it can extrapolate outside of that region reasonably well.
For Neon, you do not need to think about this, since it is deep-learning powered and calibration free. Please see this message for an explanation: https://discord.com/channels/285728493612957698/1047111711230009405/1371897435781337200
The tracking quality for our eyetrackers is only "loosely related" to scene camera resolution. For Pupil Core, the calibration targets are usually large with clear centers. Neon's algorithms have been optimized for the chosen scene camera resolution. For both, we have chosen cameras with wide FoVs and high spatial resolution.
To clarify, Neon's FoV covers the human visual field and its images are 1600 px wide by 1200 px tall. This pixel density provides high resolution to the underlying eyetracking models, supporting its out-of-the-box ~1.5-1.8 degrees of gaze accuracy. You can see live examples here. Similarly, the eye cameras each have a 192x192 pixel resolution, giving clear images of the eyes. You can also see examples of Neon's eye images at that page.
That said, Pupil Core can show some reduction in performance when used outside in sunlight. Neon is again much improved and robust against sunlight; it functions equally well in pitch black and in bright sunlight on the beach. Since Neon is calibration free, you similarly do not need to think about calibration target size, since it is not necessary.
Thank you
Hi! The chargers for our Motorola Companion Phones got lost in our lab - have you tested any chargers that I could buy to use with the phones + devices? I'm located in Germany.
Hi @user-d5a41b ! The Motorola Edge 40 Pro is compatible with any USB-C charger, but depending on its characteristics it may charge more slowly.
You can find here the official recommendations from Motorola.
And here you can purchase one directly from Motorola if you prefer: https://de.motorolachargers.com/product/turbopower-27w-wall-charger-usb-c/ (if available)
Hi @user-d407c1 , could you please explain the difference between Wearer Reach and the new Recording Reach? I see the explanation line, but I cannot understand the algorithm. It works with Neon, but not with Invisible, right?
Hi @user-3c26e4 ! This applies regardless of whether the data comes from Neon or Invisible; it's simply a new metric calculated on the AOIs.
You can have multiple recordings from the same wearer, so both metrics are valid depending on what you want to measure:
If you are interested in how many wearers looked at that area of interest, that would be Wearer Reach. If you are interested in how many recordings contain a look at that AOI, regardless of whether the same person was recorded multiple times, that would be Recording Reach.
What's the added value? E.g., 100% of your participants may have looked at that area of interest, but the Recording Reach may drop if they did not look at that AOI in every recording.
Hello
We are currently evaluating your eye tracking solutions for a potential purchase. Before proceeding, we would like to clarify a few important points:
1) Does the device require any specific software license to operate? If yes, is the license included in the purchase price or sold separately?
2) Are software updates included, or do they require an additional subscription or maintenance fee?
3) Do you provide any form of training or onboarding for first-time users? If yes, is this service included or billed separately?
4) Is there any ongoing support or service fee after the purchase?
5) Are there any regional restrictions or differences for usage in Germany or the EU?
Hi @user-0b1519 ! Thanks for reaching out and for outlining your questions; happy to clarify them.
1) Software license fees: If you plan to work offline using Neon Player or pl-neon-recording, both are free and open source - there is no fee to use them. Everything needed to record and access your data is included with the hardware, including the Companion app and streaming libraries if needed. Have a look at our ecosystem here.
We do offer additional paid plans for Pupil Cloud, but these are optional.
We can certainly explore what you need over a video call if you prefer.
2) Software updates: Updates for the Companion App, Neon Player and other libraries are included at no extra cost.
3) Training / onboarding: A 30-minute onboarding call is included with every purchase. If you need more in-depth or ongoing training, we can arrange that separately.
4) Ongoing support: General support via email and Discord is included. More advanced / dedicated support packages are also available through consultancy services if needed.
5) Regional restrictions: There are no usage restrictions specific to Germany or the EU. Everything functions the same across regions.
If you'd like a demo and Q&A session, you can book one directly through our website: https://pupil-labs.com/
Thank you!
Hi, I need some clarification on whether the data can still be stored on the Companion app without using the cloud.
Also, we recently got error messages sometimes during recordings. It happened after about 30 minutes of recording (not always). What can we do to prevent this? Is this a problem with heat in the phone?
Is 2.9.23-prod the newest update of the companion app?
@user-6c24c0 thanks for reaching out here as well. Yes, the 2.9.23-prod is the latest version of the App. We saw you opened a ticket and sent an email regarding the issues you're encountering. I'd recommend keeping the communication in the Discord Ticket for now so we don't lose track of information across channels.
Hi, I have a quick question. I used Neon by Pupil Labs some time ago, and I recently noticed that my cloud plan has expired. I have a couple of questions:
If I purchase the Unlimited Analysis Plan, will I be able to access my originally recorded data?
My department will place the order on my behalf to receive an academic discount. In this case, should the contact information on the quote be mine, or do I just need the activation key to register the cloud plan after they purchased an order for me?
Thank you for your help!
Hi @user-b5a8d2. Responses below:
1. Yes, that's correct. All of your existing recordings will be available to work with after you've activated an Unlimited Analysis Plan. Note that even now, you can still download your recordings and work with them offline. See this message for reference: https://discord.com/channels/285728493612957698/1047111711230009405/1431119082001793095
2. You just need the activation key to register the Cloud plan. Once purchased, they will receive a confirmation email with the key. You can find more instructions in this section of the docs.
Hello, we are interested in the Neon All Clear Eye Tracking System. We need some information regarding the device and its features.
Questions:
Does the device have a CE certificate?
Can results, data, and documents be exported in CSV or EDF format?
a) If not, in which formats can they be exported?
Can AOI, Dwell Time, Revisits, Heatmap, Fixation, and Saccade data be exported?
a) In which format can these data be exported?
b) Are there any additional features available in the software besides these data?
Is there any time limitation for unlimited data export starting from the date of product purchase?
a) If there is a time limit (for example, 3 years), will we still be able to manually export the data afterward?
b) Will any fees be charged after this period?
Is there any fee for collecting data via the cloud or for operating multiple devices under the same cloud account?
a) If yes, how much is it?
b) Is there any duration or usage limitation?
What are the warranty conditions?
Do you provide training and technical support services?
a) What is the duration?
b) Is it paid?
Hi @user-0b1519 - I saw you also reached out via email. I'll reply here as well, but you'll find more details in the email.
Yes, Neon has a CE Certificate. I shared it via email.
All data collected with Neon can be exported as CSV files (see here).
All AOI-based metrics can be exported as CSV files when using our analysis platform, Pupil Cloud. Pupil Cloud offers powerful gaze mapping tools that allow you to map gaze data from the eye tracking recordings onto screens or areas of interest in the real-world environment. You can then extract detailed fixation metrics (e.g., dwell time, number of fixations) and create scanpaths. You can read more about the analysis and visualization tools here.
Neon always allows manual raw data export via USB. If you plan to use Pupil Cloud, note that with every Neon purchase you get access to all the features of Pupil Cloud and up to 2 hours of recording storage, so that you can try it out. To unlock unlimited features on Pupil Cloud, we recommend the Unlimited Analysis Plan (see here). As said before, Cloud-based analysis is optional. The raw data are always accessible locally by transferring recordings from the phone to your computer and running your own analysis pipeline, with no dependency on Pupil Cloud.
Neon comes with a 1-year hardware warranty covering manufacturing defects.
In terms of support, every Neon bundle includes a 30-minute onboarding workshop to help you set up and guide you through our software, but you can always reach us via email or on our Discord server. For more tailored support (e.g., training seminars, custom analysis pipelines), we offer consultancy packages.
Is there a way to manually edit an event timestamp in Pupil Cloud? I want to add an event that is precisely 5 seconds after another event.
Update: I was able to do this by manually setting the playhead and then using "Move event to playhead"
Hi, question about exporting Neon data without using Pupil Cloud: we recorded a short session using the Companion App + Neon, and we're trying to extract the gaze metrics and timeseries data (CSV) locally. Our current approach was to connect the phone via USB, copy the recorded folder from the Neon Export directory to our PC, and then drag it into the latest Neon Player. However, Neon Player briefly loads the folder and then immediately closes. Is there a recommended workflow for obtaining gaze/timeseries CSV data locally without relying on Pupil Cloud downloads? Thank you!
Neon Player will do what you need, and it's meant to work just how you're doing it, but it sounds like there is a problem. Are you able to collect and share the Neon Player log file and/or the recording you're attempting to load?
Hi Neil, we're using marker mappers on pupil cloud. After that we've been trying to use the Automated AOI Masking tool described here https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/. The eye tracker is the Neon.
Hi @user-1baa3e , thanks for the clarification. I moved your question to the neon channel.
Do I understand correctly that those 18 boxes are the only AOIs presented during the experiment?
Hi, what mDNS parameters are needed for this service (AirPrint) to function? My institution is trying to connect the Neon Android phone to the university network.
Hi @user-7c5b51 , first, please note that we don't recommend University or work networks for connecting to Neon. Even when IT makes Neon accessible that way, the large volume of traffic from other people on the network can interfere with Neon's signal. We typically recommend a local, dedicated WiFi router.
If you want to try it anyway, then you need these ports open to access Neon:
Also, note that if your PC and Neon are on different sub-networks (e.g., this is usually the case when the PC is connected via Ethernet to the Uni network and Neon is connected via WiFi), then you might not be able to connect to Neon, even with those ports open. Your IT team might be able to find a solution for that, though.
This is an example of an AirPrint service, which uses mDNS similarly to the Neon software (according to IT).
This is another reference (Chromecast).
Are there any references to other mDNS services their software works similarly to? For example, does it match up with Chromecast, AirPlay, Miracast, etc.?
Hi @user-7c5b51 , do you mean you want to stream the Neon scene camera with gaze overlay to a monitor or TV via a service like Chromecast?
Hi @user-7c5b51 ! Adding to my colleague's response, the Python Realtime API utilizes zeroconf, which is built on top of Bonjour and Avahi. You can find more info here.
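For example, device discovery through that zeroconf/mDNS mechanism looks roughly like this with the Python Real-time API's simple interface (a sketch, not a full implementation):
from pupil_labs.realtime_api.simple import discover_one_device
# Searches the local network for a Neon Companion device via mDNS/zeroconf
device = discover_one_device(max_search_duration_seconds=10.0)
if device is None:
    print("No device found; check that both machines are on the same network")
else:
    print(f"Found {device.phone_name} at {device.address}")
    device.close()
If discovery is blocked on the network, the direct-by-IP connection mentioned earlier in this channel is the fallback.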
@user-4c21e5 , @user-f43a29 and others from the amazing Pupil Labs team, we (at the Intelligent Biomedical Sensing Lab here in Berlin) are working with the Pupil Labs Neon (we love it!), and are also customizing the frame for various experiments. Since the default frame and also the "is this on?" frame are not always suitable for our setups, where we need to combine with various head gear, we built solutions like the one attached.
The ideal solution would be one where we can also modify the frame itself (e.g., by removing the lower part). The problem is that the files that are openly available for the frame do not include a heat sink, and we also did not find a file for the heatsink geometry itself. We tried to 3D scan it, but it's too finicky for a proper scan.
The question: Would you be willing to share a CAD/STL file of the frames with heatsink OR the heatsink file, so we can build custom frames that work with our Neon's heat sink? We are happy to openly share the results back with you and the community! My PhD student Bilal ( @user-39ee45 ) is a real pro when it comes to 3D design and printing, so the quality will be decent.
Hi @user-fcdbb6 ! Really cool frame there! Have you already seen the Ready Set Go! frame? Seems similar to what you are trying to achieve.
That said, yes, we can share those! We will update the repository next week to include new files.
Thank you @user-d407c1 ! That's great news! We saw it, it's a great solution. For what we need to integrate with (whole-head neuroimaging sensors) we will need to customize further, so really a huge thank you for sharing next week!
Thank you so much @user-d407c1 ! Really appreciate you taking the time to share those files. Excited to check them out!
@user-39ee45 Sorry for the delay getting back to you. Could you send us an email to info@pupil-labs.com about this, so we can send you the files?
Hi everyone, I am comparing the fixations I got from Pupil Cloud and Pupil Player. It seems that Pupil Player may concatenate fixations, if I understand the docs correctly https://docs.pupil-labs.com/core/terminology/#offline-fixation-detection (and also according to my data). Is there a paper that explains the algorithm for Pupil Player in more detail? I found the whitepaper for one Pupil Labs fixation detector (Algorithm description - 3.6.24), which I assume to be the one for Pupil Cloud?
Is there a way with which I can extract the same fixations offline as in the cloud?
Best, Tim
Hi @user-0cdc2d , may I first ask if you are using Neon or Pupil Invisible?
Hi, I aim to use gaze_mapper.py on real-time-screen-gaze and surface_tracker via Git to capture gaze on the monitor in real time. However, the coordinates obtained are being interpreted as the monitor's boundaries, as shown by the red line in the attached image. Is there an API we need that we're missing? Or is there another cause?
Hi @user-53b146 , may I ask some clarifying questions first:
Hi! I have a question related to real-time streaming:
Can you stream the data of the eye tracker directly from the companion device to your computer, without saving anything on the companion device? I know there are streaming and recording features in the real-time API here (https://pupil-labs.github.io/pl-realtime-api/dev/methods/async/remote-control/) but wondered if we can control storage location.
Hi @user-ffef53 , yes, recording is not a prerequisite for streaming. With the Real-time API, you can stream data such as gaze and scene video, send events, and remotely control the device, among other things.
You can even do all of this while using the Lab Streaming Layer integration at the same time, and multiple computers can connect simultaneously to the same device to access all of this functionality.
You can also run Neon Monitor at the same time as recording & streaming, similarly with PsychoPy and other programmatic integrations.
With respect to storage location, do you mean you want the data not saved immediately to the phone's hard drive (or not saved at all to the phone?), but immediately saved to the hard drive of some other computer?
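In the meantime, here is a minimal sketch of streaming gaze to your computer without starting a recording, using the simple interface of the Python Real-time API (an illustration only, not a complete implementation):
from pupil_labs.realtime_api.simple import discover_one_device
device = discover_one_device()
# Receive gaze samples over the network; this loop does not start a recording,
# so nothing is written to the phone's storage by this code
for _ in range(200):
    gaze = device.receive_gaze_datum()
    print(gaze.timestamp_unix_seconds, gaze.x, gaze.y)
device.close()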
Hi
I hope you are well
I just received my product
How can I adjust the strap?
Hi @user-910ceb , I moved your posts to the neon channel. Great to hear it worked out. Yes, you just pull/push the two tabs at the back to adjust the strap. Let us know how it goes and if you have any other questions.
Fixed!
This is very useful thank you!
Also, another quick question regarding real-time: has anyone tried using a hotspot on the Companion device and connecting the monitor PC to the Companion device's hotspot? Previously we were using a local network, but we'd like to know if it's feasible to get cell service on the Companion device and then make a hotspot so we can still use the monitor.
Hi @user-ffef53 , it is not necessary to have cell service to create a hotspot. Many phones create hotspots by re-purposing the WiFi chip in the phone. This approach of running the hotspot on the Companion device has been tried and is used by some, I believe.
Hi devs! I have a question regarding how to use the time_offset_ms and roundtrip_duration_ms generated by device.estimate_time_offset() in the realtime API.
If I understand it correctly, time offset is the offset between the two clocks. However, it's unclear to me whether this offset has been corrected for the roundtrip time.
If so, I think it makes a lot of sense to send events using neon.send_event(event_name, int(time.time_ns() - offset)).
If not, however, users should be wary that the time offset also contains the time required for the ping signals to travel between the devices, and thus should do neon.send_event(event_name, int(time.time_ns() - offset + roundtrip_duration))
In our recordings, we observed that the events are often timestamped earlier than they should be, suggesting the latter is more likely. But I want confirmation from the team on this matter. Thanks a lot in advance!
Hi @user-a55486 , clock offset is independent of transmission latency, but you are correct that it is the offset between the two clocks. Also, roundtrip time is the amount of time it takes for the Time Echo data to reach Neon and be returned to the client; not necessarily the amount of time it takes for an Event packet to reach Neon. The offset and roundtrip time provided by that function measure two different things and you do not want to subtract or add them to each other. Please see this post and let us know if that clears things up: https://discord.com/channels/285728493612957698/1047111711230009405/1438132613775622244
Regarding the timing discrepancy that you see, it will be helpful to see the snippet of code that manages the offset measurement and Event sending.
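In the meantime, here is a minimal sketch of the pattern from the linked post (a sketch using the simple API): measure the clock offset once, then subtract it from the local timestamp when sending the Event; the roundtrip duration is neither added nor subtracted.
import time
from pupil_labs.realtime_api.simple import discover_one_device
device = discover_one_device()
# Offset between this computer's clock and the Companion device's clock
estimate = device.estimate_time_offset()
clock_offset_ns = round(estimate.time_offset_ms.mean * 1e6)
# Convert local time to Companion time and attach it to the event
event_time_ns = time.time_ns() - clock_offset_ns
device.send_event("fixation onset", event_timestamp_unix_ns=event_time_ns)
device.close()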
We used a simple PsychoPy experiment, where we presented stimuli (e.g., a fixation cross). I then checked how well the timing of events aligns with stimulus onset in the video. For example, in this recording, when the "fixation cross" event is logged, the actual fixation cross hasn't been shown yet; it only appears after another ~250 ms.
the PsychoPy code is:
fix_cross.draw()
device.send_event("fixation onset", int(time.time_ns() - offset))
win.flip()
I considered whether send_event would take some time, so that win.flip() was delayed during its execution, but in your source code send_event is async, so I guess that's not likely?
Thanks. Trying to time the send_event to happen in correspondence with the flip like that is not guaranteed by PsychoPy, unless you use their callOnFlip method. Could you try wrapping that send_event in a simple function that is passed to callOnFlip, like this?
def send_fixation_event_psychopy():
    # Send the event with the offset-corrected timestamp
    device.send_event("fixation onset", int(time.time_ns() - offset))

# Register the call so it runs right after the next window flip
win.callOnFlip(send_fixation_event_psychopy)
...
fix_cross.draw()
win.flip()
The image upload failed in Discord, trying again. The diamond event is the logged "fixation onset".
In your original messages, it sounded like you wanted to know the coordinates on the monitor, but in your newest message, it sounds like you do not want coordinates on the monitor? Am I correct?
Can you also explain how you generated that red boundary in the image here? https://discord.com/channels/285728493612957698/1047111711230009405/1449701237774880922
I want to know the coordinates on the monitor. The red line in the image was drawn by me to indicate that the monitor position is displayed incorrectly when the code I created is executed.
Hi! I'm planning to run an experiment in which I use the position of the eye in real time, in order to generate a change/decision in the task. I'm using MATLAB and Psychtoolbox for the task. Is it possible to access the livestream of eye position for this type of eye-computer interface?
Hi @user-98b2a9 , yes, this is possible. Please check our MATLAB wrapper for Neon's Real-time API and let us know if you have any questions.
Hello! I have a question regarding timestamps. I'd like to obtain UTC (i.e., epoch time) timestamps for the video frames from the sensor/eye cameras. However, based on my understanding, the first sample in the gaze.csv timeseries doesn't align with the first eye-camera video frame. Correct me if I'm wrong.
If this is the case, what is the best way to determine the UTC timestamp corresponding to the first eye-camera video frame?
Hi @user-2e5a7e ! If you do not mind using Python and the native recording format, you can simply use our library pl-neon-recording.
Simply install it with pip install pupil-labs-neon-recording and:
import pupil_labs.neon_recording as nr
recording = nr.open("YOUR_PATH_TO_NATIVE_REC")
print(recording.eye.time[0])
would get you the first eye camera timestamp.
PS: If you have astral's uv, you can open a Python shell with the library by running
uv run -p 3.12 --with pupil-labs-neon-recording python and copying the above.
Hi, I have a question. In my study, children will wear the Crawl Walk Run model and sit on the floor, and play with toys.
I noticed from the video that the center of the gaze is at the very bottom of the screen (like the picture I sent; the child is playing with the toy, and they should be fixating on the toy that is at the bottom of the screen). I think this is because the children look down while playing, the scene camera was recording the front, and the vertical field of view is not wide enough.
Any suggestions on how to set up the eye-tracker so we can record their hands better? Thank you!
Hi @user-5b6ea3 !
Just to make sure I understand correctly, what you report is that their hands are falling outside the scene camera's vertical field of view (~80°)?
If that's the case, one option to extend the FoV would be to use an additional third-party camera with a wider field of view and map the gaze data, as shown here:
https://docs.pupil-labs.com/alpha-lab/egocentric-video-mapper/
In that example, we used an Insta360, which could be more minimal than a traditional GoPro, but there may be other options on the market as well.
Hello, I am working on a project to study the gaze vector during gait tasks. I will synchronize motion capture and eye tracking data, and will buy Neon glasses for this purpose. Which Neon version or frame type is most suitable for this kind of measurement?
Hi @user-e4d004 ! If you have a motion capture system that uses passive reflective markers, you would probably want the frame Every Move You Make or I Can Track Clearly Now, depending on whether you want to use prescription lenses or not.
Hello, I am currently trying to get your Web AOIS application up and running. Everything works fine, but when I try to create a heat map from the data, it is completely off. In addition, after some runs, the data tables are entirely empty. I think this might be because the April tags are not being recognized correctly. I have already tried playing around with the exposure, screen size, distance, and everything else I could think of. Is there anything else I can try?
Hi @user-9131cb ! Do you have your recordings in Cloud?
If so, you can invite us [email removed] to your workspace and we can have a look and give you feedback.
Okay, thanks. I have sent you an invite.
Hello, we are experiencing an issue when controlling the Pupil Labs Neon via a PC (Windows or macOS) or when starting a recording. After two to three recordings that were successfully started and stopped, it is no longer possible to start a new recording, as a network error is displayed on the PC.
We have tested this both within a shared network (PC and device connected to the same network) and via the hotspot of the supplied Motorola smartphone. We have also tried using the Chrome and Firefox browsers.
Could this be a device-specific issue, or are there certain browser settings that could resolve this behavior? Alternatively, we would like to ask whether it is possible to use the Pupil Capture software with the Pupil Labs Neon.
Hi @user-e0026b , do I understand correctly that you are using Neon Monitor?
May I ask:
Regarding Pupil Capture, there is only experimental support for using Neon that way and it requires running from source on Linux or Mac. This will also mean that you use Pupil Core's gaze estimation pipeline, which requires calibration. Why would you like to use it this way?
Dear Sir, I apologize for troubling you when you are busy. Could you kindly respond to the following questions?
Q1 Could you please explain the differences between the following products? For System ②, is the only difference from ① the appearance (for example, is the bare metal part of ② less noticeable to others)? Regarding ③, since it seems to include multiple lenses with different diopters, does this mean that users with poor eyesight can use it without wearing their own glasses?
Systems: ① Just act natural, ② Nothing to see here, ③ I can see clearly now
Q2 Do all of these systems come with a smartphone as standard (i.e., is the companion device a smartphone)?
Q3 When purchasing only the frames for each system, is it necessary to purchase the 6,090-euro bare metal unit as well? Alternatively, is it possible to detach the bare metal part from a NEON unit that we already own and attach it freely to different frames?
Hi @user-5c56d0. I'm happy to clarify:
Q1: Regarding the "Just act natural" and "Nothing to see here" frames, yes, the only difference is cosmetic. Specifically, it affects how visible the Neon module is to others. For reference, the "Bare Metal" is not the Neon module itself, but the nest PCB, which can be purchased separately if you want to develop your own custom frame. The cosmetic difference between "Just act natural" and "Nothing to see here" can be useful in applications such as consumer research, where researchers want participants to wear a very inconspicuous eye tracker to ensure natural behavior, as if they were wearing normal glasses.
For "I can see clearly now", you are correct. This frame comes with a magnetically swappable lens kit ranging from -3 to +3 in 0.5 diopter steps. This allows researchers to fit the lenses needed for each participant so they can use the eye tracking glasses without wearing their own prescription glasses.
Q2: Yes, all Neon bundles come with an Android smartphone as the Neon Companion device. We are currently shipping the Samsung S25.
Q3: Every Neon bundle already includes the Neon module attached to the frame, meaning the system is ready to use. When purchasing a bundle, you do not need to buy anything extra, as it already includes the frame of your choice, the Neon module, and the Neon Companion device.
If you would like to swap frames depending on participant needs (for example, a "Just act natural" frame for participants who do not need glasses, and an "I can see clearly now" frame for those who do), you can purchase an additional accessory frame - not a full bundle - and simply transfer the module across frames. This is straightforward, as the module attaches to the frame with just two screws.
Hi, is the eyelid data available if I import the raw data directly from the Companion app over USB? I am particularly interested in eyelid angles and eyelid aperture. I am looking at https://pupil-labs.github.io/pl-realtime-api/dev/api/async/#pupil_labs.realtime_api.streaming.gaze.GazeDataType, the EyestateEyelidGazeData class; it seems it includes both, but it's still not quite clear from the docs. Could you also send a link to the doc listing the available eyestate/eyelid fields for raw recordings imported directly from the Companion app? Thank you.
Hi @user-912183 ! Eyelid aperture, also referred to as eye openness in our documentation, is part of the eye state data.
You are pointing to the Realtime API documentation, and yes, it is available in real time if you enable "Compute eye state" in version 2.9.0 or above of the Companion App.
But it seems you want to access the data post-recording instead, using the native recording rather than the Timeseries, is that correct? If so, you would probably want to access the eye state data using pl-neon-recording as shown here https://github.com/pupil-labs/pl-neon-recording/blob/main/examples/csv_export.py#L186-L217
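For orientation, a minimal sketch along those lines (assuming pl-neon-recording's eye_state stream; please take the exact eyelid field names from the linked csv_export.py):
import pupil_labs.neon_recording as nr
recording = nr.open("YOUR_PATH_TO_NATIVE_REC")
# Eye state stream; with recent Companion App versions this includes eyelid
# aperture/angle alongside pupil diameter and eyeball pose
eye_state = recording.eye_state
print(eye_state.time[0])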
Hello! I am curious about the difference between the output from Pupil Cloud vs Neon Player. Thank you!
Hi @user-743e57 . Both Pupil Cloud and Neon Player allow you to process and analyse data recorded with our Neon eye tracking system. Both enable you to replay recordings and export data streams, such as fixations, saccades, and video overlays, to name a few.
For a full breakdown of analysis tools and data formats, please refer to our online documentation: - Pupil Cloud - Neon Player
That said, Pupil Cloud does offer advanced capabilities not available in Neon Player, including aggregate AOI metrics, workspace sharing for team collaboration, and powerful computer vision algorithms like the Reference Image Mapper. It also has optional anonymisation features, like face blurring.
Do you have a specific use case in mind or questions about any of these? We'd be happy to arrange a demo and Q&A session to walk you through both platforms and determine the best fit for your project. Just let me know!
Hey, I'm working with the Pupil Neon eye tracker for my MTech thesis at IIT Madras, Chennai, India. In my Pupil Cloud it is showing that the cloud plan has expired. Do I need to pay to get my recorded videos and data from the eye tracker, and if so, how do I do that?
Hi @user-b4808a . If you want to work with those recordings in Pupil Cloud, you can either purchase an Unlimited Analysis Plan, or download and delete recordings from the free 2-hour quota to free up space. Recordings will become available in turn. If you choose to work with your recordings offline, we have free workflows. For example, you can use Neon Player, our desktop software. Just drag and drop your recordings onto the software to play back and process them. We also have a Python recording library if you want to work with the recordings programmatically.