Hello PupilLabs! I was wondering if your Neon headset would work with an embedded system running an Android OS?
Hi @user-ead788! Out of the box it will initially only work with the OnePlus 8. Making it work with another embedded Android OS is possible in theory (depending on some details), but would require a lot of custom effort. If you have a largish application you think might be worth the effort and would want to collaborate with us, feel free to reach out to [email removed] to discuss the details!
What is the actual release date for the Neon?
The release is set for February, but the precise date is not quite set yet. As soon as it is, we will announce it in this channel!
Is it possible to demo the Neon or the Invisible?
Hi @user-ec0446. Please reach out to info@pupil-labs.com to request a demo!
@marc @nmt Thank you for your help. Could you please tell me the approximate arrival time of the Neon in Japan? If I order a Neon this week, when would it arrive in Japan (my location)? The website states that products will ship starting in February. Because my university's budget must be executed by the end of the fiscal year, a late arrival would be a problem.
Hello there. I am looking for an eye tracking system suitable for two things: 1) dyadic eye tracking experiments (e.g., measuring eye contact), and 2) co-registration with EEG (BioSemi), mainly for fixation analysis (not for eye movement correction!). I assume all of your products should be suitable for the former, but I wondered whether you have any solution for co-registration with EEG in any of your products?
Depending on how you record the EEG, I would suggest using the Lab Streaming Layer (LSL) to synchronise the EEG and eye tracking data.
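Once both devices stamp their samples on the common LSL clock, co-registration largely reduces to matching samples by timestamp. Below is a minimal sketch of nearest-neighbour matching in plain Python; all the timestamps are made up for illustration, and a real pipeline would stream via pylsl and correct for clock offsets first:

```python
from bisect import bisect_left

def align_nearest(eeg_ts, gaze_ts):
    """For each gaze timestamp, find the index of the nearest EEG sample.

    Both timestamp lists must be sorted and on the same clock
    (e.g. the common LSL clock after offset correction).
    """
    pairs = []
    for t in gaze_ts:
        i = bisect_left(eeg_ts, t)
        # candidates: the EEG sample just before and just after t
        candidates = [j for j in (i - 1, i) if 0 <= j < len(eeg_ts)]
        j = min(candidates, key=lambda k: abs(eeg_ts[k] - t))
        pairs.append((t, j))
    return pairs

# hypothetical timestamps in seconds on the shared clock
eeg_ts = [0.000, 0.004, 0.008, 0.012, 0.016]   # 250 Hz EEG
gaze_ts = [0.001, 0.009, 0.015]                # sparser gaze samples
print(align_nearest(eeg_ts, gaze_ts))          # → [(0.001, 0), (0.009, 2), (0.015, 4)]
```

With the matched indices in hand, you can epoch the EEG around fixation onsets detected in the eye tracking stream.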
Can't wait for the neon to be out! Love the bare metal product for DIYs!
Any idea on the cost of the bare metal or not yet?
Hi @user-8619fb 👋. Thanks for your feedback! We're also pretty excited, and the excitement is growing the more we play with and test the new modules and frames. I'm unable to report a price for the bare metal just yet. But keep an eye out for the announcement on here!
Hello Pupil Labs team. We are considering buying the Neon but are not sure what the main differences are compared with Pupil Invisible, particularly in terms of the data analysis pipeline. Is there any resource you can share with us to have a more informed discussion amongst ourselves?
Hey @user-5ef6c0 👋. Neon recordings are compatible with Pupil Cloud, and the existing enrichments will run just fine. Check out this page for more information: https://docs.pupil-labs.com/invisible/enrichments/#what-are-enrichments Of course, Neon has a totally different form factor compared to Invisible, offering more flexibility when tailoring to different use cases, and it has a more optimal scene camera position, not to mention improved accuracy. If you would like a demo and Q&A session to help you make an informed decision, feel free to reach out to [email removed]
Are you guys developing additional prescription lenses for Neon, or should we not expect that this year? And would it be possible to get a comparison sheet between Neon and Invisible?
Hey @user-c1e127! Yes, prescription lenses are on the horizon! We don't have a comparison sheet just yet, as a lot of Neon's parameters are still being finalised and/or benchmarked. Check out the website for the most up-to-date information: https://pupil-labs.com/products/neon/
Hi there, I understand that the Neon provides data on pupil size that, in principle, can be based on a 3D model of the eye as in the Core. (Note: this assumption is based on the answer to the question "Can I use the Neon module with the Pupil Core Software?" in the FAQ). How does the Neon's performance compare with that of the Core? I imagine that there must be differences given that the model has been developed for the Core's geometry? How does the native pupil size estimation of the Neon compare to the performance of the 3D-modeled pupil size? How robust is it to changes in gaze angles (see https://docs.pupil-labs.com/core/best-practices/#pupillometry )?
Hey @user-82bc7c! You are correct that Neon's pupillometry can be based on a 3D model of the eye, as with Core, because you can run the Core pipeline with Neon. That model is agnostic to eye camera positions, so performance will be comparable to the Core headset despite the different geometry. Neon's native ML-based pupillometry rests on the same assumptions as Core with respect to the geometry and optics of the eye. We thus expect it to be as gaze-angle independent and, overall, as accurate as the Core pipeline. Note that we're still working on the implementation, so we don't have final figures to report just yet.
I am pre ordering Neon this week. I hope everything works out well. Excited to see the new device. 🙂
Great to hear! We're excited to start shipping. It would be worth mentioning that you're interested in prescription lenses when you make the order. You can either add that to the notes form if pre-ordering on our website, or contact sales@pupil-labs.com directly
Sure, will definitely mention that. Thank you @nmt
Hey Martin Rolfs, you are correct!
Hi @user-7ff310! Yes, at least initially the 3D pose data will only be available post hoc in Pupil Cloud and not via the real-time API. Down the line we want to start making more and more data streams available in real time from the phone, but we do not have a concrete roadmap for that yet.
All data that is available via the real-time API should be accessible from a second app on the phone.
One option could be to use Neon with the Pupil Core software, which would inherit the limitations of Pupil Core, but provides 3D pose in real-time.
Marc, thank you for responding! I already use Pupil Core with the original hardware. However, Neon would provide some big advantages, mainly no need to calibrate and a fully mobile solution. Unfortunately, using it with the Pupil Core software would not justify the investment for me. I will be following the development in the hope that eye pose will be added to the real-time API. Thank you and all the best!
Hi @user-82bc7c! Gabe Diaz here. The 3D model fitting stage (pye3d) has to estimate the 3D pupil position on the basis of a sequence of 2D images of the eye and, more specifically, of the ellipse that forms the pupil. The original math for this was described in a publication by Świrski and Dodgson, and later elaborated on by folks at Pupil Labs to account for gaze-dependent refraction at the cornea (https://perceptualui.org/publications/dierkes19_etra.pdf). The output of this model fitting process is an estimate of the 3D centroid of the pupil in eye-camera space.
It has been a while since I've read that one, but the abstract says that Dierkes et al. used simulated and real imagery to characterize the model's accuracy. My experience with and understanding of the Core algorithm (not the Neon's algorithm) is this: the quality of the estimation process is going to be affected by factors like 1) image quality, broadly defined; 2) pupil size in the low-res eye images; 3) the range of eye motion in the input sequence; 4) the distribution of gaze angles in the input sequence; and 5) lots of things I'm not aware of. I don't think the investigation by Dierkes speaks to the role of the factors I listed, but I could be wrong. My impression is that obvious practical limitations have prevented any substantial testing of the model's estimate of eyeball location against a "ground truth" eye location for real human eyes.
I will add that, with the Core, the quality of the 3D model is extremely variable, and this step seems to be the greatest contributor to variability in the final gaze estimates. The precision of those gaze estimates can be greatly improved if you're able to do an offline refitting with a long sequence of input imagery, rather than the on-the-fly fitting that the system defaults to. I assume it offers similar improvements to pupil size estimation. To really get all we can out of that post-hoc refitting, one of my grad students has been playing with an additional offline 3D optimization step to improve the correspondence of 2D pupil locations in the eye image with the "reprojected" 2D pupil locations from the 3D model, but he hasn't seen great improvement yet 😦
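For intuition, the quantity such an offline optimization would target can be sketched as a reprojection error: project the model's 3D pupil centers through a pinhole camera model and measure the pixel distance to the detected 2D centers. Everything below (focal length, principal point, sample points) is made up for illustration; this is not Pupil Labs' or my lab's actual implementation:

```python
import math

def project(point3d, focal_px, cx, cy):
    """Pinhole projection of a 3D point in eye-camera coordinates
    (z pointing forward) onto the eye image, in pixels."""
    x, y, z = point3d
    return (focal_px * x / z + cx, focal_px * y / z + cy)

def mean_reprojection_error(pupil_centers_3d, detected_2d, focal_px, cx, cy):
    """Mean pixel distance between detected 2D pupil centers and the
    reprojections of the model's 3D pupil centers. Lower is better;
    this is the kind of objective an offline refit would minimize."""
    errs = []
    for p3, p2 in zip(pupil_centers_3d, detected_2d):
        u, v = project(p3, focal_px, cx, cy)
        errs.append(math.hypot(u - p2[0], v - p2[1]))
    return sum(errs) / len(errs)

# hypothetical numbers: 192x192 eye image, ~140 px focal length
centers_3d = [(1.0, -0.5, 40.0), (2.0, 0.0, 41.0)]   # mm, eye-camera frame
detected = [(99.0, 94.0), (103.0, 96.0)]             # px, from the 2D detector
print(mean_reprojection_error(centers_3d, detected, 140.0, 96.0, 96.0))
```

In practice the optimizer would adjust the eyeball parameters (e.g. sphere center) to drive this error down over the whole input sequence, rather than evaluate it once.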
Thanks, Gabe, I appreciate the detailed answer. I might ping you again about your student's progress in refitting. If this provides a real improvement, it would be a big service to the community.