👓 neon


user-ead788 04 January, 2023, 22:46:38

Hello PupilLabs! I was wondering if your Neon headset would work with an embedded system running an Android OS?

marc 05 January, 2023, 08:37:59

Hi @user-ead788! Out of the box it will initially only work with the OnePlus 8. Making it work with another embedded Android OS is possible in theory (depending on some details), but would require a lot of custom effort. If you have a largish application you think might be worth the effort and would want to collaborate with us, feel free to reach out to [email removed] to discuss the details!

user-ec0446 06 January, 2023, 22:18:43

What is the actual release date for the Neon?

marc 06 January, 2023, 22:39:11

The release is set for February, but the precise date is not quite set yet. As soon as it is, we will announce it in this channel!

user-ec0446 07 January, 2023, 21:48:42

Is it possible to demo the Neon or the Invisible?

nmt 08 January, 2023, 04:42:02

Hi @user-ec0446. Please reach out to info@pupil-labs.com to request a demo!

user-5c56d0 09 January, 2023, 01:16:07

@marc @nmt Dear sir, thank you for your help. Could you please tell me the approximate arrival time of the Neon in Japan? If I order the Neon this week, when will it arrive in Japan (my location)? The website states that products will be shipped in order from February. Due to my university's budget execution at the end of the fiscal year, there could be a problem if the arrival time is too late.

user-5d299a 11 January, 2023, 11:39:02

Hello there. I am looking for an eye-tracking system suitable for two things: 1) dyadic eye-tracking experiments (e.g., measuring eye contact etc.), and 2) co-registration with EEG (BioSemi), mainly for fixation analysis (not for eye-movement correction!). I assume all of your products should be suitable for the former, but I wondered whether you have any solution for co-registration with EEG in any of your products?

user-349f2c 12 January, 2023, 09:05:12

Depending on how you record the EEG, I would suggest using Lab Streaming Layer (LSL) to synchronise the EEG and eye-tracking data.
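
For anyone wanting to try that route, here is a minimal sketch of bridging gaze data into LSL. It assumes the `pylsl` package and the Pupil Labs real-time API package (`pupil-labs-realtime-api`); the discovery and receive calls follow their documented simple API, but verify the names against your installed versions.

```python
# Minimal sketch: forward gaze samples from a Pupil Labs device into an
# LSL stream so they can be synchronised with EEG on the LSL clock.
from pylsl import StreamInfo, StreamOutlet, local_clock
from pupil_labs.realtime_api.simple import discover_one_device

def main():
    device = discover_one_device()  # find the companion device on the LAN

    # 2-channel stream: normalised gaze x/y, irregular sampling rate.
    info = StreamInfo(name="pupil_gaze", type="Gaze", channel_count=2,
                      nominal_srate=0, channel_format="float32",
                      source_id="pupil_gaze_bridge")
    outlet = StreamOutlet(info)

    try:
        while True:
            gaze = device.receive_gaze_datum()  # blocks until the next sample
            # Stamp with the local LSL clock; the offset to the device clock
            # can be estimated separately if sub-ms alignment is needed.
            outlet.push_sample([gaze.x, gaze.y], local_clock())
    finally:
        device.close()

if __name__ == "__main__":
    main()
```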

user-8619fb 11 January, 2023, 21:48:10

Can't wait for the neon to be out! Love the bare metal product for DIYs!

user-8619fb 11 January, 2023, 21:50:37

Any idea on the cost of the bare metal or not yet?

nmt 12 January, 2023, 11:48:37

Hi @user-8619fb 👋. Thanks for your feedback! We're also pretty excited, and the excitement is growing the more we play with and test out the new modules and frames. I can't report a price for the bare metal just yet, but keep an eye out for the announcement on here!

user-5ef6c0 12 January, 2023, 21:14:57

Hello! We are considering buying the Neon but are not sure what the main differences are compared with the Pupil Invisible, particularly in terms of the data-analysis pipeline. Is there any resource you can share with us to have a more informed discussion amongst ourselves?

nmt 13 January, 2023, 18:13:33

Hey @user-5ef6c0 👋. Neon recordings are compatible with Pupil Cloud, and the existing enrichments will run just fine. Check out this page for more information: https://docs.pupil-labs.com/invisible/enrichments/#what-are-enrichments Of course, Neon has a totally different form factor compared to Invisible, offering more flexibility when tailoring to different use cases, and it has a more optimal scene-camera position, not to mention improved accuracy. If you would like a demo and Q&A session to help you make an informed decision, feel free to reach out to [email removed]

user-c1e127 20 January, 2023, 13:42:39

Are you guys developing prescription lenses for Neon, or should we not expect those this year? And would it be possible to get a comparison sheet between Neon and Invisible?

nmt 23 January, 2023, 12:11:53

Hey @user-c1e127. Yes, prescription lenses are on the horizon! We don't have a comparison sheet just yet, as a lot of Neon's parameters are still being finalised and/or benchmarked. Check out the website for the most up-to-date information: https://pupil-labs.com/products/neon/

user-82bc7c 23 January, 2023, 08:46:15

Hi there, I understand that the Neon provides data on pupil size that, in principle, can be based on a 3D model of the eye as in the Core. (Note: this assumption is based on the answer to the question "Can I use the Neon module with the Pupil Core Software?" in the FAQ). How does the Neon's performance compare with that of the Core? I imagine there must be differences, given that the model was developed for the Core's geometry? How does the native pupil-size estimation of the Neon compare to the performance of the 3D-modeled pupil size? How robust is it to changes in gaze angles (see https://docs.pupil-labs.com/core/best-practices/#pupillometry)?

user-480f4c 23 January, 2023, 16:00:37

Hey @user-82bc7c! You are correct that Neon's pupillometry can be based on a 3D model of the eye, as with Core. This is because you can run the Core pipeline with Neon. This model is agnostic to eye-camera positions, so performance will be comparable to using the Core headset, despite their different geometries. Neon's native ML-based pupillometry is based on the same assumptions as Core with respect to the geometry and optics of the eye. We thus expect it to be as gaze-angle independent and, overall, as accurate as the Core pipeline. Note that we're still working on the implementation, so we don't have final figures to report just yet.
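
For reference, running the Core pipeline on eye video roughly looks like the sketch below. It assumes the open-source `pupil-detectors` and `pye3d` packages; the focal length, resolution, and file name are placeholders, not Neon's actual intrinsics.

```python
# Minimal sketch of the Core pupillometry pipeline (2D ellipse detection
# followed by pye3d model fitting), applied frame by frame to eye video.
import cv2
from pupil_detectors import Detector2D
from pye3d.detector_3d import CameraModel, Detector3D, DetectorMode

camera = CameraModel(focal_length=140.0, resolution=(192, 192))  # placeholder intrinsics
detector_2d = Detector2D()
detector_3d = Detector3D(camera=camera, long_term_mode=DetectorMode.blocking)

video = cv2.VideoCapture("eye0.mp4")  # hypothetical eye-video file
fps = video.get(cv2.CAP_PROP_FPS) or 200.0
frame_index = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 2D ellipse fit, then feed the datum to the 3D eye-model fitter.
    datum_2d = detector_2d.detect(gray)
    datum_2d["timestamp"] = frame_index / fps
    datum_3d = detector_3d.update_and_detect(datum_2d, gray)

    # diameter_3d is the modelled pupil diameter in mm.
    print(datum_3d["diameter_3d"], datum_3d["confidence"])
    frame_index += 1
```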

user-c1e127 23 January, 2023, 12:13:58

I am pre-ordering Neon this week. I hope everything works out well. Excited to see the new device. 🙂

nmt 23 January, 2023, 12:17:02

Great to hear! We're excited to start shipping. It would be worth mentioning that you're interested in prescription lenses when you make the order. You can either add that to the notes form if pre-ordering on our website, or contact sales@pupil-labs.com directly

user-c1e127 23 January, 2023, 12:18:33

Sure, will definitely mention that. Thank you @nmt

marc 25 January, 2023, 03:49:07

Hi @user-7ff310! Yes, at least initially the 3D pose data will only be available post hoc in Pupil Cloud and not via the real-time API. Down the line we want to start making more and more data streams available in real time from the phone, but we do not have a concrete roadmap for that yet.

All data that is available via the real-time API should be accessible from a second app on the phone.

One option could be to use Neon with the Pupil Core software, which would inherit the limitations of Pupil Core but would provide 3D pose in real time.
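
As a sketch of what that route looks like: Pupil Capture exposes its data over a ZMQ network API, and the pye3d pupil datums (which carry the 3D eyeball pose) can be subscribed to in real time. The snippet below assumes Pupil Capture running locally with the default Pupil Remote port; field names follow the documented datum format but should be checked against your version.

```python
# Minimal sketch: subscribe to Pupil Capture's real-time pupil datums over
# its ZMQ network API. pye3d datums include the fitted eyeball sphere and
# the pupil-circle pose.
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the SUB port of the IPC backbone.
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

# Subscribe to all pupil topics (eye0 and eye1).
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "pupil.")

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.unpackb(payload)
    if "sphere" in datum:  # present for 3D (pye3d) datums
        print(datum["sphere"]["center"], datum["circle_3d"]["normal"])
```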

user-7ff310 26 January, 2023, 18:44:45

Marc, thank you for responding! I already use Pupil Core with the original hardware. However, Neon would provide some big advantages, mainly no need to calibrate and a fully mobile solution. Unfortunately, using it with the Pupil Core software would not justify the investment for me. I will be following the development with the hope that eye pose will be added to the real-time API. Thank you and all the best!

user-8779ef 28 January, 2023, 14:15:16

Hi @user-82bc7c! Gabe Diaz here. The 3D model-fitting stage (pye3d) has to estimate the 3D pupil position on the basis of a sequence of 2D images of the eye and, more specifically, of the ellipse that forms the pupil. The original math for this was described in a publication by Świrski and Dodgson, and later elaborated on by folks at Pupil Labs to account for gaze-dependent refraction at the cornea (https://perceptualui.org/publications/dierkes19_etra.pdf). The output of this model-fitting process is an estimate of the 3D centroid of the pupil in eye-camera space.

It has been a while since I've read that one, but the abstract says that Dierkes et al. used simulated and real imagery to characterize the model's accuracy. My experience with and understanding of the Core algorithm (not Neon's algorithm) is this: the quality of the estimation process is going to be affected by factors like 1) image quality, broadly defined; 2) pupil size in the low-res eye images; 3) the range of eye motion in the input sequence; 4) the distribution of gaze angles in the input sequence; and 5) lots of things I'm not aware of. I don't think the investigation by Dierkes et al. speaks to the role of the factors I listed, but I could be wrong. My impression is that obvious practical limitations have prevented any substantial testing of the model's estimate of eyeball location against a "ground truth" eye location for real human eyes.

I will add that, with the Core, the quality of the 3D model is extremely variable, and this step seems to be the greatest contributor to variability in the final gaze estimates. The precision of those gaze estimates can be greatly improved if you're able to do offline refitting with a long sequence of input imagery, rather than the on-the-fly fitting that the system defaults to. I assume that offers similar improvements to pupil-size estimation. To really get all we can out of that post-hoc refitting, one of my grad students has been playing with an additional offline 3D optimization step to improve the correspondence of 2D pupil locations in the eye image with the "reprojected" 2D pupil locations from the 3D model, but he hasn't seen great improvement yet 😦
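
To make the reprojection idea concrete, here is a toy sketch of the error term involved, under a simple pinhole camera model. All names are illustrative; a real refit would compare full ellipses rather than centres and optimise over the eye-model parameters.

```python
# Toy sketch of the reprojection-error idea described above: project the
# 3D pupil-circle centre from a pye3d datum back into the eye image and
# compare it to the 2D detector's ellipse centre.
import numpy as np

def project_point(point_3d, focal_length, resolution):
    """Pinhole projection of a camera-space 3D point (mm) to pixel coordinates."""
    x, y, z = point_3d
    cx, cy = resolution[0] / 2.0, resolution[1] / 2.0
    return np.array([focal_length * x / z + cx,
                     focal_length * y / z + cy])

def reprojection_error(datum_3d, datum_2d, focal_length, resolution):
    """Pixel distance between the reprojected 3D pupil centre and the 2D ellipse centre."""
    reprojected = project_point(datum_3d["circle_3d"]["center"],
                                focal_length, resolution)
    detected = np.asarray(datum_2d["ellipse"]["center"])
    return float(np.linalg.norm(reprojected - detected))

# A residual like this, summed over frames, is what the offline 3D
# optimisation step would minimise.
```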

user-82bc7c 23 February, 2023, 20:55:06

Thanks, Gabe, I appreciate the detailed answer. I might ping you again about your student's progress in refitting. If this provides a real improvement, it would be a big service to the community.

End of January archive