🀿 neon-xr


user-c9caf9 04 December, 2024, 03:00:34

Hi, when the calibration is done, the Unity app saves the matrix to the configs.json file. Does this matrix use Unity's left-handed coordinate system or OpenCV's right-handed one? I have the same question for the coordinates/vectors that appear in Player.log and Player-prev.log.

I want to know this because I implemented the same algorithm in Python and use the data from the log file for testing. The matrix I got from it already makes the eye tracker work correctly (at least it looks correct when I visualize the gaze point), but I want to do some further validation and make sure all the steps are correct.

nmt 06 December, 2024, 10:37:21

Hi @user-c9caf9! I'll step in briefly here for Rob. It's the Unity coordinate system!
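
For reference, if you want to cross-check the saved matrix against a Python reimplementation: assuming the usual conventions (OpenCV camera coordinates are right-handed with x right, y down, z forward; Unity is left-handed with x right, y up, z forward), moving a 4x4 transform between the two amounts to conjugating it with a y-axis flip. The sketch below uses hypothetical helper names and is not part of the Neon XR package.

```python
# Minimal sketch (hypothetical helpers): convert a 4x4 rigid transform between
# Unity's left-handed convention (x right, y up, z forward) and OpenCV's
# right-handed camera convention (x right, y down, z forward) by conjugating
# with a y-axis flip. The flip matrix is its own inverse.
import numpy as np

FLIP_Y = np.diag([1.0, -1.0, 1.0, 1.0])

def unity_to_opencv(T_unity: np.ndarray) -> np.ndarray:
    """Re-express a 4x4 transform given in Unity coordinates in OpenCV coordinates."""
    return FLIP_Y @ T_unity @ FLIP_Y

def opencv_to_unity(T_cv: np.ndarray) -> np.ndarray:
    """Inverse conversion (identical, because the flip is an involution)."""
    return FLIP_Y @ T_cv @ FLIP_Y

if __name__ == "__main__":
    # Example: identity rotation with a 10 cm translation along Unity's +y (up).
    T = np.eye(4)
    T[1, 3] = 0.10
    print(unity_to_opencv(T))  # translation flips to -0.10 along OpenCV's y (down)
```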

user-386d26 04 December, 2024, 21:16:37

Hi there, could I get a step-by-step guide for running the mount calibration on a Meta Quest 3? I am using the Quest 3 add-on module from Pupil Labs, so if there is a config.json that works for everyone using that setup, that would also be great.

But I'm asking since I've been trying to get the MRTK3 calibration scene to run both over Quest Link and as an APK I sideloaded onto the Quest. So is there a recommended procedure for someone who is using a Quest? Typically the way I'm integrating my existing Unity app with the Quest is via OpenXR, so I'm unfamiliar with what it takes to run an MRTK3 setup on the Quest.

nmt 06 December, 2024, 10:39:42

Hey @user-386d26! Can I just confirm, have you been able to successfully deploy the sample project or not?

user-a97d77 11 December, 2024, 09:25:59

Hi, what do "tl x [px]", "tr x [px]", "br x [px]", and "bl x [px]" mean in "surface_positions.csv"? What is the maximum value of these pixel coordinates? And is the position relative to the cropped 2D surface?

Chat image

user-d407c1 11 December, 2024, 09:37:08

Hi @user-a97d77! 👋 These parameters refer to the surface position, specifically the position of the top-left (tl), top-right (tr), bottom-right (br), and bottom-left (bl) corners in scene camera coordinates.

You can find more detailed information in the documentation.
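
As an unofficial aid for sanity-checking those values: the sketch below assumes the column names quoted above (plus matching "… y [px]" columns) and that the coordinates are pixels in the scene camera image rather than in the cropped surface.

```python
# Unofficial sketch: inspect the surface corner columns in surface_positions.csv.
# Assumes the column names quoted above plus matching y columns, and that the
# values are pixel coordinates in the scene camera image (not the cropped surface).
import pandas as pd

df = pd.read_csv("surface_positions.csv")
corners = ["tl", "tr", "br", "bl"]

row = df.iloc[0]  # first detection, as an example
for c in corners:
    x, y = row[f"{c} x [px]"], row[f"{c} y [px]"]
    print(f"{c}: ({x:.1f}, {y:.1f}) px in the scene camera image")

# Values are typically bounded by the scene camera resolution, although corners
# of a partially visible surface may fall outside the frame.
x_cols = [f"{c} x [px]" for c in corners]
print("x range across recording:", df[x_cols].min().min(), "to", df[x_cols].max().max())
```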

user-8bd1e2 11 December, 2024, 18:48:34

hi, quick question! How does Pupil Neon's accuracy/data output compare to Pupil Core?

I've worked with Core in the past, and I'm looking at Neon for AR/VR data that's more transparent/accurate than the Meta Quest Pro's internal eye tracker.

user-480f4c 12 December, 2024, 12:57:40

Hi @user-8bd1e2! Pupil Core is a calibrated system that requires a controlled environment like a lab (e.g., constant lighting and no headset slippage) to achieve the best results. Using the 3D detection pipeline (more info on the gaze mapping pipeline can be found here), you can achieve an accuracy of ~1–2°.

For a more versatile solution, we'd recommend NeonXR (our VR solution for Neon). I also recommend exploring the NeonXR docs.

Neon employs deep-learning-based gaze estimation, which is calibration-free and highly robust in dynamic environments. It connects to a smartphone via USB, with the phone serving as both the power source and recording unit, ensuring full portability. In terms of accuracy, you can expect 1.8° uncalibrated (and 1.3° with offset-correction). See also the full list of Neon's specs here: https://pupil-labs.com/products/neon/specs

You can also find a high-level description as well as a thorough evaluation of the accuracy and robustness of the algorithm in our white paper: https://zenodo.org/records/10420388
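
To make the "offset-correction" figure above concrete: the general idea is to estimate a constant offset from samples collected while the wearer fixates a single known target, and then apply it to subsequent gaze data. The snippet below is a generic illustration with synthetic data, not Pupil Labs' implementation.

```python
# Generic illustration of a constant offset correction, with synthetic data:
# estimate a fixed 2D offset from samples collected while fixating one known
# target, then apply it to subsequent gaze samples.
import numpy as np

def estimate_offset(gaze_px: np.ndarray, target_px: np.ndarray) -> np.ndarray:
    """Constant pixel offset that best aligns gaze samples with the fixation target."""
    return np.mean(target_px - gaze_px, axis=0)

rng = np.random.default_rng(0)
target = np.array([800.0, 600.0])                       # known fixation target [px]
bias = np.array([15.0, -10.0])                          # simulated constant gaze bias
gaze = target + bias + rng.normal(0, 3, size=(100, 2))  # noisy gaze samples [px]

offset = estimate_offset(gaze, np.tile(target, (100, 1)))
corrected = gaze + offset
print("mean residual before [px]:", np.linalg.norm(gaze - target, axis=1).mean())
print("mean residual after  [px]:", np.linalg.norm(corrected - target, axis=1).mean())
```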

user-8bd1e2 12 December, 2024, 15:14:48

this is really useful, thank you!

have there been any publications demonstrating how Neon-XR can be used in real time with Unity?

I'm ideating on gaze-dependent experimental manipulations

user-480f4c 16 December, 2024, 07:40:00

Hi again @user-8bd1e2 - You can find publications using our products in our publications list. Please note that, as NeonXR was launched just a year ago, there may currently be few publications specifically using NeonXR.

We maintain a Neon XR package for Unity. It is a real-time client implementation in Unity and an example implementation of transforming the real-time eye-tracking data into the virtual coordinate system. The package provides examples showing how to easily utilize eye-tracking data within Unity. You can find more details here: https://docs.pupil-labs.com/neon/neon-xr/neon-xr-core-package/
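
For prototyping a gaze-contingent manipulation outside Unity, the real-time stream that, to my understanding, the Neon XR package also consumes on the headset side can be read from Python. The sketch below assumes the pupil-labs-realtime-api package and a Neon device on the local network; the region of interest and the triggered "manipulation" are purely hypothetical placeholders.

```python
# Sketch of a gaze-contingent loop read from Python, assuming the
# pupil-labs-realtime-api package (pip install pupil-labs-realtime-api)
# and a Neon device on the same network.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device(max_search_duration_seconds=10)
if device is None:
    raise RuntimeError("No Neon device found on the local network")

try:
    for _ in range(300):  # roughly a few seconds of gaze samples
        gaze = device.receive_gaze_datum()
        # gaze.x / gaze.y are scene-camera pixel coordinates
        if 700 <= gaze.x <= 900 and 500 <= gaze.y <= 700:
            print("gaze inside region of interest at", gaze.timestamp_unix_seconds)
finally:
    device.close()
```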

user-8bd1e2 12 December, 2024, 15:18:15

also, is there any validation work you could show me comparing NeonXR to the Meta Quest Pro's internal eye tracker?

user-480f4c 16 December, 2024, 09:34:38

To the best of my knowledge, there's no validation work comparing NeonXR to the Meta Quest Pro's internal eye tracker.

Please note that NeonXR uses the same technology as Neon, that is, a deep-learning approach which provides calibration-free and slippage-invariant gaze data in any environment. You can refer to the specs and Neon's accuracy test report for more details (see my earlier message: https://discord.com/channels/285728493612957698/1187405314908233768/1316750822264148039)

user-979f0f 13 December, 2024, 12:16:52

Hi folks. I understand that the Neon produces only a cyclopean gaze vector. Are monocular gaze vectors on the roadmap?

user-0ec495 13 December, 2024, 12:17:07

...and, if so, 2025? 🙂

user-571883 13 December, 2024, 12:17:39

Let me add some detail:

  • I'm interested in both the L and R monocular gaze estimates, not just one of them. The announcement made on 11/21 suggests I can only get one monocular signal at a time.
  • I'm interested in getting this data in VR / Unity

nmt 13 December, 2024, 12:24:04

Hey @user-8779ef! I've moved your message here since you're interested in VR.

Regarding gaze, you're right. Neon provides a 2D gaze estimate, which is also expressed as a cyclopean gaze ray (elevation and azimuth [deg]). Depending on the selected gaze mode in-app, Neon's gaze can be generated either from binocular eye image pairs or monocularly from the left or right eye image.

Currently, you can't obtain both left and right eye monocular gaze estimates simultaneously. You might want to suggest this in 💡 features-requests.

Of note: We also generate eye state data with Neon, allowing you to access the optical axes of both eyes. It would probably be feasible for you to implement a custom calibration to transform these into visual axes within a VR environment.
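
To illustrate the kind of custom calibration meant here (a sketch under simple assumptions, not an official procedure): record the optical-axis directions from eye state while the participant fixates a few known targets in the VR scene, compute the corresponding eye-to-target directions in the same coordinate frame, and fit a per-eye rotation that maps one onto the other, e.g. with a Kabsch-style least-squares fit.

```python
# Sketch of a per-eye calibration from optical axes (from Neon's eye state)
# to visual axes: fit a rotation R such that R @ optical_axis ≈ target_direction
# using a Kabsch-style least-squares fit. All directions are assumed to be unit
# vectors expressed in the same (headset) coordinate frame.
import numpy as np

def fit_rotation(optical_axes: np.ndarray, target_dirs: np.ndarray) -> np.ndarray:
    """Least-squares rotation mapping optical-axis directions onto target directions (N x 3)."""
    a = optical_axes / np.linalg.norm(optical_axes, axis=1, keepdims=True)
    b = target_dirs / np.linalg.norm(target_dirs, axis=1, keepdims=True)
    H = a.T @ b
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Synthetic example: recover a known ~2 degree offset (roughly the order of the
# angle between optical and visual axis) from noisy, mostly forward-looking samples.
rng = np.random.default_rng(1)
angle = np.radians(2.0)
R_true = np.array([[np.cos(angle), 0, np.sin(angle)],
                   [0, 1, 0],
                   [-np.sin(angle), 0, np.cos(angle)]])
optical = rng.normal(size=(20, 3))
optical[:, 2] = np.abs(optical[:, 2]) + 3.0  # keep directions roughly forward
optical /= np.linalg.norm(optical, axis=1, keepdims=True)
targets = optical @ R_true.T + rng.normal(0, 0.002, size=(20, 3))

R_est = fit_rotation(optical, targets)
print("recovered offset [deg]:", np.degrees(np.arccos((np.trace(R_est) - 1) / 2)))
```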

user-8779ef 16 December, 2024, 19:17:47

Thank you, @nmt. I did submit that request. Is it possible to use each monocular mode post hoc, in serial, thereby generating the two monocular tracks for the same VR session?

nmt 17 December, 2024, 10:06:32

No problem. No, this isn't possible with the current pipeline. The monocular gaze is only computed in real time on the Companion device.

That said, do you think the eye state measurements could be useful for you?

user-0c6521 18 December, 2024, 04:42:48

I'm curious how I can purchase an add-on eye-tracking module for the Vive Pro, because I can't seem to find where to buy that accessory.

user-d407c1 18 December, 2024, 07:17:48

Hi @user-0c6521 👋! The Pupil Core VR Add-ons, including the HTC Vive Add-on, have been deprecated in favor of NeonXR.

Neon is a much more flexible and robust solution: it's modular, so you can use it with various headsets or even in a regular frame. Plus, its deep learning approach makes it more robust and easier to use.

We don't have an off-the-shelf frame for the HTC Vive with NeonXR, but we've made the module geometries, PCB board, and a frame design open source so you can easily build your own mount.

That said, if you really want the old HTC Vive mounts based on Pupil Core, feel free to reach out to us at sales@pupil-labs.com to inquire.

End of December archive