Hi Pupil Labs, we are currently working on integrating the Neon XR with our Quest 3 headset and are trying to perform the mount calibration, as outlined in your documentation. We have followed all the steps related to your Unity sample project MRTKDevTemplate. However, after opening the MRTKDevTemplate project in Unity, we are unable to locate the expected PupilLabs folder under Assets in the Project window. Consequently, we cannot find the PL_Calibration scene, which is necessary for mount calibration. Do you have any suggestions to solve this issue? Thanks in advance!
Otherwise, if you wish to practice building it, then make sure that you are using the neon
branch of our fork of the MRTK3 GitHub repository.
Hi @user-665a35 , it is not necessary to build the MRTK3 app yourself to run the PL_Calibration
scene. You can save some effort by running the pre-built APK provided in this part of our Documentation.
I'm working with the Neon in VR on Quest 3. We have a constant gaze offset, possibly due to the wearer having prescription lenses. I see that the old MRTK Unity project has a calibration scene, but it's very old (3 years) and doesn't seem to be supported. Is there a more up to date offset calibration scene in Unity that we can apply?
Hi @user-6a0793 , you can certainly start from the Neon XR Core Package. The MRTK3 Examples are meant to show one way to integrate with an XR SDK. The choice of SDK is up to you.
To better help, may I ask if you mean:
To clarify, for Neon XR, you do a mount calibration, not an eyetracking calibration. The old scene that you refer to is an eyetracking calibration scene.
Since you are starting from the Neon XR Core Package, then you want to run the PL_Calibration
scene and use the resulting mount calibration file in your code.
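For reference, applying that mount calibration file in your own code boils down to transforming Neon's gaze ray from module space into headset space. Here is a minimal sketch in Python; note that the field name `rotation_y_deg` and the single-axis rotation are illustrative assumptions for brevity, not the actual config.json schema (which carries a full translation and 3D rotation):

```python
import math

def euler_y_to_matrix(deg):
    """3x3 rotation matrix about the y-axis (single axis only, for brevity)."""
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def apply_mount_calibration(gaze_dir, calib):
    """Rotate a unit gaze direction from module space into headset space."""
    m = euler_y_to_matrix(calib["rotation_y_deg"])  # hypothetical field name
    return [sum(m[i][j] * gaze_dir[j] for j in range(3)) for i in range(3)]

# A y-rotation of 355 degrees (i.e. -5 degrees) tilts the forward ray
# slightly off-axis:
corrected = apply_mount_calibration([0.0, 0.0, 1.0], {"rotation_y_deg": 355.0})
```

The real calibration additionally offsets the ray origin by the mount's translation, so treat this strictly as a conceptual outline.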
Additionally, if your participant rather needs a Gaze Offset Correction, you can do this before even putting on the headset.
We are using NeonXR Core in Unity 6000.0.51f1. We did not include MRTK3 in our Unity project.
Our preference is to start with the simplest implementation and build up from there using Neon XR Core. However, if we need to go with the MRTK3 template project, we can try that. Any advice you can give to properly calibrate offsets with Neon XR Core would be appreciated. Thanks!
Hey Pupil Labs, I'm getting Addressables build errors (Object reference not set to an instance of an object) when building my current PC VR project. This is likely not directly caused by the Neon XR package (although Neon XR is the only reason I use Addressables in my current project) but if you can give some tips or advice on how to debug this, that would be welcome! More detailed problem description can be found here: https://discussions.unity.com/t/how-to-debug-addressables-object-reference-not-set-to-an-instance-of-an-object-build-error/1663719 . If you are not able to provide help on this case, I can understand.
Hi @user-ed9dfb , can you give these steps a try and let us know how it goes: https://discord.com/channels/285728493612957698/1351226619641073755/1355143977137082580
Thanks! This seems to have solved the issue!
Hello! One of our Quest 3 Mounts died during a field trial (likely caused by sweat). My issue is, at the moment, that a colleague of mine didn't keep the printed instructions on how to install a different mount and I couldn't find any online
Hi @user-1391e7 👋 ! It’s highly unlikely that the module was damaged by sweat, but we’d like to take a closer look to be sure.
Could you open a ticket so we can follow up and investigate thoroughly?
do you maybe have these instructions available somewhere?
sure, no problem
but it's not the module, it's the mount, just to be clear
I guess the mount is also a module 🙂
hey, i hope i'm in the right place. i was looking into vr eye tracking, specifically for the meta quest 3, and i saw you guys listed. i wanted to know: is that what this is for, or is this some kind of dev thing that's unrelated to what i'm looking for? i just play games on vr chat and such, and i've been looking into addons to add eye and possibly face tracking in general
Hi Pupil Labs, we are currently working on integrating the Neon XR with our Meta Quest 3 headset and are trying to perform the mount calibration as outlined in your documentation. We have successfully built and deployed your Unity sample project MRTKDevTemplate (specifically the PL_Calibration scene) to the Quest 3. We can confirm that eye tracking is functional within the application. However, we are encountering an issue where we cannot interact with the scene using either hand tracking or the Quest 3 controllers. The interactive elements within the PL_Calibration scene do not respond to any input. Do you have any suggestions to solve this issue? Thanks in advance!
Hi @user-53797a , you can certainly ask questions about our eye trackers here.
We do indeed have an eye tracking system, called Neon, that fits into the Meta Quest 3.
Neon is a research-grade eye tracker that's robust, reliable & easy-to-use. It is being applied in many scientific research contexts, as well as commercial projects.
Neon can work in the context of gaming in VR Chat. But note that it would require development work to get it running via their programmable eye tracking interface. We don't have a turnkey solution for that.
To clarify, we do not offer VR face tracking technology.
Are you planning to do research as well?
i'm not a tech-savvy person, i wouldn't know how to do research. i'm just looking for a turnkey solution to add eye tracking to vrchat
Hi @user-665a35 , using controllers requires extra configuration. Let me know if that is your plan.
Otherwise, make sure you hold your hand far enough out in front of your face for it to be visible to the Quest 3’s sensors. Then, while keeping your hand raised, look at the center of a target and pinch your index finger and thumb. Holding your hand in the upper corners works better.
Thank you very much! Using controllers is not mandatory for us, we actually prefer to use hand tracking. I have tried your previous suggestions, but unfortunately, the issue persists. However, I've observed that the headset successfully detects the pinch gesture to open the Meta menu. This leads me to believe that the pinch action itself might not be the core problem, but I'm not sure.
thx anyways hope it all goes well
You are welcome. Same to you!
Does the same happen when you use our pre-built MRTK3 APK? And, in your build, can you interact with objects, like the coffee cup or piano, by pinch gesture, in the main opening scene?
Nope, when I launch your APK, it doesn't happen; I can pinch and grab objects. However, in my own project, I don't even see the hand outlines, I can only interact via gaze
Would you be able to try building a fresh pull of the neon
branch in a new directory? It is possible that a build setting was accidentally changed.
Also, which version of Unity are you using and which Operating System?
Okay thank you, I will try building a fresh pull... I am using Windows 11 and Unity 2021.3.21f1
Hi, does the Meta Quest 3 frame mount also fit into the Meta Quest 3s headset?
Hi @user-b932f3 , no, it does not. They have different facebox geometries.
If you intend to use that headset, then you can 3D print your own mount.
We have open-sourced the 3D geometry of the Neon module and one of our frames as a starting point.
Hi Pupil Labs, I'm using the Neon eye-tracker with my Quest 3 headset. My question is about the config.json file from the PL_Calibration. If I perform the mount calibration once and create the config.json file, can I keep using that same file if I take the Neon off the Quest 3 (maybe to use it somewhere else) and then put it back on the same Quest 3? Or do I have to do the whole PL_Calibration again every time I remount it? Thanks in advance!!
Hi @user-665a35 , no, you don’t have to do that. The config.json
file is re-applied when the Unity app is started.
If you will be using the config.json
file in your own code, then just make sure to follow the steps here, making note of the part about the app’s “persistent Data Path”.
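As a rough illustration of what "re-applied on startup" means: a loader in your own code would look for config.json under the persistent data path and fall back to a default when it is absent. This is only a sketch, and the keys used below are placeholders rather than the real schema:

```python
import json
import os
import tempfile

def load_mount_calibration(persistent_dir):
    """Read config.json from the app's persistent data path if present;
    otherwise fall back to a default mount calibration.
    The keys below are illustrative placeholders, not the real schema."""
    path = os.path.join(persistent_dir, "config.json")
    default = {"translation": [0.0, 0.0, 0.0], "rotation": [0.0, 0.0, 0.0]}
    if not os.path.isfile(path):
        return default
    with open(path) as f:
        return json.load(f)

# Self-check, with a temporary directory standing in for the persistent path
with tempfile.TemporaryDirectory() as d:
    fallback = load_mount_calibration(d)                # no file -> default
    with open(os.path.join(d, "config.json"), "w") as f:
        json.dump({"translation": [0, 0, 0], "rotation": [0, 355, 0]}, f)
    loaded = load_mount_calibration(d)                  # file -> custom values
```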
Yes, we are wearing glasses while using the Quest 3 headset with Neon mounted. I suspect that this is causing a large gaze offset, which gets worse around the edges of the FOV (due to refraction of the lenses). I'm hoping there is a calibration that can account for this, per wearer, that we can measure from within unity to streamline our workflow.
We've tried gaze offset correction, and it won't work for us because the Neon's camera is occluded when mounted in the Quest 3.
We can perform a PL_Calibration with some effort to bring it over from the MRTK3 example, and I'll try that today. I'm thinking that we should customize PL_Calibration to measure gaze offset as a gradient across the FOV. That way I can correct for both refraction of the wearer's prescription lenses and mounting offset. I'm thinking of using the same scene as PL_Calibration, but modifying the code to produce a calibration map (gradient over FOV) instead of a static offset translation (position) and rotation (which is what PL_Calibration provides without modification). Do you have any thoughts on this?
You can actually run Gaze Offset Correction while wearing the VR headset. In fact, the participant can do it themselves if you briefly turn on a gaze pointer, such as that provided in our MRTK3 Example (open the Hand Menu to find the option). I personally find this to be comfortable and efficient.
I hope this clarifies things. Let me know if you have any more questions!
Hi @user-6a0793 , thanks for following up. I'll clarify a few points about using Neon and Neon XR with corrective eyewear.
Wearing third-party glasses with Neon is not a supported use case. The frames and lenses of the glasses can obstruct the eye cameras' view. This can cause unpredictable errors in the gaze signal that cannot be resolved with a calibration.
For this reason, our recommendation for users who require vision correction is the 'I Can See Clearly Now' frame, which uses interchangeable prescription lenses. This ensures the module has an unobstructed view of the wearer's eyes. You can in principle use the 'I Can See Clearly Now' frame in a VR headset.
Here are a few notes on the XR mount calibration process and offset correction feature:
Mount Calibration: For standard use with a fixed mount (like our Neon-XR mount), this is a one-time procedure that is automatically reapplied on startup. However, if you use the 'I Can See Clearly Now' frame inside a VR headset, you will need to run the mount calibration each time, as its position relative to the headset will likely change with each use.
Gaze Offset Correction: This feature applies a linear offset to correct for person-specific physiological factors that can cause systematic errors. These errors are described here: https://discord.com/channels/285728493612957698/1047111711230009405/1283848621648908424 You can find more details in our Neon Accuracy Test Report, and this thread about radial-based offset correction: https://discord.com/channels/285728493612957698/1281140902760550430
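As a toy illustration of the "linear offset" just described: the correction amounts to a constant shift applied to every gaze sample upstream of the VR app. The 2D normalized coordinates here are a simplification for illustration:

```python
def apply_gaze_offset(gaze_xy, offset_xy):
    """Apply a constant, person-specific shift to a 2D gaze point.
    This mimics, in simplified form, what the Companion app's Gaze Offset
    Correction does inside Neon's gaze pipeline, before data reaches Unity."""
    return (gaze_xy[0] + offset_xy[0], gaze_xy[1] + offset_xy[1])

# Correct a systematic shift of (+0.02, -0.02) in normalized coordinates:
shifted = apply_gaze_offset((0.48, 0.52), (0.02, -0.02))
```

Because the offset is constant, it cannot model errors that vary across the field of view; those would need something like the per-region approach discussed elsewhere in this thread.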
Hi Rob, here's the result of the PL_Calibration. I've also attached the config.json. I'm getting better accuracy in the center of the FOV after adjusting the gaze offset using the companion app, but the accuracy is bad toward the edges of the FOV. You can see this in the PL_Calibration screenshot, which I assume is recording ideal (green) versus actual (red) gaze samples over the FOV. I'm wearing contact lenses now while using the Neon on Quest 3, so there is no interference from frames or lenses. Is there any other way we can get better accuracy?
Hi @user-6a0793 , thanks for the update.
To be sure I understand correctly, is the video showing how your eyes are controlling the green ball? It rather seems like it is being controlled by your right hand?
May I ask if accuracy is similar when you use the default provided Quest 3 mount calibration?
With respect to your mount calibration in that image, the green and red dots are essentially within the expected variability, although you should be able to improve it, if you feel our default file is not sufficient.
You can do the following:
You can also have someone else run the mount calibration and see if the results improve and then simply use that mount calibration file going forward.
Otherwise, please note that all eyetrackers show reductions in accuracy as you get into the periphery (i.e., the edges of the field of view). You can see an assessment for Neon's accuracy over the field of view in our Accuracy Test Report.
Hi! I'm using Neon with the Quest 3 and trying to understand how the Neon XR plugin in Unity extracts gaze data and uses the "mount calibration." (I followed all the steps here: https://docs.pupil-labs.com/neon/neon-xr/neon-xr-core-package/). However, I'm running into several issues:
1) When using the "NeonXR" prefab, which creates gaze rays from what seems to be the "combined" data, the output is incorrect. Is further calibration needed? Could the issue be that the mount calibration wasn’t done properly?
2) The connection works intermittently. Sometimes Unity manages to connect to Neon, while other times—even with the correct IP, port, etc., or despite the device’s "auto-identification"—it fails to connect. In fact, in the Neon Companion app under "Connected devices," it shows "0." However, when I connect to the device via a Python script using: device = Device(address=ip, port="8080")
the device is detected, and the app confirms it by showing "1" under "Connected devices." What’s the issue? Why can’t Unity establish a stable connection?
Thanks a lot!
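One way to narrow this down is to probe the phone directly over HTTP, outside both Unity and the Python client. The Neon Companion app serves a REST status endpoint that the realtime Python client uses under the hood; the sketch below assumes the `/api/status` path documented for that API, with error handling simplified:

```python
import json
import urllib.request

def status_url(ip, port=8080):
    """REST status endpoint served by the Neon Companion app."""
    return f"http://{ip}:{port}/api/status"

def probe_device(ip, port=8080, timeout=2.0):
    """Return the parsed status JSON if the phone answers, else None.
    If this succeeds while Unity cannot connect, the network path is fine
    and the problem is likely on the Unity side (and vice versa)."""
    try:
        with urllib.request.urlopen(status_url(ip, port), timeout=timeout) as r:
            return json.load(r)
    except OSError:
        return None

# e.g. probe_device("192.168.1.42") -> dict on success, None when unreachable
```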
Hi @user-a4aa71 , if using the Neon XR Prefab in your own code and you have decided to use a custom mount calibration rather than our default one, then you need to transfer “config.json” file to your app. Please see here for the details.
Otherwise, you might need a Gaze Offset Correction for that wearer.
Regarding the connection, what kind of network are you using? Is it a university or work WiFi?
Yes I've followed that documentation. I'm using the hotspot from my phone
But if i'm using it inside the hmd, how can I perform the offset correction?
You can load our MRTK3 Example and turn on the Gaze Pointer setting, found in the MRTK Hand Menu. Then, with the gaze offset correction interface open in the app, simply press while wearing the headset and move your finger. The pointer will update in real-time. It is quite smooth once you have tried it.
You can also do a normal gaze offset correction without the headset. It applies equally to real world and VR. You don’t need a separate offset correction for each scenario. It is person specific. Not headset specific. It is applied at the level of Neon’s gaze estimation pipeline, before the gaze data are sent to the headset.
Ok I will try, thanks! And what about the strange connection behaviour?
As a test, can you give it a try with a dedicated, local WiFi router?
The strange thing is that even though it's connected to my hotspot's Wi-Fi, my PC can connect via a Python script, but Unity can't... Anyway, sure, I’ll try with a dedicated router. Thanks!
Yes, it is just for a quick test as comparison to help narrow down the root cause.
Another thing: if the world camera is occluded when the Neon is mounted on the Quest, what would be the correct way to perform this gaze correction?
You can either:
- Wear the VR headset while the MRTK3 app is running. Make sure to enable the VR app's gaze pointer in the MRTK3 hand menu. Then, follow the method here -> https://discord.com/channels/285728493612957698/1187405314908233768/1395759913950511296.
- Do it normally with Neon mounted in a standard frame, as shown here.
I was controlling the pointer with my gaze, trying to look at the center of each box. It looked like the gaze was offset at an angle, maybe because of the PL_Calibration having a y-axis rotation of 355.
Hi @user-6a0793 , thanks for the clarification. I ask, because in that video, the motion of the green ball at about 6 seconds is rather untypical for eye movements, both in smoothness and path formation. Are we sure it was not controlled by the hand there?
If it was controlled by the eyes, do you mean that gaze was offset when you look at the middle boxes and right boxes? Because that looks good and within the variability expected for Neon's accuracy. It only seems when you look to the far left that there is some inaccuracy, and as mentioned previously, all eyetrackers exhibit decreases in accuracy in the periphery. You can see an assessment for Neon's accuracy over the field of view in our Accuracy Test Report.
If you feel that a new mount calibration is necessary in your case, then I would recommend following the tips here: https://discord.com/channels/285728493612957698/1187405314908233768/1395333737528557600. Otherwise, if you still feel there is a gaze discrepancy, then that wearer might need a Gaze Offset Correction.
as a quick follow-up here, when does this offset correction get applied in subsequent sessions? I'm not 100% sure just yet, but could it be that it doesn't get applied until I move back to that screen? at least that's the observed behavior right now on my end, but I've not exhausted all possibilities
Sure. When you say "that screen", do you mean a scene in the MRTK3 Example Project?
The Gaze Offset Correction is set in the Wearer Profile, in the Neon Companion app. It applies at the level of Neon's Gaze Estimation pipeline, before the data is even sent to the Unity VR app. When you have loaded a Wearer Profile with a Gaze Offset Correction, it is automatically applied to all future recordings and all future real-time streamed data, until you change Wearer Profile or reset the correction. It will still apply even when you switch Neon to a different frame/mount.
The idea is that you can bring a participant back to the lab later, load up the Wearer Profile, and simply start collecting data. It is in principle a one-time, person specific setting. It is typically only necessary for a subset of participants.
yes, I meant the change in the wearer profile. interesting, I wonder how I manage that
maybe I just need to close the app once or something
This should not be necessary. To clarify, by "change screen", do you mean:
sorry, I'm not being clear.
what I'm doing, is loading up a built XR application, that connects via neon xr plugin. all that works, I see my gaze move as I look left right up down (albeit it with the offset that I didn't want to have anymore)
then I have someone next to me open up the neon companion app (already running, but here's the interesting bit)
once they click the camera preview screen (where I could make a new offset correction)
the gaze localization suddenly changes
to be more correct
like what I set it to
but before that, I have my little offset error in there
I don't doubt that I somehow created that issue, but I can't tell how yet
thanks for the quick help!
Good morning, I tried using a dedicated router as suggested, but Unity still couldn't connect in any way to the Pupil NEON inside my Meta Quest 3. I tried creating a new project, re-importing all the necessary packages, and following every step in the documentation, but each time I got a message saying the connection was "aborted", even though Unity was attempting to connect to the correct IP and port of my smartphone connected to NEON.
I found this chat in the archive (here's the link: https://chat-archive.pupil-labs.com/logs/neon-xr/2024/04/) where other users reported my exact same issue. You suggested they check the port number in the config.json files, but in my case everything was correct – especially since I had already managed to connect the device before. Out of curiosity, since I had installed the APK (neon-quest3.apk), I tried launching one of the demos, and the eye tracker worked! So right after that, I launched my own scene containing the neon-xr prefab (which just before had failed to connect to NEON) and — "magically" — it worked!
Can you please explain why this happens? What does the installed APK inside the headset do differently? I don’t understand the connection between these things. Thank you very much!
Hi @user-a4aa71 , without knowing exactly how your app has integrated the Neon XR Core Package, I cannot be sure why it acts differently from the MRTK3 Example. If you can share the logs from the Unity console, after you encounter the issue, then I can pass it onto the Neon XR team for review.
Otherwise, to be certain, are you using the latest version of the Neon XR Core Package (commit 2cfd2b6), or at least the version from Dec 18th, 2024 (commit 56c77d9)? That version added some extra functionality to make sure Neon XR worked on as many network types as possible.
Hi, I’m using Neon with the Pico 4 Headset and trying to run the MRTK3 Template project (following the steps here: https://docs.pupil-labs.com/neon/neon-xr/MRTK3-template-project). However, when I build and run the default scene (PL_HandInteractionExamples) I encounter the following issues: I am unable to perform any interaction with the controllers (switching scenes, interacting, etc). Although I can select the Earth and heart objects with my eyes, I am unable to select anything else.
For reference, I found these errors in the logcat that might be relevant:
Error Unity DllNotFoundException: Unable to load DLL 'UnityOpenXR'
Error APxrRuntime : suggestedBindings->interactionProfile == "/interaction_profiles/bytedance/pico4s_controller") is not a supported interaction profile
Error APxrRuntime : suggestedBindings->interactionProfile == "/interaction_profiles/bytedance/pico4s_controller") is not a supported interaction profile
[OpenXRHook.cpp][OXR_CheckErrors][24]: OpenXR error: GetProcAddr(InInstance, "xrStartVibrateByCacheBD") => Result: XR_ERROR_FUNCTION_UNSUPPORTED(-7)
I am running on Unity version 2021.3.21f1. I’m happy to provide more context/information too.
Any help would be greatly appreciated!
Hi @user-fb087b , our fork of the MRTK3 Examples is not built for controller interaction by default. Rather, the expectation is that you interact with the scene via hand gestures, as shown in the video here.
To activate controller support, please see these two messages:
You can use the Pico Unity Live Preview Plugin to ease some of the testing for controller support.
out of curiosity anyone have this work with an apple vision pro?
Hi @user-e91b9c , while we have not explicitly tested it in the Apple Vision Pro, there should in principle be nothing preventing that. You would simply need to build your own mount and possibly use a Swift RTSP package to receive the streamed data.
Hi Rob, thanks so much! That makes a lot of sense!
I’m also running into an issue with eye tracking data not coming through on my headset (debug ray is straight) using the prebuilt APK.
The logs alternate between “[DnsDiscovery] no device responded in time” and “[DnsDiscovery] received response from: 172.20.10.7.”
I'm currently on my phone's hotspot with poor signal (1 bar in the lab), so I suspect it's network-related. Is this common with weak networks? Any other common causes I should check until we get proper network setup?
Thanks!
Hi @user-fb087b , as a comparison, could you test with a dedicated router?