🤿 neon-xr


user-ed9dfb 05 February, 2025, 10:49:17

Hello Pupil Labs, we want to use the Neon with a Quest 3 using a wired setup with the recommended Anker ethernet hub. We find that the usb c cable from the Neon module (to the usb hub) is quite short for a VR experiment where the participant is standing and pointing at objects. Do you provide longer cables or do you have other recommendations for us?

user-f43a29 05 February, 2025, 14:46:56

Hi @user-ed9dfb ! While we do not offer a longer cable, you could maybe try an active USB 2.0 extender cable (https://discord.com/channels/285728493612957698/1047111711230009405/1257325668472066159), but we have not tested any and I cannot make any guarantees. Note that third-party cables can be of variable quality and the longer it is, the more signal quality will degrade.

May I ask what the end goal is? Ultimately, the real-time gaze sample rate will be throttled by Unity's update rate. This is sufficient for interactive applications, but if you share some more details, then we could brainstorm alternative solutions.

For example:

  • If latency is of concern, then with a modern, powerful WiFi router nearby, such as this one, and a corresponding WiFi 6(e) or 7 enabled computer, you could potentially eliminate the need for an Ethernet connection entirely.
  • If sample rate is of concern, then the Neon Companion device is capable of streaming and running a recording in parallel (see here for some tips: https://discord.com/channels/285728493612957698/1187405314908233768/1326498032404922491).

user-ed9dfb 05 February, 2025, 16:10:47

Hey Rob,

Thank you for your quick and helpful reply. I'm currently working on a scientific experiment where a digital avatar speaks a sentence (e.g., "select the apple") and points/gazes at certain objects on a table. The participant then needs to select the indicated object. We want to measure how quickly participants look at and select the target object under varying circumstances (gaze, gesture and sentence used).

I plan to use the Quest 3 in 120 Hz mode, so gaze sampling/logging would indeed be throttled to 120 Hz.

We want to limit latency and signal loss issues as much as possible, which is why we want to use the usb ethernet hub method.

Upon testing, we found that because the usb hub is connected to multiple stiff and/or short cables, the freedom of movement of the standing and pointing participant is limited.

Both the USB extension cable and a dedicated WiFi router might be good solutions. Do you have an idea of the difference in latency and signal reliability between a powerful WiFi router and our current wired setup?

user-f43a29 05 February, 2025, 19:02:50

@user-ed9dfb just to be certain I understand correctly:

Does the experiment involve gaze-based VR interactions or real-time gaze-contingent manipulations of stimuli?

It sounds rather as if Neon XR is primarily being used to collect data "in the background" and then that data will be analyzed offline (i.e., post-hoc, after the experiment has finished)?

With respect to that router, I do not have benchmark numbers for differences in latency between that specific router and an Ethernet connection.

user-ed9dfb 12 February, 2025, 16:08:38

It's a bit in between. I'm using my own calibration process in Unity, and I also calculate in real time which 3D object a participant is looking at each frame and write it to a single CSV file together with all the other relevant per-frame research data. I expect it would be a bit cumbersome in our current workflow to record xyz gaze data offline and then synchronize and calculate the same object gaze data afterwards.

user-386d26 07 February, 2025, 20:46:26

Hey everyone, is it possible to connect to Neon from a Unity application running on a PC and Meta Quest via Quest Link, i.e., a wired connection to the headset? Or does our Unity app need to be built into an APK and sideloaded onto the Quest to be able to connect to the Neon? I'm a bit of a beginner here, so please let me know if there's anything I'm misunderstanding.

user-f43a29 08 February, 2025, 00:27:30

Hi @user-386d26 , yes, this is one of the ways to use Neon XR. Neon XR is in principle not restricted by where your Unity App runs.

You may find this part of the documentation helpful when using a mount calibration in your custom app.

I see you also previously asked about how to load an APK, such as the one we provide for our fork of the MRTK3 Template Project, onto the Quest 3. See here and here for details for the Quest 3.

user-f43a29 12 February, 2025, 16:13:24

I see. I only mention it as a way to have full data fidelity, so full 200 Hz sample rate, without needing to think at all about packet loss or transmission latency. Then, you also have significantly fewer cables.

You can simultaneously run a recording in the Neon Companion app and save the pose of Unity's VR camera over time during the experiment. Then, post-hoc, you can interpolate that to the sample rate of gaze. You then combine the pose of the VR camera with the mount calibration (see here for the file location) and apply that to the gaze data. Then, you have 200 Hz gaze in the VR coordinate system.

Then, you can have an extra component in your Unity program that can "play back" the position of the observer over time, casting the gaze ray into the VR scene.
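
For reference, a minimal sketch of logging the VR camera pose with a timestamp each frame, so it can later be interpolated to the gaze timestamps of the parallel recording. The file path, column layout and timestamp source are illustrative assumptions, not part of the Neon XR package:

```csharp
// Sketch: log the VR camera pose once per frame for post-hoc interpolation.
using System.IO;
using UnityEngine;

public class CameraPoseLogger : MonoBehaviour
{
    [SerializeField] Transform vrCamera;   // assign your VR camera transform in the Inspector
    StreamWriter writer;

    void Start()
    {
        // Illustrative path; pick whatever location fits your project.
        writer = new StreamWriter(Path.Combine(Application.persistentDataPath, "camera_pose.csv"));
        writer.WriteLine("unix_time_s,pos_x,pos_y,pos_z,rot_x,rot_y,rot_z,rot_w");
    }

    void Update()
    {
        double t = System.DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() / 1000.0;
        Vector3 p = vrCamera.position;
        Quaternion q = vrCamera.rotation;
        writer.WriteLine($"{t:F4},{p.x},{p.y},{p.z},{q.x},{q.y},{q.z},{q.w}");
    }

    void OnDestroy() => writer?.Dispose();
}
```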

user-ed9dfb 12 February, 2025, 17:09:28

Thank you for the suggestion, that could also work indeed but would require more work from my side as a developer. Maybe something to look into once the current projects are running and I have time to research and test this different setup

user-ed9dfb 12 February, 2025, 16:13:28

A slightly different question: is it correct that the Quest 3 frame mount only fits when setting the facial interface at the highest position (so the eyes are furthest away from the lenses)? I believe this is the only way to have enough room for the Neon module, but it does result in quite some light leakage at the nose and cheeks for me.

user-f43a29 12 February, 2025, 16:20:02

Yes, the Quest 3 mount should be set at the highest position, level 3 (see this part of the Neon XR Documentation). However, the mount ships with a silicone nose cover that can be inserted. Please let me know if this was not the case.

user-ed9dfb 12 February, 2025, 17:01:24

Thanks, that's good to know, so I shouldn't try to put it at lower positions. I have the nose cover, but I currently don't have it inserted for testing purposes; I'll try it soon.

user-f43a29 12 February, 2025, 17:57:33

@user-ed9dfb You are welcome 👍

user-ed9dfb 18 February, 2025, 15:42:08

Hey Pupil Labs, another question. What is the expected frame rate when using NeonGazeDataProvider.cs? I'm currently using the wired setup with the recommended Anker USB Ethernet hub, and I log the gaze data each frame (at 60 fps). In my log file, the gaze data usually stays the same over 4-5 frames, so this would mean around 12-15 Hz, which is quite low.

user-f43a29 20 February, 2025, 10:30:50

Hi @user-ed9dfb , if you'd like to get the maximal sample rate available in your Unity app, then it would be recommended to work with the GazeDataReceived property of RTSPClient in a separate thread and synchronize that with the main thread. Otherwise, the rate will be limited by whatever else takes place in the main thread, or in your case, perhaps a dedicated data saving/writing thread.

You could take the code from NeonGazeDataProvider.cs, copy it to a custom class, and use that. Depending on how you do the synchronization in your setup, you might need to add some code to the OnGazeDataReceived method.

Of course, please let us know if you are already doing that.
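
As an illustration of the separate-thread approach described above, here is a minimal sketch that buffers gaze samples in a thread-safe queue and drains them on the main thread. Only the thread-handover pattern is the point; the Sample layout and the Record method are placeholders that you would call from your copy of the gaze-received callback, and they are not part of the Neon XR Core Package:

```csharp
// Sketch: collect every gaze sample regardless of main-thread load.
using System.Collections.Concurrent;
using System.Collections.Generic;
using UnityEngine;

public class GazeSampleBuffer : MonoBehaviour
{
    public struct Sample
    {
        public System.DateTime ReceivedAt; // receipt time on this machine
        public Vector3 Origin;             // placeholder gaze ray origin
        public Vector3 Direction;          // placeholder gaze ray direction
    }

    readonly ConcurrentQueue<Sample> pending = new ConcurrentQueue<Sample>();
    readonly List<Sample> collected = new List<Sample>();

    // Thread-safe: can be called from the data-received callback on any thread.
    public void Record(Vector3 origin, Vector3 direction)
    {
        pending.Enqueue(new Sample
        {
            ReceivedAt = System.DateTime.UtcNow,
            Origin = origin,
            Direction = direction
        });
    }

    void Update()
    {
        // Drain on the main thread. Nothing is lost if Update runs slower than
        // the gaze data rate; samples simply accumulate between frames.
        while (pending.TryDequeue(out var s))
            collected.Add(s);
    }

    public IReadOnlyList<Sample> Collected => collected;
}
```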

user-3e88a5 19 February, 2025, 15:19:29

Good afternoon, when using the Neon XR Core Package inside a Unity application developed for the Meta Quest 3, which ocular data is it possible to save? Thank you!

user-f43a29 20 February, 2025, 09:47:13

Hi @user-3e88a5 , you can receive and save the:

  • Gaze data in VR coordinates
  • 3D Eye State, also transformed to VR coordinates, including pupil diameter in millimeters

You can find examples of how to collect and work with all of that data in our fork of the MRTK3 Template Project.

user-ed9dfb 20 February, 2025, 14:11:33

Hey @user-f43a29 , thank you for your reply. It doesn't make sense to me that the main thread would limit the gaze sample rate, since the main thread is running at a much higher frequency. I even tested it in a (non-editor) VR build running at 120 Hz and logging the data each frame. At a stable 120 Hz I often see 7-8 duplicate gaze data entries, while, for example, the VR HMD orientation is logged uniquely each frame. During a trial I'm saving all the data to memory, and only at the end of the trial do I write the data to the SSD.

As a test, I added some print statements to NeonGazeDataProvider.cs (see the green lines on the left where I added code) and monitored the console while playing in the editor (@60 fps). As you can see, I'm not getting new gaze data every frame, and there is quite some time/frames between "dataReceived" events.

Chat image Chat image

user-ed9dfb 20 February, 2025, 15:34:53

I did some more testing. Is it true that gaze data is sent in bursts? I logged each rtspClient.GazeDataReceived callback and also each Unity frame with timestamps (DateTime.Now), and I see that sometimes no gaze data is received across multiple Unity frames (@60 fps), and sometimes 8-15 gaze samples are received between 2 Unity frames.

Chat image

user-3e88a5 24 February, 2025, 08:09:41

Thank you for the answer. Is it possible to run a recording from the Neon Companion app while using the application in Unity? Moreover, if I just save gaze data in VR coordinates, is it possible to calculate fixations, blinks and saccades?

user-f43a29 24 February, 2025, 16:08:31

Hi @user-3e88a5 , yes, it is possible to run a recording in parallel, and it is the recommended way to collect data for rigorous analysis.

If you save the gaze data in VR coordinates, with appropriate timestamps, then it is in principle possible to calculate metrics such as fixations and saccades, but you will have the highest data fidelity & most reliable data collection with a recording. Please see here for a description of how to convert Neon's default gaze in scene camera coordinates to VR coordinates post-hoc (i.e., after recording): https://discord.com/channels/285728493612957698/1187405314908233768/1339268128722518229

Calculating blinks requires a recording of the eye videos. To that end, it is easier to run a recording in parallel, rather than trying to add that stream to Neon XR.
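
For illustration only, a minimal dispersion-threshold fixation sketch over gaze directions logged in VR world coordinates. This is not the algorithm used by Pupil Labs' tools; the thresholds and sample layout are assumptions made for this example:

```csharp
// Sketch: dispersion-threshold (I-DT style) fixation candidates from logged gaze.
using System.Collections.Generic;
using UnityEngine;

public struct LoggedGaze
{
    public double Timestamp;   // seconds
    public Vector3 Direction;  // unit gaze direction in VR world coordinates
}

public static class FixationSketch
{
    // Returns (start, end) timestamps of candidate fixations.
    public static List<(double start, double end)> Detect(
        IReadOnlyList<LoggedGaze> samples,
        float maxDispersionDeg = 1.5f,   // illustrative threshold
        double minDurationSec = 0.1)     // illustrative threshold
    {
        var fixations = new List<(double, double)>();
        int start = 0;

        while (start < samples.Count)
        {
            // Grow a window until it spans at least the minimum fixation duration.
            int end = start;
            while (end < samples.Count - 1 &&
                   samples[end].Timestamp - samples[start].Timestamp < minDurationSec)
                end++;
            if (samples[end].Timestamp - samples[start].Timestamp < minDurationSec)
                break; // not enough data left for a full window

            if (Dispersion(samples, start, end) <= maxDispersionDeg)
            {
                // Extend while the angular dispersion stays below the threshold.
                while (end < samples.Count - 1 &&
                       Dispersion(samples, start, end + 1) <= maxDispersionDeg)
                    end++;
                fixations.Add((samples[start].Timestamp, samples[end].Timestamp));
                start = end + 1;
            }
            else
            {
                start++;
            }
        }
        return fixations;
    }

    // Largest angle (degrees) between any sample in the window and the mean direction.
    static float Dispersion(IReadOnlyList<LoggedGaze> samples, int start, int end)
    {
        Vector3 mean = Vector3.zero;
        for (int i = start; i <= end; i++) mean += samples[i].Direction;
        mean.Normalize();

        float maxAngle = 0f;
        for (int i = start; i <= end; i++)
            maxAngle = Mathf.Max(maxAngle, Vector3.Angle(mean, samples[i].Direction));
        return maxAngle;
    }
}
```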

user-3e88a5 24 February, 2025, 08:42:29

And another question: is it possible to map gaze into the VR reference system after a recording with the Companion app?

user-f43a29 24 February, 2025, 16:09:07

Yes, this is also possible. Please see this message for a high-level description of how to go about doing it: https://discord.com/channels/285728493612957698/1187405314908233768/1339268128722518229

user-f43a29 24 February, 2025, 16:03:23

Hi @user-ed9dfb , I see. I had misinterpreted the original message.

I double checked with the Neon XR team. Here are clarifying points:

  • Currently, the Neon XR Core Package connects to Neon's Real-time API via WebSockets (based on TCP) and tunnels the RTSP communication through that
  • Neon's Real-time API sends the gaze data at the data rate configured in the Neon Companion app
  • Neon XR makes a request for data every time Unity runs the related Update function
  • TCP favors connection reliability, effectively grouping smaller packets of data sent close together
  • Once a batch has been received, it is made available to the user

This enables many kinds of interactive XR applications.

Since the receipt of data happens after synchronisation with the main thread, if you were to run this process in a separate thread, you could in principle collect all of the gaze data, as it will not be disturbed by anything else happening in the main thread.

Please note that the underlying Real-time API is intended for interactive purposes & monitoring. If you want to collect data for rigorous data analysis, then it is recommended to run a recording in parallel, per the tip provided here: ⁠https://discord.com/channels/285728493612957698/1187405314908233768/1339268128722518229

With respect to the logs, I am not sure I see the duplicate entries. It seems each printed entry is different?

user-ed9dfb 27 February, 2025, 10:48:20

Hey Rob, thanks for the clarifications. I do then wonder, what is the use of the USB Ethernet hub if latencies can be up to 60-100 ms? This doesn't sound very low latency to me in this context.

I was getting multiple duplicate entries in my CSV log file (which logs an entry on each Unity update loop, so at 60-120 fps) because sometimes no new data is received for 60-100 ms (from the async rtspClient.GazeDataReceived EventHandler, so not throttled by the Unity Update loop / main thread).

user-3e88a5 24 February, 2025, 17:27:32

OK, thank you very much, just a question... how can I synchronize the parallel recording with what's happening in the application? Is the timestamp enough? Also, when you say to save the position of the VR camera: if I am using the OVR Camera Rig in Unity, do I have to refer to the CenterEyeAnchor? And can you tell me where I can find the mathematical calculation to apply this transformation?

user-f43a29 25 February, 2025, 11:14:00

Hi @user-3e88a5 , one way is to use Neon's Events, which blend well with the rest of the tools in the ecosystem. See this message for more details: https://discord.com/channels/285728493612957698/1187405314908233768/1288425388824985601
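
As a minimal sketch of sending such an Event from Unity: this assumes the Companion device's Real-time API REST endpoint for events (/api/event) is reachable at the default port; the IP address is a placeholder, and the exact endpoint and port should be checked against the Real-time API documentation for your version:

```csharp
// Sketch: post a Neon Event so the parallel recording can be aligned post-hoc.
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class NeonEventSender : MonoBehaviour
{
    [SerializeField] string companionIp = "192.168.1.42"; // placeholder IP
    [SerializeField] int companionPort = 8080;            // assumed default REST port

    public void SendEvent(string eventName)
    {
        StartCoroutine(PostEvent(eventName));
    }

    IEnumerator PostEvent(string eventName)
    {
        string url = $"http://{companionIp}:{companionPort}/api/event";
        // Without an explicit timestamp, the event is timestamped on receipt.
        string json = JsonUtility.ToJson(new EventPayload { name = eventName });

        using (var req = new UnityWebRequest(url, "POST"))
        {
            req.uploadHandler = new UploadHandlerRaw(System.Text.Encoding.UTF8.GetBytes(json));
            req.downloadHandler = new DownloadHandlerBuffer();
            req.SetRequestHeader("Content-Type", "application/json");
            yield return req.SendWebRequest();
            if (req.result != UnityWebRequest.Result.Success)
                Debug.LogWarning($"Event '{eventName}' failed: {req.error}");
        }
    }

    [System.Serializable]
    class EventPayload { public string name; }
}
```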

The calculation for applying the mount calibration and VR pose to gaze data is now linked in the updated post (https://discord.com/channels/285728493612957698/1187405314908233768/1339268128722518229). You can find the method here:

  • You simply offset and rotate the gaze ray, relative to the current pose of the VR camera, based on the position and rotation fields of the mount calibration.
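
A sketch of that offset-and-rotate step, assuming the mount calibration provides a camera-relative gaze origin and a rotation; the parameter names are placeholders and may differ from the actual fields in the mount calibration file:

```csharp
// Sketch: map a module-frame gaze ray into VR world space using the mount
// calibration (position/rotation) and the current VR camera pose.
using UnityEngine;

public static class GazeMapping
{
    public static Ray ToWorld(
        Vector3 localGazeOrigin,      // gaze origin relative to the camera (from the mount calibration)
        Vector3 localGazeDirection,   // gaze direction in the module/mount frame
        Quaternion mountRotation,     // rotation field of the mount calibration
        Transform vrCamera)           // current pose of the VR camera
    {
        // Rotate the gaze direction into the camera frame, then into world space.
        Vector3 worldDir = vrCamera.rotation * (mountRotation * localGazeDirection);
        // Offset the ray origin by the calibrated position, relative to the camera.
        Vector3 worldOrigin = vrCamera.TransformPoint(localGazeOrigin);
        return new Ray(worldOrigin, worldDir.normalized);
    }
}
```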

With respect to the camera, it seems the OVRCameraRig is part of Meta's SDK? I am not familiar with that, and you may need to adapt our mount calibration method to the specific OVRCameraRig setup that you use. The relative offset & rotation of the VR camera setup in that system may be overall different.

You may want to check with their community about how to extract the camera pose from it. If it helps, in our MRTK3 Template Project, an [XR Origin [email removed] is associated with the camera. The position & rotation (i.e., the "pose" of the camera) can be obtained from that camera's Transform property. You could use the corresponding properties of the OVRCameraRig object.

user-3e88a5 25 February, 2025, 11:20:21

Thank you very much, all clear. Regarding the OVRCameraRig I've solved!

user-f43a29 25 February, 2025, 11:21:45

You are welcome. Please also take note of my update that you may need to adapt the mount calibration method for your OVRCameraRig. The relative offset & rotation of the VR camera system overall may be different in that environment, relative to how MRTK3 does it. One validation should be enough to determine it for your case.

user-9b48d1 25 February, 2025, 12:59:24

Hi Rob, thank you! It works now, I have gaze tracking in our project, and I intend to write a script that tracks lingering on certain objects in the environment. However, I notice that while with the MRTK Template the gaze tracker seems to align well with my eye movement (I calibrated it there and saved the calibration on the device), within our application my pupils and the eye gaze visualizer don't quite align. Are there resources that expand on this fine tuning? Thank you in advance!

user-f43a29 25 February, 2025, 15:30:36

Hi @user-9b48d1 , great to hear about your progress!

I would not expect a fine tuning to be necessary in this case, but without the details of your particular application, I cannot be fully sure. Is your application also built on the MRTK3 Template Project or are you using some other system? May I ask how you've integrated the mount calibration into your application?

user-9b48d1 25 February, 2025, 16:10:23

Hi, thanks for the answer! To answer your questions: our project is not built on the MRTK3 template, as that would require a lot of new work. Rather, we built a Unity application previously using the OpenXR interaction toolkit provided by Unity and integrated the Neon XR prefab afterwards, once we decided to employ gaze data tracking. For now, I simply performed the mount calibration separately from our application (and saved the JSON); however, I just noticed that I didn't check whether the calibration has to be loaded into our application...

user-9b48d1 25 February, 2025, 16:11:59

I see this has already been addressed in previous messages; I can sideload the config file into the corresponding folder of our project. Can't believe I missed that.

user-f43a29 25 February, 2025, 16:12:31

No problem! Let us know how it works out!

user-9b48d1 25 February, 2025, 17:08:56

Hi Rob, it works exceptionally well now! So easy, it was an easy point to miss actually.

user-37a2bd 26 February, 2025, 06:53:49

Hi Team, one of our clients is exploring doing a study with VR. I wanted to know what kind of outputs we could get from the XR glasses if we were to purchase them. Can we expect outputs similar to those of the normal Neon ET? We are mainly into market research, so our clients generally expect outputs like heatmaps, scanpaths and AOI metrics.

user-f43a29 26 February, 2025, 15:32:54

Hi @user-37a2bd , great to hear that you are looking to explore VR!

With Neon XR, you can order a module and VR frame bundle or use a module you already have in a VR mount, just like how you can swap it between the glasses-style frames. We currently provide mounts for the Quest 3 and Pico 4 headsets (scroll to the bottom of that page).

With respect to the outputs, you get gaze and 3D eye states in VR coordinates, including pupil diameter, streamed in real time, just like you get with the standard glasses frames. If necessary, you can also save all of the other standard data streams by running a recording in parallel. You can use the streamed data for real-time interactive purposes, such as activating a button in the VR world when the wearer looks at it.

If you are looking for heatmaps, then our MRTK3 Template Project includes a real-time example. You can see one version in the demo video on that page, and another version on the Neon XR page. For outputting these kinds of analyses as plots or creating something like a scanpath, then this will require some additional coding. To that end, you might also be interested in SilicoLabs, who offers a "turn-key" Unity XR experiment builder that integrates with Neon XR.

user-3e88a5 26 February, 2025, 13:00:59

Hi, I'm sorry but I have another question. In the NeonGazeDataProvider, the rawGazePoint and the rawGazeDir are still in the reference system of the Neon module, while the conversion into the VR coordinate system is done in the GazeDataVisualizer after extracting the GazeRay?

user-f43a29 26 February, 2025, 15:37:22

Hi @user-3e88a5 , nothing to be sorry about! Feel free to ask whenever something is unclear.

You are correct with your understanding.

user-3e88a5 26 February, 2025, 17:07:37

Perfect, thank you a lot!

user-f43a29 26 February, 2025, 15:41:48

Happy to hear it! If possible, feel free to share the results in the 📸 show-and-tell channel!

user-9b48d1 10 March, 2025, 14:19:04

Hi Rob! I'd like to, but apparently I broke something along the way. I wrote a script that logs the objects hit by the ray, and how long the ray lingers on them, to a CSV file. However, I also read that I had to enable a custom build manifest, so I ticked that option and built it. I then, like before, sideloaded the config JSON for the eye tracker and checked that the phone and the headset are on the same network. However, now the build doesn't seem to register my eye movement. I reverted the changes (disabled the custom manifest, deactivated my logging script), but it still doesn't work.

I checked: the installed MRTK3 Template still registers my eye movement, and I can also see it in the app, so the device running the app seems to connect (as MRTK3 works). So something must be off with my built application, where I imported the Neon package and inserted the Neon XR Prefab. That did work briefly, so I am at a loss at the moment...

user-3e88a5 26 February, 2025, 17:50:51

And in the script GazeDataProvider, the "protected Pose gazeOrigin" comes from the config file after the calibration, right?

user-f43a29 26 February, 2025, 19:15:33

Hi @user-3e88a5 , yes. It works like this:

user-f43a29 27 February, 2025, 18:45:05

Hi @user-ed9dfb , you are welcome!

What is reported here is different from transmission latency. To clarify, WebSockets "buffer" data up to a certain size, so to speak, before it is sent over the network. If the WebSocket deems it necessary, then it will send a full packet once it has batched enough data. Over Ethernet, this packet will indeed be sent with low transmission latency. This is an inherent behaviour of WebSockets, which are built on TCP.

The Ethernet connection also has applications outside of Neon XR. For example, when running timing critical gaze contingent experiments that use the Python Real-time API, the MATLAB integration, or PsychoPy. In those cases, the RTP packets are sent over a standard RTSP connection, which uses UDP, where data are not batched. With an Ethernet connection in that context, you would in principle experience the full gaze data rate and low transmission latency.

Even in a low latency gaze contingent experiment, it would still be recommended to run a recording in parallel. You then benefit from the full fidelity of all data streams and the data will be robustly saved in a format that is compatible with the tools in the Neon ecosystem.

Having said that, if you would prefer that the Unity integration be modified for your situation, please feel free to open a 💡 features-requests !

user-ed9dfb 05 March, 2025, 12:56:01

Thanks for the further clarifications. I created a feature request: https://discord.com/channels/285728493612957698/1346828230950260877

End of February archive