I constantly get notifications on my Moto Edge 40 Pro to update to Android 15. According to your website, only Android 13 and 14 are supported. Should I keep trying to prevent the update to Android 15?
Hi @user-ed9dfb! Thanks for letting us know you've started getting prompts. For the time being, please try to prevent it from updating to A15. This is a very recent update and we first want to be sure the Neon Companion app will run as expected. We'll update you as soon as we can!
Hello @wrp , @user-f43a29 , does the recent update to the Neon Companion app also apply to the Neon XR Core Unity package? Does the package track eyelid angles, IMU, fixations, etc.?
Hi @user-6c6c81 , yes, the Neon XR Core Package has similarly been upgraded and now tracks the eyelid angle data. IMU data and fixation data, however, are not streamed to Neon XR.
Neon XR also has some updates related to Addressables that potentially resolve the initial issues that you had encountered.
That means I cannot stream IMU and fixation data through Unity?
You can, in principle. It requires adding code in line with our Real-time API Documentation.
It is only that the default Neon XR Core Package does not implement this. If you would like to see that added, feel free to open a request in 💡 features-requests .
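To give a rough idea of what that would involve: as a first step you could ask the Companion device which sensors and stream URLs it exposes, and then subscribe to the IMU / fixation streams it reports. The sketch below is only illustrative and not part of the Neon XR Core Package; the host, port, and endpoint are assumptions to verify against the Real-time API documentation.

```csharp
// Minimal sketch: query the Companion device's Real-time Network API for its
// sensor list from a Unity script. The port (8080) and the /api/status
// endpoint are assumptions taken from the Real-time API docs; please verify
// them there. Parsing the response and subscribing to the IMU / fixation
// streams is intentionally left out.
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class DeviceStatusProbe : MonoBehaviour
{
    // Replace with your Companion device's IP address.
    [SerializeField] private string deviceHost = "192.168.1.42";

    private IEnumerator Start()
    {
        string url = $"http://{deviceHost}:8080/api/status"; // assumed endpoint
        using (UnityWebRequest request = UnityWebRequest.Get(url))
        {
            yield return request.SendWebRequest();

            if (request.result != UnityWebRequest.Result.Success)
            {
                Debug.LogError($"Status request failed: {request.error}");
            }
            else
            {
                // The JSON response lists the available sensors and their
                // stream URLs, which a custom client could then subscribe to.
                Debug.Log(request.downloadHandler.text);
            }
        }
    }
}
```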
Hello @user-f43a29, my Neon is now tracking and saving all the required gaze data to a CSV. The thing is, it records in the Editor but not in the build version of my Unity project. Are there any settings to consider for writing the CSV in the build version?
Hi @user-6c6c81 , when you say the "build version", do you mean when the app is running on the headset? When running the "build version", do you get messages in the logs like [RTSPClientWs] 2000 messages processed?
If so, then Neon XR is working as expected and something about the CSV writing routine would need to be modified.
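In case it helps others reading along: a common cause of "works in the Editor but not in the build" is writing the CSV to a path that is only writable on your development machine. Here is a minimal sketch (file name and columns are just placeholders) that writes under Application.persistentDataPath, which is writable in built players as well:

```csharp
// Minimal sketch: append gaze samples to a CSV under a path that is writable
// in built players, not just in the Editor. File name and columns are
// placeholders for illustration.
using System.Globalization;
using System.IO;
using UnityEngine;

public class GazeCsvLogger : MonoBehaviour
{
    private StreamWriter writer;

    private void Awake()
    {
        string path = Path.Combine(Application.persistentDataPath, "gaze_log.csv");
        writer = new StreamWriter(path, append: false) { AutoFlush = true };
        writer.WriteLine("time_s,gaze_x,gaze_y");
        Debug.Log($"Writing gaze CSV to: {path}"); // check this path on the device
    }

    // Call this from wherever you receive gaze samples.
    public void LogSample(float gazeX, float gazeY)
    {
        writer.WriteLine(string.Format(CultureInfo.InvariantCulture,
            "{0},{1},{2}", Time.realtimeSinceStartup, gazeX, gazeY));
    }

    private void OnApplicationQuit()
    {
        writer?.Close();
    }
}
```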
Yep! Found the bug. Thanks.
Hello! I am currently trying to set up an experiment with Unity and the Neon, but without a head-mounted VR device, and had a few questions about the best path forward. I am quite new to both the Neon and Unity, so please let me know if anything is unclear!
We have an experiment running in Unity, and we would like to track participants' gaze over trials in the 3D space, as well as their pupil diameter, via Unity. We want this to go through Unity because we also want to track the positions of objects in the 3D space and relate gaze positions to object positions over time. So it is pretty important that the gaze positional data and the object positional data ultimately end up in the same coordinate space.
So far, I have been able to use Neon-XR to get the "raw" gaze coordinate data (x, y coordinates in the Neon camera space), as well as the 3D gaze positions it computes, and feed them into Unity in real time.
The issue is that the positions the package estimates in 3D space are quite far off from where the gaze actually lands in the 3D scene. I was wondering if you have any suggestions on how we can get accurate gaze positional data in the coordinates of the 3D Unity space? At the moment, we were thinking of implementing a short calibration scene, in which we ask participants to look at targets at known locations in the 3D space, and of having AprilTags present throughout the experiment, so that we can calculate the mapping between the Neon camera coordinates and the Unity 3D coordinates offline. But I was wondering if there is perhaps a more elegant solution, and/or a way to get the gaze coordinates in 3D space in real time in Unity? Thanks a lot!
Hi @user-28d3e5 , you are quite close to the solution already!
So, the standard Neon XR code is written with the assumption that you are using Neon in a VR mount. That keeps Neon's scene camera and Unity's VR camera in a fixed relationship. In your situation, this assumption does not hold, since the user can freely move their head independently of the Unity display on the computer monitor. Hence, in this case, the estimated 3D positions in the Unity scene, as provided by the default Neon XR Core Package setup, will be incorrect in general. However, as you are finding, you can still use the Neon XR Core Package to this end.
You already have Neon's gaze coordinates in Unity, so to now map Neon's scene camera data to the screen, you could consider options such as the Marker Mapper enrichment (using AprilTags) or the Reference Image Mapper.
Once you have gaze mapped to the screen, you could try the method in this post (https://discord.com/channels/285728493612957698/1248580630430875661) to cast the gaze ray into Unity's 3D scene and detect which object it collides with.
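As a rough illustration of that last step (only a sketch; how you obtain the gaze point in screen pixels is up to your mapping pipeline), you could cast a ray from your Unity camera through the gaze point and check what it hits:

```csharp
// Minimal sketch: given gaze mapped to screen pixel coordinates, cast a ray
// from the Unity camera through that point and report the first object hit.
// How screenGaze is produced (e.g. via Marker Mapper or your own mapping) is
// assumed to be handled elsewhere in your project.
using UnityEngine;

public class GazeRaycaster : MonoBehaviour
{
    [SerializeField] private Camera sceneCamera;       // the camera rendering your 3D scene
    [SerializeField] private float maxDistance = 100f; // adjust to your scene scale

    // screenGaze is in pixels, origin bottom-left, as Unity's screen space expects.
    public void ReportGazeTarget(Vector2 screenGaze)
    {
        Ray gazeRay = sceneCamera.ScreenPointToRay(screenGaze);
        if (Physics.Raycast(gazeRay, out RaycastHit hit, maxDistance))
        {
            Debug.Log($"Gaze hit {hit.collider.name} at {hit.point}");
        }
    }
}
```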
Thanks a lot for the answer! I have a few follow-up questions, but I apologize if they are super obvious/basic.
So just to clarify: if I don't need the coordinates in real time, I don't even need a calibration scene, since I could use AprilTags with the Marker Mapper and get the screen coordinates for the gaze offline? Or would you still recommend having a calibration scene?
Also, I was wondering: if I were to have participants use a chin-rest, so they don't really move their head relative to the monitor/Unity camera during the experiment, and then ran the calibration scene from the Neon-XR package once per session, would that be a viable option as well?
Finally, I was wondering whether, out of the available options, there is one you would recommend over the others for any reason?
Thanks a lot!
Hi @user-28d3e5 , Neon is deep-learning powered & calibration free. You simply put it on and you are accurately tracking gaze. When the Neon XR documentation refers to "calibration", it is referring to a one-time mount calibration that determines the fixed relationship between the Neon scene camera and Unity's virtual VR camera.
Using a chinrest and running Neon XR's mount calibration per session can in principle work, but would need some modification to the code. Users have previously tried to use the default mount calibration scene with flat panel displays and reported that the results deviated from their expectations. Essentially, the mount calibration is designed with the expectation that you are wearing a VR headset with a 3D stereoscopic display system.
Yes, with Marker Mapper, you get the 2D screen coordinates offline.
Without knowing the full details of your experiment & setup, I am not sure if I can explicitly recommend one option over another. It essentially comes down to what you find easiest & most applicable, I would say. If anything, using Marker Mapper or the standard Reference Image Mapper pipeline would require the least effort to map Neon's gaze signal to the 2D coordinates of your display.
Okay! Thanks a lot for the answers, I definitely have a better handle on things now!
No problem! If you have any other questions, feel free to ask!
Hi there, I'm trying to stream the real-time video from the Companion app into a SwiftUI app I'm developing. I'm attempting to use WebSocket with RTSP (similar to XR), but it doesn't seem to work. I can connect to the server and receive the correct headers, but no messages are coming through. Any idea where I should look or what could be causing this?
Thanks a lot!
Hi @user-af602a 👋 !
I think the 💻 software-dev channel might be a better place for this question.
That said, there are quite a few unknowns here, and the issue could be related to the code itself.
If you're building your own implementation and need help specifically with RTSP in Swift, a dedicated Swift developer forum might offer more targeted assistance.