Thanks, @user-7afd77 . I think that can be improved, especially the correspondence between the red+green dots at the top.
Two things to try:
Let us know how it goes.
Hello, I think I made sure not to blink before or while pinching.
I let another guy try the mount calibration but I think it looks even worse
Thanks. Ok, let me confer with the Neon XR team and update you.
Sure, thank you.
Here's the config file, if helpful:
{ "rtspSettings": { "autoIp": true, "deviceName": "", "ip": "192.168.1.49", "port": 8686, "dnsPort": 52634 }, "sensorCalibration": { "offset": { "position": { "x": 0.0, "y": 0.030000001192092897, "z": 0.04000000283122063 }, "rotation": { "x": 3.3058021068573, "y": 1.9991434812545777, "z": 1.1359539031982422 } } } }
I also see that even in the MRTK3 Template the debug gaze doesn't really hit the target at far distances, but it is still recognized as a hit, since the item is highlighted, probably because the ray passes near the object. Is it possible to know if this is natural and desired behavior? If so, I can probably implement something like the sketch below and register a hit whenever the user is gazing at an item even though the ray is not hitting it.
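A rough sketch of the idea (the 3° threshold is just a placeholder I made up):

```python
import math

# Idea: count an object as "hit" when the angle between the gaze ray and the
# direction from the gaze origin to the object's center is below a threshold,
# even if the ray itself misses the object's collider.
def angle_deg(v1, v2):
    dot = sum(a * b for a, b in zip(v1, v2))
    norms = math.dist((0, 0, 0), v1) * math.dist((0, 0, 0), v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

def is_fuzzy_hit(gaze_origin, gaze_dir, object_center, threshold_deg=3.0):
    to_object = tuple(c - o for c, o in zip(object_center, gaze_origin))
    return angle_deg(gaze_dir, to_object) <= threshold_deg

# Example: object 5 m straight ahead, gaze off by ~1 degree -> still a hit
print(is_fuzzy_hit((0, 0, 0), (0.017, 0.0, 1.0), (0, 0, 5)))  # True
```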
The team clarifies that this is indeed related to MRTK3. It uses a Fuzzy Gaze Interactor.
I have asked the team. This might simply be related to the size of the default hit boxes (i.e., colliders) for those objects. May I ask what headset you are using? The team suggests that your mount calibration results could certainly be improved. Have you tried the default calibration that comes with the Neon XR Core Package?
I am using a Meta Quest 3. I can try and reset the calibration file so that I use the default one
This is what I mean
I may be pushing the limits by looking at the edges of the FoV, but this is what I mean by "I am looking at an item but the gaze is not hitting". I know that the debug gaze is the result of several calculation steps rather than a raw result from the eye cameras, but I suppose it is better than the raw eye gazes (since those are not even centered on the VR headset camera position).
I also read the accuracy test report for Neon, but I am not sure whether I am pushing the limits of the FoV angles here and whether items should be closer to the center to be "truly gazed on".
Hi @user-7afd77 , understood. As mentioned, this MRTK3 scene uses a Fuzzy Gaze Interactor, so this is not the ideal scene to test accuracy.
Indeed it is not. I am working on my own scene but wanted to make sure, thanks 🙂
Hi all, thank you for this great work!
I would like to obtain the pupil diameter of both eyes (as described in the docs), so I enabled the "Compute eye state" setting in the Neon Companion app, recorded a session, and exported it.
When I run pl-rec-export /mypath/myFolderWithExport, the output contains saccades.csv but not 3d_eye_states.csv. Why is that?
Are there other steps needed to obtain it, or additional settings I need to enable?
Thank you so much
Hi @user-a9fbdb , pl-rec-export is now deprecated. You should consider using either:
Hello everyone!
In our previous discussion, I learned that it is not possible to use the "Reference Image Mapper" for the VR environment of the Meta Quest 3.
My question now is: Is there a workflow to use the Reference Image Mapper (or Surface Tracker) while using the Quest 3 in Passthrough mode?
I am planning an experiment that frequently switches between MR and VR environments, and I want to map gaze onto real-world objects while in Passthrough mode.
Does anyone have experience or advice on how to achieve this? (e.g., Is Reference Image Mapper effective on the Passthrough video, or should I use Surface Tracker with AprilTags instead?)
Thanks in advance!
Hi @user-137482 , the Reference Image Mapper and Surface Tracker use Neon's Scene Camera video, which is not recorded when Neon is mounted in a VR headset. The scene camera is blocked within the headset and, as such, is unable to record the video from VR or passthrough. This means it is not possible to use Reference Image Mapper in that context. Using Surface Tracker in Neon Player might be possible with a properly constructed video, but before going that route, let's clarify some points.
For VR, you know the ground truth position of all objects in the 3D environment and you know the position of the observer and their gaze direction, so there is no need for Reference Image Mapper. Rather, the Reference Image Mapper is meant for real-world scenes, where you rarely have direct access to the ground truth.
For your MR scene, do you only want to know where they look in the real world, or will you also be presenting augmented reality stimuli and want to know where they look on those AR stimuli?
Thank you for the clarification.
I understand that the Neon scene camera is blocked. However, since you mentioned that post-hoc analysis might be possible with a "properly constructed video," I have an idea:
Can I record the Quest 3 view (including Passthrough) separately and manually replace the pitch-black world.mp4 file in the Pupil recording with this Quest video?
If I can synchronize them, would it be possible to run the Reference Image Mapper on this replaced video in Pupil Player?
Hi @user-137482 , yes, that type of recording is in principle possible. We've recently been updating the neon-vr-recorder tool to produce output that is Neon Player compatible.
However, please note that Reference Image Mapper is only available on Pupil Cloud and with Neon, you want to use Neon Player, not Pupil Player.
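If you do experiment with constructing such a video, the mechanical part of the swap could look roughly like this (just a sketch; the file names and the 2.35 s offset are placeholders, and the offset would have to be measured, e.g. from a sync event visible in both recordings):

```python
import subprocess

# Sketch: trim the Quest capture so it starts at the same instant as the Neon
# recording, then encode it in place of the black scene video. Frame rate and
# resolution may also need to be matched to what the player expects.
offset_s = 2.35  # hypothetical measured offset between the two recordings
subprocess.run([
    "ffmpeg", "-y",
    "-ss", str(offset_s),        # drop footage recorded before Neon started
    "-i", "quest_capture.mp4",   # screen capture from the Quest 3
    "-c:v", "libx264",
    "-an",                       # drop audio
    "world.mp4",                 # replaces the pitch-black scene video
], check=True)
```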
Hello !
I feel like I am not the first one asking here. I tried the record.py in https://github.com/pupil-labs/neon-vr-recorder
I have followed all the steps to connect to the headset. The headset is connected to the Companion app, etc.
but when I run record.py I just get
urllib.error.URLError: <urlopen error [Errno 60] Operation timed out>
the error comes from
File "/Users/barry/Desktop/HES-SO/EyeTracker/neon-vr-recorder/devices.py", line 84, in get_module_serial response = urllib.request.urlopen(f"http://{self.ip}:{self.port}/api/status")
Is there anything on the headset that I might have forgotten to set up?
Edited: with the right GitHub link!
Hi @user-f95c19 , just to clarify, the Pupil Tutorials are for our previous eye tracker, Pupil Core.
Are you using Pupil Core XR or Neon XR?
Neon XR
Ok. Did you also supply the IP address of your Neon device to the script with the "-i" command line parameter?
yes my command is python record.py -i ip -p 8080 (though the port is default)
curl is also telling me I can not connect
Do you mean you put the word "ip", or the actual numbers, as in 192.168.1.1?
haha nono, the numbers 🙂
How do I generate output that is compatible with Neon Player?
Hi @user-137482 , as mentioned in the previous message, we've been updating the tool to produce output that is Neon Player compatible. We will update you when it is ready, potentially as early as next week.
Is anyone here at UnitedXR by any chance today? Would be nice to meet
That sounds interesting! Will you post the update here in the chat?
Yes, we can update you here when it is ready.
Hello!! I don't know what is happening, but I ran some tests today with Neon XR in my Meta Quest 3 and the fixation recordings are empty. I was working on a project during these past weeks, and everything worked correctly. I tried resetting the headset and the app, and removing the cache data too, but it isn't working. Any idea? 🥺🤦🏻‍♀️
Hi @user-8fc524 ! Could you double check whether the "Compute fixations" toggle is on in the Settings page of the Companion App?
Yes, the toggle is on. On Tuesday everything worked correctly, and suddenly on Wednesday it stopped working. I haven't changed anything 🤔
Could you open a ticket in the troubleshooting channel and share a recording where that happened? That would help us further investigate what may have happened.
Okay, I will do that. Thank you
I followed the tutorial to set up the environment (except that I used Unity 2022), connected with Neon Companion, and it should have been successful. But now none of the MRTK samples have eye-tracking functionality. Can anyone give me some hints?
I currently want to get it running successfully in MRTK first, and later use the Neon XR package in my own project for interaction and for recording users' gaze point information in XR.
Hi @user-c94dc2 , if you want to run the example scenes standalone, then you need to add the MRTK NeonXR Variant Prefab to them, as detailed here.
I also want to ask: after importing the Neon XR package into my own project, it is now connected to Neon Companion. Are there any built-in features, such as displaying the user's gaze point directly in the XR environment? I'd also like to ask whether there are any detailed Neon XR video tutorials available; many parameters are unclear to me.
There are no tutorial videos, but all of the principles are covered in the MRTK3 examples. To display the user's gaze point directly in the XR environment, you could use the origin & direction of the GazeRay value provided by the GazeDataProvider and raycast those into the scene or project them onto the image plane of the VR camera. To that end, this message should be helpful:
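Separately, as a rough illustration of the projection route, the underlying math looks like this (a Python sketch for illustration only; in Unity you would work with Vector3 and the camera's projection instead):

```python
# Sketch: project a gaze direction onto the image plane of a pinhole camera.
# Assumes the direction is already in the camera's coordinate frame with +z
# pointing forward; 'focal' stands in for the camera's projection parameters.
def project_to_image_plane(direction, focal=1.0):
    dx, dy, dz = direction
    if dz <= 0:
        return None  # gaze points away from (or parallel to) the image plane
    return (focal * dx / dz, focal * dy / dz)

# Example: a ray slightly right of and below straight ahead
print(project_to_image_plane((0.10, -0.05, 1.0)))  # -> (0.1, -0.05)
```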
If you are looking for a more turnkey solution, be sure to check out the Neon XR-compatible software provided by SilicoLabs.
Please let us know if you have any other questions.