Is there a general chat for, like.. Pupil Labs CSR?
Hi @user-eb3fce ! You can write us at info@pupil-labs.com
Hello channel, I am a couple of weeks into integrating eye tracking into my Unity application. Everything works so far; however, I am not totally satisfied with the tracking quality I am getting, as the detected point in 3D space always looks a bit off from where I am looking (or think I am looking). Therefore, I have the following questions:
Eye Calibration: Have you found a near-perfect set of calibration parameters? Currently I am using 9 target points (arranged in a circle), 45 samples per target and 1.5 seconds per target. Which parameters should I change to improve the calibration process, and what best practices should I follow overall?
Gaze Data in 3D: Following the provided examples, I implemented gaze detection using the information from gazeData, via the gazeDirection approach. I shoot a raycast in the provided direction, starting at the camera position, which is updated by the HMD tracking (roughly like the sketch below). However, I am not totally satisfied with the precision I obtain. Have you also experienced this? Do you have any suggestions on how I could improve my tracking?
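For reference, this is essentially what I do (a simplified sketch based on the hmd-eyes GazeVisualizer demo; component and field names may differ from my actual code):

```csharp
using PupilLabs;
using UnityEngine;

// Simplified sketch of my setup, following the hmd-eyes developer docs.
// Assumes a GazeController in the scene and a camera driven by the HMD tracking.
public class GazeRaycaster : MonoBehaviour
{
    public GazeController gazeController; // from the hmd-eyes prefab
    public Camera hmdCamera;              // tracked by the HMD
    public float confidenceThreshold = 0.8f;

    Vector3 localGazeDirection = Vector3.forward;

    void OnEnable() { gazeController.OnReceive3dGaze += ReceiveGaze; }
    void OnDisable() { gazeController.OnReceive3dGaze -= ReceiveGaze; }

    void ReceiveGaze(GazeData gazeData)
    {
        // Only use binocular samples above a confidence threshold.
        if (gazeData.MappingContext != GazeData.GazeMappingContext.Binocular) return;
        if (gazeData.Confidence < confidenceThreshold) return;
        localGazeDirection = gazeData.GazeDirection; // in local camera space
    }

    void Update()
    {
        // Transform the local gaze direction to world space and raycast from the camera.
        Vector3 origin = hmdCamera.transform.position;
        Vector3 direction = hmdCamera.transform.TransformDirection(localGazeDirection);
        if (Physics.Raycast(origin, direction, out RaycastHit hit))
        {
            Debug.DrawLine(origin, hit.point, Color.green);
        }
    }
}
```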
Thanks for the help!!
Hello @user-11b6e8! Thank you for sharing your plans for using the eye tracker. It would be very helpful if you could provide us with a recording so we can give you more specific feedback. You can share it here or at [email removed]
To achieve good precision, it is crucial to have reliable pupil detection and a well-fitted model before calibration. You can find some general advice on https://discord.com/channels/285728493612957698/446977689690177536/1119152625456267296.
Also, what HMD are you using?
Thanks for the information. So by recording, you mean one from the Pupil Capture software, correct? Also, what do you mean by a well-fitted model? Are you referring to the hardware attached inside the HMD? The HMD I am using is the HTC Vive Cosmos Elite.
Concerning the recording, should I start recording as soon as I connect to Pupil Capture from the Unity editor? Will the recording also capture the calibration setup and the data I generate within the editor?
I will have access to the headset and tracking devices again on Wednesday and will record and send data as suggested. Thanks!
However, do you have any other best practices to follow for the implementation on the Unity side? Or do you believe that, if tracking does not seem properly accurate, it is mostly a "problem" with the tracking outside of Unity?
I am unfamiliar with that headset, but I think the display differs from the Vive in terms of resolution, refresh rate and field of view.
Ideally, start the recording before calibrating; that way, the calibration choreography is included in the recording. Keep in mind that to record the content displayed on the headset, you will need to have the screencast enabled. https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#screencast
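If you prefer to trigger the recording from Unity rather than from the Capture UI, a minimal sketch along these lines should work (assuming the RecordingController component from the hmd-eyes Recording demo scene; the key bindings are just placeholders):

```csharp
using PupilLabs;
using UnityEngine;

// Minimal sketch: start a Pupil Capture recording from Unity before running the
// calibration, so the choreography ends up inside the recording.
// Assumes the RecordingController from the hmd-eyes Recording demo scene.
public class RecordBeforeCalibrating : MonoBehaviour
{
    public RecordingController recordingController;

    void Update()
    {
        // Press R before starting the calibration choreography.
        if (Input.GetKeyDown(KeyCode.R))
        {
            recordingController.StartRecording(); // saved on the machine running Capture
        }
        if (Input.GetKeyDown(KeyCode.S))
        {
            recordingController.StopRecording();
        }
    }
}
```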
Nice! We can give you more feedback when we have a look at the recording.
It's hard to say where the error comes from, but let's start from the basics and build on them. Regarding best practices for the Unity implementation, we only have the demos and the documentation to offer you. https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md
I followed your guidelines and prepared 3 recordings of 3 different sessions of the same procedure, which features a calibration, a test scene, and then the actual scene where users must look at several avatars' faces. You can find all the data at the following link; all recordings are zipped into the same folder. What seemed to be the most accurate is 007, whereas 006 seemed the worst. The data can be downloaded from here: https://filetransfer.io/data-package/X2iizofR#link Let me know if you prefer another data transfer option. Thanks!
Terrific, and thank you for the data! I will get back to you as soon as the rest of the team and I have gone through it. Thanks!
Dear @user-d407c1, following up on my previous message (https://discord.com/channels/285728493612957698/285728635267186688/1121110819661426880), did you have time to look into the data recordings I sent? Would you prefer I send the same data to [email removed], and is the filetransfer.io solution OK?
Hi @user-11b6e8 ! I checked the recordings that you shared. Here are some tips to improve their quality and get better precision.
FPS: Firstly, the eye cameras are not getting a stable frame rate. Do you have Pupil Capture and Unity3D running on the same computer, fighting for resources? How many FPS do you get in Unity3D? Is the lighting baked? And what frequency do you actually require?
ROI: You can try defining a region of interest (ROI) in the eye windows to limit the region where the pupil can be. This would help eliminate the dark areas at the borders that could mislead the algorithm at some point. Check out this video: https://drive.google.com/file/d/1tr1KQ7QFmFUZQjN9aYtSzpMcaybRnuqi/view?usp=sharing
Gazer: Generally speaking, one can use the 2D gazer to obtain a higher degree of accuracy, although it can be less robust to movement. It seems like you are using the HMD calibration https://github.com/pupil-labs/hmd-eyes/tree/master/python_reference_client. Is that right? Have you ensured the clocks are synced?
Recording 006: In that specific recording, you can clearly see the pupil confidence dropping, and that's why you get low-quality gaze estimations. Basically, you have a poorly fitted 3D eye model, as denoted by the bright blue outline. You will need to roll your eyes before calibrating to ensure you get a correct 3D model of the eye, as mentioned here: https://discord.com/channels/285728493612957698/446977689690177536/1119152625456267296 https://youtu.be/_1ZRgfLJ3hc
You can also run a post-hoc pupil detection, but it seems like you need the data in real time to feed your application, so that's not an option for you.
Thanks very much for the feedback. Some follow-up considerations/questions:
FPS: yes, I am running both applications on the same machine. Despite having performed all the optimization steps, the VR (Unity) application is quite resource-intensive, as it features many animated avatars. I could run the Pupil Capture software on a second machine (as this is featured in our experimental setup); my question would be, how do I sync the clocks between the 2 applications so that the data I get can be properly used in the VR environment (running on a separate machine)?
ROI: definitely, I actually missed this info from your previous message and will attempt it
Calibration: actually, calibration is performed following the developer documentation of the Unity resource https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#calibration (see the snippet at the end of this message). Is it the same as the one you pointed out? Are clock syncs also required for the calibration triggered from the Unity environment?
Eye rolling: I will definitely perform the eye rolling before calibration.
Thanks again for your feedback
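For completeness, this is roughly how I trigger the calibration from Unity (a simplified sketch; I am assuming the CalibrationController wiring from the hmd-eyes calibration demo rather than pasting my exact code):

```csharp
using PupilLabs;
using UnityEngine;

// Roughly how the calibration is triggered on my side, following the hmd-eyes
// developer docs (CalibrationController from the calibration demo scene).
public class CalibrationTrigger : MonoBehaviour
{
    public CalibrationController calibrationController;

    void OnEnable()
    {
        calibrationController.OnCalibrationSucceeded += () => Debug.Log("Calibration succeeded");
        calibrationController.OnCalibrationFailed += () => Debug.Log("Calibration failed, please retry");
    }

    void Update()
    {
        // Starts the choreography with the targets/samples/duration set in the inspector.
        if (Input.GetKeyDown(KeyCode.C))
        {
            calibrationController.StartCalibration();
        }
    }
}
```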
Is it possible to purchase an HMD add-on camera that supports 1920x1080 resolution? I heard that it was available for sale a few years ago.
Hi @user-11b6e8 ! Apologies for the delay in answering. Regarding the calibration, it should be fine with the one you mention. In any case, you can try the 2D gaze mapper and see if you get better accuracy. Regarding the clock syncs, you only need them if you observe some latency in the gaze output.
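If you do end up needing a manual sync, e.g. with Capture running on a second machine, one option is to set Pupil time to Unity's clock via the Pupil Remote plugin (default port 50020). A rough sketch; the IP address below is a placeholder for your Capture machine, and hmd-eyes also ships time sync utilities (see the Developer docs):

```csharp
using NetMQ;
using NetMQ.Sockets;
using UnityEngine;

// Rough sketch: align Pupil Capture's clock with Unity's clock via the Pupil Remote
// plugin (default port 50020). The IP below is a placeholder for the Capture machine.
public class PupilRemoteTimeSync : MonoBehaviour
{
    public string pupilRemoteAddress = "tcp://192.168.1.42:50020"; // hypothetical address

    void Start()
    {
        AsyncIO.ForceDotNet.Force(); // required for NetMQ inside Unity

        using (var req = new RequestSocket())
        {
            req.Connect(pupilRemoteAddress);
            // "T <time>" sets Pupil time; afterwards, Pupil timestamps and
            // Time.realtimeSinceStartup share the same clock base.
            req.SendFrame($"T {Time.realtimeSinceStartup}");
            Debug.Log("Pupil Remote replied: " + req.ReceiveFrameString());
        }
    }
}
```

Note that this aligns the clocks once at startup; for long sessions you may want to repeat the sync to account for drift.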
Let us know how it goes!