Hi @user-8779ef. Thanks for the overview. I'm afraid I don't know how to resolve this easily. I can ask a member of my team to take a look, but we won't have capacity in the immediate future. It would help if you could share the recording so we know what we're working with, at least. Would that be possible?
Perhaps. Can you please provide a means for me to send you the link privately?
I wonder if what you're looking for is in offline_data/reference_locations.msgpack
Oh, I should add that that file is only created after you open the data for the first time. So whatever values are in there are taken from the "raw" files I have been looking at. Still, when you load the data up any time after that file has been created, I think it must be drawing the reference locations from there. So yeah, that might be a solution.
Worth a look. Thx for the suggestion
I don't have any experience with the HMD stuff, and pretty limited experience with this part of the Pupil Core source code, but it's possible that the values aren't taken from any raw data files, but rather calculated from video frames on demand and then cached in offline_data
Not calculated from video files. Just extracted from the raw data files I was playing with earlier. In any case, your suggestion was helpful! I had completely overlooked that this file existed. Editing it had the desired effect. Thanks!
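For anyone reading along, here is a minimal sketch of inspecting and editing that cache with the msgpack package. The structure of the payload is an assumption, so inspect it before changing anything:

```python
import msgpack

# Sketch only: load, tweak, and rewrite the cached reference locations.
# The payload structure is an assumption; inspect `data` before editing.
path = "offline_data/reference_locations.msgpack"

with open(path, "rb") as f:
    data = msgpack.unpack(f, raw=False)

print(data)  # inspect the structure first

# ... modify `data` as needed ...

with open(path, "wb") as f:
    msgpack.pack(data, f)
```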
Hi, I noticed the Pupil Labs VR/AR docs are for Unity version 2018. I was wondering if Pupil Labs has an updated doc for Unity 2022?
Hi @user-bd82aa 👋 ! We have not systematically evaluated Unity 2022. We would recommend opting for the version suggested in the docs: https://github.com/pupil-labs/hmd-eyes
@user-480f4c Thank you
Hello, some months ago I opened an issue on the GitHub page regarding a problem when building my final application as an IL2CPP build rather than Mono. If the application is built with IL2CPP, I get an exception when trying to connect to Pupil Capture. The error is: FormatterNotRegisteredException: System.Collections.Generic.Dictionary
First of all, is anyone facing the same problem? Has anyone successfully built an application in IL2CPP with the Pupil API functioning correctly?
As a reference this is the issue I opened: https://github.com/pupil-labs/hmd-eyes/issues/116
Hi! Is it possible that the Pupil Labs add-on for the HTC Vive Pro runs faster than 200 Hz?
Hi @user-e16e05 👋 This is not possible, as the maximum sampling rate of the VR add-on for HTC Vive is 200 Hz.
May I ask you to further elaborate on your question? Do you have data collected with the add-on that seems to be sampled at a frequency higher than 200 Hz?
Yes, here I attach a recording example. When counting the number of frames recorded in one second I get around 230. These data were recorded with pupil capture. @user-c2d375
In other words, that's two 200 Hz cameras running asynchronously. There are some frames where both signals are lumped into a single binocular estimate, and some frames where only one eye contributed to the gaze estimate. As a result, the final signal is > 200 Hz.
You can imagine that if they were perfectly out of phase all the time, the signal could be up to 400 Hz. Unfortunately, that's not something you could engineer. It's just meant to provide intuition.
Thank you for sharing the data file. The VR add-on employs free-running cameras characterized by sampling rates that may not be in sync. For this reason, Pupil Capture employs a pupil data matching algorithm to determine which video frames should contribute to each gaze estimation for generating gaze data. For a comprehensive explanation, please refer to this page: https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching
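To build intuition for how matched gaze can exceed either camera's rate, here is a toy sketch of timestamp-based matching. This is not Pupil Capture's actual algorithm; the threshold and the pairing logic are illustrative assumptions:

```python
# Toy sketch (not Pupil Capture's real implementation): pair samples from
# two free-running eye cameras by timestamp proximity. Samples closer
# than `max_delta` seconds form a binocular datum; the rest stay monocular.
def match_pupil_data(ts_left, ts_right, max_delta=0.005):
    matched, i, j = [], 0, 0
    while i < len(ts_left) and j < len(ts_right):
        dt = ts_left[i] - ts_right[j]
        if abs(dt) <= max_delta:
            matched.append(("binocular", ts_left[i], ts_right[j]))
            i += 1
            j += 1
        elif dt < 0:
            matched.append(("left_only", ts_left[i], None))
            i += 1
        else:
            matched.append(("right_only", None, ts_right[j]))
            j += 1
    # Because unmatched frames still yield monocular estimates, the
    # combined gaze stream can exceed either camera's frame rate.
    return matched
```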
Hello, I am trying to remove the VR add-on from the HTC Vive Pro, but it feels pretty tight and the frame seems fragile. Any tips for removing it safely?
Hello, I would like to follow up in case my message was overlooked 🙂
Hi, I am currently trying to use the HTC Vive Pro Binocular Add-on eye tracker with the HTC Vive headset. I am trying to sync depth-map data from the headset with the videos of the eyes and the world. However, when I convert the mp4 video of the eye to png images, there is a discrepancy between the number of frames and the number of timestamps. What action would you recommend I take to resolve this discrepancy? This problem occurs for the left eye, the right eye, and the world video.
Hi @user-bd82aa 👋 May I ask which tool you are using to extract frames from the eye videos? For this purpose, we recommend using pyav (https://github.com/PyAV-Org/PyAV) or decord (https://github.com/dmlc/decord). We do not recommend using opencv, as it might lead to frame drops.
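A minimal sketch of checking the decoded frame count with pyav against the recording's timestamps file. The file names follow a typical Pupil Capture recording and are assumptions here:

```python
import av
import numpy as np

# Assumed names inside a Pupil Capture recording folder; adjust as needed.
timestamps = np.load("eye0_timestamps.npy")  # one entry per video frame

container = av.open("eye0.mp4")
n_frames = sum(1 for _ in container.decode(video=0))
container.close()

print(f"decoded frames: {n_frames}, timestamps: {len(timestamps)}")
# With pyav the two counts should agree; opencv may silently drop frames,
# producing the discrepancy described above.
```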
Hi @user-c2d375 , thank you. I will try using PyAV
When I watch the eye cameras during a recording, I see the FPS of both cameras running at around 120 Hz, though. Does this mean that I have a lot of asynchronous data?
230 Hz data-matched gaze is about what to expect from two 120 Hz free-running eye cameras, in my experience
Is there a way to synchronize the timestamps from the left and right eyes?
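One possible approach, not an official Pupil Labs recipe: align the two free-running streams after the fact by nearest timestamp with numpy. The eye0/eye1 file names are assumptions based on a typical Pupil Capture recording:

```python
import numpy as np

# Assumed file names; by Pupil's convention eye0/eye1 are the two eye cameras.
ts0 = np.load("eye0_timestamps.npy")
ts1 = np.load("eye1_timestamps.npy")

# For each eye0 timestamp, find the nearest eye1 timestamp.
idx = np.searchsorted(ts1, ts0)
idx = np.clip(idx, 1, len(ts1) - 1)  # keep both neighbors valid
lower_nearer = np.abs(ts0 - ts1[idx - 1]) < np.abs(ts0 - ts1[idx])
nearest = np.where(lower_nearer, idx - 1, idx)

pairs = np.column_stack([np.arange(len(ts0)), nearest])
print(pairs[:5])  # (eye0 frame index, matched eye1 frame index)
```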
Hello! With the add-on on the HTC Vive, is it normal for the tracking to drift a lot? (Like looking at a specific point, but the ray provided in the starter project seems to move frantically around that point.)
Hi @user-c32d43! If both pupil detection and calibration are functioning well, you should not experience drift. The issues you describe suggest that one or both of these could be improved. Could you share a screen capture showing the eye videos? This would allow me to provide more specific feedback.
Hi @nmt, so sorry! I'd completely forgotten I am already in contact with someone from Pupil Labs about this issue! I've just sent a follow-up email with a recording, but I'm more than happy to share it with you here or in DMs!
No problem. We'll check out the recording you've shared by email