Hi @nmt, I am curious whether analyzing the recorded data from Pupil Capture requires the additional step of loading it into Pupil Player and exporting the raw data. Can I directly use the .npy and .pldata files?
Another weird thing I noticed is that the exported world timestamps are 6 hours ahead of my actual timestamps, which I recorded separately both from Unity and by converting the Pupil data via scripting. Any idea why this may be happening?
The exporting step impacts my efficiency, and I was wondering if there are scripts that do the same processing as Player without that intermediate step.
Hi @user-d5c9ea! Yes, you can consume those binary files programmatically if you wish. And others have indeed done so. Check out this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/757530345842278421
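In case it helps, here is a minimal sketch of reading one of those files directly. It assumes the usual recording layout, where <topic>.pldata stores msgpack-encoded (topic, payload) pairs and <topic>_timestamps.npy stores the matching timestamps (Pupil's own file_methods.py does essentially this):
```python
import msgpack   # pip install msgpack
import numpy as np

def load_pldata(rec_dir, topic):
    """Yield (timestamp, datum) pairs for a topic such as 'gaze' or 'pupil'."""
    timestamps = np.load(f"{rec_dir}/{topic}_timestamps.npy")
    with open(f"{rec_dir}/{topic}.pldata", "rb") as f:
        unpacker = msgpack.Unpacker(f, raw=False, use_list=False)
        for ts, (_topic, payload) in zip(timestamps, unpacker):
            # Each payload is itself a msgpack-encoded dict
            yield ts, msgpack.unpackb(payload, raw=False)

# e.g. print the confidence of the first few gaze datums
for i, (ts, datum) in enumerate(load_pldata("path/to/recording", "gaze")):
    print(ts, datum.get("confidence"))
    if i >= 4:
        break
```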
Are you sure those timestamps were taken from the same clock? It's just that Pupil Capture runs its own clock, which is different from system/world time. Read more about that here: https://docs.pupil-labs.com/core/terminology/#timing
Thanks. I am converting Pupil time to system time. I am curious if that conversion is in UTC rather than local to the system where the data was recorded?
🤔 I'm not quite sure I understand the question. Could you elaborate on what you mean by 'conversion is in UTC and not local to the system'? For context, Pupil time is essentially arbitrary, as it has a random start point. System time is just the current datetime of the device that is running the Pupil Core software.
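If you are doing the conversion via the recording's metadata, the usual approach is a constant offset between the two clocks, plus an explicit choice of how to render the result; a constant 6 h difference usually points to UTC vs. local time rather than a clock problem. A rough sketch, assuming the info.player.json fields start_time_system_s and start_time_synced_s:
```python
import json
from datetime import datetime, timezone

# Offset between Pupil time (arbitrary start) and Unix epoch time,
# taken from the recording's metadata (field names assumed as above).
with open("path/to/recording/info.player.json") as f:
    info = json.load(f)
offset = info["start_time_system_s"] - info["start_time_synced_s"]

def pupil_to_datetime(pupil_ts, utc=False):
    unix_ts = pupil_ts + offset
    if utc:
        return datetime.fromtimestamp(unix_ts, tz=timezone.utc)
    return datetime.fromtimestamp(unix_ts)  # local timezone of this machine

# Compare both; a constant 6 h difference between these two outputs
# would indicate the clocks agree and only the timezone interpretation differs.
some_ts = info["start_time_synced_s"]
print(pupil_to_datetime(some_ts), pupil_to_datetime(some_ts, utc=True))
```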
Hi pupil-labs, I am trying to set up your core-xr and got a few things right. That is on Windows 11 Pro, Unity 6.0 xxx, VS 2022 Community, Pupil Capture 3.5.1. I can see the eye videos in some of the Unity demos and see the calibration method; however, the visuals are all pink. Though I like the colour... there might be just a little thing wrong in the settings.
Hi @user-a79410!
Quickly stepping in for @user-f43a29 here—could you let us know which rendering pipeline you are using?
It looks like the materials might be missing on the assets, which can happen if you're using URP or HDRP in Unity. You can resolve this by either:
- Switching to the built-in rendering pipeline, or
- Converting the assets to [email removed]
For further details, you may want to consult Unity's forums or documentation.
Hi @user-a79410, my first guess is that this could be due to the Unity version. Unity 6 is quite new, and a number of things changed in that release. Have you tried using a version closer to Unity 2018, such as the Unity 2021 LTS that is available in Unity Hub?
And I can see the VR scene well in the Vive Pro HMD. The second thing: I need to record the gaze data for post-hoc analyses. We are after the estimated rotation centre of the eye from the 3D model; with that, we might be able to map the presented stimuli to retinal locations. Would you know if this is already available to some degree somewhere? Many thanks in advance.
Yes, this data is provided in the pupil_positions.csv file. You might be interested in the circle_3d_normal_x/y/z and theta/phi variables.
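If you work from a Player export, pulling those out could look roughly like this (a sketch assuming the standard pupil_positions.csv column names):
```python
import pandas as pd

# Read the Player export (path assumed; adjust to your export folder)
df = pd.read_csv("exports/000/pupil_positions.csv")
df = df[df["confidence"] > 0.8]  # drop low-confidence samples

# 3D eye-model outputs
normals = df[["circle_3d_normal_x", "circle_3d_normal_y", "circle_3d_normal_z"]]
angles = df[["theta", "phi"]]  # spherical angles of the modelled optical axis

# If present in your export, sphere_center_x/y/z is the fitted eyeball
# centre (in eye-camera coordinates), which may be what you mean by the
# eye's rotation centre.
eye_centre = df[["sphere_center_x", "sphere_center_y", "sphere_center_z"]]
```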
Hello, we’re using “Core” in connection with a VR project. Our target distance is up to 2000 mm, and we are checking the gaze distance through the Core open-source code (using self.last_gaze_distance in gazer_headset.py).
Previously, the hard-coded depth was 500 mm, which meant we couldn’t confirm distances up to 2000 mm. After modifying the hard-coded depth to 2000 mm and performing calibration at around 2000 mm, we were able to see more meaningful results than before.
Is there a method or approach we could try that would enable us to measure longer distances (around 2000 mm) more effectively, as we did in this case?
Hi @user-fce73e! I've moved your message to this channel! I think it would be helpful if you could elaborate on a few things such that we can try to offer constructive feedback. Firstly, can you expand on 'using Core in connection with a VR project' - do you have the standalone Pupil Core headset, or are you using Core cameras mounted into a HMD? Secondly, what is your goal by using gaze distance - are you trying to measure viewing depth based solely on gaze data?
Hi, I'm Kentaro, and I'm new here. Our lab is working on data visualization in AR. I'm looking for advice on which Pupil Labs product would be best for our setup.
Our goal is to use a Pupil Series device (Neon or Core) to capture world camera and eye gaze data. We then want to process both data streams (world camera + gaze) and overlay visuals in the user's field of view in real-time.
Our lab currently has XReal Air, Quest 3, and Vision Pro. Given this setup, which Pupil Labs device would be the best choice?
Right now, we’re considering purchasing Pupil Core and using it with XReal Air, but we’re unsure if this combination will work effectively or if there’s a better approach. Any recommendations would be greatly appreciated!
Hi @user-a5dd96, you'll have an overall smoother experience with our latest eye tracker, Neon. It is calibration-free & modular, making it a great fit for AR and real-world research. It also has an integrated 9-DoF IMU, which can be helpful in such contexts.
With Pupil Core, you have to re-calibrate it whenever it slips, and using it with third-party glasses, while possible, can make the whole process trickier. In comparison, Neon is designed to be mobile & mountable, which leads to a more comfortable experience when wearing it in an AR setup and walking around.
When it comes to using Neon in an XR context, we already offer a mount for the Quest 3 headset (scroll down to the options), as well as a Unity integration. We have also open-sourced the geometry of the module, making it possible to build custom mounts for other devices.
If you'd like, we can demonstrate Neon + Pupil Core in a Demo call, where we can also answer any other questions in more detail. Feel free to book an appointment at your convenience here.
We are using the standalone Pupil Core headset (the glasses-type model). We are researching VR-optimized displays and lenses, and we’d like to use the human gaze distance (depth) purely as a reference.
For that reason, we want to confirm whether we can measure a person’s gaze distance up to about 2000 mm. Also, we are using only the gaze data from the Pupil Core algorithm to measure depth.
So if I understand correctly, you wear the Core headset whilst gazing at a physical screen/display mounted at some distance from the subject. And you're using the hmd-eyes repo to calibrate in the context of a VR scene?
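In the meantime, if you want to sanity-check what the pipeline reports as viewing depth, you could inspect the exported gaze data directly. This is only a sketch and assumes the standard gaze_positions.csv export, where gaze_point_3d_x/y/z is the triangulated 3D gaze point in mm, in scene-camera coordinates:
```python
import numpy as np
import pandas as pd

# Load the exported gaze data (path assumed; adjust to your export folder)
gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze = gaze[gaze["confidence"] > 0.8]  # drop low-confidence samples

# Viewing distance = norm of the 3D gaze point (origin at the scene camera)
points = gaze[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()
distance_mm = np.linalg.norm(points, axis=1)

print(f"median gaze distance: {np.median(distance_mm):.0f} mm")
```
Keep in mind that the 3D gaze point is derived from the vergence of the two eye rays, which becomes increasingly noisy at larger viewing distances, so depth estimates around 2000 mm should be interpreted with caution.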
Hi, I'm Michael. Our lab is using the "HTC Vive Binocular Add-on" eye tracker to collect pupillometry data, and we suspect a hardware issue that may require re-wiring or similar service. Should I share more details here, or in a direct conversation with someone? Many thanks in advance for your help!
Hi @user-5bd924 - I see you've opened a support ticket. We will continue there!