πŸ₯½ core-xr


March 2025

user-a1487f 05 March, 2025, 00:19:01

I'm running into an issue working with the PL HL version. The eye camera connectors broke, so I had to swap them out and re-solder the wires. Unfortunately, my left eye camera will still not connect. Can I get the wiring information so that I can ensure I soldered the correct wires together for the left eye camera?

user-f43a29 05 March, 2025, 11:08:57

Hi @user-a1487f, while I am unable to share the wiring information or details on how to re-solder the components, you can send an email to info@pupil-labs.com with your original Order ID. The Operations team will then be in touch to coordinate a repair or replacement for you.

user-52c248 10 March, 2025, 06:32:08

Hi, I'm trying to repurpose the HTC Vive add-on eye tracker for standalone use (without the headset) in my research. I need to get the gaze data into a Python script for my application. Currently, I am able to see the pupil detection within Pupil Capture. However, when I try to use the Realtime API in Python, no device is detected. Moreover, when I use the Network API with ZMQ to listen for pupil information, I do not receive any data. Please suggest what I can try next.

nmt 10 March, 2025, 11:29:07

Hello, @user-52c248! If you can see pupil detection in Capture, then the data are available via the real-time API. What you're describing sounds more like a network issue. Which script/example from the real-time API did you try? Can you share a link?

user-52c248 10 March, 2025, 18:33:39

Thank you for your reply! I tried https://pupil-labs-realtime-api.readthedocs.io/en/latest/examples/simple.html#find-one-or-more-devices to discover devices and got None and []. What can I try next?

nmt 11 March, 2025, 02:20:27

You're trying to use the wrong real-time API. That one is for Neon. This is the one you'll need to use with Core: https://docs.pupil-labs.com/core/developer/network-api/#network-api

user-52c248 11 March, 2025, 07:26:49

Thank you for your help @nmt . I tried the code here https://docs.pupil-labs.com/core/developer/network-api/#reading-from-the-ipc-backbone and found that, on subscribing to the gaze. topic, my program gets stuck when the subscriber tries to receive information with subscriber.recv_multipart(). I am able to see the two eye cameras in Pupil Capture at the same time; however, my world camera is off, since I need to use the HTC Vive add-on without the headset for my task. I am able to receive the PUB and SUB ports, which means that the connection to pupil_remote is working. What can I try next?

nmt 11 March, 2025, 10:23:06

You won't be able to subscribe to gaze without calibrating. Since you have no scene camera, calibrating won't be possible. In the context of VR, the Unity world cam essentially replaces the physical world camera. You could subscribe to pupil data though. This example script shows how: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
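For reference, here's a minimal sketch of that pattern (adapted from the filter_messages.py idea; it assumes Capture is running on the same machine with Pupil Remote on its default port 50020):

```python
import zmq
import msgpack

# Connect to Pupil Remote (REQ socket; default port 50020).
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask Pupil Remote for the SUB port of the IPC backbone.
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to pupil data only. The "pupil." prefix matches datums from
# both eye cameras (pupil.0..., pupil.1...), so no calibration is needed.
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.unpackb(payload, raw=False)
    print(f"{topic.decode()}: confidence={datum['confidence']:.2f}")
```

Since this never subscribes to gaze., recv_multipart() won't block the way it did before: pupil data flows as soon as the eye cameras are running.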

user-52c248 11 March, 2025, 22:10:36

Thanks, I am able to receive pupil data now! Is there some way I can calibrate and receive gaze data without a world camera? My application has fixed points on the rim of my glasses that I need to look towards. Is there some way to run this calibration script through Python to guide the user through these points?

nmt 12 March, 2025, 02:11:23

Our calibration routines all essentially map pupil data to scene camera coordinates. The scene camera is a prerequisite for several reasons. But it might be possible to do something with your points on the rim. What is your overall goal with this?

user-52c248 12 March, 2025, 03:32:14

I'm trying to place tiny displays on the rim of my glasses, and I want to track which display the user is looking at. What I had in mind was to initially activate the displays in a sequence, with the user looking at one display at a time, for gaze calibration, and then perform gaze tracking. Is there some workaround to achieve this? I can trigger the displays individually in code, so perhaps mapping the pupil data to each display event?

nmt 12 March, 2025, 03:45:19

Yes, I think that would be the first thing to try. If you know the participant is gazing at a point on the rim at a given time, you can establish a known relationship between the pupil data and that point on the rim. The main caveat is that this approach won't be robust to headset slippage, i.e. relative movement of the headset on the wearer, since that would break the physical relation.
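As a rough illustration of that idea (nothing official; the display IDs, sample collection, and nearest-centroid rule below are all hypothetical), you could average the pupil position recorded while each display is lit, then classify later samples by the closest centroid:

```python
import numpy as np

def build_centroids(calibration_samples):
    """calibration_samples maps display_id -> list of (x, y) pupil
    positions (e.g. the norm_pos field of each pupil datum) recorded
    while that display was the only one lit."""
    return {display_id: np.mean(samples, axis=0)
            for display_id, samples in calibration_samples.items()}

def classify(pupil_xy, centroids):
    """Return the display whose calibration centroid is nearest to the
    current pupil position (simple nearest-neighbor rule)."""
    point = np.asarray(pupil_xy)
    return min(centroids,
               key=lambda d: np.linalg.norm(point - centroids[d]))
```

Feeding it only high-confidence samples, and re-running the lighting sequence now and then, would help with noise and with the slippage caveat above.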

user-52c248 12 March, 2025, 06:49:27

Thank you so much @nmt I managed to get my device running with your support!

nmt 12 March, 2025, 07:54:18

Great to hear!

user-185d95 25 March, 2025, 21:37:44

Hello! I am now experiencing issues with the Pupil Player software and/or the old HMD-eyes for HTC goggles. I exported the fixations from an experiment that was in VR but was in fact free viewing in 2D: people were calibrated at the beginning, then they looked at pictures on a virtual screen. Normally in such a situation I would expect 2-5 fixations per second, but I got about 1, or sometimes none at all. When I looked closer at the data, I saw long breaks between detected fixations, even a few seconds, which is not possible in this paradigm. Probably something is wrong with my export settings, which are as below (example from the fixation report file):

fixation classifier Dispersion_Duration
max_dispersion 1.510 deg
min_duration 100 ms
max_duration 1000 ms
fixation_count 389

So... is there anything I can do to improve the quality of the exported fixations? I already tried different settings (2.11 deg instead of 1.510), which improved the fixation detection, but not enough. Maybe there are some best practices regarding the settings?

user-d407c1 26 March, 2025, 07:58:06

Hi @user-185d95! Thanks for sharing the details; let's break this down.

The Pupil Core fixation detector works the same with HMD-Eyes as without it. The 3D/2D viewing setup shouldn't affect fixation detection itself, and free-viewing images on a virtual screen should still yield reasonable fixation counts.

A few things to check:

  • Pupil Confidence: Firstly, what confidence values are you observing in your data? Low confidence could lead to missing fixations, especially if there are drops or noisy signals.
  • Fixation Duration Expectations: In your paradigm, are you expecting longer fixations per image? If so, you may want to increase the max duration filter.

If increasing the dispersion threshold improved detection somewhat, but not enough, you might also try:
- Adjusting min_duration: lowering it slightly (e.g., to 80 ms) can sometimes help capture quicker fixations without over-detecting.
- Reviewing the confidence filter in your export: sometimes useful data is dropped due to a strict confidence threshold.

If you can share an example confidence trace or recording snippet, we can dig into this further.
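If you want a quick first look yourself, a sketch like this (assuming the standard Pupil Player export layout, i.e. a pupil_positions.csv with pupil_timestamp and confidence columns; the 0.6 cutoff and 0.5 s gap length are arbitrary choices) will surface the low-confidence stretches that suppress fixation detection:

```python
import pandas as pd

# Load the exported pupil data (adjust the path to your export folder).
df = pd.read_csv("exports/000/pupil_positions.csv")
print(df["confidence"].describe())

# Find contiguous low-confidence stretches; the fixation detector skips
# these, and they show up as "gaps" between detected fixations.
low = df["confidence"] < 0.6
df["segment"] = (low != low.shift()).cumsum()
for _, seg in df[low].groupby("segment"):
    duration = seg["pupil_timestamp"].iloc[-1] - seg["pupil_timestamp"].iloc[0]
    if duration > 0.5:  # report dropouts longer than 0.5 s
        start = seg["pupil_timestamp"].iloc[0]
        print(f"low-confidence stretch of {duration:.2f} s at t={start:.2f}")
```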

user-185d95 29 March, 2025, 10:10:26

Hello! Many thanks for your input, and my apologies for the delayed answer; I came to the lab today to do some experiments with the data export. As for the confidence: I generally observe high confidence, over .80 or even .90 in many cases, and I haven't filtered any data based on confidence in further analysis. Yet I still observe undetected fixations. I tried changing the time limits to 80 ms and 1500 ms; this increased the number of fixations detected, but at the same time led to artifacts: one fixation detected as two. I am also sharing the exports (fixation and world view) with two different thresholds. Many thanks one more time!

Chat image

nmt 02 April, 2025, 02:34:17

Hi @user-185d95! Can you please share the full recording for us to provide concrete feedback?

user-185d95 29 March, 2025, 10:16:51

As the file is too big to share, I am sending a link to my Google Drive: https://drive.google.com/file/d/1VLCwTYcAo3QgWLi_6vVqbloE6k8bha6U/view?usp=sharing

End of March archive