🥽 core-xr


user-13fa38 02 November, 2023, 17:54:54

Hello pupil labs, our lab is in the middle of a transition right now and we want to know whether the AR/VR add-on (used for the Vive Cosmos) can be used together with Matlab, where we set up the eye trackers stand-alone (without the headset) and the subject's head is fixed, looking through the eye trackers. If so, is there an API for the eye trackers to work with Matlab?

nmt 03 November, 2023, 10:02:17

Hi @user-13fa38! Technically it's possible. Do you already have the VR Add-on?

user-c32d43 08 November, 2023, 01:50:59

Heya! Thanks for answering my emails, it has been informative and helpful with the drifts.

Out of curiosity, is it possible to run the package in a 2023 version of Unity? I understand that 2018 is the recommended version; I just wanted to see if others have done it using 2022.

user-d7ac5c 09 November, 2023, 11:57:14

Hi all, does somebody know if the Vive VR add-on is also usable with the Vive XR Elite?

user-480f4c 09 November, 2023, 17:02:39

Hi @user-d7ac5c 👋🏽 ! Our add-on is only compatible with the HTC Vive, Vive Pro, and Vive Cosmos. If you're interested in using our hardware with a different VR headset, please note that it's not compatible right out of the box. However, we can offer a solution by providing you with the necessary cameras and cabling for custom prototyping. In this case, you would need to develop mounts that fit the geometry and constraints of your VR headset. If you are interested, please reach out to [email removed]

user-fa0f94 10 November, 2023, 01:46:02

Hello! I use the hmd-eyes eye tracker with an HTC Vive (https://github.com/pupil-labs/hmd-eyes). I created a car sim for my research in Unity3D. Everything works well, but when I build the project and run the build file, Pupil Capture doesn't connect with the simulator :c The first screenshot is the build, the second one is playing in the Unity editor. Do you have any ideas why the release build doesn't connect to Pupil Capture but the editor does? I can still do my research without building the game, but I think the build could have better performance.

Chat image Chat image

user-e16e05 10 November, 2023, 08:33:36

Hi, I'm using the Pupil Labs add-on in my VR experiment, and most of the time I run the calibration I get the message "An unexpectedly large amount of pupil data (> 20%) was dismissed due to low confidence. Please check the pupil detection", even when pupil detection seems OK. What can I do to improve it? Also, I noticed one eye image is not as in focus as the other, but it is really difficult to find the right adjustment. Thanks!

nmt 11 November, 2023, 18:16:18

What sort of Pupil detection confidence do you see when performing the calibration?

user-e16e05 13 November, 2023, 14:42:00

Here I attach an HMD 3D calibration recording example where I get the error. It is really hard to get a high-confidence detection for eye0.

nmt 14 November, 2023, 12:06:32

🤔 That eye image seems pretty blurry. Can you double-check the camera is free from dust particles or grease?

user-e16e05 15 November, 2023, 11:42:19

Thank you for the reply. How do I clean the cameras?

user-c32d43 14 November, 2023, 13:28:35

Hi, out of curiosity: I noticed that the VR add-on tends to have issues with random sudden drifts (even when following the steps) and needs its parameters changed for different people, but with other products (like the Invisible) the parameters seem to just work fine with minimal drift. Both are calibrated. Is this normal?

user-d407c1 14 November, 2023, 13:40:26

Hi @user-c32d43 ! Pupil Invisible / Neon employ a different approach to eye tracking than Pupil Core or the VR Add-ons.

Pupil Core and the VR Add-ons perform more traditional eye tracking, where you need to calibrate, and the calibration deteriorates as the position of the eye cameras changes, so they are more likely to be affected by, for example, slippage or drift.

On the other hand, Pupil Invisible & Neon use a deep neural network to estimate where you are looking. This means they do not rely on a calibration and are therefore more robust.

If you would like to know more about it, I encourage you to look at our whitepaper:

https://arxiv.org/pdf/2009.00508.pdf

user-c32d43 14 November, 2023, 13:41:45

Awesome, thank you!

user-c32d43 14 November, 2023, 13:45:55

Would a DNN be implemented in Pupil Core in the future, or is it down to some specifics of the products? Sorry, I'm not very familiar with the Invisible or Neon, but I see my lab partners use a mobile device.

user-d407c1 14 November, 2023, 13:50:39

Neon and Invisible do connect to a phone, and yes, the DNNs are specific to those products. Nothing to be sorry about; here you can find high-level descriptions of each product and their underlying technologies: https://pupil-labs.com/products

user-3b1221 15 November, 2023, 07:55:54

Can I integrate this product into a UE4-based driving simulation environment?

user-cdcab0 15 November, 2023, 12:02:23

Hi, @user-3b1221 - it's certainly possible, but it would likely require you to be comfortable coding in C++ for UE4. The Network API plugin for Core makes data available via ZMQ, and a quick internet search shows that there are some open-source ZMQ plugins for UE4.
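
For reference, the protocol itself is small: you ask Pupil Remote for the subscription port over a REQ socket, then read msgpack-encoded messages from a SUB socket. A minimal Python sketch of those steps (the same flow would apply from C++ with a ZMQ binding), assuming Pupil Capture is running locally with Pupil Remote on its default port 50020:

import zmq
import msgpack

ctx = zmq.Context()
ip, remote_port = '127.0.0.1', 50020  # default Pupil Remote address

# ask Pupil Remote for the port that publishes data
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect(f'tcp://{ip}:{remote_port}')
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()

# subscribe to gaze data and decode the msgpack payloads
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')
subscriber.subscribe('gaze.')

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload)
    print(topic.decode(), gaze['norm_pos'], gaze['confidence'])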

user-3b1221 15 November, 2023, 15:14:47

@user-cdcab0 Excuse me, thank you for the reply. I have some questions. I plan to do some research on the driving process. To be specific, we plan to receive data from the eye tracker during our simulation. How do I get it using this device? Should I do secondary development in C++ or just use the Python API?

user-cdcab0 15 November, 2023, 21:29:39

UE doesn't have official Python support. I do remember seeing some 3rd party/community efforts in that vein some years ago, but I don't think any of them were that mature. Maybe that's changed? Personally, I'd probably go the C++ route, but if you're more comfortable in Python and/or haven't done C++ in UE4 before (it can be a little challenging to learn even for an experienced C++ dev), then it may be worth looking at what Python libs still exist for UE4

user-3b1221 16 November, 2023, 01:31:33

@user-cdcab0 Thanks. I would like to know the workflow of eye tracking. For instance, if I want to collect eye movement data on a large computer screen, is there a calibration process involved? Furthermore, can I project my simulation environment onto the big screen and then confine the eye tracking within the screen's bounds, correlating the coordinates of eye movement readings with the simulated environment? Or, as you mentioned, should I undertake secondary development to achieve corresponding results between the coordinates and the simulation?

user-cdcab0 16 November, 2023, 09:32:47

Aside from the normal calibration process (https://docs.pupil-labs.com/core/software/pupil-capture/#calibration), you'll need to enable the Surface Tracker plugin (https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking), affix AprilTag markers around or on the display (or render them on the display), and define a surface with those markers. Then you can receive gaze coordinates in surface space

user-78b456 19 November, 2023, 11:01:38

Hi, I'm also trying to figure out how to extract the gaze position on a surface using Python, but I'm struggling. I'm able to use the Network API (zmq and msgpack) to get frames of eye data, but I'm lost trying to figure out how to get information about where I'm looking on a surface with AprilTags.

user-78b456 19 November, 2023, 11:52:09

Update: I was able to figure out that by using subscribe('surfaces.') I'm now able to see the fixation datum; now I'm trying to figure out how to access one of the datum's parts...

# ...continued from above
# Assumes `ctx`, `ip`, and `sub_port` are set as in the Network API example above
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')
subscriber.subscribe('surfaces.')  # receive all surface-mapped gaze/fixation messages

user-78b456 19 November, 2023, 12:35:02

I just can't figure out how to ONLY read the topic gaze_on_surfaces, or I guess the topic 'gaze.3d.1._on_surface', so that I can extract ONLY the norm_pos from this {topic}: {message} pair.

user-78b456 19 November, 2023, 12:35:46

Like, is it a list? A long string? A dictionary?

user-78b456 19 November, 2023, 12:36:24

I can't do something like message['gaze_on_surfaces'].

nmt 19 November, 2023, 12:52:03

This example script shows how to receive surface mapped gaze in real time: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py
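
To the "what is it?" question above: each surfaces.<name> payload decodes (via msgpack) into a dictionary, and its gaze_on_surfaces key holds a list of gaze datums mapped onto that surface. A condensed sketch of what the linked helper script does, assuming the subscriber socket from the snippet above and a hypothetical surface named my_screen:

import msgpack

surface_name = 'my_screen'  # hypothetical: the name given to the surface in Pupil Capture

while True:
    topic, payload = subscriber.recv_multipart()
    message = msgpack.loads(payload)          # a dictionary, not a string
    if message.get('name') != surface_name:
        continue
    for gaze in message['gaze_on_surfaces']:  # list of surface-mapped gaze datums
        if gaze['on_surf'] and gaze['confidence'] > 0.8:
            x, y = gaze['norm_pos']           # normalized surface coordinates
            print(x, y)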

user-78b456 19 November, 2023, 13:09:57

Yep, okay, this is exactly what I need. Thanks!

user-e16e05 22 November, 2023, 11:12:59

Hello, I noticed that the timestamps of my eye tracker data have duplicated values (some timestamps occur more than once). Why?

user-d407c1 22 November, 2023, 11:32:43

Hi @user-e16e05 ! Most probably what you are seeing is the output of both the 2D and 3D detectors. Is that the case? What happens if you filter by method?

Have a look here at how you can filter by method using Python: https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb
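
In case it is useful, a minimal pandas sketch of that filtering step, assuming a pupil_positions.csv exported from Pupil Player (the exact method strings vary between Pupil versions, so check them first):

import pandas as pd

pupil = pd.read_csv('pupil_positions.csv')
print(pupil['method'].unique())                       # typically one 2D and one 3D method string
pupil_3d = pupil[pupil['method'].str.contains('3d')]  # keep only one detector's datums
print(len(pupil), len(pupil_3d))                      # roughly halves if both detectors were running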

user-d407c1 23 November, 2023, 14:06:47

Hi @user-c39646 ! Since your query is about the HTC Vive Pro Add-on (https://pupil-labs.com/products/vr-ar), I moved it here.

The key specs are CPU and RAM. We suggest a recent generation Intel i7 CPU with 16GB of RAM. It can be the same computer where you are running your VR experiment, but if your VR application requires a lot of resources, you can also use a different one.

user-11b6e8 28 November, 2023, 16:12:38

Hello all! I was wondering, in the examples, is there an implementation for detecting blinks in real-time? Thank you!

user-d407c1 28 November, 2023, 16:24:19

Hi @user-11b6e8 ! If using Core or the VR Add-ons, kindly note that the blink detector publishes blink messages like this:

https://docs.pupil-labs.com/core/developer/network-api/#blink-messages
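
For reference outside of Unity, a minimal Python sketch of listening for those messages, assuming a SUB socket set up as in the earlier Network API sketch and the Blink Detector plugin enabled in Pupil Capture:

import msgpack

subscriber.subscribe('blinks')  # blink messages are published on the 'blinks' topic

while True:
    topic, payload = subscriber.recv_multipart()
    blink = msgpack.loads(payload)
    # each blink is reported as an onset event followed by an offset event
    print(blink['type'], blink['timestamp'], blink['confidence'])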

user-11b6e8 28 November, 2023, 16:26:10

Do you perhaps have an example for detecting the message in a Unity application, or an example from another message type to use as a reference?

user-d407c1 28 November, 2023, 16:39:07

In hmd-eyes, you may want to have a look at https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/Subscription.cs

user-e16e05 29 November, 2023, 10:06:09

Hello, I'm using the VR add-on and I would like to merge the data saved from Unity at 90 Hz with the gaze data at 120 Hz. Since the add-on cameras run asynchronously, how should I proceed? I also noticed that the confidence value recorded at 90 Hz does not always match the confidence value saved at 120 Hz, so when I want to analyze eye movements, which of the two should I use to clean the data? Thank you!

user-f3bc0e 30 November, 2023, 02:26:34

Hi, I am fairly new to the device. My experiment setup requires me to use the HTC front camera and the head-mounted display. I wonder if it is possible to calibrate using the HTC front camera as the world view in Pupil Capture. An older post seemed to suggest this is impossible: "The Pupil Capture app will not pick up the Vive camera" (https://groups.google.com/g/pupil-discuss/c/8X9lArs2c_I?pli=1). If it is not possible to calibrate using Pupil Capture, is there any other way to calibrate using the existing API? Sorry for the long question, and thanks for your attention.

nmt 30 November, 2023, 04:06:58

You'll want to check out the getting started section in our hmd-eyes repository, which contains the building blocks to implement eye tracking in Unity3D: https://github.com/pupil-labs/hmd-eyes/tree/master#vr-getting-started

nmt 30 November, 2023, 04:05:23

Actually, we would recommend that you use the screencast feature and save everything natively in Pupil Capture. You can read about that here: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#screencast

user-e16e05 06 December, 2023, 16:54:38

Thank you for the answer. Since I have eye data recorded without the screencast feature, are there any suggestions on how to merge the data recorded at 90 Hz with the data recorded at 120 Hz?
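
For reference, one common way to combine streams with different rates is nearest-timestamp matching, assuming both streams are expressed on the same clock (for example, Unity timestamps converted to Pupil time via the hmd-eyes time synchronization). A hedged pandas sketch with hypothetical file and column names:

import pandas as pd

# hypothetical exports: Unity variables logged at ~90 Hz, gaze exported at ~120 Hz,
# both carrying timestamps on the same clock (column names may differ in your data)
unity = pd.read_csv('unity_log.csv').sort_values('timestamp')
gaze = pd.read_csv('gaze_positions.csv').sort_values('gaze_timestamp')

# for each Unity sample, take the nearest gaze datum within 10 ms
merged = pd.merge_asof(
    unity,
    gaze,
    left_on='timestamp',
    right_on='gaze_timestamp',
    direction='nearest',
    tolerance=0.010,
)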

End of November archive