vr-ar


user-2b9d78 04 November, 2022, 16:06:33

Hello dear Pupil Labs members, could you please tell me whether there are settings to tweak so that gaze tracking is equally precise with and without glasses? When I'm wearing my glasses while using the eye tracker, the gaze visualizer seems completely lost...

papr 04 November, 2022, 18:32:19

Check the eye video. Your glasses might be obstructing the pupil

user-2b9d78 07 November, 2022, 08:29:40

@papr there are some reflections of the lights on my glasses. Do you think it could be caused by the blue light filter on them?

Chat image Chat image

papr 07 November, 2022, 08:34:07

The blue light filter is not an issue, as we record the eyes in the infrared spectrum.

papr 07 November, 2022, 08:32:38

With glasses, the image seems darker. You might want to increase the exposure time manually in the video source settings.

papr 07 November, 2022, 08:34:43

The built-in auto-exposure works by looking at the average image illumination. The bright reflections from the glasses cause the auto-exposure to lower the exposure more than it should.
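
For intuition, here is a deliberately naive sketch of an average-illumination controller. This is not Pupil Capture's actual implementation; `target_mean` and `gain` are made-up parameters:

```python
import numpy as np

def auto_exposure_step(gray_frame: np.ndarray, exposure_us: float,
                       target_mean: float = 128.0, gain: float = 0.1) -> float:
    """Nudge the exposure so the mean pixel value approaches target_mean."""
    error = target_mean - gray_frame.mean()
    # A few bright specular reflections (e.g. from glasses) inflate the mean,
    # producing a negative error that drives the exposure down and darkens
    # the pupil region more than it should.
    return max(1.0, exposure_us * (1.0 + gain * error / target_mean))
```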

user-2b9d78 07 November, 2022, 08:35:28

Is there one in Pupil Service?

papr 07 November, 2022, 08:35:41

Yes, check the eye window's menu icons on the right.

user-2b9d78 07 November, 2022, 08:36:20

Ah ok, is it "absolute exposure time"?

user-2b9d78 07 November, 2022, 08:43:39

The auto-exposure seems to solve the glasses problem, thanks!

papr 07 November, 2022, 08:44:41

Just to confirm: Did you turn auto exposure on or off?

user-2b9d78 07 November, 2022, 08:49:39

it was off by default so I turned it on

user-525b2f 15 November, 2022, 15:36:45

Hi,

1. What does the "EYE1: Container was closed already!" warning in Pupil Capture actually mean?

2. Is there a documentation page where all Pupil Capture warning and error types are explained?

papr 15 November, 2022, 15:38:43

1. It seems like the software is attempting to save/record a video frame even though the video file (container) was closed/finished already.

2. Unfortunately, no

user-525b2f 15 November, 2022, 15:40:11

Thank you for your reply 🙂

user-29faf0 16 November, 2022, 18:00:35

Hello team,

I'm a researcher looking to purchase the VR/AR add-on for doing pupil tracking and gaze estimation. I wanted some clarification regarding the VR/AR add-on before I purchased it.

Does the VR/AR add-on have the same functionality as Core? More specifically, I want to know if I can use the VR/AR add-on without an HTC Vive, Vive Pro, or Vive Cosmos but still be able to use the same Network API, Pupil Capture, Pupil Player, and Pupil Service and obtain pupil detection information.

Do let me know if I should divert this question to a more relevant channel.

Thanks!

papr 16 November, 2022, 18:22:33

You get the same features 🙂
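
For reference, the add-on is driven entirely by Pupil Capture/Service, so the standard Network API works as usual, headset or not. A minimal subscriber sketch, assuming Pupil Remote on its default port 50020:

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the subscription port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to pupil detection data for both eyes
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.unpackb(payload)
    print(topic.decode(), datum["confidence"])
```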

user-29faf0 16 November, 2022, 18:22:50

Great, thanks!

user-b73350 18 November, 2022, 15:21:50

Hello, I am a researcher who has a few questions about your 3D gaze processing pipeline. I see that the gaze vectors (gaze_normal0_x, gaze_normal0_y, gaze_normal0_z) are already in the world coordinate system but I cannot find any information on how you do the transform to get them there. Do you apply any head corrections (via head position or head rotation) in order to get the normalized gaze vectors? Also, since everything is open source is there a way you can point me to the webpage that has the code that computes the gaze vectors?

papr 18 November, 2022, 15:26:22

Please note that these are in "Pupil Capture" world coordinates (a coordinate system that is fixed to the subject's head, i.e. moves together with any head movements), not Unity world coordinates. See https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#Map-Gaze-Data-to-VR-World-Space
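
In other words, to express a gaze direction in Unity world space you rotate it by the headset's current orientation (hmd-eyes does this via Transform.TransformDirection). A rough numpy/scipy sketch of the same idea, ignoring the axis/handedness conventions that hmd-eyes also handles, with a made-up head pose:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical headset orientation from your VR runtime (quaternion x, y, z, w)
head_rotation = Rotation.from_quat([0.0, 0.383, 0.0, 0.924])  # ~45 deg yaw

# gaze_normal0 from Pupil Capture: head-fixed scene-camera coordinates
gaze_dir_local = np.array([0.1, -0.05, 0.99])
gaze_dir_local /= np.linalg.norm(gaze_dir_local)

# Directions transform by rotation only; head position does not matter
gaze_dir_world = head_rotation.apply(gaze_dir_local)
```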

user-b73350 18 November, 2022, 15:50:22

Thanks for your help and for sending that along. Is there a link to the source code for the TransformDirection function? I just want to make sure I'm understanding what is needed to do the proper transformation from the pupil capture world coordinates to the Unity world coordinate system.

papr 18 November, 2022, 15:51:10

This is a Unity function: https://docs.unity3d.com/ScriptReference/Transform.TransformDirection.html

user-b73350 18 November, 2022, 16:06:43

Thank you so much. So if I transform two gaze vectors (binocular) from the local coordinate space to world coordinates, will the vergence angle between those two vectors change due to the transformation?

Second, I've seen in the literature that people will typically transform the eye-in-head gaze vector into the gaze-in-world vector using a head rotation quaternion. Does this simply map the gaze vector into world coordinates so that the point of regard can be computed? And would this gaze-in-world transformation change the vergence angle relative to the eye-in-head vectors?

papr 18 November, 2022, 16:17:23

@user-3cff0d could you help out with this question?

papr 18 November, 2022, 16:12:39

This operation is not affected by scale or position of the transform. If you calculate vergence as the angle between the directions, the result should be the same.
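
A quick sketch of why: applying the same rotation to both gaze directions leaves the angle between them unchanged (the vectors and rotation below are made up):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def angle_deg(a, b):
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0)))

left = np.array([0.05, 0.0, 1.0])    # eye-in-head gaze directions
right = np.array([-0.05, 0.0, 1.0])

head = Rotation.from_euler("xyz", [10, 35, 5], degrees=True)  # arbitrary head rotation

print(angle_deg(left, right))                          # vergence before rotation
print(angle_deg(head.apply(left), head.apply(right)))  # same value after
```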

papr 18 November, 2022, 16:15:09

I am not sure how one would transform points between the two coordinate systems. Please consider the Unity docs in this case.

user-3cff0d 18 November, 2022, 16:39:11

by "world coordinates" do you mean world camera-relative coordinates? Or actual unity coordinates where the origin is some arbitrary position in the 3d environment?

user-b73350 21 November, 2022, 17:44:39

@user-3cff0d I think I mean the Unity coordinates. Basically, my goal is to find the transformation between the gaze vector and the world coordinate system (whose origin is where the headset is turned on); I will eventually want to map the gaze onto the world. I've seen various ways to do this in the literature (e.g., transforming the eye-in-head gaze vector into a gaze-in-world vector with a head rotation quaternion), but it is unclear whether this is just a measure to control for rVOR, or whether it will map the gaze vector onto the world coordinate system to compute the point of regard while maintaining the proper vergence angle between the binocular gaze vectors.

user-af4a66 18 November, 2022, 19:27:39

Hi, I am currently developing a simulation environment in OpenVR (or SteamVR) with the Vulkan API. I wish to feed the rendered VR world view into Pupil Labs' Capture module and I am wondering what would be the easiest way to do so. I see there is some sort of ZMQ HMD streaming implementation here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/hmd_streaming.py... My current idea is to copy a frame buffer, with raw RGBA values, from the GPU to shared memory in RAM, to then be consumed by the Pupil software. But I am wondering if there is an alternative or an existing solution that would achieve my goals.

papr 18 November, 2022, 21:16:34

Sending it via the Network API would be the easiest solution.
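
Something along these lines: request the PUB port from Pupil Remote, then publish frames on the `hmd_streaming.world` topic. Treat this as a sketch only; verify the exact payload keys (width, height, index, timestamp, format, and the raw pixel buffer as an extra multipart frame) against hmd_streaming.py, and ideally use Pupil's own clock rather than time.time() for timestamps:

```python
import time
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()

# Ask Pupil Remote for the publisher port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()

publisher = ctx.socket(zmq.PUB)
publisher.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the subscription time to register before sending

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for your converted Vulkan frame

for index in range(1000):
    payload = {
        "topic": "hmd_streaming.world",
        "width": frame.shape[1],
        "height": frame.shape[0],
        "index": index,
        "timestamp": time.time(),
        "format": "bgr",
    }
    publisher.send_string(payload["topic"], flags=zmq.SNDMORE)
    publisher.send(msgpack.packb(payload, use_bin_type=True), flags=zmq.SNDMORE)
    publisher.send(frame.tobytes())  # raw pixels travel as an extra multipart frame
```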

user-4dc2c6 18 November, 2022, 20:04:53

Does anyone know the dimensions of the VR set? Like height, length, and width.

user-29faf0 22 November, 2022, 07:34:11

Hello team,

I've purchased the HTC Vive VR/AR add-on; however, I don't intend to use it with the HTC Vive. I'm planning on designing a new VR headset to be fitted with the VR/AR add-on. Is there a SolidWorks (CAD) model of the VR/AR add-on for the HTC Vive available that I could use to design a headset?

user-c2d375 22 November, 2022, 09:10:57

Hi @user-29faf0 👋 If you would like to design your own VR headset, we can offer a de-cased version of our Add-on with cameras and cable tree. If you're interested, please send an email to sales@pupil-labs.com

user-29faf0 22 November, 2022, 23:32:42

Thanks for the info! That sounds like something that would suit our needs. We already placed an order for the HTC Vive VR/AR add-on yesterday. Would it be possible to hold this order until we get more information on the de-cased version? I've included more information in the email. I appreciate your help 🙂

user-d407c1 22 November, 2022, 10:08:11

Hi @user-887da7 
Let’s move this to the VR channel.


1. First, let me clarify that there are two world coordinate systems here. The first is Unity's world coordinate system; if you want the data in these coordinates, I recommend you take a look at https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#map-gaze-data-to-vr-world-space



Additionally, we have Pupil Core's "world coordinates"; please check out the high-level description of Core's coordinate system here: https://docs.pupil-labs.com/core/terminology/#coordinate-system



Finally, you can find more information about which parameters are defined in which coordinate system by hmd-eyes here: https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/GazeData.cs


2. Both eyes have an origin and a normal along the visual axis towards the point you are looking at. The origin of the Gaze Direction corresponds to ~~the eye centre when monocular, or a cyclopean eye in between both eyes when binocular~~ the scene cam origin. You can read more about gaze context in the developer documentation (https://github.com/pupil-labs/hmd-eyes/blob/69d53d8862b1c31808ad6dcda02d07cc073e665f/docs/Developer.md#accessing-gaze-data)



You can use GazeData.MappingContext to determine whether the gaze estimation was based on a binocular or monocular context.



These eye centre coordinates are in Core's world coordinates.

By "integrated gaze point", I assume you mean gaze_point_3d? This has its origin in the virtual scene camera.



When relative to the HMD, it refers to the offset in scene coordinates (mm) of the eyes relative to the HMD centre.
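
One point worth noting, as a sketch rather than the hmd-eyes implementation: unlike gaze directions, gaze_point_3d is a point, so mapping it into Unity world space needs the scene camera's translation as well as its rotation, plus a mm-to-m scale (hmd-eyes additionally handles the handedness difference between the two systems). With a made-up camera pose:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical scene-camera pose in Unity world space
cam_rotation = Rotation.from_quat([0.0, 0.0, 0.0, 1.0])  # identity
cam_position = np.array([1.2, 1.6, 0.0])  # metres

gaze_point_cam = np.array([35.0, -10.0, 600.0])  # gaze_point_3d, millimetres

# Points need rotation AND translation; directions need rotation only
gaze_point_world = cam_rotation.apply(gaze_point_cam / 1000.0) + cam_position
```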

user-887da7 22 November, 2022, 18:38:51

Thanks! I still have the following questions about the gaze data:

1. Based on this documentation (https://docs.pupil-labs.com/core/software/pupil-player/#export), GazePoint3D seems to refer to the object that the eyes are looking at, not the gaze position itself. Is that correct? If so, then the coordinate in the HMD/Unity world refers to the object's coordinates in the HMD/Unity world, instead of the eyes', right?

2. This question is related to the one above. Based on your reply, my understanding is that GazePoint3D indeed refers to the gaze position. Then what is the difference between GazePoint3D and the centre/normal of the left and right eyes? I am kind of confused by your description of GazePoint3D.

3. Now I plot GazePoint3D in HMD coordinates and Unity world coordinates (left figure), as well as the origin and normal of the left and right eyes (only in HMD coordinates). I just plot the x axis here. I find that GazePoint3D is not within 0-1 but takes very large values. Moreover, the data in world coordinates and in HMD coordinates largely overlap. The same observation applies to the y and z axes. Do you understand why?

4. For the right figure, I observed that the normals of the two eyes are basically the same, which makes sense since the ray-direction changes of the two eyes should be similar. I just wonder why the values are very small (<0.001). Where are (0,0,0) and (1,1,1)?

Chat image Chat image

user-887da7 06 February, 2023, 16:48:14

@user-8a236b here

user-d407c1 23 November, 2022, 09:53:25

GazePoint3D

End of November archive