πŸ₯½ core-xr


user-b14f98 01 July, 2022, 16:27:26

@papr In the Vive, we're consistently seeing poor gaze mapping for the lowest rows, when the eyes are pointed more directly at the camera. Strange, because that's when pupil detection is easiest. Here's a screenshot for example. We used a custom ellipse finder for this. Each image corresponds to a fixation at one of the three fixation points in the bottom row of the assessment fig below. Notice the great 2d fits (blue circles) and very poor model fits.

Chat image

papr 01 July, 2022, 16:28:18

Given the green eye ball model outline, it looks like you are not using pye3d, is that correct?

user-b14f98 01 July, 2022, 16:29:16

The question is - is this behavior you would expect? Note that Kevin is currently looking to validate the same strange 3D model behavior in other subjects.

user-b14f98 01 July, 2022, 16:29:26

Yes, it's Py3D

user-b14f98 01 July, 2022, 16:29:44

@user-3cff0d - the green circle is just a custom display implementation, right?

user-b14f98 01 July, 2022, 16:29:58

(credit to Kevin, who did all the work on this)

user-3cff0d 01 July, 2022, 16:30:23

The green circle mirrors the way that the actual Pupil Labs software displays the projection of the 3d model outline

papr 01 July, 2022, 16:31:01

The more circular the pupil is, the more difficult it is to triangulate the eye model center, i.e. it is difficult to estimate z. The old 3d detector had an issue with that specifically. The result is that the eye model jumps a lot in the z direction.

papr 01 July, 2022, 16:35:13

Note that pye3d should be less affected by this than the older detector due to the (ultra-)long-term models providing a good bias in case of circular ellipse input

user-3cff0d 01 July, 2022, 16:32:02

This makes sense, considering that the bottom row of fixation points pictured yields the most circular pupils, since the eye cameras sit below the eyes

user-3cff0d 01 July, 2022, 16:31:20

That is, with cv2.ellipse2Poly applied to the pye3d datum's "projected_sphere"
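A minimal sketch of that drawing step, assuming a pye3d datum whose "projected_sphere" holds "center", "axes" (full axis lengths), and "angle":

```python
import cv2

def draw_model_outline(frame, datum, color=(0, 255, 0)):
    """Draw the projected eye-model outline, mirroring Pupil Capture's display.

    Assumes `datum` is a pye3d pupil datum with a "projected_sphere" dict
    holding "center", "axes" (full lengths, halved here for OpenCV), "angle".
    """
    sphere = datum["projected_sphere"]
    center = tuple(int(round(v)) for v in sphere["center"])
    axes = tuple(int(round(v / 2)) for v in sphere["axes"])  # semi-axes
    angle = int(round(sphere["angle"]))
    pts = cv2.ellipse2Poly(center, axes, angle, 0, 360, 8)  # 8-degree steps
    cv2.polylines(frame, [pts], isClosed=True, color=color, thickness=1)
    return frame
```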

user-3cff0d 01 July, 2022, 16:32:58

Hmm. Maybe we can manually filter out pupil ellipses that are too circular (have semi-major and semi-minor axes that are too similar) from affecting the 3d model

papr 01 July, 2022, 16:33:30

The solution is to fit and freeze the eye model first, and then do the accuracy measurement (applies to both the old 3d detector and pye3d)

user-3cff0d 01 July, 2022, 16:33:51

We've actually started doing that already

user-b14f98 01 July, 2022, 16:34:10

@papr We are now working offline, and can fit the 3D eye model to an arbitrary duration of data (e.g. to minimize reprojection error). So, perhaps we should do what kevin said, and filter out ellipses with an aspect ratio near to 1 (from the model fitting process)

user-b14f98 01 July, 2022, 16:34:25

Then fix the model and estimate gaze.
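A rough offline sketch of this plan, assuming pye3d's Detector2D/Detector3D API and its is_long_term_model_frozen flag; the camera parameters and circularity threshold below are placeholders:

```python
from pupil_detectors import Detector2D
from pye3d.detector_3d import CameraModel, Detector3D

# Placeholder eye camera intrinsics; substitute your own.
camera = CameraModel(focal_length=283.0, resolution=(192, 192))
detector_2d = Detector2D()
detector_3d = Detector3D(camera=camera)

MAX_CIRCULARITY = 0.9  # placeholder threshold; ratios above this are "too round"

def aspect_ratio(ellipse):
    a, b = ellipse["axes"]
    return min(a, b) / max(a, b)  # 1.0 == perfectly circular

frames = []  # fill with (timestamp, grayscale eye image) pairs loaded offline

# Pass 1: fit the model only on elongated, high-confidence ellipses.
for timestamp, gray in frames:
    datum_2d = detector_2d.detect(gray)
    datum_2d["timestamp"] = timestamp
    if datum_2d["confidence"] > 0.8 and aspect_ratio(datum_2d["ellipse"]) < MAX_CIRCULARITY:
        detector_3d.update_and_detect(datum_2d, gray)

# Pass 2: freeze the fitted model, then map every frame with the static model.
detector_3d.is_long_term_model_frozen = True
results_3d = []
for timestamp, gray in frames:
    datum_2d = detector_2d.detect(gray)
    datum_2d["timestamp"] = timestamp
    results_3d.append(detector_3d.update_and_detect(datum_2d, gray))
```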

user-b14f98 01 July, 2022, 16:34:38

Ok, great, thanks for the quick tip. We'll do that and compare across a few subjects

user-b14f98 01 July, 2022, 16:35:31

... I think we are using py3d

user-3cff0d 01 July, 2022, 16:35:39

We are using pye3d yes

papr 01 July, 2022, 16:35:49

Just wanted to put that out there πŸ™‚

user-b14f98 01 July, 2022, 16:35:53

Thanks πŸ™‚

user-3cff0d 01 July, 2022, 16:36:21

Yeah the 3d models snap back to being decent again once the participant looks away from those bottom-most fixation points

papr 01 July, 2022, 16:43:18

Might be that the bias is too weak to keep the short-term model on track. More info here: https://docs.pupil-labs.com/developer/core/pye3d/#pye3d-estimates-eyeball-position-on-different-time-scales (you might know this page already). In the implementation, the models only differ in the type [1] and length [2] of the buffer/storage used: https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/detector_3d.py#L205-L237

[1] flat FIFO buffer vs a spatially-aware buffer (different sub-buffers for e.g. top-left vs top-right pupil ellipses)
[2] few vs many samples

user-3cff0d 01 July, 2022, 16:36:41

I don't know if that can be attributed to the ultra-long term model coming back into play or if it's something else

user-3cff0d 01 July, 2022, 16:37:14

To be honest I'm still not 100% sure how pye3d (or the original 3d detector for that matter) utilize the multiple models

user-b14f98 01 July, 2022, 16:37:20

Great! We'll let you know what we've learned. We're seeing a great increase in precision when using custom pupil detectors, but our confidence thresholds for model fitting are not effective, and the model jumps around a lot. We're currently playing with this offline filtering of pupils prior to the fitting of a static 3D model.

papr 01 July, 2022, 16:46:46

@user-b14f98 @user-3cff0d see also this paper re pupil circularity https://link.springer.com/article/10.3758/s13428-021-01657-8

user-af390a 04 July, 2022, 10:24:08

Hi! I have a question on data processing. I have a few datasets where the eye tracking could not be optimized appropriately (maybe due to bad lighting in the VR headset) and I would like to improve the data in retrospect. Is there a possibility to do so that goes beyond automatic post-hoc re-calibration? I have a feeling that in my case only manual "pupil clicking" would do the job, but I could not figure out whether this is an option with the Pupil Labs software. Thank you in advance!

papr 04 July, 2022, 11:19:33

Hi, Pupil Core software currently does not have an option to manually annotate pupil data.

user-1eb241 04 July, 2022, 11:58:45

Hi…. Thanks for the help you guys provided earlier. I’m facing a weird error now with the Pupil Labs cameras. The image from my left camera is fine and bright, but my right camera is very dark. When I held my finger in front of both cameras, the light falling on my finger seemed weaker in front of the right camera. FYI, I kept the exposure time the same for both cameras and it is in manual mode. Any help is highly appreciated

user-1eb241 04 July, 2022, 11:59:19

Earlier, when I faced something similar, you guys suggested changing the exposure time, which worked great for me. But this is something different and new….

user-1eb241 04 July, 2022, 12:02:56

Has this variability in image brightness / LED power been observed earlier at different time instances?

papr 04 July, 2022, 12:04:18

Hi! Do I understand correctly that adjusting the manual exposure time for the darker camera does not help?

user-1eb241 04 July, 2022, 12:05:14

Yes

user-1eb241 04 July, 2022, 12:05:21

It does not help

user-1eb241 04 July, 2022, 12:08:49

Please find the attached images and the difference in brightness

Chat image Chat image

papr 04 July, 2022, 12:15:33

Thanks! Could you send them to [email removed]? My guess is that one of the LEDs broke.

user-1eb241 04 July, 2022, 12:16:07

Aren’t there two LEDs?

user-1eb241 04 July, 2022, 12:16:19

Any idea how we can fix this?

papr 04 July, 2022, 12:19:41

Our hardware team will be more helpful than me in this regard. Please contact them at info@pupil-labs.com

user-1eb241 04 July, 2022, 12:16:36

There was no physical damage to the camera assembly

papr 04 July, 2022, 12:19:01

I don't remember. Is this a DIY headset or a purchased one?

user-1eb241 04 July, 2022, 12:19:19

Purchased one…

user-1eb241 04 July, 2022, 12:19:52

Thanks

user-1eb241 04 July, 2022, 12:20:36

I hope they are as quick as you guys…. We are right in the middle of an experiment

papr 04 July, 2022, 12:23:23

They are πŸ™‚

user-be99cd 04 July, 2022, 13:02:24

Hi everyone, I see that I am not alone in wanting to integrate PL's solutions into the HoloLens 2. So, for the record, Drouot et al. (2022) did it. (Btw, the product used in this paper is not referenced correctly in the publications section, i.e. it is listed under VR instead of AR: https://pupil-labs.com/publications/) https://www.sciencedirect.com/science/article/abs/pii/S0003687022001168?casa_token=AdgLTz1yTi8AAAAA:qo0KnKKfEwAojudLugbdyaSvHuCr3jtqP3p3AhZpnUcqoHULGTgkWoQ-vttTyE69E8BqA0uzhp6F

Apparently, they used CAD modeling and 3D printing to integrate the hardware developed by PL into the HL2.

I have contacted the author, and she no longer works with the research engineer who developed these CAD models, as she finished her thesis (during which she did this work) and has been doing something else since. She kindly told me it was possible and gave me the email address of the engineer. I have contacted the research department in which the research engineer worked at the time of the paper (and maybe still works), and the engineer himself, without any response from either. I'll give you updates on the matter.

user-15e3bc 15 October, 2023, 07:36:13

Hi, do you have the results now?

user-0aca39 05 July, 2022, 08:53:13

hi, is it possible to connect your HTC Vive eye tracker to an Android phone and make it work with ARCore/AR Foundation? I.e. calibration and eye tracking in real time

papr 05 July, 2022, 09:43:59

Hi! To calibrate the add-on, you need a fixed relationship between the eye cameras and the "scene camera". In the case of ARCore, it seems like the phone's camera would be recording the scene. Since the phone moves independently of the eye tracking add-on, a calibration would not be possible.

user-0aca39 05 July, 2022, 09:52:31

The phone will be fixed to the user’s head using an Aryzon MR headset

papr 05 July, 2022, 10:03:43

Ok. The hardware itself does not provide gaze data. You require software to process the eye video. The software that we provide for this purpose, the Pupil Core software, is only supported on desktop operating systems, specifically macOS, Windows, and Ubuntu. Even if the software were ported to Android, I doubt the phone would have the necessary computational resources to run the pupil detection in real time. An alternative would be to stream the video via wifi to a desktop PC running Pupil Capture and to receive the gaze estimates back via wifi, similar to this project: https://github.com/Lifestohack/pupil-video-backend/

user-0aca39 05 July, 2022, 10:14:44

Ok, thank you.

user-19daaa 13 July, 2022, 00:53:03

Hello, I am using the Pupil Core add-on for HoloLens for an experiment. I am just wondering, if I develop an application using the Pupil Core Unity plugin, where will the origin of the gaze data be in world space? I found that the gaze normal vectors provide the rotation of the visual axis separately for each eye. Is there any way of getting the gaze origin vectors for each eye separately? Are the eye center vectors and the gaze origin vectors the same?

papr 13 July, 2022, 06:48:52

Hi, the gaze coordinate system is defined by the scene camera. See https://docs.pupil-labs.com/core/terminology/#coordinate-system And yes, the eye centers are the origin of the gaze normals (in scene camera coordinates) πŸ‘
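As a small illustration, a sketch of building the per-eye visual-axis rays from a binocular 3d gaze datum; the key names (eye_centers_3d, gaze_normals_3d, keyed by eye id) follow Pupil's gaze datum format, though the key type (int vs str) can differ depending on how the datum was serialized:

```python
import numpy as np

def gaze_rays(gaze_datum):
    """Per-eye rays in scene camera coordinates:
    origin = eye center (mm), direction = unit gaze normal."""
    rays = {}
    for eye_id in (0, 1):
        centers = gaze_datum["eye_centers_3d"]
        normals = gaze_datum["gaze_normals_3d"]
        origin = np.asarray(centers.get(eye_id, centers.get(str(eye_id))), dtype=float)
        direction = np.asarray(normals.get(eye_id, normals.get(str(eye_id))), dtype=float)
        rays[eye_id] = (origin, direction / np.linalg.norm(direction))
    return rays

# Example: a point 500 mm along eye 0's visual axis
# origin, direction = gaze_rays(datum)[0]
# point = origin + 500 * direction
```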

user-19daaa 13 July, 2022, 19:08:01

Hello @papr Thanks for your response. I have one more query, I am trying to compute the angle created by the left and right eye gaze normal vectors while focusing on an object. It is true that sometimes they intersect, sometimes they don't. But, the magnitude of the angle does not make sense to me yet. It ranges between 60 to 90 degrees. Sometimes, 20 to 30 degrees. In both cases, the calibration step and focusing object are the same and at the same place. Do you know what could be the reason for this? If I calculate the angle from gaze normals, do I need to do any further steps? Any ideas or thoughts? I am tagging 'papr(pupil labs)' in the message, but if anyone else have any thoughts please feel free to share.

user-0c307c 13 July, 2022, 16:46:16

Hello, I am an intern who is using your Pupil Labs add-on for the HTC Vive. We have begun using your software; however, when using world capture, EYE1 constantly drops its connection, despite the hardware having a wired connection to the desktop. You can see what I am talking about in the picture I have attached.

Also, as a result, eye1 is not captured during calibration, and within the VR demo the eye tracker is inaccurate.

Finally, how can I record what is being viewed in the VR environment, both in the demo and in environments offered on Steam?

Please let me know how to resolve these issues.

user-89d824 14 July, 2022, 09:08:59

Hi, I'm very new to Pupil Core (and everything else, actually). I'm trying to use Pupil Core in a CAVE environment (it's an immersive cylinder, which gives us a 360-degree panoramic view instead of a cube). For now, I'm mostly interested in finding out, for example, how long the participant spends looking at targets/non-targets, how much they fixate on it, etc. The scenario is built in Unity and it'll be projected by 5 projectors in the aforementioned CAVE environment, and the participant will wear the Pupil Core device as they navigate and interact with the virtual environment (i.e., they will NOT be wearing a HMD).

May I know how can I go about this, please? I came across an old post here on Discord that seemed to have a similar set up, and the advice was to use the hmd-eyes plugin. I'm afraid I need a bit more hand-holding than that 😬

I also tried the demos in Unity and for some reason, in the Eye Frame Visualizer, my eyes were upside down, but they looked fine in Capture. Please let me know how to rectify this, thank you πŸ™‚

papr 14 July, 2022, 14:39:17

Hey, the first step would be to remove low-confidence samples to avoid noise. Even then, gaze normals are known to be noisy. An alternative way to calculate vergence would be to create two new vectors and calculate the angle between them: A = eye_center0 - gaze_point_3d, B = eye_center1 - gaze_point_3d
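A minimal sketch of that vergence calculation, using the same datum fields as in the earlier gaze-ray example (eye_centers_3d keyed by eye id, gaze_point_3d) and applied only to high-confidence samples:

```python
import numpy as np

def vergence_deg(gaze_datum):
    """Angle (degrees) between the eye_center -> gaze_point_3d vectors A and B."""
    p = np.asarray(gaze_datum["gaze_point_3d"], dtype=float)
    centers = gaze_datum["eye_centers_3d"]
    c0 = np.asarray(centers.get(0, centers.get("0")), dtype=float)
    c1 = np.asarray(centers.get(1, centers.get("1")), dtype=float)
    a, b = c0 - p, c1 - p
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# e.g. vergences = [vergence_deg(g) for g in gaze_data if g["confidence"] > 0.8]
```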

user-19daaa 14 July, 2022, 17:46:41

Yes, I am following the first step. I tried your idea of creating two new vectors, but still the values are highly similar, with just slight changes compared to the vergence calculated from the gaze normals. Just wondering, what are we getting by subtracting the gaze point from the eye centers? Do you think this issue is related to the calibration? We are doing a 10-point natural feature calibration. Is there a better way of doing this?

papr 14 July, 2022, 14:40:19

If you are using Pupil Core you will need some way to track the environment within the scene videos. We usually use apriltag markers for this: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

user-89d824 14 July, 2022, 14:47:57

I'm so sorry but I've read the link you've just posted a few times but I still don't quite understand the concept of surfaces in the context of eye-tracking.

Anyway, I'm not sure whether apriltags can be used in my experiment. I think the easiest way to describe what I will be doing is this: A participant will be in an immersive cylinder (such as this one: https://www.igloovision.com/images/ImageLib/6m_270_3-4_x-ray_1.jpg) and they will play a game in it. I'm interested in seeing whether the participant will be able to spot targets in the game correctly.

How would apriltags work in this case?

papr 14 July, 2022, 14:52:01

Gaze is estimated in the Core headset's scene camera coordinates, not in the environment/immersive cylinder coordinates. Now, you could go ahead, review the recorded scene video, and manually annotate all events in which the participant looks at a target. But that is a lot of work. What you want to do is to find the gaze location in immersive cylinder coordinates. To do that autonomously, you need a self-updating mapping function between the headset and the environment. This can be achieved by using recognizable markers in the environment.

I don't know much about CAVE. Does it track the user's head position?

user-89d824 14 July, 2022, 14:56:35

As of now we're not interested in tracking the user's head position but that may change in the future.

And thank you very much for explaining the use of apriltags to me. I mean I could place the apriltags in the virtual environment on all the targets and nontargets. So for instance if the virtual environment is populated with boxes of different sizes (and the largest ones are the targets), I suppose I could have a unique apriltag appear on each box.

Do you think that'll work?

user-ab6350 14 July, 2022, 14:52:26

Hi there, does anybody know how to remove the HTC Vive add-on hardware from the HMD? It just got stuck there!!!

papr 14 July, 2022, 14:58:00

"not interested in tracking the user's head position" – That would give you a potential alternative to using apriltag markers, though.

"I suppose I could have a unique apriltag appear on each box." – That should work well (might depend on actual lighting conditions).

user-89d824 14 July, 2022, 15:05:35

I've looked at the head pose tracker tutorial video/page but I think I'll have to test that in our set up to better understand how that would work in VR.

As for the lighting conditions, that's the bit I'm worried about. The VR environment is supposed to have low visibility but there will be a virtual torchlight that the participant can control to light up a specific area of interest. I guess I'll just have to try it out and see whether it works.

Thank you so much for your time!

papr 14 July, 2022, 14:59:10

The add-on clips onto the headset. There is a bit of force required to pull it off, as far as I know

user-ab6350 14 July, 2022, 15:20:42

Thanks anyway

user-ab6350 14 July, 2022, 15:20:15

Did it, had to use a screwdriver. That's more than a bit of force...

papr 15 July, 2022, 11:46:19

"what are we getting by subtracting the gaze point from the eye centers?" – The resulting vectors (gaze directions) are guaranteed to intersect in gaze_point_3d, while gaze_normals0/1 rarely intersect perfectly.

"We are doing a 10-point natural feature calibration" – Which hardware are you using?

user-19daaa 15 July, 2022, 15:53:03

We are using the Pupil Core add-on for HoloLens

user-19daaa 15 July, 2022, 16:08:18

@papr Another parameter we are interested in measuring is the distance between the center of the left eye pupil to the center of the right eye pupil, in other words, interpupillary distance. I am considering the distance between the eye centers and the distance between the pupil centers as a 3d circle from the pupil data to check which one behaves appropriately. When I checked the magnitude while considering the distance between the centers of the pupil from pupil data, it showed values ranging from 15-20mm, which is wrong in terms of IPD. When we consider eye centers, it goes from 50 to 65 mm, which is the expected range. But, when we verify this with a pupillometer for a certain distance, we found the difference around 10mm (e.g., the distance between the eye center is 60mm, the pupillometer shows 70mm). Could you please provide some suggestions regarding this? Also, which parameter would be more appropriate for the IPD calculation based on the pupil lab eye tracker?

user-b14f98 20 July, 2022, 17:54:19

If you're doing this in the lab, an easy way is to just have the person look at a mirror mounted on the wall at eye height. Use a dry erase marker to mark the position of their pupils on the mirror, and then measure the distance between the markings. You can then use this value to set the physical IPD (by adjusting the knob) and the virtual IPD in your game engine (but know that, unless you override the virtual IPD, it is likely updated by changes to the physical IPD).

papr 18 July, 2022, 11:18:26

Hey, the most correct parameter to measure IPD would be the distance between the eye centers. But the system cannot be used to measure IPD: to find the physical relationship between the movable cameras, it assumes a specific starting location of the eyeballs, i.e. a specific IPD of 60mm. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py#L17-L19

Make sure to fit your eye models well before calibrating. If you continue seeing unrealistic values, please share an example Capture recording incl. the calibration choreography with [email removed] This will allow us to give more concrete feedback.
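For reference, a sketch of the eye-center distance computation discussed above, based on the gaze_positions.csv columns from Pupil Player's raw data export; per the caveat above, this reflects the model's assumed geometry rather than a measured IPD:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("gaze_positions.csv")
df = df[df["confidence"] > 0.8]  # keep well-detected samples only

c0 = df[["eye_center0_3d_x", "eye_center0_3d_y", "eye_center0_3d_z"]].to_numpy(float)
c1 = df[["eye_center1_3d_x", "eye_center1_3d_y", "eye_center1_3d_z"]].to_numpy(float)

dist_mm = np.linalg.norm(c0 - c1, axis=1)
print(f"median eye-center distance: {np.median(dist_mm):.1f} mm")
```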

user-d67a7f 17 July, 2022, 21:20:28

Hi, is it possible to generate a heatmap in VR from eye tracking data? I have a 360 panoramic image and I am interested in plotting a heatmap from the eye tracking data.

papr 18 July, 2022, 11:19:40

Generally, yes, but you will have to calculate it yourself by intersecting the gaze direction vectors with your image and aggregating those locations.

user-d67a7f 21 July, 2022, 18:07:14

Thanks for your reply. I believe that the gaze direction vectors can be found in the gaze_positions.csv file, namely:
gaze_point_3d_x - x position of the 3d gaze point (the point the subject looks at) in the world camera coordinate system
gaze_point_3d_y - y position of the 3d gaze point
gaze_point_3d_z - z position of the 3d gaze point
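A rough sketch of turning those columns into a heatmap on an equirectangular panorama; it assumes the panorama is aligned with the scene camera frame (x right, y down, z forward), so in practice head rotation still has to be accounted for, and the panorama resolution is a placeholder:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("gaze_positions.csv")
df = df[df["confidence"] > 0.8]

# Use only the direction of gaze_point_3d (scene camera coordinates).
v = df[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy(float)
v /= np.linalg.norm(v, axis=1, keepdims=True)

yaw = np.arctan2(v[:, 0], v[:, 2])            # azimuth in [-pi, pi]
pitch = np.arcsin(np.clip(-v[:, 1], -1, 1))   # elevation in [-pi/2, pi/2]

W, H = 4096, 2048                             # placeholder panorama size
col = ((yaw + np.pi) / (2 * np.pi) * W).astype(int) % W
row = np.clip(((np.pi / 2 - pitch) / np.pi * H).astype(int), 0, H - 1)

heatmap = np.zeros((H, W))
np.add.at(heatmap, (row, col), 1)
# Blur the heatmap (e.g. with a Gaussian) and alpha-blend it over the panorama.
```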

user-d67a7f 26 July, 2022, 18:25:08

On the Pupil Labs VR page there is a video attached. I am trying to do the same thing, i.e. plot the gaze data on a panoramic image. It seems this is more difficult than I anticipated. If anyone can give me some advice on how to do something similar, I would really appreciate it πŸ™‚

user-057596 27 July, 2022, 13:49:05

When using the VR eye tracking device we aren’t getting virtual reality imaging on the world view screen in Pupil Capture, and the eye 1 camera view keeps freezing or disconnecting. Looking for some help to rectify these problems. Thanks, Gary

papr 28 July, 2022, 06:59:07

See the docs regarding screencasting the VR environment to Capture. https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#screencast

Please contact info@pupil-labs.com regarding the disconnecting camera.

user-057596 28 July, 2022, 07:12:58

Thanks Papr. My experience has been all with Pupil Core and Invisible, but I was helping someone out yesterday at Heriot-Watt University who was setting up Pupil for eye tracking in VR. What I noticed was that the World view window was streaming one of the eye camera feeds, and whichever one it was then kept freezing. So what is the default image in the world view if you don’t screencast the VR environment to it?

papr 28 July, 2022, 07:16:37

This explains the freezing. It sounds like the world and eye camera processes are fighting over the same camera.

The default VR world view is no connected camera -> a grey frame. If you restart with default settings, that should come up properly.

user-057596 28 July, 2022, 09:33:20

Thanks Papr. Last question: as I said, I’m helping this group who have been setting up the eye tracking in VR at the university, but I’ve just been dropped in at the deep end πŸ˜±πŸ˜‚. When you record the VR eye tracking data in Capture and download it, is there a video file included of what the participants were looking at in VR, and also the gaze overlay, so we can tell what they were looking at?

papr 28 July, 2022, 09:43:35

If the scene is streamed to Capture, Capture will record the scene as video and the gaze as raw data separately. If you open the recording in Pupil Player, it will preview the gaze on the recorded video. Using the World Video Exporter plugin, you can export the scene video with overlaid gaze. https://docs.pupil-labs.com/core/software/pupil-player/

user-057596 28 July, 2022, 09:47:17

So you have to first set up the screen-casting facility to stream the scene to Capture?

user-057596 28 July, 2022, 14:29:57

Papr, the students who are setting up the system are 4th-year Psychology students and so don’t have a great deal of experience. Would they be able to set up the screencasting of the VR environment to Capture by following the documentation, or will they need someone with more expertise/experience to do this?

papr 28 July, 2022, 14:33:57

Maybe. It is possible to set up something similar to the screencast demo linked in the docs.

papr 28 July, 2022, 14:32:11

They will need some experience with Unity, especially when it comes to connecting prefabs.

user-057596 28 July, 2022, 14:33:14

Thanks Papr, we can get someone from computer science to help them and I appreciate all your help today.πŸ’ͺπŸ»πŸ‘πŸ»

user-057596 28 July, 2022, 14:35:22

I will pass across to them and once again thanks for all the help.

End of July archive