@papr In the Vive, we're consistently seeing poor gaze mapping for the lowest rows, when the eyes are pointed more directly at the camera. Strange, because that's when pupil detection is easiest. Here's a screenshot for example. We used a custom ellipse finder for this. Each image corresponds to a fixation at one of the three fixation points in the bottom row of the assessment fig below. Notice the great 2d fits (blue circles) and very poor model fits.
Given the green eye ball model outline, it looks like you are not using pye3d, is that correct?
The question is - is this behavior you would expect? Note that Kevin is currently looking to validate the same strange 3D model behavior in other subjects.
Yes, it's pye3d
@user-3cff0d - the green circle is just a custom display implementation, right?
(credit to Kevin, who did all the work on this)
The green circle mirrors the way that the actual Pupil Labs software displays the projection of the 3d model outline
The more circular the pupil is, the more difficult it is to triangulate the eye model center, i.e. it is difficult to estimate z. The old 3d detector had an issue with that specifically. The result is that the eye model jumps a lot in the z direction.
Note that pye3d should be less affected by this than the older detector due to the (ultra-)long-term models providing a good bias in case of circular ellipse input
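As a rough back-of-the-envelope illustration of why circular pupils are problematic (this is not pye3d code, just the geometry): the observed ellipse aspect ratio is approximately the cosine of the viewing angle, which is nearly flat around 0°, so small ellipse-fit noise for a frontal pupil maps to large uncertainty in orientation and depth:

```python
import numpy as np

# Illustration only: minor/major axis ratio ~ cos(theta), where theta is the
# angle between the gaze direction and the camera's line of sight.
for theta_deg in (0, 5, 10, 20, 30):
    aspect = np.cos(np.radians(theta_deg))
    print(f"viewing angle {theta_deg:2d} deg -> aspect ratio ~ {aspect:.3f}")

# The ratio changes by less than 2% over the first 10 degrees, so a tiny amount
# of 2d fitting noise near a frontal (circular) pupil corresponds to a large
# uncertainty in pupil orientation, and therefore in the triangulated z.
```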
This makes sense considering the bottom row of fixation points pictured has the pupil the most circular, due to the eye cameras being below the eyes
That is, with cv2.ellipse2Poly applied to the pye3d datum's "projected_sphere"
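In case it's useful to anyone, here's roughly what such an overlay could look like. This is a sketch, not the actual Pupil Labs drawing code; it assumes the datum's "projected_sphere" dict carries "center", "axes" (full axis lengths), and "angle":

```python
import cv2

def draw_projected_sphere(frame_bgr, pupil_datum, color=(0, 255, 0)):
    """Draw the 3d eye model outline, similar to Pupil's eye window overlay.

    Assumes pupil_datum["projected_sphere"] contains "center" (x, y),
    "axes" (full lengths of both axes) and "angle" in degrees.
    """
    sphere = pupil_datum["projected_sphere"]
    center = tuple(int(v) for v in sphere["center"])
    # cv2.ellipse2Poly expects half-axes, while the datum stores full lengths
    axes = tuple(int(v / 2) for v in sphere["axes"])
    angle = int(sphere["angle"])
    pts = cv2.ellipse2Poly(center, axes, angle, 0, 360, 5)
    cv2.polylines(frame_bgr, [pts], isClosed=True, color=color, thickness=1)
    return frame_bgr
```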
Hmm. Maybe we can manually filter out pupil ellipses that are too circular (i.e. whose semi-major and semi-minor axes are too similar) so they don't affect the 3d model
The solution is to fit and freeze the eye model first, then to do the accuracy measurement (applies to both the old 3d detector and pye3d)
We've actually started doing that already
@papr We are now working offline, and can fit the 3D eye model to an arbitrary duration of data (e.g. to minimize reprojection error). So, perhaps we should do what Kevin said, and filter out ellipses with an aspect ratio near 1 from the model fitting process
Then fix the model and estimate gaze.
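Something along these lines, as a sketch; the thresholds and the surrounding offline pipeline are assumptions to tune, and the 2d ellipse axes come straight from the pupil datum:

```python
def is_usable_for_model_fit(pupil_datum, min_confidence=0.8, max_roundness=0.9):
    """Reject low-confidence and near-circular ellipses before feeding
    them to the 3d model fit (sketch; thresholds are guesses to tune)."""
    if pupil_datum["confidence"] < min_confidence:
        return False
    minor, major = sorted(pupil_datum["ellipse"]["axes"])
    if major == 0:
        return False
    roundness = minor / major  # 1.0 == perfectly circular
    return roundness <= max_roundness

# pupil_data: your offline list of 2d pupil datums (assumed to exist already).
# Fit the model only on the filtered subset, freeze it, then map gaze with the
# frozen model over the full recording.
fit_data = [d for d in pupil_data if is_usable_for_model_fit(d)]
```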
Ok, great, thanks for the quick tip. We'll do that and compare across a few subjects
... I think we are using py3d
We are using pye3d yes
Just wanted to put that out there
Thanks!
Yeah the 3d models snap back to being decent again once the participant looks away from those bottom-most fixation points
Might be that the bias is too weak to keep the short-term model on track. More info here: https://docs.pupil-labs.com/developer/core/pye3d/#pye3d-estimates-eyeball-position-on-different-time-scales (you might know this page already). In the implementation, the models only differ in the type [1] and length [2] of the buffer/storage used: https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/detector_3d.py#L205-L237
[1] flat FIFO buffer vs. a spatially-aware buffer (different sub-buffers for e.g. top-left vs. top-right pupil ellipses)
[2] few vs. many samples
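In case it helps intuition, here is a toy illustration of those two storage strategies (this is not the pye3d implementation, just the idea of a flat FIFO vs. spatially binned storage):

```python
from collections import deque

# Short-term style: a flat FIFO buffer - old observations simply fall out.
short_term = deque(maxlen=100)

# (Ultra-)long-term style: spatially-aware storage - one small sub-buffer per
# coarse image region, so frontal/circular ellipses cannot crowd out the more
# informative oblique ellipses observed earlier for other gaze directions.
GRID = 4  # 4x4 bins over the eye image (arbitrary choice for this sketch)
long_term = {(gx, gy): deque(maxlen=20) for gx in range(GRID) for gy in range(GRID)}

def store(observation, image_size=(192, 192)):
    """Store a pupil observation in both toy buffers (sketch only)."""
    short_term.append(observation)
    cx, cy = observation["ellipse"]["center"]
    bin_key = (min(GRID - 1, int(GRID * cx / image_size[0])),
               min(GRID - 1, int(GRID * cy / image_size[1])))
    long_term[bin_key].append(observation)
```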
I don't know if that can be attributed to the ultra-long term model coming back into play or if it's something else
To be honest I'm still not 100% sure how pye3d (or the original 3d detector for that matter) utilize the multiple models
Great! We'll let you know what we've learned. We're seeing a great increase in precision when using custom pupil detectors, but our confidence thresholds for model fitting are not effective, and the model jumps around a lot. We're currently playing with this offline filtering of pupils prior to the fitting of a static 3D model.
@user-b14f98 @user-3cff0d see also this paper re pupil circularity https://link.springer.com/article/10.3758/s13428-021-01657-8
Hi! I have a question on data processing. I have a few datasets where the eye tracking could not be optimized appropriately (maybe due to bad lighting in the VR headset) and I would like to optimize the data in retrospect. Is there a possibility to do so that goes beyond automatic post-hoc re-calibration? I have a feeling that in my case only manual "pupil clicking" would do the job, but I could not figure out whether this is an option with the Pupil Labs software. Thank you in advance!
Hi, Pupil Core software does not currently have the option to manually annotate pupil data.
Hi… Thanks for the help you guys provided earlier. I'm facing a weird error now with the Pupil Labs cameras. The image from my left camera is fine and bright, but my right camera is very dark. When I held my finger in front of both cameras, I could feel that less light falls on my finger in front of the right camera. FYI, I kept the exposure time the same for both cameras and it is in manual mode. Any help is highly appreciated.
Earlier, when I faced something similar, you guys suggested changing the exposure time, which worked great for me. But this is something different and new…
Has this variability in image brightness / LED power been observed before at different times?
Hi! Do I understand correctly that adjusting the manual exposure time for the darker camera does not help?
Yes
It does not help
Please find the attached images showing the difference in brightness.
Thanks! Could you send them to [email removed] My guess is that one of the LEDs broke.
Aren't there two LEDs?
Any idea how we can fix this?
Our hardware team will be more helpful than me in this regard. Please contact them at info@pupil-labs.com
There was no physical damage to the camera assembly.
I don't remember. Is this a DIY headset or a purchased one?
Purchased one…
Thanks
I hope they are as quick as you guys… We are right in the middle of an experiment.
They are!
Hi everyone, I see that I am not alone in wanting to integrate PL's solutions with the HoloLens 2. So for the record, Drouot et al. (2022) did it. (Btw, the product used in this paper is not referenced correctly in the publications section, i.e. it is listed under VR instead of AR: https://pupil-labs.com/publications/) https://www.sciencedirect.com/science/article/abs/pii/S0003687022001168?casa_token=AdgLTz1yTi8AAAAA:qo0KnKKfEwAojudLugbdyaSvHuCr3jtqP3p3AhZpnUcqoHULGTgkWoQ-vttTyE69E8BqA0uzhp6F
Apparently, they used CAD modeling and 3D printing to integrate the hardware developed by PL into the HL2.
I have contacted the author; she no longer works with the research engineer who developed these CAD models, as she finished her thesis (during which she did this work) and has been doing something else since. She kindly told me it was possible and gave me the email address of the engineer. I have contacted the research department in which the research engineer worked at the time of the paper (and maybe still works), as well as the engineer himself, without any response from either. I'll give you updates on the matter.
Hi, do you have the results now?
hi, is it possible to connect your HTC Vive eye tracker to an Android phone and make it work with ARCore / AR Foundation, i.e. calibration and eye tracking in real time?
Hi! To calibrate the add-on, you need a fixed relationship between the eye cameras and the "scene camera". In case of AR core, it seems like the phone's camera would be recording the scene. Since the phone moves independently of the eye tracking add-on, a calibration would not be possible.
The phone will be fixed to the user's head using an Aryzon MR headset.
Ok. The hardware itself does not provide gaze data. You require software to process the eye video. The software that we provide for this purpose, Pupil Core software, is only supported on desktop operating systems, specifically macOS, Windows, and Ubuntu. Even if the software were ported to Android, I doubt that the phone would have the necessary computational resources to run the pupil detection in real time. An alternative would be to stream the video via wifi to a desktop PC running Pupil Capture and to receive the gaze estimates back via wifi, similarly to this project: https://github.com/Lifestohack/pupil-video-backend/
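For the receiving side, gaze can be picked up from whatever machine runs Capture via the Network API, roughly like this (the IP is a placeholder for the PC running Capture; port 50020 is the Pupil Remote default):

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the SUB port of the IPC backbone.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://192.168.1.10:50020")  # IP of the PC running Capture
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to gaze data and unpack the msgpack-encoded payloads.
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://192.168.1.10:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload)
    print(topic.decode(), gaze["norm_pos"], gaze["confidence"])
```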
Ok, thank you.
Hello, I am using the Pupil Core add-on for HoloLens for an experiment. I am just wondering, if I develop an application using the Unity plugin for Pupil Core, where will the origin of the gaze data be in world space? I found that gaze normal vectors provide the rotation of the visual axis separately for each eye. Is there any way of getting the gaze origin vectors for each eye separately? Are the eye center vectors and gaze origin vectors the same?
Hi, the gaze coordinate system is defined by the scene camera. See https://docs.pupil-labs.com/core/terminology/#coordinate-system And yes, the eye centers are the origin of the gaze normals (in scene camera coordinates).
Hello @papr Thanks for your response. I have one more query. I am trying to compute the angle between the left and right eye gaze normal vectors while the subject focuses on an object. It is true that sometimes they intersect, sometimes they don't. But the magnitude of the angle does not make sense to me yet. It ranges between 60 and 90 degrees, sometimes 20 to 30 degrees. In both cases, the calibration step and the focused object are the same and at the same place. Do you know what could be the reason for this? If I calculate the angle from the gaze normals, do I need to do any further steps? Any ideas or thoughts? I am tagging 'papr (Pupil Labs)' in the message, but if anyone else has any thoughts please feel free to share.
Hello, I am an intern who is using your Pupil Labs add-on for the HTC Vive. We have begun using your software; however, when using world capture, EYE1 constantly drops its connection, despite the hardware having a wired connection to the desktop. You can see what I am talking about in the picture I have attached.
Also, as a result, eye1 is not captured during calibration, and within the VR demo the eye tracker is inaccurate.
Finally, how can I record what is being viewed in the VR environment, both in the demo and in environments that are offered on Steam?
Please let me know how to resolve these issues.
Hi, I'm very new to Pupil Core (and everything else, actually). I'm trying to use Pupil Core in a CAVE environment (it's an immersive cylinder, which gives us a 360-degree panoramic view instead of a cube). For now, I'm mostly interested in finding out, for example, how long the participant spends looking at targets/non-targets, how much they fixate on it, etc. The scenario is built in Unity and it'll be projected by 5 projectors in the aforementioned CAVE environment, and the participant will wear the Pupil Core device as they navigate and interact with the virtual environment (i.e., they will NOT be wearing a HMD).
May I know how I can go about this, please? I came across an old post here on Discord that seemed to have a similar set up, and the advice was to use the hmd-eyes plugin. I'm afraid I need a bit more hand-holding than that.
I also tried the demos in Unity and for some reason, in the Eye Frame Virtualiser, my eyes were upside down, but they looked fine in Capture. Please let me know how to rectify this, thank you.
Hey, the first step would be to remove low-confidence samples to avoid noisy data. Even then, gaze normals are known to be noisy. An alternative way to calculate vergence would be to create two new vectors and calculate the angle between them:
A = eye_center0 - gaze_point_3d
B = eye_center1 - gaze_point_3d
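A quick sketch of that calculation from a single binocular gaze datum (field names follow the gaze data layout with "eye_centers_3d" keyed by eye id and "gaze_point_3d"; treat it as a starting point, not a definitive implementation):

```python
import numpy as np

def vergence_deg(gaze):
    """Angle between the two eye_center -> gaze_point_3d vectors, in degrees.

    Assumes a binocular gaze datum with "eye_centers_3d" keyed by eye id
    and "gaze_point_3d", all in scene-camera coordinates (mm).
    """
    p = np.asarray(gaze["gaze_point_3d"], dtype=float)
    c0 = np.asarray(gaze["eye_centers_3d"]["0"], dtype=float)
    c1 = np.asarray(gaze["eye_centers_3d"]["1"], dtype=float)
    a = c0 - p  # A = eye_center0 - gaze_point_3d
    b = c1 - p  # B = eye_center1 - gaze_point_3d
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```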
Yes, I am following the first step. I tried your suggestion of creating two new vectors, but the values are still highly similar, with only slight changes compared to calculating vergence from the gaze normals. Just wondering, what do we gain by subtracting the gaze point from the eye centers? Do you think this issue is related to calibration? We are doing a 10-point natural feature calibration. Is there a better way of doing this?
If you are using Pupil Core you will need some way to track the environment within the scene videos. We usually use apriltag markers for this: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
I'm so sorry but I've read the link you've just posted a few times but I still don't quite understand the concept of surfaces in the context of eye-tracking.
Anyway, I'm not sure whether apriltags can be used in my experiment. I think the easiest way to describe what I will be doing is this: A participant will be in an immersive cylinder (such as this one: https://www.igloovision.com/images/ImageLib/6m_270_3-4_x-ray_1.jpg) and they will play a game in it. I'm interested in seeing whether the participant will be able to spot targets in the game correctly.
How would apriltags work in this case?
Gaze is estimated in the Core headset's scene camera coordinates, not in the environment/immersive cylinder coordinates. Now, you could go ahead, review the recorded scene video, and manually annotate all events in which the participant looks at a target. But that is a lot of work. What you want to do is to find the gaze location in immersive cylinder coordinates. But to do that autonomously, you need a self-updating mapping function between the headset and the environment. This can be achieved by using recognizable markers in the environment.
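Once surfaces are defined around the markers, the surface tracker does that mapping for you. As a sketch, the exported per-surface gaze CSV could be consumed roughly like this (file name and sampling rate are placeholders; column names follow the surface tracker export, adjust to what your export actually contains):

```python
import pandas as pd

# Surface tracker export, e.g. exports/000/surfaces/gaze_positions_on_surface_<name>.csv
df = pd.read_csv("gaze_positions_on_surface_Target1.csv")

# Keep confident samples that actually fall on the surface
# (on_surf may be parsed as bool or as the strings "True"/"False").
on_surf = df["on_surf"].astype(str).str.lower() == "true"
on_target = df[on_surf & (df["confidence"] > 0.8)]

# Rough dwell time: samples on the surface divided by the gaze sampling rate.
gaze_rate_hz = 200  # depends on your eye camera settings; an assumption to adjust
print(f"~{len(on_target) / gaze_rate_hz:.2f} s of gaze on this surface")
```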
I don't know much about CAVE. Does it track the user's head position?
As of now we're not interested in tracking the user's head position but that may change in the future.
And thank you very much for explaining the use of apriltags to me. I mean I could place the apriltags in the virtual environment on all the targets and nontargets. So for instance if the virtual environment is populated with boxes of different sizes (and the largest ones are the targets), I suppose I could have a unique apriltag appear on each box.
Do you think that'll work?
Hi there, does anybody know how to remove the HTC Vive add-on hardware from the HMD? It just got stuck there!
> not interested in tracking the user's head position
That would give you a potential alternative to using apriltag markers, though.
> I suppose I could have a unique apriltag appear on each box.
That should work well (might depend on actual lighting conditions).
I've looked at the head pose tracker tutorial video/page but I think I'll have to test that in our set up to better understand how that would work in VR.
As for the lighting conditions, that's the bit I'm worried about. The VR environment is supposed to have low visibility but there will be a virtual torchlight that the participant can control to light up a specific area of interest. I guess I'll just have to try it out and see whether it works.
Thank you so much for your time!
The headset clips on. There is a bit of force required to pull it off, as far as I know.
Thanks anyway
Did it, had to use a screwdriver. That's more than a bit of force...
> what do we gain by subtracting the gaze point from the eye centers?
The resulting vectors (gaze directions) are guaranteed to intersect in the gaze_point_3d, while gaze_normals0/1 rarely intersect perfectly.
> We are doing a 10-point natural feature calibration
Which hardware are you using?
We are using the Pupil Core add-on for HoloLens.
@papr Another parameter we are interested in measuring is the distance between the center of the left eye pupil and the center of the right eye pupil, in other words the interpupillary distance. I am considering both the distance between the eye centers and the distance between the pupil centers (taking the pupil as a 3d circle from the pupil data) to check which one behaves appropriately. When I use the distance between the pupil centers from the pupil data, the magnitude ranges from 15-20 mm, which is wrong in terms of IPD. When I use the eye centers, it goes from 50 to 65 mm, which is the expected range. But when we verify this with a pupillometer at a certain distance, we find a difference of around 10 mm (e.g., the distance between the eye centers is 60 mm, while the pupillometer shows 70 mm). Could you please provide some suggestions regarding this? Also, which parameter would be more appropriate for the IPD calculation based on the Pupil Labs eye tracker?
If you're doing this in the lab, an easy way is to just have the person look at a mirror that is mounted on the wall at eye height. Use a dry erase marker to mark the position of their pupils on the mirror, and then measure the distance between the markings. You can then use this value to set the physical IPD (by adjusting the knob) and the virtual IPD in your game engine (but know that, unless you override the virtual IPD, it is likely updated by changes to the physical IPD).
Hey, the most correct parameter to measure IPD would be the distance between the eye centers. But the system cannot be used to measure IPD. To find the physical relationship between the movable cameras, it assumes a specific starting location of the eyeballs, i.e. a specific IPD of 60mm. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py#L17-L19
Make sure to fit your eye models well before calibrating. If you continue seeing unrealistic values, please share an example Capture recording incl. calibration choreo with [email removed] This will allow us to give more concrete feedback.
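For completeness, a sketch of pulling the eye-center distance out of an exported gaze_positions.csv (column names as in the raw data export; given the 60 mm prior mentioned above, treat the result as approximate):

```python
import numpy as np
import pandas as pd

df = pd.read_csv("gaze_positions.csv")
df = df[df["confidence"] > 0.8]  # drop low-confidence samples first

# eye_center{0,1}_3d_* are in scene-camera coordinates (mm).
c0 = df[["eye_center0_3d_x", "eye_center0_3d_y", "eye_center0_3d_z"]].to_numpy()
c1 = df[["eye_center1_3d_x", "eye_center1_3d_y", "eye_center1_3d_z"]].to_numpy()
ipd_mm = np.linalg.norm(c0 - c1, axis=1)

print(f"median eye-center distance: {np.median(ipd_mm):.1f} mm")
```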
Hi, is it possible to generate a heatmap in VR from eye tracking data? I have a 360 panoramic image and I am interested to plot heatmap from eye tracking data.
Generally, yes, but you will have to calculate it yourself by intersecting the gaze direction vectors with your image and aggregating those locations.
Thanks for your reply. I believe that the gaze direction vectors can be found in the gaze_positions.csv file and those are:
gaze_point_3d_x - x position of the 3d gaze point (the point the subject looks at) in the world camera coordinate system
gaze_point_3d_y - y position of the 3d gaze point
gaze_point_3d_z - z position of the 3d gaze point
On the Pupil Labs VR page there is a video of this. I am trying to do the same thing, i.e. to plot the gaze data on a panoramic image. It seems this is more difficult than I anticipated. If anyone can give me some advice on how to do something similar, I would really appreciate it.
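Here's a rough sketch of the direction-to-equirectangular mapping part. It assumes the gaze vectors are expressed in (or have already been transformed into) the panorama's coordinate frame, which is the part that needs a known headset pose or a fixed head; the axis convention and image size are assumptions for illustration:

```python
import numpy as np

def direction_to_equirect_pixel(v, width, height):
    """Map a 3d gaze direction to pixel coords of an equirectangular image.

    Convention assumed here: +z forward (panorama center), +x right, +y down.
    """
    x, y, z = v / np.linalg.norm(v)
    lon = np.arctan2(x, z)               # -pi .. pi, 0 at the image center column
    lat = np.arcsin(np.clip(y, -1, 1))   # -pi/2 .. pi/2, positive looks downward
    u = (lon / (2 * np.pi) + 0.5) * width
    w = (lat / np.pi + 0.5) * height
    return int(u) % width, min(height - 1, max(0, int(w)))

# Aggregate into a heatmap, e.g. using gaze_point_3d_{x,y,z} per sample.
heat = np.zeros((1000, 2000))   # (height, width) of the panorama, placeholder size
for v in gaze_vectors:          # gaze_vectors: assumed iterable of (x, y, z) values
    px, py = direction_to_equirect_pixel(np.asarray(v, float), heat.shape[1], heat.shape[0])
    heat[py, px] += 1
# Optionally blur the result (e.g. scipy.ndimage.gaussian_filter) before overlaying.
```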
When using the VR eye tracking device we aren't getting virtual reality imaging on the World view screen in Pupil Capture, and the eye 1 camera view keeps freezing or disconnecting. Looking for some help to rectify these problems. Thanks, Gary
See the docs regarding screencasting the VR environment to Capture. https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#screencast
Please contact info@pupil-labs.com regarding the disconnecting camera.
Thanks Papr. My experience has been all with Pupil Core and Invisible, but I was helping someone out yesterday at Heriot-Watt University who was setting up Pupil for eye tracking in VR. What I noticed was that the World View scene was streaming one of the eye camera feeds, and whichever one it was then kept freezing. So what is the default image in the world view if you don't screencast the VR environment to it?
This explains the freezing. Sounds like the world and eye camera processes are fighting over the same camera.
The default VR world view is no connected camera -> a grey frame. If you restart with default settings, that should come up properly.
Thanks Papr. Last question: as I said, I'm helping this group who have been setting up the eye tracking in VR at the university, but I've just been dropped in the deep end. When you record the VR eye tracking data in Capture and download it, is there a video file included of what the participants were looking at in VR, and also the gaze overlay, so we can tell what they were looking at?
If the scene is streamed to Capture, Capture will record the scene as video and the gaze as raw data separately. If you open the recording in Pupil Player, it will preview the gaze on the recorded video. Using the world video exporter plugin, you can export the scene video with overlayed gaze. https://docs.pupil-labs.com/core/software/pupil-player/
So you have to first set up the screen-casting facility to stream the scene to Capture?
Papr, the students who are setting up the system are 4th year Psychology students and so don't have a great deal of experience. Would they be able to set up the screencasting of the VR environment to Capture by following the documentation, or will they need someone with more expertise/experience to do this?
Maybe. It is possible to set up something similar to the screencast demo linked in the docs.
They will need some experience with Unity, especially when it comes to connecting prefabs.
Thanks Papr, we can get someone from computer science to help them. I appreciate all your help today.
I will pass this on to them, and once again thanks for all the help.