Hi there,
I wanted to ask about calibrating the Pupil Core eye tracker. I tried calibrating using the natural features method. Unfortunately, it was not accurate, even with the markers pinned to the wall. Any tips? The screen marker calibration worked OK, though. Urgently seeking advice. Thank you.
Hi @user-d5d36b 👋 I see you have contacted us via email. I will respond to your message there 🙂
Thanks Richard.
Hi everyone! My name is Daniel and I'm a PhD student in Switzerland. I'm currently investigating whether your technology has an application in the treatment of eye cancer. Do any of you have time to answer some of my questions?
Hi @user-3d239a 👋. Feel free to ask away!
Fantastic! The code in your Git repo gives access to the deep learning network used for eye tracking, correct?
Since you ask about deep learning, I assume you're interested in Pupil Invisible rather than Core. Pupil Invisible's gaze estimation pipeline isn't open-source. That said, we expose raw data to the user, and you can certainly integrate third-party software/hardware with Pupil Invisible via the real-time API. Will your questions be solely around Pupil Invisible? If so, let's move to the Invisible channel.
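For integrating third-party software via the real-time API, here is a minimal sketch using the `pupil-labs-realtime-api` Python package's simple interface. It assumes the package is installed (`pip install pupil-labs-realtime-api`) and a Companion device is reachable on the same network; the 1088×1080 scene-camera resolution used for normalization is an assumption, so check your own device.

```python
"""Sketch: receive live gaze from Pupil Invisible over the real-time API.

Assumes the `pupil-labs-realtime-api` package and a Companion device on the
same local network; attribute names may differ between package versions.
"""

def normalize_gaze(x_px, y_px, width=1088, height=1080):
    """Map scene-camera pixel coordinates to [0, 1] normalized coordinates.

    The 1088x1080 scene-camera resolution is an assumption; verify it for
    your hardware before relying on these values.
    """
    return x_px / width, y_px / height


def stream_gaze(n_samples=10):
    # Imported here so normalize_gaze stays usable without the package.
    from pupil_labs.realtime_api.simple import discover_one_device

    device = discover_one_device()  # blocks until a device is discovered
    try:
        for _ in range(n_samples):
            gaze = device.receive_gaze_datum()  # x/y in scene-camera pixels
            print(normalize_gaze(gaze.x, gaze.y), gaze.timestamp_unix_seconds)
    finally:
        device.close()


if __name__ == "__main__":
    stream_gaze()
```

The device loop is kept separate from the coordinate helper so the latter can be reused (or unit-tested) without hardware attached.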
Yes, it seems like Pupil Invisible is what I'm interested in then.
Hi everyone! I am a PhD student in Canada. I am interested in using eye tracking in a surgical setting. We created a DIY setup, but it is not performing very well relative to our requirements. I was wondering if it is possible to receive a loaner Core system so we can evaluate its performance and judge whether it would be worth purchasing?
Hi @user-864bf1! While we do not offer free loans, we do have a 30-day free return policy, which you could use as a trial. Also feel free to book a video demo via [email removed] to discuss the feasibility of your use case beforehand with one of our product specialists.
Thanks, Marc!
Hi all ~ 👀 I am a grad student at UT Austin looking at gaze tracking and saw the awesome core demo (https://youtu.be/7wuVCwWcGnE). Is there a 3D geometric eye model used here that determines the size and placement of the green circle based on the pupil center location? Where can I locate this model if there is one? 🙏🏿
Hi @user-42a104 👋 😸 👋
Yes, there is a 3D eye model. You might want to check out the following: 1. pye3d repo & paper linked in the repo: https://github.com/pupil-labs/pye3d-detector/ 2. plugin in Pupil Core software: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detector_plugins/pye3d_plugin.py
Fascinating work! The paper's discussion mentions that "inducing shifts of the eyeball, headset-slippage... effectively limits the maximal integration time". Has this issue been addressed in more recent iterations? Another question is whether this eye model can also be adapted to unconstrained settings where the subject's eyes and the camera are farther apart and not in fixed positions. I'd love to learn more 🥲.
Super... thank you @wrp . I shall have a look at them 😘
@user-92dca7 when you have time please could you respond to ☝️
Hey @user-42a104, thanks for reaching out! We have indeed further considered the effect of slippage on model fitting. As a result, in the latest version of pye3d, three different eye models are fitted concurrently. Each eye model reflects pupil observations gathered on a different timescale, ranging from a few seconds to a couple of minutes. You can find more information about this idea in our documentation: https://docs.pupil-labs.com/developer/core/pye3d/#pye3d-estimates-eyeball-position-on-different-time-scales As to your second question, note that pye3d is designed to work with a near-eye camera that provides close-up images of the eye and is fairly static with respect to the eyeball center. When you say unconstrained, do you mean a setup, e.g., with a webcam and a potentially moving person in front of it?
@kai thank you for the details. By unconstrained setup, yes, I have in mind a person potentially moving and at, say, a 1 to 3 m distance from the observing camera. What would it take to bridge that gap and enable it to work beyond near-eye?
@user-42a104 What you are describing appears to me to be a setup best served by a remote eye tracking system. In particular, large-scale movement relative to the camera violates the assumptions of the pye3d eye model and would thus necessitate a different approach. Also note that pye3d is based on an analysis of pupil shape; it seems questionable to me whether you could achieve sufficient resolution of the pupil at a 3 m distance to make eye model fitting feasible.
@user-92dca7 thank you for the clarification 🙏🏿. I'd be interested in researching how to make the eye model work at a greater distance. If someone on your team is interested in this problem, just let me know.
Hello, we are experimenting with the Pupil Core device. How can we analyze the data from this device? We want to get the following information: heat map, scan path, and areas of interest. We used these plugins during the experiment: Accuracy Visualizer, Fixation Detector, Blink Detector, Surface Tracker.
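For heat maps and areas of interest, the Surface Tracker export from Pupil Player is the usual starting point: it writes one CSV per surface (`gaze_positions_on_surface_<name>.csv`) containing, among others, the columns `x_norm`, `y_norm` (gaze in normalized surface coordinates) and `on_surf`. Below is a minimal post-processing sketch assuming those column names; verify them against your own export, and treat the file name and grid size as illustrative.

```python
"""Sketch: build a coarse gaze heat map from a Pupil Player Surface Tracker
export. Column names (x_norm, y_norm, on_surf) are assumptions based on a
typical export; check them against your own CSV header."""

import csv
from collections import Counter


def heatmap_from_rows(rows, bins=4):
    """Count on-surface gaze samples in a bins x bins grid over the surface.

    `rows` is an iterable of dicts with keys 'x_norm', 'y_norm', 'on_surf'
    (string values, as produced by csv.DictReader).
    """
    counts = Counter()
    for row in rows:
        if row["on_surf"] != "True":
            continue  # ignore gaze samples that fell outside the surface
        # Clamp to the last bin so x_norm == 1.0 does not overflow the grid.
        gx = min(int(float(row["x_norm"]) * bins), bins - 1)
        gy = min(int(float(row["y_norm"]) * bins), bins - 1)
        counts[(gx, gy)] += 1
    return counts


def heatmap_from_csv(path, bins=4):
    """Load a surface-gaze CSV exported by Pupil Player and bin it."""
    with open(path, newline="") as f:
        return heatmap_from_rows(csv.DictReader(f), bins)
```

The resulting counts per grid cell can then be rendered (e.g. with matplotlib's `imshow`) or compared against AOI definitions; note that Pupil Player can also render surface heat maps directly, so this is mainly useful for custom analyses.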