research-publications


user-d5d36b 05 April, 2022, 08:51:48

Hi there,

user-d5d36b 05 April, 2022, 08:53:17

I wanted to check about calibrating the Pupil Core eye tracker. I tried calibrating it using the natural features choreography, but unfortunately it was not accurate, even with markers pinned to the wall. The screen marker calibration worked OK, though. Any tips? Urgently seeking advice. Thank you.

user-9429ba 05 April, 2022, 09:38:49

Hi @user-d5d36b 👋 I see you have contacted us via email. I will respond to your message there 🙂

user-d5d36b 05 April, 2022, 09:44:53

Thanks, Richard.

user-3d239a 08 April, 2022, 10:18:30

Hi everyone! My name is Daniel and I'm a PhD student in Switzerland. I'm currently investigating whether your technology has an application in the treatment of eye cancer. Do any of you have time to answer some of my questions?

nmt 08 April, 2022, 12:19:00

Hi @user-3d239a 👋. Feel free to ask away!

user-3d239a 08 April, 2022, 12:26:33

Fantastic! The code in your Git repo gives access to the deep learning network used for eye tracking, correct?

nmt 08 April, 2022, 12:32:23

Since you ask about deep learning, I assume you're interested in Pupil Invisible rather than Core. Pupil Invisible's gaze estimation pipeline isn't open-source. That said, we expose raw data to the user, and you can certainly integrate third-party software/hardware with Pupil Invisible via the real-time API. Will your questions be solely around Pupil Invisible? If so, let's move to the Invisible channel.
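
To make the real-time API mention concrete, here is a minimal sketch of streaming gaze from Pupil Invisible using the pupil-labs-realtime-api Python client's "simple" interface; treat the exact method and attribute names as assumptions to check against your installed client version:

```python
# Minimal sketch: receive live gaze data from a Pupil Invisible device
# over the real-time API (pip install pupil-labs-realtime-api).
# The Companion phone must be on the same local network.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # blocks until a device is discovered
print(f"Connected to {device.phone_name}")

try:
    for _ in range(100):  # grab 100 gaze samples, then stop
        gaze = device.receive_gaze_datum()
        print(f"t={gaze.timestamp_unix_seconds:.3f}s  "
              f"gaze=({gaze.x:.1f}, {gaze.y:.1f}) px  worn={gaze.worn}")
finally:
    device.close()
```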

user-3d239a 08 April, 2022, 12:33:29

Yes, it seems like Pupil Invisible is what I'm interested in, then.

user-864bf1 12 April, 2022, 20:33:08

Hi everyone! I am a PhD student in Canada. I am interested in using eye tracking in a surgical setting. We created a DIY setup, but it is not performing very well relative to our requirements. I was wondering if it would be possible to receive a loaner Core system to evaluate its performance and judge whether it would be worth purchasing?

marc 13 April, 2022, 09:26:05

Hi @user-864bf1! While we do not offer free loans, we do have a 30-day free return policy, which you could use as a trial. Also, feel free to book a video demo via [email removed] to discuss the feasibility of your use case beforehand with one of our product specialists.

user-864bf1 13 April, 2022, 13:13:21

Thanks, Marc!

user-42a104 20 April, 2022, 09:32:57

Hi all ~ 👀 I am a grad student at UT Austin looking at gaze tracking, and I saw the awesome Core demo (https://youtu.be/7wuVCwWcGnE). Is there a 3D geometric eye model used here that determines the size and placement of the green circle based on the pupil center location? Where can I find this model, if there is one? 🙏🏿

Chat image

wrp 20 April, 2022, 09:36:58

Hi @user-42a104 👋 😸 👋

Yes, there is a 3D eye model. You might want to check out the following:
1. The pye3d repo and the paper linked in it: https://github.com/pupil-labs/pye3d-detector/
2. The pye3d plugin in the Pupil Core software: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detector_plugins/pye3d_plugin.py
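
For anyone who wants to try pye3d outside of the Pupil Core software, here is a minimal sketch following the usage pattern in the pye3d repo; the focal length, resolution, and input image below are placeholders, so substitute your own eye camera's values:

```python
# Minimal sketch: 2D pupil detection + pye3d 3D eye-model fitting on a
# single grayscale eye-camera frame (pip install pupil-detectors pye3d).
import cv2
from pupil_detectors import Detector2D
from pye3d.detector_3d import CameraModel, Detector3D, DetectorMode

# Placeholder intrinsics -- use your eye camera's focal length/resolution.
camera = CameraModel(focal_length=561.5, resolution=(400, 400))
detector_2d = Detector2D()
detector_3d = Detector3D(camera=camera, long_term_mode=DetectorMode.blocking)

frame = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
result_2d = detector_2d.detect(frame)
result_2d["timestamp"] = 0.0  # pye3d expects a timestamp on each datum

result_3d = detector_3d.update_and_detect(result_2d, frame)
print(result_3d["sphere"])     # fitted eyeball (center, radius)
print(result_3d["circle_3d"])  # 3D pupil circle (center, normal, radius)
```

In practice the model needs many frames with varied gaze directions before the eyeball estimate converges, so you would run this in a loop over a video stream rather than on a single image.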

user-42a104 20 April, 2022, 09:39:20

Super... thank you @wrp . I shall have a look at them 😘

user-42a104 20 April, 2022, 11:42:05

Fascinating work! The paper's discussion mentions that "inducing shifts of the eyeball, headset-slippage... effectively limits the maximal integration time". Has this issue been addressed in more recent iterations? Another question is whether this eye model can also be adapted to unconstrained settings, where the subject's eyes and the camera are farther apart and not in fixed positions. I'd love to learn more 🥲.

wrp 20 April, 2022, 17:29:11

@user-92dca7 when you have time, could you please respond to ☝️?

user-92dca7 21 April, 2022, 11:09:55

Hey @user-42a104, thanks for reaching out! We have indeed considered the effect of slippage on model fitting further. As a result, in the latest version of pye3d, three different eye models are fitted concurrently. Each eye model reflects pupil observations gathered on a different timescale, ranging from a few seconds to a couple of minutes. You can find more information about this idea in our documentation: https://docs.pupil-labs.com/developer/core/pye3d/#pye3d-estimates-eyeball-position-on-different-time-scales

As to your second question, note that pye3d is designed to work with a near-eye camera that provides close-up images of the eye and is fairly static with respect to the eyeball center. When you say unconstrained, do you mean a setup with, e.g., a webcam and a potentially moving person in front of it?

user-42a104 21 April, 2022, 17:05:04

@kai thank you for the details. By an unconstrained setup, yes, I have in mind a person potentially moving, at, say, a 1–3 m distance from the observing camera. What would it take to bridge that gap and enable it to work beyond near-eye?

user-92dca7 21 April, 2022, 17:37:14

@user-42a104 The setup you are describing appears to me to be best served by a remote eye-tracking system. In particular, large-scale movement with respect to the camera violates the assumptions of the pye3d eye model and would thus necessitate a different approach. Also note that pye3d is based on an analysis of pupil shape; it seems questionable to me that you could achieve sufficient resolution of the pupil at a 3 m distance to make eye-model fitting feasible.

user-42a104 22 April, 2022, 04:46:21

@user-92dca7 thank you for the clarification 🙏🏿. I'd be interested in researching how to make the eye model work at greater distances. If someone on your team is interested in this problem, just let me know.

user-865b37 23 April, 2022, 11:55:11

Hello, we are experimenting with the Pupil Core device. How can we analyze its data? We want to get the following: heat maps, scan paths, and areas of interest. We used these plugins during the experiment: Accuracy Visualizer, Fixation Detector, Blink Detector, and Surface Tracker.
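
The Surface Tracker's export is the usual starting point for these visualizations. As a minimal sketch, assuming the standard Pupil Player export layout and the norm_pos_x/norm_pos_y/on_surf columns (adjust the file name to match your surface's name), a heat map could be built like this:

```python
# Minimal sketch: build a gaze heat map from a Pupil Player surface export.
# File path and column names assume the standard export format -- adjust
# them to match your own export.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Screen.csv")
df = df[df["on_surf"].astype(str) == "True"]  # keep gaze that hit the surface

# 2D histogram over normalized surface coordinates (0..1 on both axes)
heatmap, _, _ = np.histogram2d(
    df["norm_pos_y"], df["norm_pos_x"], bins=64, range=[[0, 1], [0, 1]]
)

plt.imshow(heatmap, origin="lower", cmap="hot", extent=[0, 1, 0, 1])
plt.title("Gaze heat map on surface")
plt.xlabel("norm_pos_x")
plt.ylabel("norm_pos_y")
plt.colorbar(label="sample count")
plt.show()
```

A scan path can be drawn the same way from the fixations-on-surface export, plotting consecutive fixation centers connected by lines; areas of interest can be evaluated by counting which fixations fall inside rectangles you define in the same normalized surface coordinates.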

End of April archive