Hi @user-7fc432!
Calling this a "study" might be an overstatement, but we did collect some sample data and would be happy to share the Pupil Cloud workspace with you. This will allow you to review how we approached the automated mapping and aggregation.
If you haven't yet purchased our eye trackers and would like to learn more about them, or to discuss the workspace further, feel free to book a call with us.
Let me know if you'd like access!
Is there any documentation or paper available for this study?
We are interested in it, would like to reproduce it, and are considering using your hardware.
Hi @user-98b2a9!
Apologies for the delayed response, I just realized we hadn't replied here. As mentioned in the documentation, Face Mapper leverages the RetinaFace algorithm, so the most appropriate citation would be:
```bibtex
@inproceedings{deng2020retinaface,
  title={Retinaface: Single-shot multi-level face localisation in the wild},
  author={Deng, Jiankang and Guo, Jia and Ververas, Evangelos and Kotsia, Irene and Zafeiriou, Stefanos},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5203--5212},
  year={2020}
}
```
But since you are using our implementation, you may also want to refer to it as Face Mapper (Pupil Cloud) for reproducibility.
Thanks!
Hello,
I am working on a research problem (unfortunately not with Pupil Labs hardware, so I hope it is OK if I post this question here). We have a moving stereo IR camera and a DL-based pupil and sclera tracking model.
We are trying to estimate gaze from this, and it has been complicated. Since the cameras are IR and our stereo pair is at an odd angle, it is hard to match points between the two frames. And since the camera also moves, the Swirski model does not work well either, and unfortunately we can't have a calibration step.
Would appreciate any ideas.
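One generic building block for this (not a Pupil Labs method, just a standard computer-vision sketch): if you can match the pupil center between the two stereo views and know the projection matrices, you can triangulate it in 3D via linear DLT, and the direction from an eyeball-center estimate to the triangulated pupil center approximates the optical axis. The camera intrinsics, baseline, and 3D point below are all made-up toy values for illustration.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solution is the null vector of A (last row of V^T from the SVD).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point to pixel coordinates with projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical toy rig: identical intrinsics, 60 mm baseline along X.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])

# Hypothetical pupil center in the camera-1 frame (mm).
X_true = np.array([10.0, -5.0, 400.0])
X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))

# With a second triangulated point (e.g. an eyeball-center estimate),
# the normalized difference vector approximates the gaze ray.
print(X_est)
```

This only gives the optical axis; without a calibration step you cannot recover the subject-specific kappa offset to the visual axis, so expect a few degrees of constant bias.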