Hi, I am looking for information on how long, or over how many wearing sessions, it takes for children aged 6-10 years to get used to the glasses and perform their activities unaffected by them. We want to test scenarios that come as close as possible to the participants' everyday reality.
Hello everyone. I would like to share our recent work:
2024 ISMAR-Journal Track Paper: Tasks Reflected in the Eyes: Egocentric Gaze-Aware Visual Task Type Recognition in Virtual Reality
The project (including video, code and dataset) is here: https://zhimin-wang.github.io/TaskTypeRecognition.html
In this paper, we propose four scene-agnostic task types to facilitate task type recognition across a broader range of scenarios. We present a new dataset that includes eye and head movement data recorded from 20 participants while they engaged in the four task types across 15 360-degree VR videos. Using this dataset, we propose an egocentric gaze-aware task type recognition method, TRCLP, which achieves promising results. Additionally, we illustrate the practical applications of task type recognition with three examples. Our work offers valuable insights for content developers in designing task-aware intelligent applications.
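For anyone who wants a rough feel for what gaze-based task type recognition involves before looking at the project page, here is a minimal sketch. This is not our TRCLP implementation (that code is linked above); the features, synthetic data, window length, and classifier choice below are all placeholders.

```python
# Minimal sketch of task type recognition from eye/head movement features.
# NOT the TRCLP method from the paper; features and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_window_features(gaze_xy, head_yaw_pitch):
    """Summarize one fixed-length recording window into a feature vector."""
    gaze_vel = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1)        # gaze speed per sample
    head_vel = np.linalg.norm(np.diff(head_yaw_pitch, axis=0), axis=1) # head speed per sample
    return np.array([
        gaze_vel.mean(), gaze_vel.std(),           # how fast / how variably the eyes move
        head_vel.mean(), head_vel.std(),           # how much the head moves
        gaze_xy[:, 0].std(), gaze_xy[:, 1].std(),  # spatial spread of gaze
    ])

rng = np.random.default_rng(0)
# Stand-in for real recordings: 200 windows of 120 samples, 4 task-type labels (0..3).
X = np.stack([extract_window_features(rng.normal(size=(120, 2)),
                                      rng.normal(size=(120, 2)))
              for _ in range(200)])
y = rng.integers(0, 4, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

On real data the labels would come from the recorded task conditions rather than random draws, and the feature set would be richer, but the windowed extract-features-then-classify loop is the general shape of such a pipeline.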
I would also like to share our recent work:
2024 ISMAR Conference Paper: GazeRing: Enhancing Hand-Eye Coordination with Pressure Ring in Augmented Reality
The project (including video, code and dataset) is here: https://zhimin-wang.github.io/GazeRing.html
In this paper, we propose GazeRing, a multimodal interaction technique that combines eye gaze with a smart ring, enabling private and subtle hand-eye coordination while allowing users' hands complete freedom of movement. Specifically, we design a pressure-sensitive ring that supports sliding interactions in eight directions to facilitate efficient 3D object manipulation. Additionally, we introduce two control modes for the ring: finger-tap and finger-slide, to accommodate diverse usage scenarios. Through user studies involving object selection and translation tasks under two eye-tracking accuracy conditions, with two degrees of occlusion, GazeRing demonstrates significant advantages over existing techniques that do not require obvious hand gestures (e.g., gaze-only and gaze-speech interactions). Our GazeRing technique achieves private and subtle interactions, potentially improving the user experience in public settings.
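To give a feel for the interaction idea, here is a minimal sketch of how an eight-direction ring slide could translate a gaze-selected object. This is not our actual GazeRing implementation (see the project page for that); the direction names, step size, and 2D scene are placeholders.

```python
# Minimal sketch of the gaze + ring interaction idea, NOT the GazeRing
# implementation: gaze selects an object, and a slide on a pressure ring
# nudges it along one of eight directions. All names/values are placeholders.
import math
from dataclasses import dataclass

# Eight slide directions, mapped to unit vectors in the screen plane.
DIRECTIONS = {
    name: (math.cos(math.radians(a)), math.sin(math.radians(a)))
    for name, a in [("E", 0), ("NE", 45), ("N", 90), ("NW", 135),
                    ("W", 180), ("SW", 225), ("S", 270), ("SE", 315)]
}

@dataclass
class SceneObject:
    name: str
    x: float = 0.0
    y: float = 0.0

def on_ring_slide(target: SceneObject, direction: str, step: float = 0.01):
    """Translate the gaze-selected object along the slid direction."""
    dx, dy = DIRECTIONS[direction]
    target.x += dx * step
    target.y += dy * step

cube = SceneObject("cube")   # assume the gaze ray currently hits this object
on_ring_slide(cube, "NE")    # one slide event reported by the ring
print(cube)                  # cube moved ~0.007 along both x and y
```

The real technique works on 3D manipulation with two control modes (finger-tap and finger-slide); this sketch only illustrates the direction-to-translation mapping.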
Hello, just trying my luck here: does anyone have, or know of, any open eye-tracking data on attention allocation tasks? For example, data for examining saccades or pupil dilation when subjects allocate attention between image and semantic information, etc. Thanks!
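For context, this is roughly the kind of analysis I would run on such data: a simple velocity-threshold (I-VT) pass to pull out saccade samples. The threshold and sampling rate below are placeholder values, not recommendations.

```python
# Minimal I-VT (velocity threshold) saccade detection sketch.
# Threshold and sampling rate are placeholders; tune them for your recordings.
import numpy as np

def detect_saccades(gaze_deg, hz=250.0, vel_thresh=30.0):
    """Return a boolean mask marking samples whose angular velocity (deg/s)
    exceeds the threshold, i.e., candidate saccade samples."""
    vel = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * hz  # deg/s
    return np.concatenate([[False], vel > vel_thresh])

# Synthetic trace: gaze mostly still, with one fast jump in the middle.
gaze = np.zeros((100, 2))
gaze[50:] = [5.0, 0.0]                     # 5-degree jump = one "saccade"
print(detect_saccades(gaze).nonzero()[0])  # -> [50]
```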