@everyone we're excited to introduce our new tool: Tag Aligner!
Experience your participants' journey and gaze direction in a real-world environment, digitally reimagined! With Tag Aligner, you can dive deep into where someone looked and how they moved.
In the video: on the right, the Neon scene video with gaze and fixation visualization; on the left, a 3D model of the environment created from a scan with LumaLabsAI.
Key Features:
- Integrates Neon eye tracking and head-pose data with a user-supplied 3D model or scan
- Reveals 3D spatial gaze and pose, i.e., where the wearer looked and how they moved within the model
- Enables plotting of participant gaze and trajectories from various perspectives, opening a wealth of analysis and visualization options (see the sketch below)
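To give a taste of the trajectory plotting, here is a minimal Python sketch that renders a wearer's head path in 3D. The file name `poses.csv` and the column names `tx`, `ty`, `tz` are assumptions for illustration, not the confirmed Tag Aligner export format; check the guide for the actual schema.

```python
# Minimal sketch: plot a wearer's head trajectory in 3D.
# Assumes a hypothetical poses.csv with columns tx, ty, tz
# (one head position per scene-camera frame); adapt to the
# actual Tag Aligner export described in the guide.
import pandas as pd
import matplotlib.pyplot as plt

poses = pd.read_csv("poses.csv")

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(poses["tx"], poses["ty"], poses["tz"], linewidth=1)
ax.scatter(poses["tx"].iloc[0], poses["ty"].iloc[0], poses["tz"].iloc[0],
           color="green", label="start")
ax.scatter(poses["tx"].iloc[-1], poses["ty"].iloc[-1], poses["tz"].iloc[-1],
           color="red", label="end")
ax.set_xlabel("x [m]")
ax.set_ylabel("y [m]")
ax.set_zlabel("z [m]")
ax.legend()
plt.show()
```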
Get creative and share your projects in https://discord.com/channels/285728493612957698/1238043619999617125
Learn more: Tag Aligner Guide
@everyone We just published a new paper on pupillometry with Neon!
Accurately measuring pupil size is crucial for many applications. In this paper, we comprehensively evaluate the robustness of Neon's pupil-size measurements with respect to variations in gaze angle and slippage. We tested the capabilities of pupillometry with Neon in real-world experimental settings by replicating a number of known experimental results from previously published pupillometry research.
Our experimental results affirm Neon as a versatile and reliable measurement device, enabling pupillometry studies in controlled laboratory conditions as well as highly dynamic naturalistic environments.
Check out the paper here: https://zenodo.org/records/10057185
Feel free to post questions in the neon channel. We are looking forward to seeing your pupillometry experiments (and work in progress).
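If you want to start tinkering right away, below is a rough Python sketch of a common pupillometry preprocessing step: subtractive baseline correction around a stimulus onset. The file name `3d_eye_states.csv` and the column names are assumptions about the recording export, not taken from the paper; adapt them to the headers in your actual download.

```python
# Rough sketch: subtractive baseline correction of pupil size.
# File/column names ("3d_eye_states.csv", "timestamp [ns]",
# "pupil diameter left [mm]") are assumed; adjust to your export.
import pandas as pd

eye_states = pd.read_csv("3d_eye_states.csv")
ts = eye_states["timestamp [ns]"]
t = (ts - ts.iloc[0]) * 1e-9  # seconds since recording start
pupil = eye_states["pupil diameter left [mm]"]

onset = 12.5  # hypothetical stimulus onset, seconds into the recording

# Mean pupil size in the 0.5 s before onset serves as the baseline
baseline = pupil[(t >= onset - 0.5) & (t < onset)].mean()

# Baseline-corrected trace for the first 3 s after onset
window = (t >= onset) & (t < onset + 3.0)
corrected = pupil[window] - baseline
print(corrected.describe())
```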
@everyone New features for Pupil Cloud!
We added a Manual Mapping enrichment - a streamlined interface enabling you to manually map fixations onto a reference image.
Visualize the mapped data just as you would with any of our automated gaze mapping tools. Generate heatmaps, draw AOIs and get metrics on them, and export mapped data as CSVs (see the sketch below for a custom heatmap).
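For custom visualizations beyond the built-in heatmaps, something like the following rough Python sketch can work on the exported CSV. The file name `fixations.csv` and the column names `fixation x [px]` / `fixation y [px]` are assumptions about the export format; adjust them to the headers in your download.

```python
# Rough sketch: render a fixation heatmap over a reference image.
# File/column names (fixations.csv, "fixation x [px]", "fixation y [px]")
# are assumptions about the export; adjust to your download.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

img = plt.imread("reference_image.jpg")
fixations = pd.read_csv("fixations.csv")

# Bin fixation positions into a grid matching the image resolution
h, w = img.shape[:2]
heat, _, _ = np.histogram2d(
    fixations["fixation y [px]"], fixations["fixation x [px]"],
    bins=[h, w], range=[[0, h], [0, w]],
)
heat = gaussian_filter(heat, sigma=25)  # smooth point hits into a heatmap

plt.imshow(img)
plt.imshow(heat, cmap="jet", alpha=0.5)
plt.axis("off")
plt.show()
```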
Manual Mapper was designed for standard fixation mapping tasks, but it has a lot of potential for broader uses, e.g., semantic labeling tasks. We encourage you to explore!
A bunch of other features are on Cloud as well: post-hoc gaze offset, heatmap visualization updates, and post-hoc wearer changes. Check out the details in the release notes: https://pupil-labs.com/releases/cloud-v7-2