πŸ“― announcements



nmt 07 June, 2024, 08:37:55

@everyone we're excited to introduce our new tool: Tag Aligner! πŸͺ„

Experience your participants' journey and gaze direction in a real-world environment, digitally reimagined! With Tag Aligner, you can dive deep into where someone looked and how they moved.

In the video below, on the right: Neon scene video with gaze and fixation visualization. On the left: a 3D model of the environment created from a scan using LumaLabsAI.

πŸ” Key Features: - Integrates Neon eye tracking and head-pose data with a user-supplied 3D model or scan - Reveals 3D spatial gaze and pose, i.e., where the wearer looked and how they moved within the model - Enables participant gaze and trajectory plotting from various perspectives - opening a wealth of analysis and visualization options

Get creative and share your projects in https://discord.com/channels/285728493612957698/1238043619999617125

Learn more: Tag Aligner Guide

wrp 14 June, 2024, 03:50:29

@everyone We just published a new paper on pupillometry with Neon!

Accurately measuring pupil size is crucial for many applications. In this paper, we comprehensively evaluate the robustness of Neon’s pupil-size measurements to variations in gaze angle and slippage. We also tested pupillometry with Neon in real-world experimental settings by replicating several well-known results from published pupillometry research.

Our experimental results affirm Neon as a versatile and reliable measurement device, enabling pupillometry studies in controlled laboratory conditions as well as highly dynamic naturalistic environments.
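For a first look at pupil-size data from your own recordings, a quick sketch like the one below can help. It assumes the Pupil Cloud timeseries export layout with a 3d_eye_states.csv containing per-eye pupil diameters; file and column names may differ between export versions, so treat them as assumptions.

```python
# Sketch: plot per-eye pupil diameter over time from a Neon export.
# File name and column labels are assumed from a typical Cloud timeseries export.
import pandas as pd
import matplotlib.pyplot as plt

eye = pd.read_csv("3d_eye_states.csv")
t = (eye["timestamp [ns]"] - eye["timestamp [ns]"].iloc[0]) / 1e9  # ns -> s

plt.plot(t, eye["pupil diameter left [mm]"], label="left eye")
plt.plot(t, eye["pupil diameter right [mm]"], label="right eye")
plt.xlabel("time [s]")
plt.ylabel("pupil diameter [mm]")
plt.legend()
plt.show()
```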

Check out the paper here: https://zenodo.org/records/10057185

Feel free to post questions in πŸ‘“ neon. We are looking forward to seeing your pupillometry experiments (and work in progress).


wrp 28 June, 2024, 06:27:29

@everyone New features for Pupil Cloud! πŸ‘€ ☁️

We added a Manual Mapping enrichment - a streamlined interface enabling you to manually map fixations onto a reference image.

Visualize the mapped data just as you would with any of our automated gaze mapping tools: generate heatmaps, draw AOIs and get metrics on them, and export the mapped data as CSVs.

Manual Mapper was designed for standard fixation mapping tasks, but it has a lot of potential for broader use, e.g., semantic labeling tasks. We encourage you to explore!
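If you want to roll your own visualization from the exported data, here is a rough sketch of building a heatmap over the reference image. The file names and column labels are assumptions based on a typical enrichment CSV export, not a guaranteed schema.

```python
# Sketch: overlay a fixation heatmap on the reference image.
# "fixations.csv" columns and "reference_image.png" are assumed for illustration.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

fix = pd.read_csv("fixations.csv")
ref = plt.imread("reference_image.png")

# 2D histogram of fixation positions, binned over the image extent
heat, _, _ = np.histogram2d(
    fix["fixation y [px]"], fix["fixation x [px]"],
    bins=64, range=[[0, ref.shape[0]], [0, ref.shape[1]]],
)

plt.imshow(ref)
plt.imshow(heat, extent=(0, ref.shape[1], ref.shape[0], 0), cmap="hot", alpha=0.5)
plt.axis("off")
plt.show()
```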

A bunch of other features landed on Cloud as well: post-hoc gaze offset, heatmap visualization updates, and post-hoc wearer changes. Check out the details in the release notes: https://pupil-labs.com/releases/cloud-v7-2

End of June archive