👓 neon



user-bda2e6 01 November, 2024, 15:25:19

Hello! I have some questions regarding using Neon and Psychopy together at the same time, to which channel should I direct these questions?

user-cdcab0 01 November, 2024, 15:32:48

Hi, @user-bda2e6 - this is the right place - what's your question?

user-bda2e6 01 November, 2024, 15:35:12

Thank you for the reply! I just can't get them connected. I followed the steps on this website https://www.psychopy.org/api//iohub/device/eyetracker_interface/PupilLabs_Neon_Implementation_Notes.html, and also tried the sample experiment. But every time I run the experiment in Psychopy I get errors and it crashes

user-cdcab0 01 November, 2024, 15:44:37

Does Neon's built-in monitor app work for you in a web browser? If that doesn't work, PsychoPy's plugin won't work either, and it's likely due to a network configuration problem. Some network problems can be worked-around by using the device's IP address instead of "neon.local" (both in the web-based monitor app and in PsychoPy's experiment settings), but some issues are more complex and require more troubleshooting.

If the web monitor works ok for you and using the IP address doesn't help in PsychoPy, then it would be helpful to know what errors you are seeing in PsychoPy. Some university and corporate networks have restrictions that prevent things from working properly, so it would also be good to know what your network environment is.

Finally, for your reference, that link you posted has an extra slash that breaks the formatting on PsychoPy's website, and it's unfortunately the link you get when googling. The corrected link is nicer to look at. Additionally, you may be interested in our Neon+PsychoPy documentation as well.

user-bda2e6 01 November, 2024, 22:40:24

Thank you! I’ll look into it and let you know if there are any problems @user-cdcab0

user-a8bc07 01 November, 2024, 18:20:56

Hi there. Is there a problem with Pupil Cloud?

user-f43a29 04 November, 2024, 09:28:39

Hi @user-a8bc07 , can you please let us know if you are still having trouble with your recordings?

user-a8bc07 01 November, 2024, 18:21:09

my recordings keep buffering since this morning

user-d086cf 01 November, 2024, 19:13:00

Hey guys. I'm quickly getting up to speed but have one more question. We were able to get surface tracking with april tags working for our study, and can see the log it generates with the normalized gaze positions on the surface. Is there a way to automatically use these gaze values in the other logs generated by pupil player instead of the pixel values of the world camera? Or does this need to be done by a custom script post export from pupil player?

user-f43a29 04 November, 2024, 09:27:12

Hi @user-d086cf , I assume you mean Neon Player? Which other data exports would you like to convert?

Since the data are all timestamped from the same clock, technically, you can substitute the normalized & mapped gaze coordinates by finding the matching timestamps.
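For illustration, a minimal sketch of that matching with pandas (the file and column names here are assumptions based on typical Neon Player surface-tracker exports; check the headers of your actual export):

import pandas as pd

# Placeholder file names - adjust to your Neon Player export
surface_gaze = pd.read_csv("gaze_positions_on_surface_Screen.csv").sort_values("gaze_timestamp")
world_gaze = pd.read_csv("gaze_positions.csv").sort_values("gaze_timestamp")

# Nearest-timestamp match: attach the normalized surface coordinates to each world-camera gaze sample
merged = pd.merge_asof(
    world_gaze,
    surface_gaze[["gaze_timestamp", "x_norm", "y_norm", "on_surf"]],
    on="gaze_timestamp",
    direction="nearest",
)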

user-e7add5 02 November, 2024, 07:55:39

Hi, I am just getting to know the Neon and checked the data output it creates. I would like to link the aoi.name to the start timestamp [ns]. I think currently they are in different files, but for my purposes it would be great to be able to link them so that I could have a timeline of AOI visits. Any tips for this?

user-29a91e 03 November, 2024, 23:17:59

Hello! I have a pair of Neon Glasses that I am having trouble connecting to the LSL Lab Recorder. It is not showing up under ‘Record from Streams’. I have tried different troubleshooting tips like trying different wifi networks, connecting from a personal hotspot, and disabling firewall connections. Is there anything you would suggest to connect the Neon Glasses to LSL?

user-f43a29 04 November, 2024, 09:17:12

Hi @user-29a91e , may I first confirm the following:

  • Is the "Stream over LSL" option in the settings enabled?
  • What version of the Neon Companion App do you have installed?
user-480f4c 04 November, 2024, 07:19:25

Hi @user-e7add5! To get the start/end timestamp [ns] (available in fixations.csv) for the fixations that appear in the aoi_fixations.csv, you can use the pandas function merge to merge the two dataframes, e.g., if df1 = aoi_fixations.csv, and df2 = fixations.csv:

merged_df = pd.merge(
    df1,
    df2[['section id', 'recording id', 'fixation id', 'start timestamp [ns]', 'end timestamp [ns]']],
    on=['section id', 'recording id', 'fixation id'],
    how='left'
)

This will merge the first and second data frames and add the start timestamp [ns] and end timestamp [ns] columns from the second DataFrame into the first, based on matching section id, recording id, and fixation id.

user-e7add5 04 November, 2024, 15:17:19

@user-480f4c Thanks a lot, I will try that!

user-16354b 04 November, 2024, 10:51:35

Dear sir

Thank you for your support. Could you please answer the following questions?

I have NEON and INVISIBLE. I plan to record more than a few dozen hours of data.

Q1 Is the storage capacity of Pupil Cloud only 2 hours? (Has this been the case before?)

Q2 If I delete recordings from Pupil Cloud after reaching the 2 hours of data storage, can I immediately save new data in Pupil Cloud?

user-f43a29 04 November, 2024, 10:59:45

Hi @user-5c56d0 , I've moved your message to the 👓 neon channel.

To answer your questions:

  • We recently introduced paid Add-Ons for Pupil Cloud. Please see the related announcement for more details (https://discord.com/channels/285728493612957698/733230031228370956/1292784609938898954).
  • It sounds like you certainly purchased your devices before Oct 28th, 2024, so you should be a member of the Early Adopters Club. This means that you have 1 year of Unlimited Storage for each device until Oct 31st, 2025. (More details are contained in the announcement linked above.) You can check this by clicking on your user icon in the top right of Pupil Cloud, going to Account Settings, and then Cloud Add-ons.
  • Otherwise, the free accounts can access 2 hours of recordings on Pupil Cloud, per device.
  • I assume with question 2, you mean that if you delete recordings from Pupil Cloud, does this then free up the 2 hour quota, so that you can immediately work with new data on Pupil Cloud? If so, then yes, but this will not be necessary while you have the Unlimited Storage Add-on.
user-d407c1 04 November, 2024, 11:20:00

Hi @user-e7add5 👋! To add to my colleague's response https://discord.com/channels/285728493612957698/1047111711230009405/1302894963050151946, I wanted to clarify that there isn’t currently an AOI timeline visits feature within Cloud. But if you’d like to see this feature in Cloud, please consider suggesting it in the 💡 features-requests channel.

In the meantime, you can use this gist to programmatically generate a timeline of visits like the one you mention; see the image below.

Chat image
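If you'd rather script it yourself, a rough sketch of the same idea (assuming the aoi_fixations.csv and fixations.csv files from a Cloud download; the exact column names, including 'aoi name', are assumptions to check against your export):

import pandas as pd
import matplotlib.pyplot as plt

# Assumed file and column names - check your Cloud download
aoi_fix = pd.read_csv("aoi_fixations.csv")
fix = pd.read_csv("fixations.csv")
merged = pd.merge(
    aoi_fix,
    fix[["section id", "recording id", "fixation id", "start timestamp [ns]", "end timestamp [ns]"]],
    on=["section id", "recording id", "fixation id"],
    how="left",
).dropna(subset=["start timestamp [ns]", "end timestamp [ns]"])

# One horizontal lane per AOI; each bar spans one fixation on that AOI
t0 = merged["start timestamp [ns]"].min()
aois = sorted(merged["aoi name"].unique())
fig, ax = plt.subplots()
for lane, aoi in enumerate(aois):
    grp = merged[merged["aoi name"] == aoi]
    spans = [((s - t0) / 1e9, (e - s) / 1e9)
             for s, e in zip(grp["start timestamp [ns]"], grp["end timestamp [ns]"])]
    ax.broken_barh(spans, (lane - 0.4, 0.8))
ax.set_yticks(range(len(aois)))
ax.set_yticklabels(aois)
ax.set_xlabel("time since first fixation [s]")
plt.show()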

user-e7add5 04 November, 2024, 15:19:32

@user-d407c1 Thank you, will also try that!

user-5c56d0 04 November, 2024, 12:48:59

Dear Rob

user-f43a29 04 November, 2024, 15:26:50

Hi @user-5c56d0 , just letting you know that your message was incomplete.

user-a09f5d 04 November, 2024, 16:11:15

Hi, I am interested in finding the 3D (X, Y and Z) gaze location when tracking an object moving in 3D. The gaze ray cast provided in gaze.csv isn't appropriate for this, since it assumes that the gaze location (as X, Y, Z coordinates) is always the point at which the ray intersects an object; however, this is not always the case in my experiment. For example, if you fixate on the tip of your finger for a few seconds and then move your finger away while maintaining the same fixation/focus, the ray cast would assume you were now fixating on the far wall, even though in reality your gaze is still fixated on an empty point in space where your finger used to be.

Chat image

user-a09f5d 04 November, 2024, 16:11:20

Therefore, following a conversation with @user-cdcab0 a little while back, I have instead opted to use the optical axes data provided in 3d_eye_states.csv (the eventual hope is to use the visual axes instead) to find the point at which the optical axes from each eye intersect, and take this as the 3D gaze position. However, the gaze estimates calculated in this way seem to be wildly off and don't seem to change over the course of the experiment. Furthermore, when I use a visualization to compare the optical axes and the ray cast from gaze.csv, they are very different. For example, in the attached figure, the green and blue lines represent the right and left optical axes respectively (from data in 3d_eye_states.csv), while the red line is the gaze ray cast (from data in gaze.csv). The representations of the two eyes and the origin (Neon world camera) are too small to see in this example without zooming in. As you can see from the figure, the positions of the optical axis vectors and the gaze ray vector are very different.

Do you know why this is? Based on my knowledge of where the subject was looking during the experiment, I believe that it is the optical axis rays that are wrong, but I have no idea why, particularly given that I assume the optical axes are used in the calculation of the gaze ray?! Additionally, when you compare different frames, the gaze ray moves around as the subject looks around the room, whereas the optical axes seem to mostly just point in the same rough direction (with some exceptions).

user-d407c1 04 November, 2024, 16:59:37

Hi @user-a09f5d 👋! Firstly, may I ask how you transform the optical axis vectors from the eye origins to the scene camera? It looks like you are only shifting the origin.

user-a09f5d 04 November, 2024, 22:14:47

I first use tag aligner to get 'aligned_poses.csv'. I then convert 'translation_X/Y/Z' into mm (as the AprilTag used for tag aligner was defined in cm):

aligned_poses_data['translation_x [mm]'] = aligned_poses_data['translation_x'] * 10
aligned_poses_data['translation_y [mm]'] = aligned_poses_data['translation_y'] * 10
aligned_poses_data['translation_z [mm]'] = aligned_poses_data['translation_z'] * 10

I then use timestamps to match the data in aligned_poses_data (from aligned_poses.csv) with the data in Eye_state_data (from 3d_eye_states.csv):

aligned_poses_data['unix_start_timestamp'] = aligned_poses_data['start_timestamp'] * 1e9
aligned_poses_data['unix_end_timestamp'] = aligned_poses_data['end_timestamp'] * 1e9

def find_matching_pose(row):
    matches = aligned_poses_data[(aligned_poses_data['unix_start_timestamp'] <= row['timestamp [ns]']) &
                                 (aligned_poses_data['unix_end_timestamp'] >= row['timestamp [ns]'])]
    if not matches.empty:
        match = matches.iloc[0]
        return pd.Series({'matched aligned pose X [mm]': match['translation_x [mm]'],
                          'matched aligned pose Y [mm]': match['translation_y [mm]'],
                          'matched aligned pose Z [mm]': match['translation_z [mm]']})
    else:
        return pd.Series({'matched aligned pose X [mm]': None,
                          'matched aligned pose Y [mm]': None,
                          'matched aligned pose Z [mm]': None})

# Apply the function to each row in eye_states
Eye_state_data[['matched aligned pose X [mm]', 'matched aligned pose Y [mm]', 'matched aligned pose Z [mm]']] = Eye_state_data.apply(find_matching_pose, axis=1)
user-29a91e 04 November, 2024, 17:54:24

Yes the "Stream over LSL" option in the settings is enabled and we have the 2.8.34 version of the app installed.

user-f43a29 06 November, 2024, 00:41:03

Hi @user-29a91e , could you open a support ticket in 🛟 troubleshooting about this? Thanks.

user-f43a29 05 November, 2024, 09:05:42

Hi @user-29a91e , let me check with the team today and update you.

user-a09f5d 04 November, 2024, 22:15:04

I then calibrate the position of the eyeball centers:

# Use the aligned poses to calibrate the position of eyeball center
Eye_state_data['aligned eyeball center left x [mm]'] = Eye_state_data['eyeball center left x [mm]'] + Eye_state_data['matched aligned pose X [mm]']
Eye_state_data['aligned eyeball center left y [mm]'] = Eye_state_data['eyeball center left y [mm]'] + Eye_state_data['matched aligned pose Y [mm]']
Eye_state_data['aligned eyeball center left z [mm]'] = Eye_state_data['eyeball center left z [mm]'] + Eye_state_data['matched aligned pose Z [mm]']
Eye_state_data['aligned eyeball center right x [mm]'] = Eye_state_data['eyeball center right x [mm]'] + Eye_state_data['matched aligned pose X [mm]']
Eye_state_data['aligned eyeball center right y [mm]'] = Eye_state_data['eyeball center right y [mm]'] + Eye_state_data['matched aligned pose Y [mm]']
Eye_state_data['aligned eyeball center right z [mm]'] = Eye_state_data['eyeball center right z [mm]'] + Eye_state_data['matched aligned pose Z [mm]']

... and use this calibrated position of eyeball center as the origin of the optical axes in the previous figure:

# Define the centers of the eyeballs
left_eye_center = np.array([eye_state_row['aligned eyeball center left x [mm]'], eye_state_row['aligned eyeball center left z [mm]'], eye_state_row['aligned eyeball center left y [mm]']])
right_eye_center = np.array([eye_state_row['aligned eyeball center right x [mm]'], eye_state_row['aligned eyeball center right z [mm]'], eye_state_row['aligned eyeball center right y [mm]']])
# NOTE: Z and Y are switched because Z (which is depth in Pupil Labs' coordinate space) needs to be plotted on the plot's Y axis, and Y (which is height) needs plotting on the vertical Z axis of the plot.
user-a09f5d 04 November, 2024, 22:15:11

Now the last bit is where I suspect I may be going wrong, but I am not sure. To calculate the end point of each optical axis I do the following:

 # Define the optical axes
optical_axis_left = np.array([eye_state_row['optical axis left x'], eye_state_row['optical axis left z'], eye_state_row['optical axis left y']])
optical_axis_right = np.array([eye_state_row['optical axis right x'], eye_state_row['optical axis right z'], eye_state_row['optical axis right y']])

# Calculate the end points of the optical axis lines
optical_axis_length = 10000 #To increase the length of the optical axes beyond the limits of the eye model. 
left_eye_optical_axis_end = left_eye_center + optical_axis_left * optical_axis_length
right_eye_optical_axis_end = right_eye_center + optical_axis_right * optical_axis_length
user-a09f5d 04 November, 2024, 22:24:21

For the gaze ray cast I do something similar, except I use the aligned_poses.csv as the origin:

aligned_origin = np.array([eye_state_row['matched aligned pose X [mm]'], eye_state_row['matched aligned pose Z [mm]'], eye_state_row['matched aligned pose Y [mm]']])

...and the end point of the gaze ray is calculated as:

# Calculate the end point of the ray
neon_gaze_ray_x_end = gaze_ray_length * np.cos(eye_state_row['matched elevation [deg]']) * np.sin(eye_state_row['matched azimuth [deg]']) # left-right direction. Positive x points to the right. Negative x points to the left
neon_gaze_ray_z_end = gaze_ray_length * np.cos(eye_state_row['matched elevation [deg]']) * np.cos(eye_state_row['matched azimuth [deg]']) # depth. Positive z points forward (away from the observer). Negative z points backward (towards the observer)
neon_gaze_ray_y_end = gaze_ray_length* np.sin(eye_state_row['matched elevation [deg]']) # height. Positive y points upward. Negative y points downward

'matched elevation [deg]' and 'matched azimuth [deg]' are the time-matched values of 'elevation [deg]' and 'azimuth [deg]' from gaze.csv.

user-24f54b 05 November, 2024, 00:00:57

Hello, is it possible to start recording from the bare metal neon using a trigger signal from another device or can it only be started from the app?

user-480f4c 05 November, 2024, 07:25:20

Hi @user-24f54b! Yes, it's possible to control your recordings remotely and programmatically using the Real-Time API. You can read more in our docs, which include some code snippets on how to start/stop recordings programmatically, send events, and more: https://docs.pupil-labs.com/neon/real-time-api/tutorials/
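For reference, a minimal sketch with the simple Real-Time API client (pip install pupil-labs-realtime-api); how you receive the trigger signal from your other device is up to you:

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()           # find the Neon Companion device on the local network
# ... wait for your external trigger signal here ...
recording_id = device.recording_start()  # starts a recording, just like pressing the button in the app
device.send_event("external trigger")    # optionally mark the moment the trigger arrived
# ... experiment runs ...
device.recording_stop_and_save()
device.close()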

user-d407c1 05 November, 2024, 08:47:42

Hi @user-a09f5d, at a quick glance, I noticed several potential confounding factors in your approach. If I understand correctly, you’re finding the relationship with your 3D environment using Tag Aligner, correct? Then:

  1. You convert your measurements to millimeters.

  2. You identify the corresponding pose for the specific timestamp.

  3. You shift/translate the eyeball centers using the aligned position, which is fine. However, swapping the axes at this stage seems to add unnecessary complexity. Could you share which tool you are using for plotting and why you're altering the axes? We adhere to OpenCV’s convention, and I suggest making plotting transformations the final step.

  4. You use the aligned pose as the origin for your gaze.

  5. For optical axes, it seems you're not considering the rotation of the aligned poses. You should apply the rotation matrix to your vectors to align them properly. For example:

   optical_axis_left_aligned = rotation_matrix @ optical_axis_left_original

Then, you can swap the axes as you do to plot.

A few areas where I'd need clarification: How do you calculate the eye state elevation and azimuth while accounting for the swapped axes? Keep in mind that gaze elevation and azimuth do not account for the axis swapping, and the gaze vector should also be rotated to match the aligned pose.

Here’s an example for converting between Cartesian and spherical coordinates if it helps.

Also, note that the np.cos() and np.sin() functions take angles in radians.
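As an illustration only, here is one common convention for that conversion (OpenCV-style camera coordinates with x right, y down, z forward, and elevation positive upward); the exact convention used by Neon should be checked against the linked example:

import numpy as np

def spherical_to_cart(elevation_deg, azimuth_deg):
    # Assumed convention: x right, y down, z forward; elevation positive upward, azimuth positive to the right
    el = np.radians(elevation_deg)
    az = np.radians(azimuth_deg)
    x = np.cos(el) * np.sin(az)
    y = -np.sin(el)  # y points down, so positive elevation gives negative y
    z = np.cos(el) * np.cos(az)
    return np.array([x, y, z])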

user-a09f5d 06 November, 2024, 21:10:45

Hi @user-d407c1 Thanks for your helpful response.

*If I understand correctly, you’re finding the relationship with your 3D environment using Tag Aligner, correct?* Yes, I am interested in smooth pursuit eye movements when tracking objects in the real world. I am recording an observer's eye movements as they track a moving target in 3D. I use Tag Aligner to convert the eye movement data into the same coordinate system that I use to track the moving target.

*You shift/translate the eyeball centers using the aligned position, which is fine. However, swapping the axes at this stage seems to add unnecessary complexity.* This is a fair critique. I'm not sure why I chose to swap the axes at this stage; however, I've now simplified the code so that I only flip the axes when plotting and not before.

*Could you share which tool you are using for plotting and why you're altering the axes? We adhere to OpenCV’s convention, and I suggest making plotting transformations the final step.* I've been using PyVista (which is what I used for the image in my previous post) and matplotlib. Both packages treat Z as the 'vertical' axis (which intuitively should be the vertical position/height when visualising a 3D scene). This is why I swap Z and Y when plotting, since Y is the vertical axis (the height of gaze) in OpenCV’s convention.

*For optical axes, it seems you're not considering the rotation of the aligned poses. You should apply the rotation matrix to your vectors to align them properly.* Thank you for catching this. This was definitely one of the places I was going wrong. I have modified the code to take the rotation into account. I now calculate the optical axes as follows:

user-ac085e 06 November, 2024, 00:33:40

Hello all, I was wondering if someone could tell me how the "time to first fixation [ms]" variable is calculated in the aoi_metrics.csv file from an image reference mapping enrichment download? I'm using a temporal selection but the values are not matching up with manual verification of the gaze tracking through the app. For example, I'm seeing a value of 139993ms in the csv file but looking at the video/gaze tracking itself, the value should be around 300ms, or 1200ms at most even if the first gaze was missed.

user-d407c1 06 November, 2024, 07:49:14

Hi @user-ac085e 👋 !

Quick question: is the time to first fixation (TTFF) you're reporting from the CSV files or the visualization? Also, how many recordings are in the project? And do you have a custom event set for the start of the enrichment?

By default, when you run an enrichment, it processes all recordings in the project that have the start and end event delimiters set on the temporal selection (typically recording.begin and recording.end). This means the enrichment runs on the entirety of each recording. To make this more efficient, you can set specific start and end points, trimming the enrichment to only the sections where the image appears. This approach reduces processing time and minimizes false detections.

Currently, however, TTFF is reported from recording.begin, not a specific custom start event. If you want to calculate TTFF from a custom start point, you’ll need to subtract that start event time from the reported TTFF.

In the visualization, TTFF values are averaged across all recordings. You can select which recordings contribute to this average using the left sidebar. If your project includes recordings where the reference image appears later, this could explain the TTFF you’re observing.

How to address this:

  • When you download the data, check the aoi_metrics.csv file, which provides metrics per recording rather than an average.
  • To adjust the AOI visualization, select specific recordings to include using the left sidebar.
  • If you want TTFF from a custom enrichment start event rather than recording.begin, you’ll need to subtract the start event time from the reported TTFF.

For a timeline view of AOI visits, which could help clarify TTFF, you may find this reference useful:
https://discord.com/channels/285728493612957698/1047111711230009405/1302955505169465444

user-ac085e 06 November, 2024, 00:42:49

Also, the support page states "Average time in milliseconds until the corresponding area of interest gets fixated on for the first time in a recording." What is it taking an average of? If it's the first time it gets fixated on in a recording, isn't that only one instance?

user-f43a29 06 November, 2024, 10:43:56

Hi @user-29a91e , actually, may I ask, what Operating System is running on your computer?

user-29a91e 07 November, 2024, 18:17:54

Windows 11

user-8825ab 06 November, 2024, 11:08:50

Hi there, can I ask how to play the video fixation by fixation in Pupil Labs software?

user-d407c1 06 November, 2024, 11:30:54

Hi @user-8825ab ! Are you using Neon? And if so, are you using Pupil Cloud?

user-8825ab 06 November, 2024, 11:29:28

Hi there, I also have a question about how we can give access to other people to make edits on the annotations, while we also keep the original data in hand. Do we need to create an account for them? If we can, could anyone talk me through it please? Much appreciated.

user-8825ab 06 November, 2024, 11:32:48

yes

user-d407c1 06 November, 2024, 11:45:39

Currently, there isn’t a way to navigate from fixation to fixation in the main Cloud Player. You can choose the velocity or go frame by frame (Shift + Arrow Key) but not jump across fixations. When using the manual mapper, you have this option as you will be labelling fixations to an image. But if fixation navigation in Cloud’s player would be helpful for your workflow, feel free to suggest it in the 💡 features-requests

Our offline solution, Neon Player, offers fixation-by-fixation navigation once you enable the fixation detector. You can then navigate across fixations by pressing F or f, for backward or forward.

Regarding annotations, are you referring to events? You can collaborate with others by inviting them to your workspace. They’ll need to create an account, and you can invite them from your workspace settings on the top left.

user-8825ab 06 November, 2024, 11:33:08

is it on manual mapper?

user-d407c1 06 November, 2024, 11:47:25

Have you already had your onboarding session? If not, you might be eligible for one! Send an email to info@pupil-labs.com with your order ID, and we can confirm your eligibility and provide the next steps to book it.

user-8825ab 06 November, 2024, 11:47:55

Thanks. By inviting them, will they be able to make changes to the original data? We want to keep the original data without any changes or edits.

user-d407c1 06 November, 2024, 11:50:15

Yes, the events are global, there is no way to create a subset of events for a specific person. They can, though, download the Native Recording Format, load it into Neon Player and work with it locally.

user-8825ab 06 November, 2024, 14:03:17

it works thank you Miguel

user-ac085e 06 November, 2024, 17:59:48

Hi Miguel,

The TTFF I'm reporting is from the CSV file. There are 141 recordings in the project, all with custom event sets for the start and end of the enrichment. I created custom "event-start" and "event-end" events for each video. I then ran an enrichment using those events, see attached.

Is there a way to find the event start time for each individual recording besides manually looking at each video? Is that reported in any of the output files?

Chat image

user-d407c1 06 November, 2024, 18:12:05

Yes, if you download the project data, it will include an events.csv file, which lists all events for each recording. This means you’ll have, per recording, the timestamps for both the recording.begin event and your event.start.

To calculate the time from the beginning of the recording to your event, simply subtract the timestamp of recording.begin from event.start. This difference gives you the offset to remove from the TTFF. Just a note: timestamps are in nanoseconds.

If it would be helpful, I can provide an example script for this tomorrow. Let me know!

user-ac085e 06 November, 2024, 18:08:29

I'm looking to get TTFF from the beginning of the event. Since I have 137 videos, I'd rather not have to go through them manually.

user-ac085e 06 November, 2024, 18:23:00

It is likely that participants looked at the particular AOI that I am trying to calculate metrics on before the start of the event as well. Given this, it seems like using your calculation method will still report the first fixation based on the entire recording, rather than the first fixation within the desired time window. Is my understanding correct?

user-d407c1 07 November, 2024, 08:43:34

@user-ac085e First, let’s clarify how time to first fixation (TTFF) works. Even if the start time is set at the beginning of the recording, detecting the first fixation on the AOI requires:

  1. The image to be detected,
  2. A fixation to occur on the image, and
  3. The fixation to fall within the designated area on the image.

So, if you define your reference image mapper enrichment from event.start to event.end, the image will only be detected within that timeframe (step 1). This means there can’t be any valid fixations on the AOI before event.start, even if a fixation occurs there.

Therefore, subtracting the time elapsed from recording.begin to event.start from the reported TTFF will give you the time to first fixation relative to event.start.
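For illustration only, a rough sketch of that subtraction using events.csv (the column names are assumptions; check the headers of your export):

import pandas as pd

events = pd.read_csv("events.csv")        # assumed columns: recording id, name, timestamp [ns]
metrics = pd.read_csv("aoi_metrics.csv")  # assumed columns: recording id, time to first fixation [ms], ...

# Per-recording offset from recording.begin to event.start, in milliseconds
begin = events[events["name"] == "recording.begin"].set_index("recording id")["timestamp [ns]"]
start = events[events["name"] == "event.start"].set_index("recording id")["timestamp [ns]"]
offset_ms = (start - begin) / 1e6

metrics["ttff from event.start [ms]"] = (
    metrics["time to first fixation [ms]"] - metrics["recording id"].map(offset_ms)
)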

I am attaching a script that does exactly that, but since you mentioned sections, it uses the sections.csv files which makes it a bit cleaner than recursively accessing the events.csv.

Note on Section IDs & Sections.csv
The section ID uniquely identifies each segment defined from event.start to event.end in your recording. You can run the enrichment on multiple sections—see the attached image for reference. A TimeSeries download is also technically considered an enrichment and also contains a sections.csv with the recording sections, which in this case is the whole recording.

About Project Data:
To access your project’s data, navigate to your Project, then go to Downloads on the left sidebar. There, you’ll find a TimeSeries download that includes the TimeSeries data for all recordings within the project, along with the corresponding sections.csv.

Chat image ttff_from_event.py

user-a09f5d 06 November, 2024, 21:12:10
## Function to convert quaternion representation of the optical axis rotation into a rotation matrix ##
def quaternion_to_rotation_matrix(q):
    x, y, z, w = q

    # Normalize the quaternion
    norm = np.sqrt(x * x + y * y + z * z + w * w)
    x /= norm
    y /= norm
    z /= norm
    w /= norm

    # Construct the rotation matrix
    rotation_matrix = np.array([
        [1 - 2 * y * y - 2 * z * z, 2 * x * y - 2 * z * w, 2 * x * z + 2 * y * w],
        [2 * x * y + 2 * z * w, 1 - 2 * x * x - 2 * z * z, 2 * y * z - 2 * x * w],
        [2 * x * z - 2 * y * w, 2 * y * z + 2 * x * w, 1 - 2 * x * x - 2 * y * y]
    ])

    return rotation_matrix

for index, eye_state_row in Eye_state_data.iterrows():
    # Define the centers of the eyeballs
    left_eye_center = np.array([eye_state_row['aligned eyeball center left x [mm]'],
                                eye_state_row['aligned eyeball center left y [mm]'],
                                eye_state_row['aligned eyeball center left z [mm]']])
    right_eye_center = np.array([eye_state_row['aligned eyeball center right x [mm]'],
                                 eye_state_row['aligned eyeball center right y [mm]'],
                                 eye_state_row['aligned eyeball center right z [mm]']])

    # Define the optical axes
    optical_axis_left_original = np.array([eye_state_row['optical axis left x'],
                                           eye_state_row['optical axis left y'],
                                           eye_state_row['optical axis left z']])
    optical_axis_right_original = np.array([eye_state_row['optical axis right x'],
                                            eye_state_row['optical axis right y'],
                                            eye_state_row['optical axis right z']])

    # Get quaternion
    q = np.array([eye_state_row['matched aligned rotation X'], eye_state_row['matched aligned rotation Y'],
                  eye_state_row['matched aligned rotation Z'], eye_state_row['matched aligned rotation W']])
user-a09f5d 06 November, 2024, 21:12:34
    # Convert quaternion to rotation matrix
    rotation_matrix = quaternion_to_rotation_matrix(q)

    # Apply the rotation matrix to the optical axes
    optical_axis_left_aligned = rotation_matrix @ optical_axis_left_original
    optical_axis_right_aligned = rotation_matrix @ optical_axis_right_original

    # Calculate the end points of the optical axis lines
    left_eye_optical_axis_end = left_eye_center + optical_axis_left_aligned * optical_axis_length
    right_eye_optical_axis_end = right_eye_center + optical_axis_right_aligned * optical_axis_length

I used AI to help with the quaternion_to_rotation_matrix function, so please let me know if there are any problems with this approach!

user-a09f5d 06 November, 2024, 21:13:12

*A few areas where I'd need clarification: How do you calculate the eye state elevation and azimuth while accounting for the swapped axes? Keep in mind that gaze elevation and azimuth do not account for the axis swapping, and the gaze vector should also be rotated to match the aligned pose.*

I now take the time-matched values of elevation and azimuth from gaze.csv and then do the following:

# Convert to radians
elevation_radians = math.radians(eye_state_row['matched elevation [deg]'])
azimuth_radians = math.radians(eye_state_row['matched azimuth [deg]'])

# Calculate the end point of the ray
neon_gaze_ray_axis_x = gaze_ray_length * np.cos(elevation_radians) * np.sin(azimuth_radians) # Equation for left-right direction.
neon_gaze_ray_axis_y = gaze_ray_length * np.sin(elevation_radians)  # Equation height (up-down)
neon_gaze_ray_axis_z = gaze_ray_length * np.cos(elevation_radians) * np.cos(azimuth_radians) # Equation for depth (forward-backward)

neon_gaze_ray_axis_original = np.array([neon_gaze_ray_axis_x, neon_gaze_ray_axis_y, neon_gaze_ray_axis_z])

# Apply the rotation matrix to the gaze ray vector
neon_gaze_ray_axis_aligned = rotation_matrix @ neon_gaze_ray_axis_original

I then use neon_gaze_ray_axis_aligned to plot the gaze ray vector. When setting the start and end positions of the gaze ray, it is now only at this point that I switch z and y, so that y is on the vertical axis of the plot. I also do the same for the optical axes.

ray_end = aligned_origin + neon_gaze_ray_axis_aligned

#Set gaze ray
gaze_ray = (pv.Line([aligned_origin[0], aligned_origin[2], aligned_origin[1]],
                    [ray_end[0], ray_end[2], ray_end[1]]))
user-a09f5d 06 November, 2024, 21:13:19

After making the above correction the optical axis vectors do seem to point and move in the direction that I would expect. However, the gaze ray and optic axes still do not line up most of the time when I visualize them in PyVista, so I must still be going wrong somewhere.

*Here’s an example for converting between Cartesian and spherical coordinates if it helps.*

I noticed that in the example you provided the function for converting "spherical_to_cart" is different from the equations I am currently using to convert elevation and azimuth to XYZ (see code snippet above). Could this be the reason why the gaze ray and optical axes are not pointing in the same direction within my visualisation?

Thanks a lot for your help!

user-ac085e 06 November, 2024, 23:33:46

What do "section id"s represent in the fixation.csv data? I'm trying to figure out a way around my problem but I need to know what this is referencing exactly.

user-ac085e 07 November, 2024, 00:03:19

@user-d407c1 What is the "project data" download you are referring to? The timeseries.csv?

user-d407c1 07 November, 2024, 09:05:53

3D Space Gaze vector and object collision?

user-00729e 07 November, 2024, 11:08:39

Hello,

I am in the middle of running a study and suddenly have a problem with the realtime API trying to grab the camera images. It worked alright and suddenly stopped working. Do you know where this might be coming from? I get the following messages:

Stopping run loop for rtsp://192.168.0.101:8086/?camera=eyes
Stopping run loop for rtsp://192.168.0.101:8086/?camera=world
Error on stream: RTSPConnectionError('Unable to connect to 192.168.0.101:8086'). Reconnecting...

user-d407c1 07 November, 2024, 11:16:13

Hi @user-00729e ! Could you try disconnecting Neon, force-stopping the app, clearing the app’s data, reconnecting, and then checking if this resolves the issue? If not, please open a ticket in 🛟 troubleshooting , and we’ll continue assisting you there. When opening the ticket, please include additional relevant details like the Companion App version, companion device model, and real-time API version.

user-00729e 07 November, 2024, 11:09:46

I noticed that DEVICE.receive_gaze_datum() works just fine, but DEVICE.receive_eyes_video_frame() etc. will stop my code.

user-00729e 07 November, 2024, 11:18:16

I already tried restarting the device. I can try these steps, too and let you know. Thank you

user-00729e 07 November, 2024, 12:17:58

@user-d407c1 Clearing storage (not just cache) did the trick. Thank you so much 🙏

user-be5f86 07 November, 2024, 14:45:18

Real-Time Gaze & Pupil Data Collection

user-bda2e6 07 November, 2024, 23:04:04

Hi! I’m wondering if it’s possible to have a private real-time conversation with someone about something on using Neon and Psychopy together. Thank you!

user-ac085e 08 November, 2024, 00:56:41

Thank you @user-d407c1 for the clarification and the Python code. With a small alteration (removing one of the two negations, making the resulting variable a sum of the previous two values rather than the difference) it worked! Thank you again.

user-d407c1 08 November, 2024, 09:21:57

Hi @user-ac085e ! I am glad that it worked for you, although I do not understand the change that you made; it does not make sense unless we are not on the same page about what you want to achieve.

Chat image

user-f43a29 08 November, 2024, 09:05:38

Hi @user-bda2e6 , have you had your free 30-minute Onboarding session already? If not, please send us an email with the original Order ID: info@pupil-labs.com

Otherwise, we typically keep such communication to email or this Discord server.

If you think your question rather falls under a Workshop or Custom Consultancy, then you can also schedule a meeting about that.

user-bda2e6 08 November, 2024, 19:44:40

@user-f43a29 Thank you for the reply! Then I’ll ask the questions here first

user-f43a29 08 November, 2024, 09:06:22

LSL Connection Issue

user-5da3ee 08 November, 2024, 09:24:24

Is there a way to wirelessly stream the Neon recordings from the companion device to the PC? Right now we have to physically plug in the companion device, and it could be easier if we could do it wirelessly.

user-23a923 08 November, 2024, 09:24:35

Hi @user-5c527e , you could potentially use a third-party Android app that allows file transfer over a network or the Internet. However, please note that it should probably not be a syncing program, as these could interfere with data that is being written to the hard drive during a recording.

user-ac085e 08 November, 2024, 18:36:53

Hi Miguel, your diagram does represent what I wanted out of the data, so thank you. The issue, I think, was simply that the "time_diff_ms" variable was negative due to the way it was calculated in the code. Subtracting a negative value ended up adding that value to the "time to first fixation [ms]" value. I've attached a screenshot of the data to show this.

Chat image

user-ac085e 08 November, 2024, 18:40:08

Specifically, this code had the variables swapped:

# Calculating the time difference between enrichment and timeseries section start times in milliseconds
merged_sections["time_diff_ms"] = (
    merged_sections["section start time [ns]_timeseries"]
    - merged_sections["section start time [ns]_enrichment"]
) / 1e6

user-bda2e6 08 November, 2024, 19:48:58

So what I’m doing is screen-based experiments in PsychoPy. I tried the demo in the documentation page and it worked fine. My questions are: when I check the save hdf5 option in PsychoPy and save the eye movement data in hdf5 files, there are MonocularEyeSampleEvent and BinocularEyeSampleEvent tables. What is the sampling rate of the gaze data saved in these? What is the x and y range of the relative within-screen gaze data? Is it -1 to 1?

user-f43a29 11 November, 2024, 09:56:34

Hi @user-bda2e6 , I'll answer the PsychoPy questions first:

  • The format of the data files is specified here. The sampling rate of the recorded gaze data, as collected by and saved on the phone, is by default 200 Hz, unless you have changed the Gaze Data Rate setting. The sampling rate of the streamed data will inherit this setting, but the exact rate will depend on the quality & type of your network connection. Note that the streamed data is meant more for monitoring, reacting, and interactive purposes. We recommend starting a recording on the phone in parallel (c.f., the Eyetracker Record component in PsychoPy Builder). This will give you access to the full gaze data and allow you to easily use the rest of the tools in the Neon ecosystem.
  • Regarding Events, these are essentially "triggers". They let you label timepoints with relevant info, like "stimulus_1_started", "stimulus_1_ended", etc. See this link for more details about the NeonEvent component.
  • According to the PsychoPy documentation, the X & Y ranges of the mapped gaze data are in the units in use by the ioHub Display device. (see here for more details).
user-bda2e6 08 November, 2024, 19:53:25

What is the NeonEvent component and what does it do?

user-bda2e6 08 November, 2024, 19:53:55

I think I may have more questions but these are more pressing at the moment. Thank you very much!

user-bda2e6 08 November, 2024, 22:41:45

A new question is, the demo was running perfectly yesterday. But when I tried it again without changing anything, psychopy did not get the gaze coordinates from the glasses anymore. And these are the errors I got

Chat image

user-f43a29 11 November, 2024, 09:58:35

Regarding your issue running the demo, could you do the following?

  • Long press the Neon Companion App icon in the main Android launcher screen
  • Choose App Info
  • Click Storage & cache
  • Click Clear cache -> This will not delete any data or recordings; do not worry

Then, try the demo again. If that does not resolve your issue, then please open a support ticket in 🛟 troubleshooting

user-980c8e 11 November, 2024, 10:09:45

Hi, I'm doing a study on the use of a device (smartphone) whilst driving a train. This means that a driver is going to use a new application on their smartphone whilst driving a train. I would like to get insights on how much of their time is spent looking outside of the train, versus how much time they spend looking at their phone. Is this possible even though participants might pick up their phone and put it down somewhere else in front of them during a recording?

user-f43a29 11 November, 2024, 11:00:32

Hi @user-980c8e , yes, this is possible. Do you mean that your application will run on the same phone as the Neon Companion App or will it run on a second, separate phone?

user-980c8e 11 November, 2024, 11:01:31

Hi Rob, it will run on a separate phone, which the user might pick up to use and put down somewhere else during the measurements.

user-f43a29 11 November, 2024, 11:34:31

Ok, then when the Neon recordings have been uploaded to Pupil Cloud, you can mark the time points when the participants start/stop looking out the window (or start/stop looking at the phone) with Events. You can then use the Events to define analysis windows when creating an Enrichment. Alternatively, if you download the data, then you will also be provided with the Events data and can use that in custom analysis pipelines, say in Python or MATLAB.

If you or a colleague will be on the train at the same time and you have a local dedicated WiFi router (or hotspot) to connect Neon to your computer, then you can also use Neon Monitor to mark the recording with events in real-time.

If you'd prefer a more automatic solution, then you could try a computer vision algorithm that detects/segments objects. For instance, you could try running YOLO on the scene camera video to determine when the phone is in view and correlate that with the gaze/fixation data to also determine when they are indeed looking directly at the phone. This is just an idea; it has not been thoroughly explored.
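To make that concrete, an untested sketch using the third-party ultralytics package on a downloaded scene video (file names are assumptions, and the gaze-matching step is only indicated):

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                   # pretrained COCO model; it includes a "cell phone" class
video = cv2.VideoCapture("scene_video.mp4")  # assumed file name for the downloaded scene video

frame_idx = 0
phone_frames = []                            # (frame index, phone bounding boxes) where a phone was detected
while True:
    ok, frame = video.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    boxes = [box.xyxy[0].tolist() for box in result.boxes
             if model.names[int(box.cls)] == "cell phone"]
    if boxes:
        phone_frames.append((frame_idx, boxes))
    # Next step (not shown): map frame_idx to a timestamp (e.g. via world_timestamps.csv)
    # and test whether the matching gaze/fixation samples fall inside any of these boxes.
    frame_idx += 1

video.release()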

user-980c8e 11 November, 2024, 11:52:34

Thank you for the help!

user-bda2e6 11 November, 2024, 18:54:17

@user-f43a29 Thank you very much for the reply! I have a follow-up question. You mentioned "The sampling rate of the streamed data will inherit this setting, but the exact rate will depend on the quality & type of your network connection." Does that mean that, theoretically, the streamed data is also at 200Hz? If that's the case, is it possible to store streamed data at 200Hz? Or is this impossible due to inevitable network reasons? Another question is: the eye tracking data saved in the hdf5 file in PsychoPy, is it streamed data? Thank you!

user-f43a29 12 November, 2024, 09:41:37

Hi @user-bda2e6 , you are welcome!

The streamed data can certainly reach a streaming rate of 200Hz, but please note that screen-mapped gaze will be at 30Hz, since the AprilTags need to be detected in the scene camera feed to then map the matching gaze datum.

If you were only interested in Neon's base gaze signal (or 3D Eye State signal, that runs independent of screen-mapping), then whether your network setup supports a 200Hz stream is something that should be tested. For comparison of the extremes:

  • If you use a direct Ethernet-over-USB connection, you should most certainly achieve a 200Hz streaming rate, unless something has gone wrong.
  • If you use a work, public, or university WiFi connection, you might experience a drop in the streaming rate, due to a combined mixture of transmission latencies and traffic congestion from all other users. If you can avoid such networks, that is advisable.

A local, dedicated WiFi router is the typical method, as it can maintain high streaming rates and supports Neon's mobility. A hotspot on your phone or laptop is also a valid connection method.

You can of course store the streamed data and yes, the eyetracking data in PsychoPy's hdf5 files are the streamed data.

user-3ee243 12 November, 2024, 04:15:19

@nmt Following up on yesterday's question, we are wondering what the best solution is to collect data with people wearing glasses. Are there any accessories that would help with the standard version of Neon?

nmt 12 November, 2024, 06:49:46

Hi @user-3ee243! Contact lenses work just fine with Neon - so if your participants have those, it would be one recommendation. We don't exactly recommend wearing Neon with third-party glasses, in case Neon's cameras are occluded. In that sense, there aren't really any accessories that can be added to the standard version of Neon. We do have a prescription lens kit frame that's specifically designed for wearers who require vision correction. It's called ‘I can see clearly now’, and it comes with interchangeable prescription lenses, covering -3 to +3 diopter in 0.5 steps. You can quickly and easily swap lenses thanks to a secure magnetic connection.

user-4710f9 12 November, 2024, 15:32:28

Hello everyone,

I wanted to consult with you for some ideas on data analysis. I'm a behavioral economist working with restaurants and chains worldwide on optimizing menus to promote dishes that are more profitable and beneficial for the restaurant. I use various tools and algorithms, and one of the key tools is eye-tracking glasses. I've been using the glasses for over two years and am very satisfied with both the product and the service from Pupil Labs. Most of the studies I conduct with the glasses involve paper menus. The data I collect includes heatmaps, decision-making time, menu scanning order, dwell time in specific areas, and more.

I’d love to get your input on two things:

  1. Recently, I've been working with several fast-food chains that use self-ordering kiosks. My goal is to analyze customer behavior with these kiosks to identify behavioral patterns that can help improve the kiosk design and promote specific dishes. Do you have any ideas on how to analyze customer interactions with the kiosks? Today, I collected data from 30 participants at one of these chains, so I have data to start analyzing.
  2. Do you have any additional suggestions for analyses on regular paper menus that could enhance my research in both menu promotion and decision-making support? I also have quite a bit of data that I’ve collected over time on various types of paper menus.

Thank you very much to anyone who can help! Best regards,
Raz

user-3ee243 12 November, 2024, 15:40:42

@nmt Thank you! Is the prescription lens kit listed on the website only compatible with Invisible and not any version of Neon? Say the frame only option?

nmt 13 November, 2024, 08:54:00

Hi @user-3ee243! The Invisible lens kit is incompatible with Neon. The frames are of a different design.

user-bda2e6 12 November, 2024, 16:34:28

@user-f43a29 Thank you for the reply! What is the difference between the gaze coordinates stored in the MonocularEyeSampleEvent tab and the BinocularEyeSampleEvent tab in the hdf5 file? In the documentation, it says the monocular gaze position is the average gaze position of the two eyes. However, when I plot the gaze position from the Monocular tab and that of both eyes in the Binocular tab at the same time, the coordinates are very far off. Are they actually storing very different things?

user-f43a29 13 November, 2024, 09:54:48

Hi @user-bda2e6 , of course, no problem!

With respect to Neon, the difference between the Monocular and BinocularEyeSampleEvent is contained in our documentation:

  • The binocular-based gaze data from Neon is contained in the MonocularEyeSampleEvent (similarly when using monocular gaze estimation). The idea is that Neon analyzes both eye images to produce a singular estimate that specifies the direction of a gaze ray. This data structure is more parsimonious with Neon’s default output, since that does not contain a simultaneous separate gaze estimate for each eye, which is what the BinocularEyeSampleEvent is for.
  • 3D Eye State data is contained in BinocularEyeSampleEvent. This is because Neon independently and simultaneously estimates 3D Eye State for each eye, which aligns with the format of this data structure.

Regarding your screen-based experiment, may I ask how you plan to quantify accuracy? To clarify, the streamed gaze data, before it has been mapped to the screen, is essentially the same data as on the phone and on Pupil Cloud, since it is produced by the same gaze estimation pipeline.

user-bda2e6 12 November, 2024, 16:36:32

Another question is: since I’m doing screen-based experiments, would the streamed data be less accurate, in terms of the gaze position within the screen, than if I post-hoc process the raw data stored on the phone in Neon Player or Pupil Cloud?

user-bda2e6 12 November, 2024, 16:37:05

Thank you very much for being patient with my questions always!

user-bda2e6 14 November, 2024, 02:14:13

Thank you for the reply! @user-f43a29

user-bda2e6 14 November, 2024, 02:20:26

I was able to record and store the data in the PsychoPy hdf5 file. The sampling rate is much lower than 200Hz, and I think my internet speed should be more than enough to stream the data at 200Hz. I read on the PsychoPy forum that this may have something to do with the confidence interval: only data that falls within the confidence interval is saved. Could you confirm if this is true? And if so, can I modify the confidence somewhere in the code?

user-f43a29 14 November, 2024, 02:37:05

Hi @user-bda2e6 , my colleague, @user-cdcab0 , pointed out that I missed a detail in my message earlier (which has since been updated).

You are working with screen-mapped (i.e., screen coordinate based) gaze in PsychoPy. To map gaze to the screen, each scene camera frame is analyzed to detect the AprilTags and then the matching gaze datum for that specific scene frame is mapped and saved. This means that the default real-time screen gaze setup will stream maximally at 30Hz, the sampling rate of the scene camera.

3D Eye State data, however, streams independently of that process and still streams at 200Hz, even in a screen-based context.

If you want to post-hoc analyze screen-mapped gaze at 200Hz, then you can also run a recording in parallel and try the Marker Mapper Enrichment on Pupil Cloud. This will however not use the hdf5 files from PsychoPy, in case that is necessary for your analysis approach.

Confidence is related to Pupil Core, so I assume you were reading posts about that. Neon currently does not report a confidence metric.

user-bda2e6 14 November, 2024, 02:22:19

And if this is not doable, supposing my internet is more than able to handle it, is there anything I can do on the coding side to save more data samples in the hdf5 (200Hz ideally)?

user-bda2e6 14 November, 2024, 02:42:22

I see! Thank you! So in short, it is impossible to get 200Hz screen-mapped data in real time; I have to process it post hoc.

user-bda2e6 14 November, 2024, 02:42:30

Is that right?

user-cdcab0 14 November, 2024, 03:15:33

Indeed - if you want screen gaze coordinates at 200Hz in PsychoPy right now, you'd have to interpolate the data

user-bda2e6 14 November, 2024, 16:10:27

Thank you! @user-cdcab0

user-bda2e6 14 November, 2024, 16:12:33

May I ask if I can get the inertial data of the glasses with PsychoPy? Like acceleration, rotation, etc.

user-cdcab0 14 November, 2024, 20:58:40

PsychoPy provides a common eyetracker interface for gaze data, but not for inertial data, so you wouldn't be able to use any of Builder's built-in components or standard PsychoPy classes to get it.

If you're comfortable coding though, you can always connect to the eyetracker manually using the Realtime Python API, but I would not recommend using both PsychoPy's eyetracker interface along with manually using the realtime-api. You will want to choose one or the other, so if you really need IMU data in PsychoPy, you should use the realtime-python-api for your gaze data too.

Having said that, do you need inertial data in realtime or just post-hoc? If you do not need it realtime, then rather than writing code to use the realtime API manually and giving up PsychoPy's eyetracker interface, I'd suggest simply creating a Neon recording during your data collection - this will save inertial data on the companion device within the recording. You can use the "Neon Event" component in Builder to send event markers (e.g., "experiment start", "trial start", "trial end", etc.) from PsychoPy to the Neon recording to assist you in synchronizing data post-hoc.
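If you do end up needing it in real time, a minimal sketch with the Realtime Python API (assuming a recent client version; the IMU method name and the returned fields should be checked against your installed version):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
gaze = device.receive_gaze_datum()  # gaze through the same API, replacing PsychoPy's eyetracker interface
imu = device.receive_imu_datum()    # assumed method name for the IMU stream in recent client versions
print(gaze.x, gaze.y)
print(imu)                          # contains accelerometer, gyroscope, and orientation data
device.close()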

user-1beb67 15 November, 2024, 02:27:22

Hey! So I have 2 Neon devices in my lab and, as part of the Early Adopters, I got one of them linked to a key that grants it unlimited storage until the specified date. However, my other device is not under that, and from what I understood from reading previous messages in this channel, the add-on works for all devices, right? Could anyone confirm that with me? I'm having some issues in data collection, as one of the glasses just hit the maximum storage limit.

user-f43a29 15 November, 2024, 09:36:13

Hi @user-1beb67 , could you send an email to [email removed] including the Serial Numbers of the devices in question? Thanks!

user-bda2e6 15 November, 2024, 19:19:06

@user-cdcab0 Is there any way to check if the QR codes are detected properly during run time?

user-cdcab0 16 November, 2024, 12:51:33

Not through an objective measure, but it's pretty easy to tell subjectively what might be wrong by just opening the scene camera preview on the companion device. It's nearly always one of three problems, or a combination:

  • Tags are too small
  • Camera is over-exposed
  • Tag margins are too small

Since you're using our PsychoPy components, you don't have to worry about the margins, but the tag size and camera exposure will affect you. When you look at the scene camera preview, can you distinguish the tags?

user-bda2e6 15 November, 2024, 19:35:50

Also, I’m pretty sure the answer is no, but I wanted to ask just in case. Does the scene camera have more than one setting? The field of view, for example? The current setting is slightly too wide for a screen-based experiment. And can I manually adjust the focus?

user-cdcab0 16 November, 2024, 12:54:03

You cannot adjust the field of view or focus, but with reasonable tag sizes that shouldn't be an issue for marker detection

user-bda2e6 17 November, 2024, 19:06:32

Thank you very much!

user-bda2e6 17 November, 2024, 19:08:03

May I ask where I can find the Neon Event Component timestamps saved in the raw data on the companion device? Can I read them into Python?

user-cad8c8 17 November, 2024, 20:34:36

Hello! Having already used the Pupil Neon module for my 2D attention visualization research recently (using the gaze-controlled cursor demo: https://docs.pupil-labs.com/alpha-lab/web-aois/), my upcoming research requires working in 3D with the Quest 3. I mainly use Three.js and WebXR/AR and have zero Unity experience. Is it possible to have an analogue of the gaze-controlled cursor demo where the WebXR experience is controlled by gaze tracking? If there is a possibility with my bare-metal Neon module, I would consider getting the Quest 3 frame mount add-on. I would love to hear your suggestions on how to go about working on this. I would also be happy if there is a tutorial for doing this.

wrp 18 November, 2024, 05:04:14

Hi @user-cad8c8 👋 Yes, gaze interaction (gaze controlled "cursor") is possible. We have a demo for this in Unity here: https://docs.pupil-labs.com/neon/neon-xr/MRTK3-template-project/ - would this be suitable for your needs?

If you want to get the mount for Neon in the Quest 3 - send us an email - sales@pupil-labs.com

Finally, future VR/AR/XR questions are best discussed in 🤿 neon-xr

user-0d28eb 18 November, 2024, 07:45:27

Regarding the new Automated Event Annotations, how does the Pupil Cloud environment connect with the 4o model? Is this done via the OpenAI servers, and if so, what are the GDPR regulations for this?

We try to run most of our models locally, since many platforms (including OpenAI) by default store all data you provide. We use the pupil labs devices for research, and have strict regulations around spreading personally identifiable information, video & audio being 2 of the strictest data-sources. Can this feature be used without OpenAI using all the data we provide?

user-d407c1 18 November, 2024, 09:16:29

Hi @user-0d28eb 👋!

Thanks for raising this concern. Pupil Cloud is not connected to OpenAI. Rather, this is just an experiment running outside Cloud that showcases the power of combining our Cloud API with a Large Multimodal Model (GPT 4o, in this case), as an example.

If you check the GitHub repository, you’ll notice that the video is downloaded from Cloud, and then specific frames (not the whole video) are sent—along with the prompt—to OpenAI GPT 4o. We chose this model because it is one of the most advanced and straightforward to use (and not everyone has the resources to run an LLM locally).

That said, you can modify the code written by my colleague @user-480f4c, who might be able to provide pointers, to use any other open-source model with vision capabilities, like Pixtral-12B or Llama-3.2-Vision. However, please note that we haven’t tested these models, so we can’t comment on the quality of their event detection or how to set them up for local use.

Let us know if you have further questions!

user-0d28eb 18 November, 2024, 09:24:06

Thank you for the additional info, I now realize the tool is not shown in the Pupil Cloud environment, but is a separate tool that can be downloaded and run locally, using the OpenAI API.

@user-480f4c It looks awesome, great job on this feature!

user-2b5d07 18 November, 2024, 14:06:58

Hello! I am currently working on tracking the eye movements of surgeons during laparoscopic procedures. Could you please confirm if it is possible to annotate events directly from a smartphone connected to the device and during the recording, rather than relying on a computer? Thank you!

user-f43a29 18 November, 2024, 22:05:02

Hi @user-2b5d07 , have you given Neon Monitor a try?

user-bda2e6 18 November, 2024, 16:43:05

@user-cdcab0 May I ask where I can find the Neon Event Component timestamps saved in the raw data on the Companion device? Can I read them into Python?

user-cdcab0 18 November, 2024, 16:46:13

In the native recording data, event data is saved in event.time and event.text. For your convenience, we provide pl-neon-recording to more easily work with native recording data

user-cdcab0 18 November, 2024, 16:55:38

pl-neon-recording is a Python library, so you can work directly with your data in Python or use it to export - e.g.,

import pupil_labs.neon_recording as nr
import numpy as np

# Load the native recording folder and write all events (names + timestamps) to CSV
recording = nr.load('path/to/native/recording/')
np.savetxt('events.csv', recording.events.data, delimiter=',', fmt='%s')
user-cdcab0 18 November, 2024, 16:57:31

Or if you prefer a Pandas dataframe to work with:

import pupil_labs.neon_recording as nr
import pandas as pd

recording = nr.load('path/to/native/recording/')
df = pd.DataFrame(recording.events.data)
user-87d763 18 November, 2024, 19:59:57

Hello everyone. I have a question regarding the synchronization of two separate Neon devices. Right now, our goal is to capture the time of mutual gaze between two participants who are wearing the Neon eye-tracking devices. Having the exact time difference between devices <20ms is crucial to our data analysis. Currently, we have followed the documentation to force time synchronization to a master clock, made sure all devices are running on the same server, used Ethernet instead of a crowded network, and used the real-time API to calculate the device offset estimations. However, even when correcting for offsets and making sure all other contingencies are in place, we cannot get the time to sync <20ms. On average the offset time between the two devices is anywhere from 30-200ms after all corrections are made. If anyone has any advice or ideas, we welcome any and all potential solutions. Thank you for your time!

user-f43a29 18 November, 2024, 22:04:26

Hi @user-87d763 , am I correct in understanding that you are seeing this post-hoc, in the datastreams from your two Neons after collecting data for your experiment?

user-87d763 18 November, 2024, 22:04:58

@user-f43a29 yes, exactly. It seems that there is some sort of drift between the phones

user-f43a29 18 November, 2024, 22:16:43

Ok, and just to confirm, you are sending custom timestamped Events that account for each Neon's specific clock offset relative to the central computer? Similar to lines 24-34 of this example

user-87d763 18 November, 2024, 22:19:53

I see. I think when I was sending the event it was recording at the time of arrival instead of the actual timestamp. I will incorporate these changes. Thank you

user-f43a29 18 November, 2024, 22:29:50

No problem. And just to say, accounting for that is important, but overall, it sounds like you have the following:

  • A central control computer
  • Neon 1 connected to phone A
  • Neon 2 connected to phone B

So, essentially, three clocks. You will need to estimate the offset for each Neon and make sure to use that offset when sending a timestamp-corrected event to that specific device (i.e., you will run the TimeOffsetEstimator twice; once for each Neon).

By sending the offset corrected Events, you will have corrected for the offset between the computer clock and phone A's clock, as well as for the offset between the computer clock and phone B's clock. However, there will still be the residual offset between phone A's clock and phone B's clock.

Since the offset-corrected Events essentially represent the same point in time, you can use them to shift and align phone A's and phone B's timelines, accounting for the remaining offset. If you wanted a way to validate this, you could flash a light in view of each Neon's scene camera at the same time as sending a test timestamp-corrected Event.

Also, using Ethernet is not strictly necessary to achieve this type of time sync, in case the mobile nature of Neon is helpful here. Although, it is still important to avoid crowded networks, as you rightly point out.
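
To make that bookkeeping concrete, here is a minimal sketch using the simple real-time API. The IP addresses are placeholders, and it assumes estimate_time_offset() is available in your client version:

import time
from pupil_labs.realtime_api.simple import Device

# Placeholder IP addresses - use the ones shown in each Companion app's Stream section
devices = [Device(address='192.168.1.21', port='8080'),
           Device(address='192.168.1.22', port='8080')]

for device in devices:
    # Estimate the clock offset between this computer and the phone
    estimate = device.estimate_time_offset()
    offset_ns = round(estimate.time_offset_ms.mean * 1_000_000)

    # Convert "now" on the computer's clock into the phone's clock before sending the event
    event_time_on_phone_ns = time.time_ns() - offset_ns
    device.send_event('sync.check', event_timestamp_unix_ns=event_time_on_phone_ns)

for device in devices:
    device.close()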

user-bda2e6 18 November, 2024, 23:37:44

Hi, I’m using Neon with PsychoPy. The eye tracker is running properly. However, each time the experiment finishes, there are many messages in the console, causing PsychoPy to stop responding. Is there any way to remove those messages?

Chat image

user-cdcab0 19 November, 2024, 02:18:44

Those messages are harmless and are not likely the cause of PsychoPy's non-responsiveness

user-bda2e6 19 November, 2024, 01:14:27

Another question is about the Neon Event Component. Does this component always only save a single timestamp? Even if the component has a duration? And if that’s the case and it has a duration, does it save the timestamp at the beginning or the end of the component?

user-cdcab0 19 November, 2024, 02:19:23

Yes, only a single timestamp regardless of the component's duration. If you want to mark the start and end of a stimulus, you'll want to use two events

user-bda2e6 19 November, 2024, 01:40:32

If I want to record the beginning and ending of a stimulus, say, a polygon appearing and disappearing at random times, what would be the best way to do this?

user-bda2e6 19 November, 2024, 02:21:42

Is there any way to remove them? They make the program freeze for a very long time after the experiment stops

user-cdcab0 19 November, 2024, 02:24:51

Should be able to do that by opening the experiment settings in PsychoPy and changing the log level. Don't use "info" or "debug" - the others should be less verbose
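
If you prefer doing it from code rather than the experiment settings dialog, PsychoPy's logging module can also be used; a minimal sketch, e.g. at the top of a code component:

from psychopy import logging

# Only show warnings and errors in the console; suppress info/debug chatter
logging.console.setLevel(logging.WARNING)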

user-bda2e6 19 November, 2024, 02:25:18

I will try that, thank you!!

user-bda2e6 19 November, 2024, 02:48:26

@user-cdcab0 Is there an easy way to record when a routine ends? Especially when it doesn't have a fixed duration, like when it ends only when a certain key is pressed

user-cdcab0 19 November, 2024, 09:29:29

No problem - I'm happy to help where I can. I will admit that I'm not a PsychoPy expert, but thankfully there is a nice PsychoPy online community as well.

There are a couple of ways to do what you describe, and the best one really depends on how your experiment is setup and how you plan to analyze your data. Each of these are separate examples: * If you're using a trial loop, PsychoPy already outputs a CSV file with the trial start and stop times * Create a logTrialEnd routine that only contains the Neon Event component - insert that routine immediately after your trial routine * Use the EndRoutine tab like you mentioned. I think this should do it: eyetracker.send_event("trial-end")

user-bda2e6 19 November, 2024, 02:48:42

Sorry for the one million questions I have and I really appreciate all your help

user-bda2e6 19 November, 2024, 03:10:36

For example, can I do something in the code component in the EndRoutine tab?

user-e7add5 19 November, 2024, 08:51:16

I have a question related to AOIs defined by tags. How do I analyze the data if I have multiple areas like this? In other words, if I have one AOI in one direction and a second AOI in another direction, both defined by different tags. To me it appeared that you can only analyze one AOI at a time in the Cloud.

user-f43a29 19 November, 2024, 09:18:20

Hi @user-e7add5 , may I just first confirm that when you say "AOIs", you are referring to the Surfaces defined by Marker Mapper?

Just to clarify, in Pupil Cloud, "AOIs" for Marker Mapper refers to multiple sub-regions within a single Surface.

user-2b5d07 20 November, 2024, 10:59:48

Hello, I am facing an issue with my eye-tracker. It’s no longer working, and I have noticed that a component (the white one at the top right) has been partially torn off and requires soldering.
I am worried that there might be other components that have been damaged or dislodged without my noticing. Could someone please help me in identifying whether this is the case? Additionally, if possible, could you provide the schematic of the PCB so that I can verify and troubleshoot the connections?
Thank you very much !!

Chat image

user-480f4c 20 November, 2024, 11:01:05

Hi @user-2b5d07! Can you please open a ticket in our 🛟 troubleshooting channel? We'll assist you there in a private chat!

user-2b5d07 20 November, 2024, 11:06:49

Ok thank you

user-d71076 20 November, 2024, 14:44:51

Hello!

Is it possible to switch off the IR LEDs? Sure, I can just cover them, but maybe it is possible to switch them off with Python code?

nmt 21 November, 2024, 03:00:15

Hi @user-d71076! This isn't possible via Python - but as you note, it would technically be possible to cover them. May I ask why you'd want to do so?

user-bda2e6 21 November, 2024, 01:44:40

I suspect the messages are causing the system to freeze, because when I run a shorter experiment with a shorter eye-movement recording time, I can see the messages being printed to stdout in the console and the scrollbar shrinking when the experiment finishes. When the messages stop, the system responds again

user-bda2e6 21 November, 2024, 01:45:29

The same experiment can finish normally if I disable all the recording components while keeping the eye tracker connected

user-bda2e6 21 November, 2024, 01:49:47

And I’ve had a similar problem with another package, where too many messages being printed to stdout caused the system to freeze for a long time. Removing the output (in the backend code of the package) solved the problem completely

user-bda2e6 21 November, 2024, 01:51:47

In addition to that, I got the following errors occasionally

user-bda2e6 21 November, 2024, 01:54:06

message.txt

user-bda2e6 21 November, 2024, 01:54:39

I would appreciate it if you could look into both problems. Thank you!

nmt 21 November, 2024, 02:54:21

Hey @user-bda2e6! I think if you can send over your PsychoPy experiment to [email removed] that will be the most efficient way forward as @user-cdcab0 can take a look.

user-bda2e6 21 November, 2024, 02:55:39

Hi, @nmt thank you for the reply! And yes I do have an open ticket where I have told @user-cdcab0 that I will send him a sample experiment

user-453f5f 21 November, 2024, 09:58:18

Hi! Is there an easy way in pupil cloud to transfer all events with timestamps from one recording across other ones in the same project?

user-f43a29 21 November, 2024, 10:13:19

Hi @user-453f5f , may I first ask if the recordings in question were all made with the same Neon or were they made with multiple Neons at the same time (e.g., dyadic experiments)?

user-453f5f 21 November, 2024, 10:14:31

Multiple at the same time, started together from Python code

user-f43a29 21 November, 2024, 10:23:17

Ok. And how did you create the events? Did you:

  • Send them with device.send_event('event_name')?
  • Send them with device.send_event('event_name', event_timestamp_unix_ns=offset_corrected_time) (using the TimeOffsetEstimator, as shown here; lines 24-34)?
  • Manually placed them by hand in the Pupil Cloud interface?
user-453f5f 21 November, 2024, 10:24:42

Manually in pupil 🙂

user-f43a29 21 November, 2024, 12:05:25

Ok, so, the Event names in Pupil Cloud can be copied across Recordings, as you've seen, but the Event timestamps are not copyable in Pupil Cloud.

Let me see if I can clarify. With respect to the timestamps, it is useful to keep in mind:

  1. The sensors in Neon run in parallel, with different sampling rates, and they do not always start at the same exact time, nor at the exact moment that you push the white record button (this is why you sometimes see the initial ~1.5 seconds of gray frames).
  2. If you send a recording_start command with the Real-time API, it will not be immediately received by Neon in that moment. Depending on your network connection, it will usually take a handful of milliseconds before it reaches the device. This also means that even if you could send recording_start at the same time for many Neons, they will not all receive it and start recording at the exact same timepoint.

While you can represent the timestamps relative to recording.begin (e.g., I want an Event 10 seconds into the recording), points 1 & 2 make it difficult to directly & accurately copy Event timestamps across recordings, even in a relative format.

Rather, if you want the same Event at the same time for multiple Neons, then you want to also account for each of their respective clock offsets with respect to the control computer and with respect to each other. With each extra device, the bookkeeping can get more complicated, so you may want to consider our Lab Streaming Layer integration, which can handle the synchronization automatically for you

If you want to do it yourself, then this is where the second option in my previous message comes into play. More details with respect to that can be found in these two messages:

user-f43a29 21 November, 2024, 12:06:43

You could also consider alternate sync methods, such as flashing a light in view of all Neon scene cameras or enabling audio and playing a tone. The onset of this event could then be detected and labelled in a post-hoc processing stage.
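
As an illustration of the light-flash approach, the onset could be found post hoc from the scene video's mean brightness; a rough sketch (the filename, baseline window, and threshold are all assumptions to tune for your footage):

import cv2
import numpy as np

# Compute mean brightness per frame of the scene video
cap = cv2.VideoCapture('scene_video.mp4')
brightness = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    brightness.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean())
cap.release()

# Flag the first frame that is much brighter than the baseline
# (assumes the flash happens after the first 30 frames)
brightness = np.array(brightness)
baseline = brightness[:30]
flash_frame = int(np.argmax(brightness > baseline.mean() + 5 * baseline.std()))
print('Flash detected at frame', flash_frame)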

user-12d809 21 November, 2024, 20:26:41

Hi, is it possible to create heatmaps within AOIs? I'm trying to make a visualization that looks like the Heatmap, but for within a specific AOI rather than the whole surface. I've considered creating a new enrichment with a new surface, but (a) I already have AOIs drawn and (b) drawing AOIs is more flexible than the 4 corners of a surface (or is there a way to add "points" when defining a surface?). Thanks!

user-741d8d 22 November, 2024, 10:54:16

Hi! I was wondering whether it is possible to receive norm_pos coordinates relative to a surface through the real-time API? I know this was possible using pupil core but I haven't found a way to do this with neon yet.

nmt 22 November, 2024, 15:38:36

Hi @user-741d8d! This is possible with Neon using the real-time-screen-gaze package: https://github.com/pupil-labs/real-time-screen-gaze
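
Roughly, its usage follows the pattern below (a sketch along the lines of the package's README; the marker IDs, corner coordinates in screen pixels, and screen size are placeholders for your own setup):

from pupil_labs.realtime_api.simple import discover_one_device
from pupil_labs.real_time_screen_gaze.gaze_mapper import GazeMapper

device = discover_one_device()
gaze_mapper = GazeMapper(device.get_calibration())

# Placeholder layout: AprilTag IDs mapped to their corner positions on the screen (pixels)
marker_verts = {
    0: [(32, 32), (128, 32), (128, 128), (32, 128)],
    1: [(1792, 32), (1888, 32), (1888, 128), (1792, 128)],
}
screen_surface = gaze_mapper.add_surface(marker_verts, (1920, 1080))

while True:
    frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
    result = gaze_mapper.process_frame(frame, gaze)
    for surface_gaze in result.mapped_gaze[screen_surface.uid]:
        # Normalized surface coordinates, analogous to Core's norm_pos
        print('gaze on surface:', surface_gaze.x, surface_gaze.y)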

user-84f771 22 November, 2024, 14:09:00

Hello, I am just wondering if anyone can help me with an issue I've encountered. When I run my code in Python to start the recording of the eye tracker via a wireless (hotspotted) connection, it works and syncs with my motion tracking platform. However, the hotspot is not wireless, so I am trying to establish the connection between my motion tracking software and Pupil Labs using an Anker adapter that's connected directly to the Ethernet box - yet when I run the code to establish a connection, it does not work. Can someone please advise :)!

user-f43a29 22 November, 2024, 16:49:13

Hi @user-84f771 , are your router & firewall configured to allow mDNS traffic and UDP traffic? If so, you can also try connecting directly to the IP address shown in the Stream section of the app. That link contains a code example.

nmt 22 November, 2024, 15:36:37

Hi @user-12d809! Given that the heatmap is essentially a blurred 2d histogram of fixations, conceptually, do you expect them to differ when computed within a given AOI compared to a larger AOI?

user-12d809 02 December, 2024, 20:41:59

Hi Neil, the difference is just in the scaling. Fixation hotspots that are outside of the AOI swamp the fixations within the AOI, so on the visualization you can't clearly see the differences in fixations within the smaller AOI.
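
In the meantime, one workaround is to recompute the heatmap from the exported fixations, restricted to the AOI, so the colour scale is driven only by fixations inside it. A sketch (the column names follow the Marker Mapper fixations export, and the AOI bounds are hypothetical):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

fixations = pd.read_csv('fixations.csv')  # Marker Mapper enrichment export

# Hypothetical AOI bounds in normalized surface coordinates
x_min, x_max, y_min, y_max = 0.2, 0.5, 0.3, 0.7
inside = fixations[
    fixations['fixation x [normalized]'].between(x_min, x_max)
    & fixations['fixation y [normalized]'].between(y_min, y_max)
]

# 2D histogram of fixations within the AOI only, smoothed to look like a heatmap
hist, _, _ = np.histogram2d(
    inside['fixation y [normalized]'], inside['fixation x [normalized]'],
    bins=50, range=[[y_min, y_max], [x_min, x_max]],
)
plt.imshow(gaussian_filter(hist, sigma=2), extent=[x_min, x_max, y_max, y_min], cmap='hot')
plt.colorbar(label='fixation count (smoothed)')
plt.show()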

user-3ee243 22 November, 2024, 19:50:15

Question regarding the Neon: given that we have the just act natural bundle with frame + module, would it be possible to swap out the module and reattach it to other frame options like the I can see clearly now frame only + lens kit option? Or would there be hardware compatibility issue there? Thx!

nmt 25 November, 2024, 04:31:43

Hi @user-3ee243! Neon's modularity means that you can indeed swap out the module and reattach it to other frames. Neon is compatible with the frames in this section of our online shop. You can read how to swap the module in our documentation.

user-f22a6d 25 November, 2024, 11:05:36

Hi, I am trying to use the Neon Companion app with my Pupil Labs device. However, I'm using a Huawei tablet which doesn't have access to Google Play Store. Would it be possible to get the APK file for the Neon Companion app so I can install it directly on my device? If an APK isn't available, I would appreciate any alternative solutions you could suggest for using the Neon device with Huawei tablets. Thank you for your assistance.

user-d407c1 26 November, 2024, 07:09:02

Hi @user-f22a6d 👋 ! The Neon Companion App is unfortunately not compatible with Huawei tablets. It is designed to work with a specific selection of devices that we’ve tested, as the algorithms are optimized for their unique architectures.

You can view the list of compatible devices here.

If you’re looking for a tablet option, we have confirmed compatibility with the Samsung Galaxy Tab S9, although it is not officially listed and only has experimental support. Let us know if you have any other questions!

user-006317 26 November, 2024, 11:08:47

Hi! Could anyone share some experience of doing free skiing (possibly including some reversal jumps) with Neon glasses? With the "Ready set go!" module and sports bands, I would still like to check whether that is sufficient for this case. Also, what is the upper limit for your "high dynamic movement activities"?

user-f43a29 26 November, 2024, 16:37:27

Hi @user-006317 , may I ask if you intend to determine:

  • The gaze accuracy of Neon in such circumstances
  • The dependability of the scene camera's 30Hz sampling rate at higher movement speeds
  • Or, the secure fit of Ready set go! and/or whether it fits comfortably under ski goggles?
user-98b2a9 27 November, 2024, 11:23:16

Hi, I am not sure if this has been asked before, but we are recording the eye-tracking data via LSL (to combine it with EEG). Can we then afterwards use the LSL timestamps and combine them with the eye-tracking scene video? Thanks already 😊

user-cdcab0 28 November, 2024, 11:17:20

Hi, @user-98b2a9 - yes, you can synchronize your LSL data with the eye-tracking scene video. To do so, be sure your LSL inlet (e.g., LabRecorder) is configured to receive data from the ***_Neon Events Stream. Then, you should see LSL-timestamped recording.begin and recording.end events in your LSL data, and corresponding events in your Neon recording data. You can match these events to temporally align the data.
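
A minimal post-hoc sketch of that matching, assuming the events stream was recorded into an XDF file with LabRecorder (the file paths and events.csv column names should be checked against your own exports):

import pyxdf
import pandas as pd

# Load the XDF file written by LabRecorder and pick out the Neon events stream
streams, _ = pyxdf.load_xdf('recording.xdf')
event_stream = next(s for s in streams if 'Events' in s['info']['name'][0])
lsl_events = dict(zip((row[0] for row in event_stream['time_series']),
                      event_stream['time_stamps']))

# events.csv from the Neon recording export (assumed columns: 'name', 'timestamp [ns]')
neon_events = pd.read_csv('events.csv')
neon_begin_ns = neon_events.loc[neon_events['name'] == 'recording.begin', 'timestamp [ns]'].iloc[0]

# Offset that maps Neon timestamps (ns) onto the LSL clock (s) at recording start
offset_s = lsl_events['recording.begin'] - neon_begin_ns * 1e-9
print('Neon time -> LSL time offset:', offset_s, 's')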

user-ccf2f6 27 November, 2024, 15:34:52

Hi there, I'm looking into the pl-rec-export utility on github: https://github.com/pupil-labs/pl-rec-export

The exported format looks different from the format of data downloaded from PL Cloud, so I wanted to ask if there is documentation for the format exported by this utility? The current PL data documentation is consistent with the PL Cloud downloaded data, but not with the data exported by this utility. For example, world timestamps, as exported with the utility in "world.csv", have the fields: file, timestamp [ns], recording offset [secs], video offset [secs]

What are these offsets related to?

user-cdcab0 28 November, 2024, 11:30:19

Hi, @user-ccf2f6 - it seems we are lacking some documentation regarding the export format for this tool, but in the meantime, let me answer these questions here.

  • recording offset [secs] - timestamp in seconds measured from the recording start time. It's essentially the same as the previous column, but relative to the recording start rather than the unix epoch, and in seconds rather than nanoseconds
  • video offset [secs] - timestamp in seconds measured from the video part's start time. It's important to understand that the video start time won't be at the exact same moment as the recording's start time - it's usually a bit delayed while all of the different sensors and recording processes kick off. Also, if you have a multi-part recording, the values in this column are relative to each part. This can happen, for example, if your Neon becomes disconnected in the middle of a recording and is reconnected. In the native recording data, you'd see two scene videos, and the values in this column would appear to "restart" from zero.
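
For example, a quick sketch of how these columns could be used to find the scene frame nearest a given unix timestamp (the column names follow the description above; adjust them to match your export):

import pandas as pd

world = pd.read_csv('world.csv')
event_time_ns = 1732795200_000_000_000  # hypothetical unix timestamp in nanoseconds

# Index of the scene-video frame whose timestamp is closest to the event
frame_idx = (world['timestamp [ns]'] - event_time_ns).abs().idxmin()
print('closest frame:', frame_idx,
      'at video offset', world.loc[frame_idx, 'video offset [secs]'], 's')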

user-b03a4c 28 November, 2024, 10:30:13

Hi, I want to implement "Map Gaze Onto Website AOIs" and tried it. I referenced your GitHub repository (https://github.com/pupil-labs/web-aois/tree/main), but gaze data isn't exported and I can't make any heatmap.

There is no problem in

  1. pl-web-aois-define (set AOIs)
  2. pl-web-aois-record path-to-aoi-defs.json [https://example.com/] (measurement on the target website)

but in step 3, 'pl-web-aois-process path-to-recording process-output-path',

I expected 'gaze.csv' to be exported, but it isn't; in addition, there seems to be no error message (please see the screenshots below).

I checked the code (process.py) and the 'process' function returns "None", but I can't figure out why.

Could you give me any suggestions about my situation?

Chat image Chat image

user-cdcab0 28 November, 2024, 11:43:00

Hi, @user-b03a4c - the two functions that you added print statements to aren't made to return any data. Rather, the data is processed internally and output to files when appropriate. To troubleshoot from within the code, you'd have to dig deeper into those two functions. It's likely, though, that the problem can be diagnosed from the recording you captured in step 2. If you're able/willing to share it, I may be able to help

user-80c70d 28 November, 2024, 16:58:21

Hi there, when streaming the data via LSL, we get a much lower sampling rate (161 Hz). Do you have a suggestion what the issue could be?

user-d407c1 29 November, 2024, 08:27:22

Hi @user-80c70d ! The effective sample rate achieved by LSL can be influenced by the type of router/network used. How are you streaming it? With a dedicated router/network or a tethered connection, you may achieve higher sampling rates.

user-43244d 29 November, 2024, 06:31:37

Hi, for our current venture, we need a tool that is capable of marking the point where I am looking on the screen. Does the Pupil Labs Neon offer this feature?

user-d407c1 29 November, 2024, 07:23:44

Hi @user-43244d 👋 ! Definitely, would you need the gaze coordinates on the screen in real-time or post-hoc? Have you checked out our Marker Mapper?

This tutorial might also interest you if you’re looking to get the data in real-time.

If you plan to use PsychoPy or Psychtoolbox for stimulus presentation, I’d be happy to point you to the right resources too—just let me know!

user-43244d 29 November, 2024, 07:51:26

Hi @user-d407c1 that is exactly what we are looking for. Can you help me out with the cost of this module along with the software? It'd be helpful if I could get a trial product.

user-d407c1 29 November, 2024, 08:20:18

Hi @user-43244d ! I'd assume you don’t have Neon yet? If not, just so you know, we are quite transparent with our pricing—you can find all the details directly on our website.

While we don’t offer trials, you’re welcome to contact [email removed] to schedule a demo video call and discuss potential solutions that may work for you.

user-43244d 29 November, 2024, 08:32:04

Hi @user-d407c1 , that's great to know. So, does the 'Bundle - Bare Metal' include the software for a lifetime and real-time eye tracking of where I am looking?

user-d407c1 29 November, 2024, 10:26:39

@user-43244d Just a few clarifications—though it might be more effective to discuss this on a call, as additional questions may arise:

  • Bare Metal Option: The bare metal does not include a frame; it’s just the PCB board. You would need to find a way to position it on the face.

  • Included Software and Resources: All bundles, including the bare metal option, provide access to the data collection software, Neon Player (an open-source offline software), and the tutorial code I linked earlier.

  • Marker Mapper vs. Surface Tracker: Neon Player includes a similar feature called the Surface Tracker. However, it does not offer data aggregation or AOIs (Areas of Interest). Marker Mapper is only available in Cloud.

  • Cloud Storage Limits: For Cloud usage, there is a 2-hour storage limit. If you’d like to upload unlimited recordings, you would need to add the Unlimited Storage Add-on.

Let me know if you’d like to schedule a call or need further clarification!

user-fc0d5a 29 November, 2024, 10:17:27

Hello! We are planning to get a few sets of Neon for research, however we are unsure about the types of frames to choose. Would it be possible to get some recommendations? We will be aiming to do different types of research ranging from lab-based to outdoor environments (construction sites, shipyards etc.)

Our concern would be about the outdoor workplace. Specifically, if participants would have to wear safety helmets, earmuffs, or nose/mouth respirators, what would be an appropriate frame to choose? Would the "ready set go" frame be able to fit with these? I was also wondering how the "i can see clearly now" frame would do in these situations, in case participants need prescription lenses?

user-d407c1 29 November, 2024, 10:32:12

Hi @user-fc0d5a 👋 !

Yes, I think the Ready Set Go! frame would suit you well based on your description. Its flexible design and thin arms make it easy to fit under a helmet, and the headband will secure it so it doesn’t fall off.

Kindly note that you can easily swap across frames, so you can always choose a different one later or even make your own.

End of November archive