Hi, how did you do the normalization for the x (gaze x [px]) and y (gaze y [px]) coordinates of gaze in "gaze.csv"? I need to know the max and min values of these pixels to cut the plane into grids.
Hi @user-a97d77 , if you are referring to gaze.csv
from Pupil Cloud, then those coordinates are not normalized, but are rather in pixel units. They refer to positions in the scene camera image. For Neon, the scene camera image is 1600 pixels wide by 1200 pixels high.
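If it helps, here is a minimal sketch of binning those pixel coordinates into a grid, assuming pandas/numpy and the gaze.csv column names you mentioned ('gaze x [px]', 'gaze y [px]'); the 4 x 3 grid size is just a placeholder:

# Minimal sketch, assuming pandas/numpy and a 4 x 3 grid over the
# 1600 x 1200 px scene camera image. Column names follow gaze.csv from Pupil Cloud.
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
n_cols, n_rows = 4, 3  # choose your own grid resolution

# Gaze can occasionally fall slightly outside the image, hence the clipping
col = np.clip(gaze["gaze x [px]"] // (1600 / n_cols), 0, n_cols - 1).astype(int)
row = np.clip(gaze["gaze y [px]"] // (1200 / n_rows), 0, n_rows - 1).astype(int)
gaze["grid_cell"] = row * n_cols + col  # row-major cell index, 0 .. n_cols*n_rows - 1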
I am receiving the Lab Streaming Layer from the Neon companion application. As expected and per the gaze data fields, I am receiving 16 channels (fields) per sample. Can someone confirm for me the order of the fields being output over LSL? I checked the docs (https://docs.pupil-labs.com/neon/data-collection/lab-streaming-layer/) but they appear out of date. I took a stab at the order as follows:
"pixels_x","pixels_y","pupil_diameter_left","eyeball_center_left_x","eyeball_center_left_y","eyeballs_center_left_z","optical_axis_left_x","optical_axis_left_y","optical_axis_left_z","pupil_diameter_right","eyeball_center_right_x","eyeball_center_right_y","eyeball_center_right_z","optical_axis_right_x","optical_axis_right_y","optical_axis_right_z"
Thank you!
What are you using to receive the LSL data stream? The order and format of the data come in the stream header, and the software you're using to receive the data should receive and parse that header.
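For example, if you happen to be using pylsl on the receiving side, a minimal sketch like the following would print the channel labels in the order the Companion app declares them in the header (the discovery step and printout here are just illustrative):

# Minimal sketch, assuming pylsl; it reads the channel labels from the stream
# header rather than hard-coding any field order.
from pylsl import StreamInlet, resolve_streams

for stream_info in resolve_streams():
    inlet = StreamInlet(stream_info)
    info = inlet.info()  # full header, including per-channel metadata
    print(f"{info.name()} ({info.type()}), {info.channel_count()} channels:")

    channel = info.desc().child("channels").child("channel")
    for _ in range(info.channel_count()):
        print("   ", channel.child_value("label"))
        channel = channel.next_sibling()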
Hi, I have four projects (=subjects) where the 3d_eye_states.csv file does not contain any values but the headers. This occurred for all the recordings from these 4 consecutive subjects, but not for other recordings collected before or after. Do you have any idea on how to fix that?
Hi @user-edb34b , could you open a ticket in the troubleshooting channel?
Hello, I was wondering if there was any advice for helping the Neon better recognize the April tags? (Using Neon Player)
The lighting in our workspace is my main issue. Our setup requires lights to be off, and the tags aren't able to be placed on the screen.
Currently I've rigged up "light boxes" to backlight the tags, but I'm getting really inconsistent results with that.
Is there some distance/size ratio at which the AprilTags are more likely to be unreadable?
Edit: I should add, we've tried leaving the lights on or having lamps on in the room to partially light things. However, these results are inconsistent too, as it seems the light is washing out the tags.
If you can share your scene camera video (even just a single frame), I can better help you diagnose. Typically though, poor marker detection is due to markers being too small or not having enough margin around them. I also recommend adding more markers - many people just use 4, but if one fails to be detected, the mapping can be quite poor.
Also, in case you haven't seen it, the Surface Tracker plugin in Neon Player has image adjustments (brightness and contrast) that are applied under-the-hood before scanning each frame for tags. These are done "under-the-hood" to prevent these adjustments from affecting exported video - the only problem with that is that it can be hard to know what effect you're having when changing those values. So, there's also an "Image Adjustment" plugin where you can affect the scene video.
So start with the "Image Adjustment" plugin and tweak the values until YOU can see the tags clearly. Then apply those same values to the surface tracker plugin's brightness/contrast values.
Please keep in mind that these are kind of crude adjustments though, and really the best thing you can do is start with tags that are highly visible
Is it possible yet to use the Neon glasses as if they were a Pupil Core headset with a MOS, rather than with MacOS and Linux? There had been a mention a while back of developing this facility, and our eye-tracking analysis software is developed for MOS, so it would save us the hassle and expense of requiring one device to capture the data and then connecting to another device to analyse the data with our software.
Hi @user-057596! Thank you for your question! Currently, using Neon glasses like a Pupil Core headset with MOS isn't supported or on our roadmap. Our development efforts have focused on other priorities to enhance the native Neon experience. Linux and MacOS should continue to work.
Can we set the scene camera exposure using pupil_labs.realtime_api?
Hi @user-56256e! No, this is not exposed via the real-time API
My understanding is that we have two kinds of gaze angle outputs from Neon: (1) cyclopean gaze in 2D coordinates [px] in the scene camera coordinate system, and (2) binocular 3D gaze unit vectors (optical axes for left and right eyes). I'm trying to convert the first cyclopean gaze data to a 3D gaze unit vector. Do you have a conversion from pixel to angular coordinates for the first data? Alternatively, is it possible to generate a cyclopean gaze vector from the second data (is this how Neon calculates cyclopean gaze in pixel coordinates for (1))?
Your understanding is almost correct. I'll provide some clarifications below:
1. Neon's gaze measurements are given as:
a) 2D coordinates in pixel space
b) The angle of a gaze ray in relation to the scene camera in degrees
You can easily convert the latter (b) to a vector by transforming from spherical to Cartesian coordinates, as demonstrated in this example.
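For reference, a minimal sketch of that conversion; the sign conventions here are an assumption about the scene camera frame (+x right, +y down, +z forward), so please double-check them against the linked example:

# Minimal sketch: convert azimuth/elevation [deg] to a unit gaze vector.
# Sign conventions are an assumption (scene camera: +x right, +y down, +z forward).
import numpy as np

def gaze_angles_to_vector(azimuth_deg, elevation_deg):
    az = np.deg2rad(azimuth_deg)
    el = np.deg2rad(elevation_deg)
    return np.array([
        np.cos(el) * np.sin(az),   # x: positive to the right
        -np.sin(el),               # y: positive downward, so negate elevation
        np.cos(el) * np.cos(az),   # z: positive along the camera's optical axis
    ])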
I hope that makes sense!
Hi team, we are getting this message on our cloud on the workspace login for the owner. I was under the impression that we would have unlimited storage for all of our devices? We have currently two cores and four frames. Two frames were the latest additions.
Hi @user-37a2bd! Thanks for bringing this to our attention. I'll liaise with the relevant team and follow up asap
Third-party eye camera
I've just updated the Neon Companion App, and now in the world view on the App the gaze circle stays in the same position as I move my head and gaze around, but it works as it should in the recording. Furthermore, with the Neon Monitor, when the page is loaded you are presented with a warning that the Video Preview May Not Be Available in iOS; also, the video is frozen and the gaze circle moves in the opposite direction to the movement of your eyes and head.
Hi @user-057596! Would you mind trying to clear the cache of the Companion App?
If that doesn't resolve the issue, please follow up by opening a troubleshooting ticket with relevant details, including your Companion Device model and Companion App version.
Hi @user-d407c1 how do you clear the cache again?
Press and hold the Companion App icon in the app launcher (home screen), then select:
App Info → Storage & cache → Clear cache
You can also clear storage.
Note that the naming may vary depending on your Companion Device model and Android version.
Hello, sorry, I have an issue when I want to permanently delete a recorded file from the Trash area. I wanted to free some space in Pupil Cloud, so I deleted some recorded videos, which were then moved to the Trash area. When I then tried to delete them permanently from Trash by right-clicking and selecting Delete, I got an "Internal Server Error". What should I do about this? Thanks in advance.
Hi @user-d54741! Could you create a ticket in the troubleshooting channel with the recording ID?
Hello everyone, I just need a clarification (maybe it's a silly question). By buying the "I can see clearly now" bundle, is it possible to detach the Neon module and put it in the frame mount for the Pico or Quest 3? It is not clear to me whether, when buying the frame mount for Quest 3, for example, the NEST PCB is integrated (I suppose yes), so that I can just take the Neon module mounted on one frame and mount it on the other frame. Thank you very much in advance!
Hi @user-a4aa71! You're correct. You can swap the module between frames. This video shows you how: https://docs.pupil-labs.com/neon/hardware/swapping-frames/#swapping-frames. So it's possible to purchase one ICSCN bundle, and just the pico or quest 3 frame.
Hi, I used the Pupil Neon device to record about 10 situations. I converted these recordings into gaze-overlaid MP4 files using Neon Player (using 'Fixation detector' and 'World video exporter').
However, a few files cannot be converted into gaze-overlaid MP4s, although I can see the gaze-overlaid movie on my local PC. I can confirm that 'Fixation detector' worked well, but the Neon Player app crashed during the 'World video exporter' process.
Could you check my recording files if I send you these files?
Hi @user-b03a4c! Yes, we can take a look for you. Can you please open a ticket in the troubleshooting channel and we can coordinate there.
Good morning! I am working with the Pupil Neon glasses. I was wondering if you have instructions on how to include a picture-in-picture of the eye video and blinks for display in Pupil Cloud.
Hi @user-3bcaa4! Thanks for your question. The eye videos are not playable in Cloud. Blinks are visualised when the red gaze circle turns grey, and also in the timeline of the Cloud player window. If you want to watch the eye videos alongside the main recording, you could use Neon Player, our free desktop software. You'd need to enable the eye overlay plugin: https://docs.pupil-labs.com/neon/neon-player/#neon-player
Hi team, I have another question about our recordings. We have one recording in which we noticed that, sometimes, the video stays completely the same from one frame to the next while the fixation moves. We think this should not be the case (e.g., in the background, people are walking and suddenly freeze for two frames before their movements continue as if nothing has happened). We have a screen recording of the effect that we could share.
Hi @user-80c70d! Do you mind opening a ticket in our troubleshooting channel and sharing the recording ID for this recording, along with a screen recording showing the issue you report? We can continue the conversation there in a private chat.
Hello - Recorded a session and received this message when I tried to add it to a project: "1 recording is processing, has errors, or blocked. This recording can not be added to the project." I can view the entire recording on the cloud and it seems OK - anything I can do to be able to add it to a project for enrichments? Thanks in advance!
Hi @user-796c70! Can you please open a ticket in our troubleshooting channel? We'll assist you there in a private chat.
Hi guys. I was chatting with some of my collaborators, and they were interested in the Neon eye trackers, but apparently have some on-site restrictions on wifi/bluetooth devices, so they'd not be able to use the companion android device with the Neon. Is there any work being done to have a version of the recording software on the PC compatible with the Neon? Or would they be better off using the Core version?
Hi @user-d086cf! Could you share more details about your colleagues' setup?
Note that WiFi is not required to collect data with Neon. An internet connection is only needed for the initial setup (e.g., downloading the app from the Google Play Store), see these instructions on setup. Once everything is set up, you can record data without being connected to WiFi.
Recordings are first saved locally on the phone. After data collection, you can connect the phone to WiFi to upload the recordings to Pupil Cloud.
In both the monocular and binocular hdf5 output, there are always three times: device time, logged_time, and time. I know that logged_time should be PsychoPy's experiment clock. The problem is that logged_time always has many duplicate timestamps.
Hi, @user-bda2e6 - there have been some updates to the PsychoPy plugin recently that deal with clocks/synchronization. Could you try uninstalling and then re-installing the plugin so that you have the latest version? Let me know if there are still uncertainties after that
This is not a big problem since I can interpolate the timestamps myself. However, when there are duplicate timestamps in adjacent rows, which row corresponds to the actual moment that timestamp occurs, the first appearance or the last?
This question: "when there are duplicate timestamps in adjacent rows, which row corresponds to the actual moment that timestamp occurs, the first appearance or the last?"
Again thank you so much for the help
Hi, I'm facing an issue with empty csv files after using the Reference Image Mapper and haven't found a solution through search here. Here's what I have: a recording (with special events) that lasts 1.27 min, "enrichments" successfully identified fixation points on reference image (visually), but 1) I can't visualize the heatmap (it seems there are no points, so I see only the reference image) 2) all csv files are empty. Where could the problem be?
Hi @user-12efb7! Could you please open a ticket in our troubleshooting channel and we'll assist you there in a private chat?
Hi, I am planning a new experiment using neon pupil labs and I am considering using it outside in the dark. Do the glasses work in the dark? Thank you
Hi @user-cc6fe4 , NeonNet works in complete darkness, thanks to the IR illuminators in the module. The scene camera's exposure can be adjusted to optimize it for dark conditions. It might be easiest to do a test or two and then check in with us again with how it works out.
Hello, I am using Neon during an experiment where participants interact with a collaborative robot in the construction of components. I need to log an event every time a participant looks at the robot (e.g., "look.robot.start" and "look.robot.stop"). Ideally, I would like to generate this event programmatically via the API. However, I am currently unable to reliably track this event, even on the Cloud.
I have tried creating a Reference Image Mapper of the scene and drawing an AOI on the robot, but I have encountered two main issues: 1) I am unsure whether this approach will work, as my AOI is dynamic rather than static. 2) I cannot verify this approach, since I can only see the AOI on the reference image and not on the video. I have also attempted to attach several AprilTags to the robot and create a Marker Mapper. However, in this case, I would need to create different surfaces for different parts of the robot (since, as the robot moves, some AprilTags may disappear while others appear), and in any case I do not see a way to create multiple surfaces within a single enrichment.
Would you be able to suggest a reliable solution for my problem? Specifically, I need to: 1) Accurately track when participants look at the cobot and log an event each time this occurs 2) (Ideally) Generate these events in real-time via API Thank you very much in advance for your help!
Hi @user-a09a75 , do you know the position of the robot and the articulation/pose of its joints over time?
Hi general question: after we finish recording on the Neon for a study, what would be the best recommended method in terms of sharing the data - including the video, and all metrics collected - to other collaborators who might not have an account for pupil cloud? Any simple and accessible way people have used?
Hi @user-3ee243 , all of your colleagues are allowed to make free accounts on Pupil Cloud. Has this not worked for them? Then, you can invite them as Collaborators to that Workspace. That would be the easiest way and you would all have access to the same interface, analysis tools, and metrics.
Hello! Could somebody tell me whether it is possible to export the recorded scene camera video with the gaze overlay (the red circle) from the cloud? Whenever I choose any version of 'Download', the only video I get is the scene camera video without gaze overlay. Thank you!
Hi @user-b57ada , sure - you'll want to give the Video Renderer visualization a try. Let us know if you need more info.
Hi Pupil Labs, I am purchasing the frame "Every move you make" (ID: 20250311035037)
Is the CAD design available? Or other design specs? Do you have an estimation of the accuracy of the physical tool? Or even (if it is possible to export/import it) the rigid body definition from the OptiTrack software?
Thank you,
Agostino
Hi @user-97997c! We don't really have this information; the marker cluster geometries and their pose with respect to Neon's scene camera will need to be established by the user in motion capture coordinates. That said, could you share more details about your proposed use case with the Every Move You Make frame and OptiTrack?
Hello.
I'm parsing data from neon companion + pupil cloud.
I want to obtain the average position of each fixation for each eye separately. To do this, I am analyzing the eye data for each fixation, obtained from 3d_eye_states.csv. This file provides the position of each eye separately (eye) and its unit gaze vector (v).
With this, I can make a projection, for example, at a distance of 1000mm and obtain a distant point in the gaze direction of each eye. It would be something like:
objectPoints = eye + 1000 * v
I then project this point back to screen coordinates using:
cv::projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, imagePoints);
For rvec and tvec, I am using null matrices since I understand that the eye positions are already given in camera coordinates, so no translation or rotation is needed.
I do this for all data lines corresponding to the fixation. In the end, I compute the average of the obtained pixel points.
This gives me the average fixation point for each eye, in pixels.
However, these points are not symmetric with respect to the fixation coordinates stored in fixations.csv. I expected them to be symmetric. They are close, but not symmetric.
Am I missing something in the projection, or have I misunderstood something?
Additionally, if I do something similar using the azimuth and elevation of the fixation from fixations.csv, calculating a distant point from the camera using polar coordinates and projecting it with cv::projectPoints, I do obtain the coordinates indicated as the fixation center in fixations.csv.
What is happening here? Am I doing something incorrectly?
Thank you
Hi @user-cbf227! Thanks for your question. A clarification is needed which should help clear things up.
The eye state measurements you're referring to characterise the optical axis of each eye, represented as a vector originating at the centre of the eyeball and pointing towards the centre of the pupil, essentially characterising eyeball orientation. This is not the same as gaze.
Neon's gaze measurements are a separate data stream, represented as a 2D gaze point in scene camera coordinates, along with the elevation and azimuth of a gaze ray. It's important to note that the origin of the gaze ray is the scene camera, not the centre of each eyeball.
While both measurements (gaze and eye state) are provided by NeonNet, they are not directly comparable as they characterise slightly different properties of the visual system, and therefore, not symmetric with respect to the fixation coordinates.
If you want dual monocular fixations, you essentially have a few options:
1. Perform a person-specific calibration routine that generates dual monocular gaze, i.e., transform the optical axes to gaze/visual axes.
2. Wait for NeonNet to provide dual monocular gaze and binocular gaze estimates concurrently. We have it on the roadmap for NeonNet to output left eye, right eye, and binocular gaze at the same time. These will be available in the raw data, though I can't share a concrete timeline for the release.
3. It's also currently possible in the Companion app to toggle between left, right, and binocular gaze, although I appreciate it only allows you to save one at a time.
I hope this helps! Let me know if you have follow-up questions.
Hello! Is gaze data recorded by Neon saved onto a micro SD card on the companion device?
Hi @user-6952ca! Recordings are saved onto the Companion device's internal filesystem. Not an SD card.
Hello - a question ... Is it possible to view a recording immediately after completion on a connected laptop device, before it is uploaded to the pupil cloud? We'd like to allow a pitcher to view a windup and delivery immediately after the pitch is completed without a wait - thanks
Hi @user-796c70! It's of course possible to play back the recordings on the Companion device immediately for review. Have you already tried that / is the screen big enough? If not, there are a couple of options. Firstly, to answer your last question, yes, you can use the Monitor App to stream the eye tracking footage in real time. You can read more about that in this section of the docs. Secondly, it's possible to transfer your recordings to a laptop running Neon Player, our free desktop software. You can load the recording into Neon Player. This isn't immediate, as you need to transfer them, e.g. via USB, but it would be relatively quick and easy.
Also, if it's not possible to view it immediately - is it possible to send the live stream to another app via the neon.local:8080 port?
good morning, does anybody happen to know if and where I can set the detection threshold for fixations in neon player? thx in advance
Hi, @user-688acf - sorry, but it's not configurable. Although, the current version of Neon Player uses pl-rec-export
under the hood for fixation calculation. This tool doesn't directly expose any timing thresholds as configurable parameters, but it is open source, so one could tune it to their specific needs.
Hi!
I'm working with the Pupil Labs Neon glasses and the Pupil Labs Asynchronous API for streaming data. There's an issue with the framerate that isn't entirely clear from the API guide. While the gaze data is captured at 200Hz, the scene video (which we use to detect the screen in real time) is only recorded at 30Hz. The problem arises when combining these two streams: the recommended implementation syncs the gaze data to the scene video's framerate, meaning you lose the high-frequency gaze data.
I've been experimenting with a workaround to decouple the two streams (still a work in progress), but I'm wondering if there's a more elegant solution. Specifically, is it possible, by design, to overlay the gaze data onto the scene video while maintaining the full 200Hz framerate of the gaze data? Of course, I could have missed something in the documentation, please let me know if that is the case. Thank you!
Hi, @user-a7588f - I suppose you're using the receive_matched_scene_video_frame_and_gaze
function?
Instead, you can use device.receive_scene_video_frame()
and device.receive_gaze_datum()
to pull the data separately.
If called with no arguments, these functions will block until the requested data is ready, but you can specify a timeout_seconds
argument here to control how long the call will block waiting for data before giving up. Using a value of 0
will cause the function call to return immediately - either with the data (if it was available) or None
if it was not.
So something like this would get you 200Hz gaze with 30Hz scene video (depending on factors like the speed of your network, workload of the pc, etc):
... snip ...
gaze = None
scene_frame = None
while True:
    gaze = device.receive_gaze_datum(0) or gaze
    scene_frame = device.receive_scene_video_frame(0) or scene_frame
    if None in (gaze, scene_frame):
        time.sleep(0.5)
        continue

    frame = scene_frame.bgr_pixels
    cv2.circle(
        frame,
        (int(gaze.x), int(gaze.y)),
        radius=80,
        color=(0, 0, 255),
        thickness=15,
    )
    cv2.imshow("Scene camera with gaze overlay", frame)
    if cv2.pollKey() & 0xFF == 27:
        break
Thanks Neil - I'm looking to replay the captured video pretty much immediately, in slow motion, on a screen that is larger than the device running the Companion app. Dealing with athletes that are warmed up and can't wait for a USB transfer - they can wait 20-30 seconds, but not longer, before getting back to activity. I'd like to review their last play with them immediately after completion and before the next one.
Hello, I am wondering if anyone has run into a problem when trying to permanently delete videos from the drive - they don't delete?
Hi @user-21cddf! If the recordings you're trying to delete are currently part of an enrichment, that might be why you cannot delete them. This is a failsafe.
I am also wondering if the scanning recording for the Reference Image Mapper has to be from the same device as the videos, or from the same day? I've been having issues where an identical scanning recording does not create a good match for the model, even if the context, lighting, etc. are identical. I would also appreciate any tips on using the Reference Image Mapper in general :)
It should not matter. But so that we can provide more concrete feedback, would you be able to invite [email removed] to your workspace so we can take a look?
Hi there!
We're hesitating between upgrading our older Pupil Core (with the original eye camera on a single eye) to the newer binocular Pupil Core, or switching to a Neon. Could you please help me clarify a few things?
1/ Is it fair to say that compared to the Core, the Neon is actually a downgrade in terms of accuracy (reported as 1.3° with offset correction for the Neon, and 0.60° after calibration for the Core)? Is it the case that better resistance to slippage for the Neon somehow offsets the loss in accuracy in actual everyday use?
2/ Is it the case that the "Just act natural" frame would be the most suitable for in-lab studies in terms of generally fitting well to all head shapes? I think I understand the "better safe than sorry" frame fits more tightly and could be less suitable for some participants. What about the "Is this thing on" and "Nothing to see here" options - is there a meaningful difference in fit or in accuracy?
3/ Is it correct that I can buy the Neon with a "Just act natural" frame, add an empty additional "All fun and games" frame for cheap, and easily swap between the two?
4/ Is it correct that you have to use the Neon Companion app on a smartphone and cannot connect the Neon directly to a computer? I'm a bit confused about this. Our current setup with the Core is that we use PsychoPy to display auditory stimuli, and at key moments in the task we have PsychoPy send a notification to Pupil Remote which we then use to synchronize timestamps. How exactly could we make this work if we can't actually connect the Neon to the computer? (Note that participants in this study don't actually look at the computer so we can't even use the world camera to synchronize the two.)
5/ After ordering, what is the approximate delay to receive a Core vs. a Neon?
Thanks for all your help! :)
Hi @user-a4e164! Certainly, I'll respond to your points below:
1. No, not really. It is true that Core can get you very accurate data, but this requires very controlled conditions, e.g., head very stable, 2D calibration pipeline (not robust to slippage), short recordings and repeated calibrations, constant lighting, etc. In practical everyday use, Neon provides accuracy equal to or better than Core, and it is very robust to slippage. It's just much easier to obtain high-quality data over the course of testing. It also provides rich datastreams.
2. This really depends on the type of study. Because Neon is modular, it can be fitted into different frames best suited to a specific use case. The frame closest to Core is called 'Is This Thing On'. It has no peripheral visual field occlusion and works well for many lab studies. However, 'Better Safe Than Sorry' is also an option. Both of these frames are more snug-fitting than Just Act Natural. There is no difference in accuracy across frames; the same module is used.
3. Correct!
4. Also correct. We actually have a Neon-PsychoPy integration. Neon connects to PsychoPy over a local network; you can use a tethered connection via a USB hub and Ethernet for low-latency communication. But if you just want to send events to Neon, this is possible via our real-time API (see the sketch below).
5. Our target fulfilment is within 10 days + 3 days shipping for both Core and Neon. However, we can provide a more concrete estimate when generating a quote.
If you'd like, we can also arrange a demo and Q&A session via video call. Just let me know, and I can send a link!
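As a quick illustration of point 4, here is a minimal sketch using the simple flavour of the real-time API to send an event from the stimulus PC; the event name is just a placeholder:

# Minimal sketch, assuming the pupil-labs-realtime-api package and that the
# Companion device is reachable on the local network. The event name is a placeholder.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()          # find the Neon Companion device on the network
device.recording_start()                # optional: start a recording remotely
device.send_event("stimulus_onset")     # event is timestamped on the device's clock
device.close()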
Hi @user-a09a75 , glad to hear it worked out with the gaze overlay!
All of the datastreams from Neon are timestamped from the same high-precision clock, so all of Neon's data are synchronized by default. The Native Recording Data and Pupil Cloud's Timeseries Data are in principle the same, so the approach is equally valid for both formats. Pupil Cloud re-processes gaze at 200Hz and computes some additional streams
The pl-neon-recording library used in @user-d407c1's YOLO example is a Python interface to work with the open data saved on the phone. Internally, it uses pyav and interpolates when sampling streams.
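As a rough sketch of what that looks like in code (attribute and method names may differ slightly between library versions, so treat this as illustrative only):

# Rough sketch, assuming pl-neon-recording; names may differ between versions.
import pupil_labs.neon_recording as nr

rec = nr.load("/path/to/native/recording")   # folder copied from the phone
gaze = rec.gaze                              # ~200 Hz gaze stream
scene_ts = rec.scene.ts                      # scene camera timestamps (~30 Hz)

# Sample the gaze stream at the scene camera timestamps
for sample in gaze.sample(scene_ts):
    print(sample.ts, sample.x, sample.y)     # field names assumed for illustration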
Regarding the LSL synchronization, when Stream to LSL
is enabled and you send an Event, then it is automatically saved in the Neon recording timeline, as well as LSL's. It sounds like you are already doing this
Note that converting to different timezones does not change approaches to synchronization. It can even obscure that process. Timezone conversion changes how the timestamps are visually represented for display purposes. For accuracy, it should be easier to stick with the default formats and save the timezone conversion for a final step
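To illustrate with a minimal sketch (the timestamp value here is made up): Neon timestamps are absolute UTC nanoseconds, and a timezone conversion only changes how that instant is displayed:

# Minimal sketch: the timestamp value is made up. Converting to CET changes only
# the displayed representation, not the underlying instant.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ts_ns = 1_700_000_000_123_456_789                       # hypothetical Neon timestamp [ns since epoch, UTC]
instant = datetime.fromtimestamp(ts_ns / 1e9, tz=timezone.utc)

print(instant)                                          # UTC display
print(instant.astimezone(ZoneInfo("Europe/Berlin")))    # same instant shown in CET/CEST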
Since your analysis is already done post-hoc, I'd personally stick with doing it like that. I'd assume it is easier than migrating to a real-time implementation
If you want to apply YOLO or similar real-time models to your situation, I'd recommend checking out one of our Support Packages. Then, we can dedicate resources to assist with code & implementation. Briefly, whenever gaze is on an object that YOLO has detected, you can send an Event
OK, everything is clear! I just want to stress a little bit the issue of post-hoc event alignment. So, right now, the procedure I'm following consists of:
1) Sending events to Neon with "stream to LSL" enabled, so they are recorded both in the Neon timeline and in the LSL timeline. These include default events like record begin, record end, and experimental session start and end.
- When downloaded from the Neon Cloud, these events have timestamps in UTC.
- When recorded via LSL, the timestamps are in the LSL time domain, which (if I understand correctly) are timestamps relative to when LabRecorder was started.
2) Synchronizing post-hoc the other events (gaze on cobot or not) in the LSL timeline. This is the tricky part because these events only reference the Neon timeline and their timestamps are only in UTC.
So, why I converted Neon timestamps from UTC to CET: Since LSL XDF files include metadata that provides the recording start time in CET, I can use this as a reference and align all timestamps in the recording relative to that start timestamp. This is why, for convenience, I also converted Neon timestamps from UTC to CET, allowing me to compare and align LSL and Neon timestamps more easily.
I hope this explanation clarifies my approach! If you have any suggestions for a better or more direct method, I'd love to hear them!
Motion Capture w. Neon
Reference Image Mapper
Hello! I'm using Pupil Neon for the first time, and I'm encountering an issue where there's an offset between where I'm actually looking and where my gaze is being detected. I've performed manual offset correction and inter-eye distance measurement in the app, but there's still some offset remaining in certain areas.
https://www.youtube.com/watch?v=08z9t3BZ7OI The video is a screen capture of the recording from the app. I'm actually looking at the position of the white circle. As you can see in the last part, there's still a significant offset remaining in the left edge area of the screen.
Is there any way to address this issue? Thank you!
Hi @user-2d96f8! May I ask, are you wearing your own spectacles under Neon? Can you share a screenshot of your eye images so we can better understand what's happening?
Feel free to open a ticket in the troubleshooting channel to share that image.
Hi! I was wondering if any of you could point me towards where I can find the script to convert raw Pupil Lab data into csv files
Hi @user-e13d09! To convert from Native Recording Format to CSV files locally, you can use pl-rec-export.
Let me know if you need help with it.
Regarding my earlier question: I'm testing the same video clip with 5 people. Is it possible to aggregate the gaze points of all subjects in the SAME frame of analysis?
I would love to see the distribution of attention at a certain frame.
Please kindly advise.
Hi @user-afb0c1 , are you using Pupil Cloud?
Hi, It would be really helpful if we could develop with the realtime API without needing physical devices. Are there any mocks or simulations available to support this kind of development?
Hi @user-2d96f8 , we currently do not offer something like that, but perhaps members of the community have created it.
You could approach this by taking the data from a Neon recording and streaming it over the network from a local server. You would need to simulate at least some of the functions described in the Under the Hood section of the Real-time API docs. You would probably also want to simulate the sampling rates of the different datastreams.
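As a very rough illustration of that idea, and with all names here being hypothetical rather than part of any Pupil Labs package, you could replay previously recorded data behind an object that mimics the blocking receive_* methods of the simple API:

# Very rough sketch; MockDevice and its data source are hypothetical and only
# mimic the shape of the simple API's receive_* methods for offline development.
import time
from types import SimpleNamespace

class MockDevice:
    def __init__(self, gaze_rows, rate_hz=200):
        self._gaze = iter(gaze_rows)      # e.g. (x, y, t) rows loaded from a gaze.csv
        self._period = 1.0 / rate_hz

    def receive_gaze_datum(self, timeout_seconds=None):
        time.sleep(self._period)          # crudely simulate the 200 Hz sampling rate
        row = next(self._gaze, None)
        if row is None:
            return None
        return SimpleNamespace(x=row[0], y=row[1], timestamp_unix_seconds=row[2])

# Usage: device = MockDevice([(800.0, 600.0, 0.0), (801.5, 598.2, 0.005)])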
I'm not sure, though, if it would be the most robust way to develop, since there is always a chance of discrepancy from the official version running in the Neon Companion app, and any time the app is updated, the mock may similarly need to be updated.
May I ask what your use case is?
Hello! I have been attempting to run Face Mapper data overnight, but Pupil Cloud has been stuck in the "Processing" stage for over 10 hours. I wanted to check if the website is currently experiencing issues. Is there anything I can do to resolve this and get it working again?
Thank you for your assistance.
Hello, same for me with both the Face Mapper and the Marker Mapper; the enrichments have been "Processing" for 24 hours now. Thanks for your assistance.
I can confirm that I am also experiencing the same issue with pupil cloud as @user-fea107 and @user-edb34b. I have been trying to run a RIM enrichment and it has also been stuck on the processing stage for almost 24h.
Hi @user-a09f5d , @user-fea107 , @user-edb34b!
Cloud is currently under an exceptionally high load, which might result in longer enrichment processing times.
First of all apologies, as you may experience some delays until results are available. The team is actively working to speed things up and prevent this from happening in the future.
We appreciate your patience !
Hi, I am working with AOI gaze data to detect what percentage of gaze data is not detected by the Pupil Cloud enrichment and the Neon glasses. Is there a way to calculate that? I have tried looking at the data where gaze position on surface x and y are empty, but that was not the case in any of the raw data. Is there a better way to detect the gaze coverage percentage of the eye-tracking system?
Hi @user-665dc2 , since you mention on surface, you are looking at the gaze.csv file from the Marker Mapper Enrichment, correct?
To be sure I can provide the best answer, may I ask for more clarification about what is meant by gaze coverage percentage and gaze not detected?
Is there a way to realize when the system failed to detect gaze due to low light, not detecting the april markers, or other external/internal disturbances? for example.
Hi!
I'm looking into timestamps for the eyetracker's world video. It seems like world_timestamps.csv
has a different length compared to the timestamps obtained from neon_recording.scene.ts
(world_timestamps is longer for most of my recordings). Upon closer inspection, it also seems like their values can differ substantially when referring to the same frame in the video. I was wondering if this is expected or if I'm missing something? Any help is appreciated, thanks in advance!
Hi @user-41d16e! Some sensors can have a slight startup delay. Rather than discarding that early data, in Cloud we fill the scene camera feed with gray/empty frames; this ensures no data is lost. pl-neon-recording will not generate those empty frames by default, and its timestamps start at the beginning of the scene camera feed.
Is this what you are observing? Or are the timestamps completely different? If you share some example data (through the troubleshooting channel), we can have a look.
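If you want to check whether that padding explains the difference, a small sketch like this compares the two sets of timestamps (assuming pandas and pl-neon-recording; the "timestamp [ns]" column name is taken from the Cloud export and may need adjusting):

# Small sketch, assuming pandas and pl-neon-recording; column/attribute names
# may need adjusting depending on the export and library version.
import pandas as pd
import pupil_labs.neon_recording as nr

cloud_ts = pd.read_csv("world_timestamps.csv")["timestamp [ns]"].to_numpy()
rec = nr.load("/path/to/native/recording")
native_ts = rec.scene.ts

print(len(cloud_ts), len(native_ts))   # Cloud is typically longer due to padded frames
print(cloud_ts[:5])                    # padded/gray frames appear at the start
print(native_ts[:5])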
I don't think that's what I want to do.
Imagine that we have 4 people who see the same video clip of 1 min 30 sec (separately, one by one).
Then I would love to see the distribution of fixations of all 4 people at a certain time.
Could we do it on a PupilLab?
Hi @user-afb0c1 , I see. So, something like a beeswarm visualization? While that is currently not possible on Pupil Cloud, you could build on the code in the Scanpath Alpha Lab guide to do this.
Otherwise, feel free to make a request in the features-requests channel!
I meant, on Pupil Cloud.
@user-f43a29
Hi, we are having an issue with our recordings. We are recording two 30-minute sessions. While everything works perfectly fine for some recordings, others (and we could not find an error on our end) do not have any events in the recordings. This is most often, but not always, the first of the two 30-minute recordings. Specifically, we only get the events 'recording.start' or 'recording.end', but none of the synchronization triggers sent via LSL.
Hi @user-98b2a9 , could you open a support ticket in the troubleshooting channel? We will follow up there.
Sure, Thanks!
Hi all, for the IMU in the neon, are the X, Y and Z directions what you would typically expect? X is subject's left-right / medial-distal, Y is up and down towards the ceiling or floor / superior-inferior, Z is forward-back / anterior-posterior?
Hi @user-24f54b , the IMU's axes are documented & diagrammed here. You might also be interested in our IMU Transformations Alpha Lab guide.
Hey there, was wondering if anyone had undertaken a project that involved gaze-contingent displays/stimuli that respond to the eyetracking data in real time. I know the latency of such a system would depend on one's own network, since the coordinates are transmitted wirelessly, but I was hoping somebody could share what latency they were able to achieve with this type of setup?
also if this is not the best channel to send this question, would much appreciate being redirected, thanks!
Hi @user-4a0c14! Have you seen our Build Gaze-Contingent Assistive Applications Alpha Lab article? Not only can you see an example there, but the code is open source, so you can run it locally and test what latency you see in your own setup.