Okay. I will report here again if the app still works funky after we replace the module with the new one. Please keep me updated on the surface issue!
Hey, @user-594678 - does the gaze circle in the recording follow your expectations, or does it take unusual paths like the lines in the marker mapper?
Hi Pupil team! How can I get the coordinate values from the Cloud?
Hi @user-2eeddd! By coordinates, do you mean the gaze data in x,y coordinates in scene camera space? If so, you can download the raw data by right-clicking on the recording > Download > Timeseries Data. This will download a folder that includes several csv files - among them a gaze.csv file that has the gaze x [px] and gaze y [px] values, representing the x/y coordinates of the mapped gaze point in world camera pixel coordinates. See also our documentation for more details on the exported data: https://docs.pupil-labs.com/export-formats/recording-data/neon/#gaze-csv
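If you'd rather grab those values programmatically, here's a minimal sketch using pandas (the folder path is just an example; the column names follow the export format docs linked above):

```python
# Minimal sketch: load the exported gaze.csv and pull the scene-camera
# pixel coordinates. Column names follow the Neon export format docs;
# the file path is just an example.
import pandas as pd

gaze = pd.read_csv("Timeseries Data/gaze.csv")

print(gaze[["timestamp [ns]", "gaze x [px]", "gaze y [px]"]].head())
```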
Hey Pupil Labs team, I'd like to have an offline version of the way Pupil Cloud shows the video footage with the gaze data. Is it possible to download this video, or is there a script that I may use to add gaze to the video? Thank you in advance for your response.
Hey again Pupil labs team, I think my inquiry got covered by the latest questions. Could you please let me know about this? Thanks.
Hello Pupil team. I'm trying to find the gaze direction on the real-time API. Is there a way to obtain this or the depth of gaze (coordinate z)?
Hi @user-58da9f! As of today, this is not possible. NeonNet leverages both eye images to give a gaze estimate and has no notion of an eye model. By the end of the month, we will offer pupillometry in the Cloud using a new neural network, which will also output eye state.
Later in the year, we will try to implement this new neural network directly on the phone, so you can have eye state and pupillometry in real time. Thanks to that, you will be able to estimate the Z component based on the eyes' convergence.
Kindly note that estimating depth from convergence might not be the most accurate approach, especially when looking farther than a meter away. If you require highly accurate depth, my recommendation would be to use an RGB-D sensor and synchronise it with Neon. If you would like, I can expand further on this approach.
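Just to illustrate the geometry (this is not a Pupil Labs API, and all numbers below are hypothetical): once per-eye gaze rays are available from eye state, a depth estimate from convergence essentially comes down to intersecting the two rays, e.g.:

```python
# Illustrative sketch only: estimate gaze depth from the convergence of two
# eye rays (eye centres + gaze directions) by finding the midpoint of the
# shortest segment between the rays. All inputs here are hypothetical.
import numpy as np

def vergence_point(c_left, d_left, c_right, d_right):
    """Midpoint of the shortest segment between two rays c + t*d."""
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    w0 = c_left - c_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    d, e = d_left @ w0, d_right @ w0
    denom = a * c - b * b
    if np.isclose(denom, 0.0):      # near-parallel rays: depth is unreliable
        return None
    t_left = (b * e - c * d) / denom
    t_right = (a * e - b * d) / denom
    return (c_left + t_left * d_left + c_right + t_right * d_right) / 2.0

# Hypothetical example: eye centres ~63 mm apart, both looking slightly inward
point = vergence_point(np.array([-31.5, 0, 0]), np.array([0.03, 0, 1.0]),
                       np.array([31.5, 0, 0]), np.array([-0.03, 0, 1.0]))
print(point)  # the z component (~1050 mm here) is the estimated depth
```

As noted above, small angular errors translate into large depth errors beyond roughly a meter, which is why an RGB-D sensor is the more robust option for accurate depth.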
Hello! Does Neon have the Prescription Lens Kit similar to the Invisible?
Hi @user-5543ca! Yes! We do have a prescription lens kit. Both the bundle and the "I can see clearly now" frame include hot-swappable prescription lenses that attach to the frame magnetically. This allows you to quickly exchange the lenses between participants, or remove them if participants do not need them or wear contact lenses. The lens kit includes lenses from -3 dpt to +3 dpt in steps of 0.5 dpt. We also offer an optional extended range if you need more diopters.
Is it being sold as a bundle ("I can see clearly now") together with the prescription lens kit?
Thank you, Miguel
Hello! I was wondering if you would recommend merging the blink dataframe with the gaze dataframe in an attempt to get a more consistent number of gaze outputs across people? There just seems to be a huge inconsistency across people even when shown the same stimulus.
Hi, @user-328c63 - can you tell us a little more about the inconsistency you're seeing?
Hi, I have a few questions regarding Neon before purchasing that I hope you can help me with: 1) Does the real-time Python API use the deep-learning gaze estimation? If not, then what is it doing? Something similar to Invisible? 2) Is accelerometer and IMU data accessible through the Python API? And what about pupil data once that is ready?
Hi, @user-648ceb - some responses to your questions 1) Yes, gaze data provided over the real-time API does use a deep-learning approach 2) IMU data and device orientation is available with the Python API
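If it helps, here's a minimal sketch of grabbing a gaze sample with the real-time Python API (pupil-labs-realtime-api, simple interface). Check the API docs for the full set of streams and fields available on your version:

```python
# Minimal sketch using the pupil-labs-realtime-api package (simple interface).
# discover_one_device and receive_gaze_datum are part of the documented API;
# see the docs for other streams (scene video, IMU, etc.).
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()        # finds the Neon Companion on the local network
print(f"Connected to {device}")

gaze = device.receive_gaze_datum()    # blocks until the next gaze sample arrives
print(gaze.x, gaze.y, gaze.timestamp_unix_seconds)

device.close()
```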
Currently, pupillometry is being developed as a feature in the Cloud. By the end of Q3/Q4, you will be able to obtain pupil size estimations post hoc. Our next step will be to implement pupillometry on the Companion Device, allowing real-time measurement. This advancement will enable you to monitor your participants' pupil diameter changes as they respond to different stimuli, such as conversations or exercises. However, we cannot provide an exact timeframe for real-time measurements; it is unlikely that this feature will be available before the end of this year.
Hi, @user-a64a96 - sorry about that! Pupil Cloud offers the "Gaze Overlay" visualization. If you run that on the recordings in your project, you'll then be able to download scene videos with the gaze circle displayed on top of it
To create the Gaze Overlay visualization:
1. Open your project in Pupil Cloud
2. Click on "Analysis" on the bottom-left
3. Click "New Visualization" at the top
4. Select "Video Renderer"
5. Make sure "Show Gaze" is checked (on by default). Edit other settings to your preferences
6. Click "Run" at the top
Once the video renderer is done processing, you can right-click on the item in the list of visualizations on the analysis screen and click "Download"
Hi Pupil Labs team, we do not get the "Gaze Overlay" as an option in our Pupil Cloud. Is there a reason for this?
It's not listed as "Gaze Overlay" anymore. Rather, it's a "Video Renderer" visualization with a "Show Gaze" option.
If you follow the steps I outlined in the message just above yours though, you should be able to find it
The "bounding box" seems to change like this. I had orignally thought that if i define the QR codes at the beginning of the stimulus, it should hold until the end since they are constantly in view.
Hey @user-328c63 Once you've defined the surface at the beginning of the recording, it should remain consistent throughout, as long as at least three markers are correctly detected by the enrichment. However, it appears from the screenshot that only one marker was detected. I noticed that the markers lack a sufficiently large white border around them, which is essential for accurate detection. It could be worth printing larger markers and strategically placing additional ones around the projector area. This approach will help ensure that an adequate number of markers remain detectable throughout the entire recording.
Additionally, please ensure that the scene is adequately illuminated and try to avoid sudden movements that may lead to motion blur.
Please take a look here for further information about proper marker setup (https://docs.pupil-labs.com/enrichments/marker-mapper/#setup)
Hi, I'm wondering which data types are used as inputs for the gaze estimation neural network implemented in Neon to generate gaze.csv. Does the network only use binocular IR camera videos as inputs, or does it also use other types of data (e.g., scene camera) as inputs for better estimation?
Hi @user-299825! The estimation is done using only binocular pairs of eye images to produce the raw gaze signal (and blinks too).
For fixations, the scene video is used too, to compensate for head movements due to the vestibulo-ocular reflex (VOR).
Hi, we found in our data set that gaze data and blinks overlap each other in terms of time, which is somewhat counterintuitive because there is no source of information with closed eyes. I suppose this is because the gaze estimation is made continuously, even when the source of information is temporarily missing. If this is the case, can we properly interpret the gaze values as estimated gaze directions "behind the closed eyelids"? Additionally, it would be appreciated if you could inform us of the start and end criteria of blinks (e.g. touch of the upper and lower eyelid, or x% of the eyeball covered by the eyelids, etc.) so that we can interpret the results more thoroughly!
Hi @user-44d9d4! It's accurate to say that raw gaze data may contain unexpected coordinates during blink events. You can filter these out using the detected blinks' start and end timestamps, which are reported in the blinks.csv export.
However, it's important to understand the strengths and limitations of the blink detector. For instance, under typical viewing conditions, it performs well. That is to say, the start and end timestamps can be considered relatively accurate in correlating with the actual start and end of a blink. That said, in activities that trigger very dynamic eye and head movements, such as football, we've observed false positive blink detections due to the way (I expect) the blink detector leverages optical flow patterns in its computation.
Currently, in Cloud, there's no definitive method to verify the quality of the blink detection result. But soon, we will introduce an eye overlay visualisation that should significantly aid manual quality control.
You can find a much more detailed overview of the blink detector in this whitepaper, which should help you interpret the results better: https://docs.google.com/document/d/1JLBhC7fmBr6BR59IT3cWgYyqiaM8HLpFxv5KImrN-qE/export?format=pdf
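If you do decide to filter the raw gaze during blinks, a rough sketch with pandas could look like this (column names are taken from the export format docs; adjust if your export differs):

```python
# Rough sketch: drop gaze samples that fall inside any detected blink interval,
# using the timestamps exported in gaze.csv and blinks.csv.
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
blinks = pd.read_csv("blinks.csv")

ts = gaze["timestamp [ns]"].to_numpy()
in_blink = np.zeros(len(gaze), dtype=bool)
for _, blink in blinks.iterrows():
    in_blink |= (ts >= blink["start timestamp [ns]"]) & (ts <= blink["end timestamp [ns]"])

gaze_clean = gaze[~in_blink]
print(f"Removed {in_blink.sum()} of {len(gaze)} gaze samples during blinks")
```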
Hi Pupil Labs, I was recently trying the Neon Monitor and found that the stream continues even after quitting the Companion app. Is the stream not related to the Companion app? How can I trigger the stream on/off then?
It runs in the background. If you want to stop everything, you can long-press on the app icon, click on information, and then select "Force Stop"
Hi @marc I appreciate your helpful reply! Just to make sure I understand correctly, the network can do "calibration" implicitly for the raw gaze estimation only using IR eye image pairs, as it does not need to solve correspondences between the world and the eyes (as normal camera calibration does) - since the relative positions of the world cam and the eye cams are fixed within a device, and the network learned the correspondences under this configuration. Do I understand it correctly?
I have one more follow-up question: is there any temporal integration in the gaze estimation process? In other words, is the network doing the mapping (IR_cam1_t, IR_cam2_t) -> gaze_t for each time point t independently, without using data at adjacent time points, say (t-1) or (t+1)? I'm curious about this because I'm analyzing the autocorrelations of the gaze data.
Yes that understanding is correct. The fixed camera geometry is indeed one prerequisite for calibration-free gaze estimation.
The network itself does not integrate temporally, but makes predictions independently for every pair of eye images. A slight smoothing is applied to the signal though, to compensate a bit for the fluctuations that are typical of ML models. This is done using a so-called 1-Euro filter, which should in theory smooth out fixations a bit while leaving saccade movements mostly untouched.
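For reference, here is a generic, self-contained sketch of a 1-Euro filter (Casiez et al., 2012). It's only meant to illustrate the idea; it is not the exact code or parameters used in our pipeline:

```python
# Illustrative 1-Euro filter: the cutoff frequency adapts to the signal's speed,
# so slow segments (fixations) get smoothed while fast segments (saccades) pass
# through mostly untouched. Parameters are placeholders, not Neon's values.
import math

class OneEuroFilter:
    def __init__(self, min_cutoff=1.0, beta=0.01, d_cutoff=1.0):
        self.min_cutoff, self.beta, self.d_cutoff = min_cutoff, beta, d_cutoff
        self.x_prev = None
        self.dx_prev = 0.0
        self.t_prev = None

    @staticmethod
    def _alpha(cutoff, dt):
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / dt)

    def __call__(self, x, t):
        if self.x_prev is None:
            self.x_prev, self.t_prev = x, t
            return x
        dt = t - self.t_prev
        # filtered derivative of the signal
        dx = (x - self.x_prev) / dt
        a_d = self._alpha(self.d_cutoff, dt)
        dx_hat = a_d * dx + (1 - a_d) * self.dx_prev
        # cutoff rises with speed, so fast movements are barely smoothed
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff, dt)
        x_hat = a * x + (1 - a) * self.x_prev
        self.x_prev, self.dx_prev, self.t_prev = x_hat, dx_hat, t
        return x_hat
```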
Hi Pupil Labs,
I have a few questions. 1. What does the timestamp in gaze.csv refer to, and what standard does it use? 2. What coordinate system is used in gaze.csv? 3. In fixations.csv there are start and end times - what is the unit, and what point in time are they measured from?
Hi, @user-be0f3e - please see this page (https://docs.pupil-labs.com/export-formats/recording-data/neon/#gaze-csv) for answers to these questions and more
@user-648ceb - please note my updated response to your question regarding the availability of pupillometry data over the realtime API: https://discord.com/channels/285728493612957698/1047111711230009405/1149297253257859144
Hi Pupil team! I was wondering whether you know about a mismatch between the number of frame images and the number of world timestamps. I have a recording whose video has 3026 timestamps (in world_timestamps.csv). When I pulled the video frames using OpenCV, the frame count was correctly 3026, but then I realized that the recording actually ends at frame number 1975 (which is correctly the end scene of the recording). From frame 1976 to 3026, the frame images could not be retrieved. I checked that my other recordings also show the same problem, although the gap between the two numbers is not as huge. For example, in another recording, the number of world timestamps and the number of frames is 2490, but frames could not be retrieved from frame 2035 onward. Any suggestions for how to fix this issue?
Hello. I see that the "worn" algorithm is not yet ported over to Neon. Is there another value we can use in its place to know when the system detects the presence (or lack thereof) of eyes in the eye cameras? It would be valuable for us to know this in real time. Thanks!
Hi @user-44c93c! You are correct in noting that the worn value for Neon is not yet available. We are in the process of implementing it, but it is unlikely to be ready within this year. Additionally, there is no turnkey solution to detect the presence (or absence) of eyes in the eye cameras' view in real-time. One possibility would be to do some post-processing of the raw camera streams, which can be accessed through our real-time API: https://docs.pupil-labs.com/neon/real-time-api/introduction/#streaming-eye-video.
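As a purely illustrative example of such post-processing (not a Pupil Labs feature, and the threshold values are arbitrary placeholders that would need tuning on real footage), you could run a simple dark-pupil heuristic on each eye frame you receive:

```python
# Rough post-processing idea (not a built-in feature): decide whether an eye is
# likely present in an eye-camera frame via a crude dark-pupil heuristic.
# Thresholds are arbitrary placeholders; tune them on your own recordings.
import cv2
import numpy as np

def eye_likely_present(eye_frame_gray, dark_thresh=60, min_dark_fraction=0.005):
    """Return True if the frame contains a plausible dark pupil region."""
    dark = eye_frame_gray < dark_thresh
    dark_fraction = np.count_nonzero(dark) / dark.size
    # frames with no eye tend to show either no dark blob at all or an almost
    # entirely dark image (e.g. camera covered), so accept a middle range only
    return min_dark_fraction < dark_fraction < 0.5

frame = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)  # example input image
print(eye_likely_present(frame))
```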
Hi @user-594678! OpenCV is known to drop frames and so that would explain the mismatch you're seeing. Is there a reason why you're trying to grab frames with OpenCV?
Hi Neil, thanks for answering. I wanted to find some object positions in the frames in a free-viewing condition. Are there any other methods that you'd recommend to load video frames?
Hey Pupil Labs team, I have a question concerning the surface positions, overlapping surfaces, surface warping, and how gaze coordinates are converted from 3D space into 2D space. Is there someone I can contact?
Hi @user-07e923! Apologies for the delay answering; the message totally slipped past my radar. We can answer you here, or if you prefer, we can answer by email through [email removed]
Do you have a concrete question or would you like a general overview of how this is done?
Hey @nmt, I did some more research and found this library called "decord", and with that all frames are correctly retrieved! I'm leaving the message here so that other users can find it if they encounter the same issue
Thanks for sharing that! We often use pyav (https://pyav.org/docs/stable/), but decord (https://github.com/dmlc/decord) looks like it has good performance metrics. I'll be checking it out myself
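For anyone else reading along, a minimal pyav sketch for pulling frames and their timestamps could look like this (the file name is just an example):

```python
# Minimal sketch of frame extraction with pyav, which tends to be more robust
# than OpenCV's VideoCapture for these recordings. File name is an example.
import av

with av.open("scene_video.mp4") as container:
    stream = container.streams.video[0]
    for i, frame in enumerate(container.decode(stream)):
        img = frame.to_ndarray(format="bgr24")  # numpy array, ready for OpenCV etc.
        # frame.pts * stream.time_base gives the presentation time in seconds
        if i == 0:
            print(img.shape, float(frame.pts * stream.time_base))
```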
Hi team again! I was wondering whether there is a way that I can easily convert the gaze points in pixels to visual angle (are you planning on developing this feature by any chance?). I don't know much about azimuth and elevation, but those values do not seem to be the same as the visual angle. I also tried to compute the visual angle from the gaze pixels based on the scene camera's field of view, but it looks like I need to do some non-linear computation to correctly get the values. Do you have any suggestions that I could try?
Hi @user-594678! Azimuth and elevation are gaze angles in degrees from a cyclopean eye that has its origin at the scene camera. Are you looking for the kappa angle? To provide an angle like that (i.e., how much a gaze point subtends from the optical axis), one needs to know the location of the eyeball's centre, or an estimation of it. We plan to include new features such as pupillometry and eye state estimation soon. In fact, by the end of September, these features will be available on the Cloud, and with the help of the eye state estimation, you will be able to get some sort of visual angle (depending on what you are looking for).
Hi @user-d407c1 Can I get an update on when the pupillometry features will be available on the Cloud?
Is there a way I can play the video frame-by-frame in Pupil Cloud or with another Pupil software?
Hi @user-8c6b7a! To add to the reply of @user-d407c1, you can find the full list of the shortcuts on the help menu of Pupil Cloud. You can find it in the top-right of the UI (the question mark icon) > Keyboard Shortcuts
Hi @user-8c6b7a ! In Cloud, if you press shift + the right or left arrow on your keyboard, you will move one scene frame (0.03s) back or forward.
Oh, I see. Is the maximum neon sampling rate 30 frames per second? Thanks!
Neon's scene camera has a sampling rate of 30 Hz, the eye cameras run at 200 Hz. I hope this helps!
just wanted to confirm that audio is just extracted via the audio channel in the exported scene video?
Hi @user-147316! Yes, the audio is stored in the scene video container.
Hi Pupil team. As I know, the origin is in the top-left corner of the image (gaze.csv). If the screen moves, does the origin move too? If so, is there a way to fix the origin without fixing the eye tracker?
Hi @user-855d20!
Gaze coordinates in Neon are given in the world camera space, in other words, relative to the glasses. To obtain the gaze coordinates on a monitor, or any other surface, you will need to use either the Marker Mapper or the Reference Image Mapper. These tools help you localise and transform the gaze coordinates from the scene camera to a defined surface or a reference image: https://docs.pupil-labs.com/enrichments/
Assuming you use the Marker Mapper, you will find a new gaze.csv (https://docs.pupil-labs.com/export-formats/enrichment-data/marker-mapper/) with the columns gaze position on surface x [normalized] and gaze position on surface y [normalized], which indicate the gaze coordinates in the screen/surface space.
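If you then want the gaze in screen pixels, a small sketch like this could work. The screen resolution is an example, and whether the y axis needs flipping depends on the surface's origin convention, so please verify that against your own setup and the docs:

```python
# Sketch: map the Marker Mapper's normalized surface coordinates to screen
# pixels, assuming the surface was defined to match a 1920x1080 monitor.
import pandas as pd

SCREEN_W, SCREEN_H = 1920, 1080  # example monitor resolution

gaze = pd.read_csv("gaze.csv")   # the enrichment's gaze export
gaze["screen x [px]"] = gaze["gaze position on surface x [normalized]"] * SCREEN_W
# flip y only if the surface origin is bottom-left; check your setup first
gaze["screen y [px]"] = (1 - gaze["gaze position on surface y [normalized]"]) * SCREEN_H
print(gaze[["screen x [px]", "screen y [px]"]].head())
```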
Hi,
I just did two consecutive recordings with Neon and I get an empty file from the imu.csv in the cloud, I recall this was an old error, but that you could pull the file manually and the data might be there. Is there such an alternative way to check?
Thanks,
Hi @user-b55ba6! Which version of the Neon App are you using? You can find this information in the info.json file of a recording, or in the Neon App Settings > About Neon Companion.
Hi Wee Kiat, apologies for the delay
Hi, is the Marker Mapper only available in the Neon Monitor app?
Hi @user-2eeddd! The Marker Mapper enrichment is available for both Neon and Invisible recordings, and you can access it only on Pupil Cloud. You can find more details about the Marker Mapper enrichment here: https://docs.pupil-labs.com/enrichments/marker-mapper/
Hi, what tool can I use to scan the QR code on Neon? Thank you
Hi @user-4bc389! Do you mean the QR code shown when clicking on Stream? Most phone cameras include QR readers nowadays; otherwise, you can directly use the URL written below it. If you mean the pseudo-QR code on the module, that is for internal reference.
Thank you for your reply. I would like to check the serial number for registration purposes, but although I have used various tools to scan QR codes, no information is displayed.
Hi, the code on the back of the module is the serial number, and it's encoded as a DataMatrix code (https://de.wikipedia.org/wiki/2D-Code#DataMatrix).
There are readers for this in the Play/App Store, for example QRbot.
An alternative way to get the serial number is to open the info pane in the Neon Companion App: Connect the Hardware, open the app and click on the 'i' icon.
Thanks
Hi @user-d407c1, thanks for your reply and sorry for my late response. I think azimuth and elevation are what I was looking for. Thanks for the clarification.
I have a follow-up question. I plotted the gaze in pixels together with azimuth, and found that they do not completely overlap. Do you know why this is the case and how I can fix it?
Hi @user-594678! Thanks for following up. Kindly note that the pixel-to-angle mapping is not exactly linear, due to the projection and distortion of the scene camera, which are non-linear. To better understand the non-linear distortions, please have a look at https://docs.pupil-labs.com/neon/how-tos/advance-analysis/undistort/
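If you want to do the conversion yourself, here's a rough sketch that undistorts a pixel with the scene camera intrinsics and then takes the angles of the back-projected ray. The scene_camera.json key names and the azimuth/elevation sign conventions here are assumptions on my side, so double-check them against the undistortion guide above:

```python
# Sketch: convert a scene-camera pixel to angles by undistorting it with the
# camera intrinsics and taking the angles of the resulting ray. File/key names
# and sign conventions are assumptions; verify against the docs.
import json
import numpy as np
import cv2

with open("scene_camera.json") as f:
    intrinsics = json.load(f)
K = np.array(intrinsics["camera_matrix"])
D = np.array(intrinsics["distortion_coefficients"])

def pixel_to_angles(px, py):
    pt = np.array([[[px, py]]], dtype=np.float64)
    (x, y), = cv2.undistortPoints(pt, K, D).reshape(-1, 2)  # normalized image coords
    ray = np.array([x, y, 1.0])
    azimuth = np.degrees(np.arctan2(ray[0], ray[2]))
    elevation = np.degrees(np.arctan2(-ray[1], np.linalg.norm(ray[[0, 2]])))
    return azimuth, elevation

print(pixel_to_angles(800, 600))  # example pixel
```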
Also, I wonder: if I directly converted pixel coordinates in a frame into a visual angle using the camera's field of view (132 x 81 degrees), would that be good to compare with the azimuth and elevation values? I want to express a point in a frame in visual angle and compare it with the gazes of an observer.
Hello Nadia. Thank you for your response. I am using version 2.6.33-prod. Attached is a pic of the .json file.
Thanks @user-b55ba6 - could you also send us an android.log.zip from one of your affected recordings, please, so we can inspect it? See also this message: https://discord.com/channels/285728493612957698/1047111711230009405/1141699539229749259
Hey, is it possible to apply the gaze offset correction after the recording has been uploaded to the cloud?
Hi @user-a0cf2a! This is currently not possible in Cloud; you will need to download the data and apply the correction yourself directly to the data. If you would like to see this function in Cloud, you can upvote it here: https://feedback.pupil-labs.com/pupil-cloud/p/post-hoc-offset-correction
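For example, applying a constant offset to the downloaded gaze.csv could look like this (the offset values are placeholders you would estimate yourself, e.g. from a validation recording):

```python
# Sketch: apply a constant gaze offset post hoc to the downloaded gaze.csv.
# The offset values are hypothetical placeholders.
import pandas as pd

OFFSET_X_PX, OFFSET_Y_PX = 12.0, -8.0   # correction in scene-camera pixels

gaze = pd.read_csv("gaze.csv")
gaze["gaze x [px]"] += OFFSET_X_PX
gaze["gaze y [px]"] += OFFSET_Y_PX
gaze.to_csv("gaze_offset_corrected.csv", index=False)
```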
I just sent an email with the attached file. Thanks!
Thanks @user-b55ba6! We'll have a look and let you know asap
Hi @user-480f4c. Attached in a new email is the log from a previous recording where the timestamp in the imu.csv does not change (time frozen), while the gaze data has a changing timestamp. What might be happening? The log I mentioned previously (empty imu data) came immediately after the one I am mentioning in this message.
Thanks @user-b55ba6! We got the email, we'll follow up asap!
Hi team! We are testing out how we can access Neon Monitor on a browser to monitor our data collection in real time, but we are having trouble getting it to work reliably. It will work on some computers but not others, or work in one instance but when we open another instance a minute later on the same computer it no longer shows up. We are doing internal testing to see what might be the winning combination, but I was wondering if you had some troubleshooting tips? I assume it has something to do with some combination of security or internet connection settings. Thank you!
Hi @user-ae76c9! Would you by any chance be using Eduroam or some kind of institutional network? You can also type the IP address into the browser rather than neon.local:8080, which might be less prone to errors if, for example, you have a custom DNS. Additionally, kindly note that some networks share the same SSID but are not the same local network.
Hi! I am conducting an experiment to observe eye movements after rotation. My computer is a Mac, and the glasses are Neon. I have a few questions regarding Neon. Q1: Is it necessary to connect Neon to a computer and use Pupil Capture to collect eye movement data such as pupil position data? If Neon is only connected to a mobile phone (Pupil Companion), can we only obtain a video of eye movement? Is that correct? Q2: How should I use the raw data exported from Pupil Cloud? I am unable to import it into Pupil Player, as it indicates that I need to update the version of Pupil Player, but I cannot find the installation package for the latest version.
Hi @user-d714ca! Thanks for reaching out.
a) The recommended workflow is to connect Neon to the Companion Device and use the Neon Companion App to collect data. Using the Neon App, you can leverage Neon's technology by collecting calibration-free and slippage-invariant gaze data thanks to Neon's deep learning approach. Recordings made with the Neon App are uploaded to Pupil Cloud (see here for Neon's ecosystem: https://docs.pupil-labs.com/neon/getting-started/understand-the-ecosystem/). You can get gaze, fixation, blink, and imu data. Pupillometry and eye state will be added in the upcoming weeks as well! You can have a look at the exported data here https://docs.pupil-labs.com/export-formats/recording-data/neon/ and the full list of available data streams here: https://docs.pupil-labs.com/neon/basic-concepts/data-streams/
b) Alternatively, if you use Neon with Pupil Capture, you essentially use Neon as if it were a Pupil Core headset (e.g., calibration is required). You can find details on the Neon-Capture combination here: https://docs.pupil-labs.com/neon/how-tos/data-collection/using-capture/ Following this approach, you will get the same data you would get with Pupil Core (e.g., https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter)
Hi Nadia!
You mentioned that pupillometry data will be added to Neon in the upcoming weeks. I wonder, will it be necessary to change any hardware to make it available, or is it just a software update? Thank you!
Hi @user-7556ec! Pupillometry and eye state will be added as a free software update when they become available. Therefore, no need for a hardware change
Thank you very much! It really helps!
Thank you for the suggestion! We are currently having an issue where Monitor will load once, but if we refresh the page or try a second time, it will no longer load. Do you have any idea why that might be? Thanks again!
Hi @user-ae76c9! Would you mind checking the network panel in your web browser's developer tools? This might provide some hints. If you've never looked at this panel, here's a brief 101: https://www.youtube.com/watch?v=e1gAyQuIFQo