πŸ‘“ neon



user-37a2bd 01 November, 2023, 10:18:28

Hi Team, I ran 6 RIMs for a project I am working on and the heatmaps got generated for 5 out of 6 of them. Can anyone help me identify why the heatmap wasn't generated for that particular RIM? Thanks.

user-ac085e 01 November, 2023, 19:15:49

We are still able to start the recording manually during this issue.

user-d714ca 02 November, 2023, 07:52:13

Hello! Has a new version of Neon Companion been released? I cannot update it through the store.

user-480f4c 02 November, 2023, 07:58:12

Hi @user-d714ca πŸ‘‹πŸ½ ! We recently released a new version of the Neon Companion App (2.7.4-prod). You should be able to download the latest one from Google Play Store. Please let us know if you run into any issues.

user-37a2bd 02 November, 2023, 07:54:59

Hey Team, the code for the scanpath image and video seems to be taking a very long time for one single RIM. Is there a workaround, or is this how it is going to be? I ran a single RIM which took 7 hours to complete in Python. The RIM had 5 viewers in total. Does anyone know how long it took for the demo project to run? I am talking about the example shown in the Scanpath Image and Video section of the Alpha Lab page.

user-c2d375 02 November, 2023, 08:23:31

Hi @user-37a2bd πŸ‘‹ the time it takes to compute the scanpath script can vary depending on the duration of the recordings included in the RIM. It's not uncommon for it to take a significant amount of time, especially if the RIM contains lengthy recordings. Additionally, the processing time can vary based on the computational resources of your computer.

user-37a2bd 02 November, 2023, 08:26:10

Does 7 hours sound about right?

user-670152 02 November, 2023, 10:46:42

Hi, how can I get saccade data (duration, amplitude, etc.)?

user-480f4c 02 November, 2023, 10:52:07

Hi @user-670152 πŸ‘‹πŸ½ ! Although we have not confirmed experimentally that Neon can capture saccades, saccades implicitly equate to the inter-fixation intervals (or gaps) between fixations from our fixation detector. For more details about our fixation detector, please see our documentation: https://docs.pupil-labs.com/neon/basic-concepts/data-streams/#fixations
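
Following that logic, a minimal sketch of deriving rough saccade parameters from the fixations.csv export. The column names follow the Neon time-series format but should be verified against your own files:

import pandas as pd
import numpy as np

fix = pd.read_csv("fixations.csv")

# Saccade duration: the gap between the end of one fixation and the start of the next
start = fix["start timestamp [ns]"].to_numpy()
end = fix["end timestamp [ns]"].to_numpy()
saccade_duration_ms = (start[1:] - end[:-1]) / 1e6

# Rough amplitude: shift between consecutive fixation directions.
# Euclidean distance in azimuth/elevation is a small-angle approximation.
az = fix["azimuth [deg]"].to_numpy()
el = fix["elevation [deg]"].to_numpy()
saccade_amplitude_deg = np.hypot(np.diff(az), np.diff(el))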

user-670152 02 November, 2023, 11:01:40

Thank you, that's why I wrote. I'm thinking about buying a Neon eye tracker. Currently, in the absence of saccade parameters, I need to pass.

user-a64a96 02 November, 2023, 11:04:37

Hey Pupil Labs, I need a high-resolution scene camera video. The video resolution is 1600x1200 and it is around 700 MB, but the scene video in the "raw android data" is 1.1 GB. I was wondering why there is this difference - is it because of different encoding? I was also wondering if 1600x1200 is the max resolution for Neon. Thank you in advance for your response.

user-d407c1 02 November, 2023, 15:25:49

Hi @user-a64a96 ! Do you mean the scene video from Cloud versus the scene video in the raw data? Yes - the scene camera can't record at a greater resolution. If you need more resolution, you could attach something like an Insta360 or a GoPro and sync it.

user-ccf2f6 02 November, 2023, 20:59:08

Hi Pupil Labs, I wanted to ask about the "worn" bool that's returned by the realtime API for Neon glasses. It seems to always be true, even when the user is not wearing the glasses. Is this still in development? Should we just ignore it, then?

user-d407c1 03 November, 2023, 08:22:07

Hi @user-ccf2f6 ! This is still under development, yes - you can ignore it for now. Apologies for the inconvenience.

user-37a2bd 03 November, 2023, 11:46:49

Hi Team, I tried running the following code from the Alpha Lab page to map gaze onto a screen recording - https://docs.pupil-labs.com/alpha-lab/map-your-gaze-to-a-2d-screen/ The output I got was at 0.5x speed. Is that the default? The example shown plays at normal speed, but my output was at 0.5x speed.

user-d407c1 03 November, 2023, 12:13:01

Hi! How did you record the screen? Was it with OBS? What settings did you have there? Might the screen recording be at a different frame rate?

user-37a2bd 03 November, 2023, 12:19:12

I will try doing that! Thanks Miguel!

user-858e7e 05 November, 2023, 13:30:49

Hi! I asked a question some time ago that I did not follow up on (sorry), but as it turns out my colleague almost immediately found where the problem was. I have another question (more of a clarification): the docs say Pupil Cloud accounts for distortion, but is that only when you explicitly choose to undistort, and only for video? If I download the raw data, there is no option to get the undistorted gaze directly - I have to undistort it with OpenCV? Thanks!

user-d407c1 06 November, 2023, 09:17:01

Hi @user-858e7e ! The downloaded data is not undistorted. If you would like to undistort the raw data, this guide shows how: https://docs.pupil-labs.com/neon/how-tos/advance-analysis/undistort/

marc 06 November, 2023, 10:11:44

To add on to @user-d407c1 's response: what we mean when we say Pupil Cloud compensates for camera distortion is that all algorithms correctly work around it. So e.g. the Marker Mapper explicitly takes the distortion into account when mapping gaze data onto a surface. When you download a recording from Pupil Cloud, though, the data is not explicitly undistorted - it matches the original (distorted) video.
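
For reference, a minimal sketch of undistorting downloaded gaze points with OpenCV, along the lines of the guide linked above. The key names in scene_camera.json are assumptions based on the time-series export; verify them against your own file:

import json
import numpy as np
import cv2

with open("scene_camera.json") as f:  # ships with the time-series download
    cam = json.load(f)
K = np.array(cam["camera_matrix"])             # assumed key name
D = np.array(cam["distortion_coefficients"])   # assumed key name

gaze_px = np.array([[800.0, 600.0]])  # example distorted gaze point [px]
# undistortPoints returns normalized coordinates; passing P=K maps back to pixels
undistorted = cv2.undistortPoints(gaze_px.reshape(-1, 1, 2), K, D, P=K)
print(undistorted.reshape(-1, 2))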

user-e757d2 05 November, 2023, 18:39:56

Hello, when I finished uploading the recordings and checked the cloud, I found an error in one of the recordings. Should I keep waiting, or should I export the footage directly from my phone to my computer myself? Since I want to work with this footage as soon as possible: if I need to export it to my computer before analysing it in the cloud, how should I do it?

Chat image

user-7e5cb9 05 November, 2023, 19:44:26

My OnePlus 10 Pro upgraded to Android 13. Instructions to rollback or downgrade to the previous version are only available for OnePlus 8 and 8T. Can someone provide me the instructions and official Rollback APK for the OnePlus 10 Pro?

user-d407c1 06 November, 2023, 09:11:06

Hi @user-7e5cb9 ! We do support Android 13 on the OnePlus 10 Pro, so there is no need to roll back. That said, if for any reason you would like to roll back, we can follow up with instructions.

user-a64a96 06 November, 2023, 10:10:43

Hey Pupil Labs team, Is there a method to perform offline drift correction for Neon? To the best of my knowledge, there is no specific calibration for Neon. However, when we examined the scene video with gaze overlay, we noticed that there is some drift between the "target" and gaze. Thank you in advance for your help.

user-d2d759 06 November, 2023, 10:21:19

Hi there, We are conducting a study that involves getting participants to read text displayed on a smartphone while wearing the Neon eye tracker.

We'd like to know how to extract two parameters from the timeseries data (.csv files) that's downloaded from Pupil Cloud. 1) How fast the eyes are moving/scanning while reading (speed). 2) How much the eyes have moved (amplitude) when reading.

Could you provide some information for the above; any code or methods would be helpful. Thanks.

nmt 08 November, 2023, 19:02:50

Hey @user-d2d759! Those metrics aren't computed explicitly, although they could be calculated post-hoc, assuming a few conditions are met. However, before we delve into that, I'm curious about how much accuracy you require. For example, do you need to discriminate between words, characters, or even smaller eye movements?

user-b55ba6 08 November, 2023, 12:15:36

Hello. Regarding the units of yaw (-180,180), would it be right to say 0 is the magnetic north?

user-328c63 08 November, 2023, 21:31:28

During a session with a subject, the eye tracker stopped recording because I ran out of storage on my phone. If I delete video data on the phone, will it still exist on the cloud? I'm assuming yes, but I also wanted to make sure. Additionally, how much cloud storage do we have?

nmt 09 November, 2023, 08:55:46

Hi @user-328c63. Have your recordings already uploaded to Cloud? That is, do they show as fully uploaded and playable in Cloud?

user-8fdb0c 09 November, 2023, 13:26:27

Face mapper enrichment to data

user-613324 09 November, 2023, 19:08:09

From Pupil Cloud, I downloaded the "raw android data" of a recording. In that data folder, there is an eye video named "Neon Sensor Module v1 ps1.mp4", and also a file named "Neon Sensor Module v1 ps1.time", which I assume is the timestamp file of the eye video. But how do I open this .time file?

user-d407c1 10 November, 2023, 07:45:24

Hi @user-613324 ! If you do not mind, is there any specific reason you would like to access the .time files from the RAW format over the CSV files we offer?

That said, you can open them with Python and numpy, like this:

import numpy as np

# unsigned 64-bit little-endian integers: one UTC timestamp in nanoseconds per frame
time_file = np.fromfile(str(path), dtype="<u8")
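
As a brief usage note, assuming the file sits in the current directory, pandas can render those nanosecond timestamps as human-readable datetimes:

import numpy as np
import pandas as pd

ts = np.fromfile("Neon Sensor Module v1 ps1.time", dtype="<u8")
print(pd.to_datetime(ts, unit="ns")[:5])  # first five frame timestamps as UTC datetimes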

user-e40297 10 November, 2023, 08:52:11

I'm experiencing difficulties with the world camera. I see a red dot (gaze) on a black screen.

user-d407c1 10 November, 2023, 08:58:12

Hi @user-e40297 ! Is this happening in Cloud or on the Companion device? And if the latter, does it also happen in the preview?

user-fb5b59 10 November, 2023, 10:09:05

Hey, short question: is it possible to process Neon recordings with Pupil Player? So, just copying them from the mobile device to the computer and running Pupil Player, e.g., to get the gaze overlay. Or is this only possible by using the Cloud?

user-d407c1 10 November, 2023, 10:11:34

Hi @user-fb5b59 ! Neon recordings made with the Companion App are not compatible with Pupil Player. That said, we are working on an offline app similar to Pupil Player that will allow you to do exactly what you describe. We are aiming to have this ready within this month.

user-fb5b59 10 November, 2023, 13:45:03

We previously integrated the Pupil Invisible device by using the "zyre" library in C++, so we were able to directly receive the eye streams, gaze data, world video, etc. in our C++-based system. I assume that this is no longer possible, so we have to switch to using the HTTP REST API, websocket connections, etc.?
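
For context, the intended replacement for the old zyre-based streaming is the pupil-labs-realtime-api Python package (the underlying HTTP/WebSocket protocol can also be consumed directly from C++). A minimal sketch of the package's simple, blocking interface, using its documented discovery and receive calls:

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # mDNS discovery on the local network
print(f"Connected to {device.phone_name}")

gaze = device.receive_gaze_datum()          # blocks until the next gaze sample
frame = device.receive_scene_video_frame()  # latest scene camera frame
print(gaze.x, gaze.y, gaze.timestamp_unix_seconds)

device.close()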

user-44c93c 10 November, 2023, 21:24:46

Sorry to keep bugging you about this, but any updates on the availability of pupillometry? Thanks!

nmt 10 November, 2023, 21:58:21

Hey @user-44c93c πŸ‘‹. It's now available in Cloud πŸ™‚

user-795302 13 November, 2023, 12:22:06

Hi,

I'm having issues using the cloud to produce heatmaps via the Marker Mapper. When running, the percentage doesn't go above 0, and I also keep getting Internal Server Errors. Enrichment ID: 9f2aa5f1-a695-48dc-b5e9-166f1ccec08b

user-6592a5 13 November, 2023, 15:54:16

Hi,

I'm currently working on my thesis on eye tracking, where we use Pupil Cloud. The 'Face Mapper' enrichment is very interesting for our research, but I was wondering whether it is possible to determine where on the face a fixation falls (eye region, mouth region, nose region). When downloading the enrichment data to an Excel file, coordinates of the eyes, mouth and nose are given, but not whether the fixation falls in these regions?

user-480f4c 14 November, 2023, 09:06:38

Hi @user-6592a5 πŸ‘‹πŸ½ ! To get the information of whether the wearer's fixation falls in the regions of eyes/mouth/nose, you could use the coordinates given in face_positions.csv and those given in the fixations_on_face.csv. For example, you can filter fixation on face == TRUE in the fixations_on_face.csv to get only the fixation that were detected on the face, and then see if the coordinates of the fixation (fixation x/y [px]] ) for a given timepoint match with the coordinates given for the eyes/mouth/nose in the face_positions.csv.

user-28ebe6 13 November, 2023, 21:32:49

Hey @&288503824266690561 , quick question about the Marker Mapper. When computing surface coordinates, does Pupil Labs apply an M-estimator Sample Consensus (MSAC) or similar algorithm that filters out spurious measurements? In that case, can I assume that any time a 'gaze detected on surface' row reads true, spurious measurements have already been filtered out, or do I need to do my own filtering? I have noticed that in Pupil Cloud the surface occasionally goes haywire and seems way off, but I am unsure whether each such instance is associated with a "false" value, as the website seems to think the surface is detected even when the estimated surface varies wildly in size and shape from the true surface. I would send a screenshot to show you what I am talking about, but I cannot seem to log in to Pupil Cloud right now. I ordinarily log in with a Google account.

nmt 14 November, 2023, 12:04:47

Please could you share a screenshot to demonstrate the point? Cloud login is working now πŸ™‚

user-d407c1 14 November, 2023, 13:31:06

Hi @user-28ebe6 ! We do not apply any additional filters to the marker detector; we report the markers as given by the detector. Could it be that the size of the markers or the illumination prevents them from being properly detected?

Other than a screenshot, would you be willing to share a recording?

user-d714ca 14 November, 2023, 01:27:13

Hello! I can't sign in to Pupil Cloud with Google, what can I do?

wrp 14 November, 2023, 02:30:39

Thanks for the report. I see the issue and have elevated this to my team. Will update when this has been resolved.

user-b5a8d2 14 November, 2023, 02:29:12

me neither

user-d714ca 14 November, 2023, 07:42:08

Yes, it works. Thank you very much for your help

user-b03a4c 14 November, 2023, 10:25:57

Hi, I'm Ken. I want to know how to split long recordings into 3-minute parts to run the Reference Image Mapper.

user-d407c1 14 November, 2023, 10:46:42

Hi @user-b03a4c! Currently, you cannot use events to define the scanning recording. The scanning recording has to be an independent recording (less than 3 minutes long) that is added to the project. In simple terms, this recording will be used to determine the "3D relationship" of the reference image, which is then located in the other recordings.

If you would like to use events to determine a section of your recording as a scanning recording, feel free to suggest it at https://feedback.pupil-labs.com/pupil-cloud

user-4724d0 14 November, 2023, 14:18:38

Hi, I did some recordings which are now fully uploaded to the cloud. I am trying to watch the video rendering with the fixation and gaze overlay, but these keep dropping out (although both are selected) or don't show up at all. A notice for 'internal server error' keeps popping up. Is there anything I could be doing to avoid this problem?

user-d714ca 14 November, 2023, 15:26:08

Hello! My Pupil Cloud always reports the error '{"code":500,"message":"internal server error","request_id":"3b8df27c19c0ca4a89d2334d1ff18f1b","status":"error"}'. What should I do?

user-d407c1 14 November, 2023, 15:56:21

Hi @user-4724d0 and @user-d714ca ! We have experienced some issues with our servers, but our team is working on this. I will let you know as soon as they fix it. Apologies for the inconvenience.

user-d407c1 14 November, 2023, 16:21:10

@user-4724d0 @user-d714ca @user-3c0775 this has been solved, if you are still experiencing issues please let us know.

user-3c0775 14 November, 2023, 16:59:16

Working great, thank you!

user-29f76a 16 November, 2023, 10:07:38

Hi, has anyone experienced an error when using the OpenCV package to adjust the RGB mode? I was trying to define an Area of Interest.

Chat image

user-cdcab0 16 November, 2023, 10:11:49

Is the file referenced on line 4 an actual image file (png, jpg, etc)?

user-29f76a 16 November, 2023, 10:30:30

Oh, my bad. It's a folder. But when I change it to the actual image file, this happens (it's loading and won't run).

Chat image

nmt 16 November, 2023, 10:34:35

Let's move this convo to πŸ’» software-dev as it's not really a Neon discussion

user-29f76a 16 November, 2023, 10:35:05

Okay. Sorry

nmt 16 November, 2023, 10:35:29

No need to apologise πŸ™‚

user-6c4345 16 November, 2023, 13:02:57

Hi, I have just received my "Neon - Just Act Natural" package with the OnePlus 10 Pro, but the glasses will not connect when plugged in. The app reports: "Error found - the Neon module was not properly connected in the frame".

user-d407c1 16 November, 2023, 13:41:35

Hi @user-6c4345 ! Could you please use a 1.5mm hex Allen key to delicately tighten the screws connecting the module to the nest? This ensures a secure connection; a loose connection might be the reason why the module is not recognised.

If after doing so, you are still experiencing issues, please contact info@pupil-labs.com

user-28ebe6 16 November, 2023, 18:53:16

Hey @&288503824266690561 , question: for the enrichment files, how come the gaze.csv and the surface_positions.csv files are of different lengths? Should they not have the same number of data points?

user-cdcab0 16 November, 2023, 21:57:12

What about frames where the surface is out of view or otherwise undetected? Or frames where the gaze is not on the surface?

user-ccf2f6 16 November, 2023, 20:22:56

Hi Pupil Labs, I had a question about the async and sync methods in pupil-labs-realtime-api. Does the sample rate of gaze differ between the two, i.e., would the async method provide a higher-resolution gaze data stream than sync? I can see in the API implementation that the sync method pops the last frame from a queue, which might theoretically result in frame drops. Does the async method ensure that we stream every frame from the queue to the connection?

user-d407c1 21 November, 2023, 08:55:46

Hi @user-ccf2f6 ! Here you have a high level description of both APIs. https://pupil-labs-realtime-api.readthedocs.io/en/stable/guides/simple-vs-async-api.html

Please also have a look at https://discord.com/channels/285728493612957698/1004382042340991076/1004716382430167081 - as you mention, the simple API grabs the last frame, meaning that if you are not fast enough consuming it, you may miss frames, while the async API queues them, so you won't miss any.
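
To illustrate the difference, a minimal sketch of the async interface, closely following the streaming example in the package documentation; unlike the simple API, samples are queued, so a slow consumer does not drop them:

import asyncio
from pupil_labs.realtime_api import Device, Network, receive_gaze_data

async def main():
    async with Network() as network:
        dev_info = await network.wait_for_new_device(timeout_seconds=5)
    async with Device.from_discovered_device(dev_info) as device:
        status = await device.get_status()
        sensor = status.direct_gaze_sensor()
        async for gaze in receive_gaze_data(sensor.url, run_loop=True):
            print(gaze)  # every queued sample arrives; nothing is skipped

asyncio.run(main())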

user-ccf2f6 20 November, 2023, 22:22:35

Hi PL, could you please help me with this

user-28ebe6 16 November, 2023, 22:00:00

Some data points are missing corner locations in the surface_positions.csv, and when the gaze is not detected on the surface there is a "false" in the "gazeDetectedOnSurface" field, so I doubt that would explain the disparity.

user-cdcab0 16 November, 2023, 22:07:38

I don't know the exact conditions, but I don't think a record is guaranteed in these cases. Someone more familiar may chime in, but I'd examine your data to see if the missing records correspond to time ranges where the surface is not visible/detected and/or the gaze is off-surface.

user-d407c1 17 November, 2023, 09:14:27

Hi @user-6c4345 ! We have received the email and we will contact you promptly with a solution.

user-6c4345 17 November, 2023, 09:14:50

Thank You

user-25fb27 17 November, 2023, 10:45:50

@marc why is it that the frames are flickery? Sometimes it works, but after a few seconds it's flickery again.

nmt 17 November, 2023, 10:52:22

Hey @user-25fb27! Could you please contact info@pupil-labs.com with your original order ID? Mention that I sent you. Thanks!

user-91a92d 17 November, 2023, 15:58:32

Hello, I have been trying to use the Pupil Neon with Pupil Capture v3.6.0 (my experiment needs real-time head tracking), but I can't manage to get a gaze position. I have tried setting the ROI for my eyes and the Neon calibration. The gaze is really shaky and I often observe a low confidence level on one eye. Any tips or documentation I could follow? Thanks

user-9d72d0 17 November, 2023, 16:04:03

Hello! Currently I am not able to see the eye cameras in the app. Is there any setting to reactivate them?

user-d407c1 20 November, 2023, 07:57:02

Hi @user-9d72d0 ! Could you try unplugging and replugging Neon, tightening the screws on the module and reinstalling the Companion app checking that the permissions are properly set. If this does not solve the issue, please contact info@pupil-labs.com

user-28ebe6 17 November, 2023, 16:48:12

Ahhhh, I think I see what's going on: when I inverted the time difference to get the update frequency, it appears that the gaze updates as fast as the infrared eye tracking does (150-400Hz), while surface_positions updates at the camera frame rate (30Hz and under).

user-cdcab0 17 November, 2023, 22:21:40

Ah, good eye. If you wanted/needed those surface gazes at 200Hz, you could interpolate. That's about the best you could do anyway given the frame rate limit of the scene camera
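
A minimal sketch of that interpolation, upsampling the ~30 Hz surface corners to the 200 Hz gaze timestamps. The corner column name below is a placeholder; substitute the actual corner columns from your surface_positions.csv export:

import pandas as pd
import numpy as np

gaze = pd.read_csv("gaze.csv")
surf = pd.read_csv("surface_positions.csv").dropna()

t_gaze = gaze["timestamp [ns]"].to_numpy()
t_surf = surf["timestamp [ns]"].to_numpy()

corner_col = "tl x [px]"  # placeholder for one corner-coordinate column
# Linear interpolation of the corner position at every gaze timestamp
upsampled = np.interp(t_gaze, t_surf, surf[corner_col].to_numpy())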

user-6592a5 18 November, 2023, 13:02:20

Hi Nadia, thank you for the help! I have tried to apply the tips, but I still stumble over one aspect. The fixation data in 'fixations_on_face' (fixations in world camera pixel coordinates) are different from the data in 'face_positions' (image coordinates in pixels). I don't really understand how I can compare these to determine which fixations fall in the mouth/eye/nose region? My apologies for the many questions; is there by chance a detailed manual somewhere? The one on Pupil Labs Neon is fairly generic.

user-480f4c 20 November, 2023, 07:46:41

Hi @user-6592a5 πŸ‘‹πŸ½ ! The fixations_on_face.csv provides you with the fixations of the Neon user on faces that are detected in the scene camera. It does not specify whether the user fixates on the mouth or nose of the detected face, but it gives you the x,y coordinates of the fixations. In contrast, the face_positions.csv provides you with the coordinates of the eyes/mouth/nose (e.g., x,y coordinates where the nose is detected in image coordinates in pixels), rather than the fixations of the user on these facial landmarks. Note that both the fixation coordinates and the eye/mouth/nose coordinates have the same coordinate system, that is they are given in scene camera coordinates.

Therefore, you can follow the approach mentioned in my previous message (https://discord.com/channels/285728493612957698/1047111711230009405/1173911864288227429) by combining: a) where did the user fixate? - reflected in the fixation coordinates on the face (from fixations_on_face.csv); b) what appeared in the scene camera when the user fixated on this specific area at that timepoint? - reflected in the coordinates at which the eyes/mouth/nose appear in camera space (from face_positions.csv).

Some additional considerations: - The coordinates of the eyes/mouth/nose are represented as one single point in scene camera space. It will probably make sense to define a bigger area of interest around those points and check whether the fixation coordinates fall within the defined area. - Accurately detecting the individual facial landmarks depends strongly on distance. Keep that in mind, as the distance also affects the size of the area of interest (see the previous point) you might want to define around the points representing the eyes/mouth/nose positions.

I hope this helps!
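
Putting the suggestion above into code: a minimal sketch that pairs each on-face fixation with the nearest face detection in time and tests whether it lands within a circular AOI around the nose landmark. All column names are assumptions based on the discussion above; check them against your CSV headers:

import pandas as pd
import numpy as np

fix = pd.read_csv("fixations_on_face.csv")
faces = pd.read_csv("face_positions.csv")

fix = fix[fix["fixation on face"]].sort_values("start timestamp [ns]")
faces = faces.sort_values("timestamp [ns]")

# Align each fixation with the closest face detection in time
merged = pd.merge_asof(
    fix, faces,
    left_on="start timestamp [ns]", right_on="timestamp [ns]",
    direction="nearest",
)

# Circular AOI of radius r around the nose landmark (tune r to your distances)
r = 40  # px
dx = merged["fixation x [px]"] - merged["nose x [px]"]
dy = merged["fixation y [px]"] - merged["nose y [px]"]
merged["fixation_on_nose"] = np.hypot(dx, dy) < r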

user-d407c1 20 November, 2023, 07:59:46

Hi @user-91a92d πŸ‘‹ ! Do you happen to have a recording you can share? We'll be better equipped to provide you with feedback if you share a recording that includes the calibration choreography.

user-91a92d 24 November, 2023, 17:34:56

Here is the video with my pupils. I am using natural-feature calibration because the screen markers never turn green (maybe due to screen luminosity), and for my experiment (outdoors, inside a car) it will be more practical. I hope you can see that the gaze is "jittery" and the pupils are often lost. I have noticed a difference in my eyes' "lighting" (from infrared).

user-d407c1 20 November, 2023, 11:30:41

Hi @user-8825ab ! I am answering you here, since it seems you are using Neon or Invisible. Could you please try to log out and log in again in the app and see if this resolves your issue? https://docs.pupil-labs.com/neon/troubleshooting/#recordings-are-not-uploading-to-pupil-cloud-successfully

user-8825ab 20 November, 2023, 11:34:15

thank you

user-8825ab 20 November, 2023, 11:34:20

it works now

user-8825ab 20 November, 2023, 12:20:50

Hi team again, I uploaded my recordings to the cloud but one recording shows an error. I wonder what I should do to make sure it is fine.

user-d407c1 20 November, 2023, 12:58:09

Hi @user-8825ab ! Could you contact info@pupil-labs.com with the recording IDs? You can find those in Cloud by right clicking over the recording and selecting view recording info.

user-8825ab 20 November, 2023, 12:59:33

Thank you

user-8825ab 20 November, 2023, 13:55:56

Hi Miguel, it's me again. I wonder if we can export the recordings to our personal storage, due to our data confidentiality policy. Also, we don't have a gaze overlay function in our app; I wonder if this is something we should pay for separately?

user-d407c1 20 November, 2023, 14:11:52

Hi @user-8825ab ! Pupil Cloud is secure and GDPR compliant. You can read more about the privacy policy and security measures in this document https://docs.google.com/document/d/18yaGOFfIbCeIj-3_GSin3GoXhYwwgORu9_7Z-grZ-2U/export?format=pdf

As of now, we don't offer on-premises versions of Cloud to run on your own servers, but we do have a basic offline app that will allow you to explore the data: https://github.com/pupil-labs/neon-player.

user-8825ab 20 November, 2023, 14:29:07

OK, thank you. Can we download the video with the gaze information (I mean the red circle) on?

user-d407c1 20 November, 2023, 14:33:41

Sure! In Cloud you can download the video with gaze overlaid by adding a recording to the project, navigating to Analysis in the project view, and then clicking on New Visualisation > Video Renderer.

user-8825ab 20 November, 2023, 14:35:27

When I try to add it, I didn't see the overlay function here.

Chat image

user-8825ab 20 November, 2023, 14:35:37

is there some way we can add this function

user-d407c1 20 November, 2023, 14:38:02

Hi! We will soon make it available there, but until we modify the UI, you need to click on "Analysis" at the bottom left, and then on the "+ New Visualisation".

Chat image

user-8825ab 20 November, 2023, 14:40:44

OK, and can we then download the recording?

user-d407c1 20 November, 2023, 14:44:50

Yes, after the new visualisation with the gaze overlay has been computed, you will find it on the Analysis tab as "Completed" and you will be able to download it, either by right-clicking on it and selecting "Download" or by clicking on it and pressing the "Download" button at the bottom left.

user-8825ab 20 November, 2023, 14:42:00

then can we download the recording with the gaze overlay?

user-8825ab 20 November, 2023, 14:56:31

ok great, thank you very much for your support

user-9c53ac 20 November, 2023, 19:59:42

Hello! Where can I find the option to play back the recorded session and stream it using Python? The Neon player can handle playback but lacks a network API. Additionally, why does the recorded data downloaded from the cloud not include eye videos?

user-d714ca 21 November, 2023, 02:30:24

Hello! I have a recording with a failed fixation pipeline, can we fix it?

Chat image

user-480f4c 21 November, 2023, 08:22:35

Hi @user-d714ca! Please reach out to [email removed] and a member of the team will assist you as soon as possible.

user-d407c1 21 November, 2023, 08:47:22

Hi @user-9c53ac ! Regarding your first question, there is currently no way to stream back a recorded session - could you please elaborate on the use case for it?

Regarding your second question: currently, if you want to download the eye videos, you need to download the original files.

To do so in Cloud, you will need to enable the RAW sensor data download in the workspace.

See this message on how to enable it: https://discord.com/channels/285728493612957698/1047111711230009405/1108647823173484615 Kindly note that this RAW sensor data is the same as what you obtain by exporting the recordings directly from the phone.

user-9c53ac 22 November, 2023, 09:17:26

A playback API is essential and useful since it's not always possible to be on site to develop. It's much easier to do development with a recorded session.

user-7413e1 21 November, 2023, 14:09:38

Hello, I'm trying to use Neon Monitor, but when I try to access the Monitor app through the URL it stops and doesn't show me the page, which it flags as 'Not Secure'. It says: "The site doesn't use a private connection. Someone may be able to view and change the information you send and get through this site. To resolve this issue, the site owner must secure the site and your data with HTTPS."

Can you please help?

user-d407c1 21 November, 2023, 14:14:12

Hi @user-7413e1 ! That's probably your firewall, antivirus, or some setting in your network. Try using the IP address shown below neon.local:8080, or use a different network.

user-7413e1 21 November, 2023, 14:18:08

Thank you, Miguel. Sorry, I wasn't accurate earlier: I used both the URL and the IP address. It still doesn't work. The network I'm using is eduroam, from my academic institution.

user-d407c1 21 November, 2023, 14:21:25

Unfortunately, Eduroam is known to be quite restrictive, I would recommend using a different network if possible or setting up a hotspot https://support.microsoft.com/en-us/windows/use-your-windows-pc-as-a-mobile-hotspot-c89b0fad-72d5-41e8-f7ea-406ad9036b85#:~:text=your%20data%20plan.-,Select%20Start%20%2C%20then%20select%20Settings%20%3E%20Network%20%26%20Internet%20%3E%20Mobile,Internet%20connection%20with%20other%20devices.

If using a different network, these are the requirements https://docs.pupil-labs.com/neon/how-tos/data-collection/monitor-your-data-collection-in-real-time.html#connection-problems

user-7413e1 21 November, 2023, 14:25:32

Also - false alarm, just refreshed and it doesn't work again

user-7413e1 21 November, 2023, 14:23:50

I'm using my personal phone hotspot - now the page opens, but it is not showing the video (grey screen). How can I solve this? Also, how secure would this be?

user-c6dc34 21 November, 2023, 14:29:27

Hello everyone, we use the eye tracking glasses in an autonomous driving setup. We have big problems with the exposure. Is there a way to control the camera parameters via the Python API? Is it possible to tell the Backlight Compensation which image area it should pay attention to?

nmt 22 November, 2023, 02:27:25

Hey @user-c6dc34 πŸ‘‹. The backlight compensation setting changes the auto exposure target value: 0 is darker, 1 is balanced, and 2 is brighter. Depending on the specific use case, there's usually a setting to favour a given area of the environment, but we'd need to learn more to make a recommendation. Could you elaborate a bit on your use case, e.g. is it a driver in a real car or an operator sat at a monitor? Note that there's no way to control this setting via the API.

user-231fb9 21 November, 2023, 18:00:46

Hi, I'm working with the neon glasses and just made some footage. But right now the upload has been going on for a few hours and it's still 0% uploaded to the server. The loading bar is turning, but not doing much more. Is this normal? Or is there maybe another way to upload the footage to pupil clouds?

nmt 22 November, 2023, 02:29:51

Please visit https://speedtest.cloud.pupil-labs.com/ on the phone and execute the speedtest. This will tell us whether the phone can communicate with our servers properly.

nmt 22 November, 2023, 02:22:18

Hi @user-7413e1! What device are you using Neon Monitor with?

user-7413e1 22 November, 2023, 10:46:38

I've tried with a Windows laptop and with my personal phone (an Apple iPhone). Either way, it is not working reliably. I've tried using the Edge browser and it seems to work sometimes, but when I refresh it usually stops working. Also, I keep receiving a 'Not secure' message, which is concerning.

user-d2d759 22 November, 2023, 05:26:51

Hi there, I want to quantify the x,y gaze/fixation coordinates in cm; the raw data files downloaded from Pupil Cloud give these coordinates in pixels. I was wondering what's the best way to convert the px units produced by the Neon world camera into physical units? Thanks

nmt 22 November, 2023, 06:06:23

Hi @user-d2d759 πŸ‘‹. May I ask what the goal is here? Do you mean to map gaze to some object in physical space, such that you can quantify gaze shifts in terms of cm relative to the object?

user-d2d759 22 November, 2023, 06:10:07

Hi @nmt, the goal is to get the distance the eyes have moved when reading each line of text, and the time taken by the eyes to read each line. So I want to know the best way to convert px into cm in order to get these parameters. Thanks.
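
No official conversion recipe is given in this thread, since the physical size of a pixel on the page depends on viewing distance. One hedged approach: convert pixel positions to visual angle via the scene camera intrinsics, then to cm on the text plane using a measured eye-to-page distance. The key names in scene_camera.json and the distance value below are assumptions for illustration:

import json
import numpy as np
import cv2

with open("scene_camera.json") as f:
    cam = json.load(f)
K = np.array(cam["camera_matrix"])             # assumed key name
D = np.array(cam["distortion_coefficients"])   # assumed key name

def to_angles_deg(points_px):
    """Undistort pixel points and convert to per-axis visual angles in degrees."""
    pts = np.asarray(points_px, dtype=np.float64).reshape(-1, 1, 2)
    norm = cv2.undistortPoints(pts, K, D)  # normalized camera coordinates
    return np.degrees(np.arctan(norm.reshape(-1, 2)))

a0, a1 = to_angles_deg([[600, 580], [900, 590]])  # two example gaze points [px]
delta_deg = np.hypot(*(a1 - a0))  # approximate angular shift between them

viewing_distance_cm = 30.0  # must be measured for your reading setup
shift_cm = viewing_distance_cm * np.tan(np.radians(delta_deg))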

user-9c53ac 22 November, 2023, 09:34:07

or maybe share a gist for playback

nmt 22 November, 2023, 09:44:38

Hi @user-9c53ac! What sort of development work do you have in mind? If you just need to playback the recording, why not download the scene camera + gaze overlay footage (https://docs.pupil-labs.com/enrichments/gaze-overlay/#gaze-overlay) and play that back using any one of the various command-line tools that are currently available?

user-9c53ac 22 November, 2023, 10:11:54

Thanks, but I want finer access to individual frames and their timestamps locally, without having to use Cloud. But it's OK, I will develop it myself.
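
For local, frame-level access without Cloud, one hedged approach is to decode the scene video with PyAV and pair each decoded frame with the uint64 nanosecond timestamps in the matching .time file from the raw export. The file names below follow the Neon raw-data naming; adjust them to your recording, and note that timestamp and frame counts can differ by a few entries:

import av
import numpy as np

timestamps = np.fromfile("Neon Scene Camera v1 ps1.time", dtype="<u8")

with av.open("Neon Scene Camera v1 ps1.mp4") as container:
    for i, frame in enumerate(container.decode(video=0)):
        img = frame.to_ndarray(format="bgr24")  # pixels as a numpy array
        ts_ns = timestamps[i]                   # wall-clock timestamp [ns]
        # ... process img / ts_ns here ...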

nmt 22 November, 2023, 11:03:53

We can recommend using Chrome Browser on your PC for the best experience. The connection is local - no internet is needed. Other devices on the local network can also view the stream, so that's likely why you got that message.

user-7413e1 22 November, 2023, 12:30:52

Also - what do you mean, no internet is needed? I thought you need the two systems to be on the same network for them to connect?

user-7413e1 22 November, 2023, 12:30:06

I switched to Edge because Chrome is not working at all (ever).

user-c6dc34 22 November, 2023, 11:28:10

Hi Neil, thanks for the quick reply. We work with real vehicles on real roads. More specifically, we do image stitching between the vehicle camera and the scene camera of the eye tracking glasses. For this we do feature matching, and if the image is overexposed or underexposed, this unfortunately does not work.

nmt 23 November, 2023, 05:10:27

Backlight 0 would be the best setting for a driver with bright light entering through the windscreen. In my experience, you can see out of the windscreen, but the car's interior would be darker. Have you already tried that?

nmt 22 November, 2023, 11:47:56

Very interesting. We've experimented with this ourselves, I.e. feature matching to combine an external video feed with Neon. Is your main focus what's going on outside of the car, from the driver's perspective?

user-c6dc34 22 November, 2023, 11:49:53

Yes, we combine it with semantic segmentation, depth, etc. to answer the question of what the driver was looking at.

nmt 22 November, 2023, 12:44:49

Monitor app

user-4724d0 22 November, 2023, 16:30:55

Hello! I am also having a problem during live-stream recordings. I am using a hotspot from my iphone to connect both my macbook and the neon phone companion. This works fine to launch the recording in the monitor app, but I sometimes lose the live gaze tracking: meaning that I still have the live world camera feed, but the live gaze indicator (red circle) is stuck in the center of the screen. Is there anything I can do to be better able to monitor the recordings live?

user-db5254 23 November, 2023, 03:36:34

Hi there. I tried to use the basic offline app by dragging the folder directory over. However, when I dropped it onto the program, it loads for a few seconds then crashes. Why would this be?

nmt 23 November, 2023, 03:52:24

Hey @user-db5254! Can you please confirm which eye tracker you're using?

nmt 23 November, 2023, 05:12:41

Monitor App 2

user-a4067e 23 November, 2023, 07:21:19

Hi everyone

I want to buy the Neon model, but I have a few questions. When will the eye state and pupil diameter features arrive? How easy is it to access the data? It is important that I have access to raw data. Also, will there be any problems with updates?

user-480f4c 23 November, 2023, 07:53:14

Hi @user-a4067e πŸ‘‹πŸ½ . Regarding your questions:

user-186965 23 November, 2023, 09:49:54

Hello everybody! We are currently doing a gaze-tracking study with Neon glasses. In our experiments, we want to use the Marker Mapper to map the gaze data onto a surface we define. For that, we have fixed AprilTag markers from the tag36h11 family in the four corners of the screen (like the instance in the screenshot), in static positions. After we did some recordings and wanted to run the Marker Mapper enrichment in Pupil Cloud, it doesn't work: it is not able to define a surface at all, although all the markers are visible on the screen. Could you please help with what the possible reason might be? Could it be the size or position of the markers, or something else we are missing? Thanks. I am providing one recording ID as well, if that helps - 2975987a-d462-4981-86f0-5b3251dbc51c

Chat image

user-480f4c 23 November, 2023, 09:58:18

Hi @user-186965 πŸ‘‹πŸ½ ! Could you please share more information on what happens when you try to create and run the enrichment? When creating the enrichment in your project, you should be able to see whether the markers are detected or not. If they are detected, they are displayed in green colour (see the example I'm attaching). Is that the case? If not, do you see any error?

Chat image

user-186965 23 November, 2023, 13:54:44

Hi @user-480f4c, thanks for your reply! I tried to create the enrichment again and found that sometimes it worked and was able to detect the markers, but sometimes it failed with an internal server error, even with the same setup. I am attaching screenshots of both scenarios. Could it be some kind of processing error? And is there any way to avoid this in future?

Chat image Chat image

user-b55ba6 23 November, 2023, 13:43:37

Hi. I have been looking at the 3D eye states ".csv" file. If I understand correctly, optical axis x/y/z are the x, y and z components of the directional vector of the eye in the coordinate system described in the documentation, but there is no torsional information. Am I right? Is there a way to get information on torsional eye movements?

user-d407c1 23 November, 2023, 14:00:49

Hi @user-b55ba6! Torsional eye movements, as in rotation about the line of sight, are not reported. To get information about that, one would need a really detailed view of the iris, such that you could track some landmark, define a baseline, and track it.

user-b55ba6 23 November, 2023, 14:01:07

Thanks!

user-480f4c 23 November, 2023, 17:08:20

Could you please share the ID of the affected enrichment? You can get it by locating the enrichment in Pupil Cloud, right-clicking on it, and selecting 'Copy enrichment ID.'

user-186965 24 November, 2023, 08:56:27

These are the three enrichments I currently have - 457f2c86-bdb6-49de-8840-7e3fa26ea91f, ab474f9b-47bb-45aa-9e75-a7771cb1a583, 3918949b-0618-4745-a699-59f288e2f145. For the first one, there is sometimes an internal server error while detecting, but the other two work fine.

user-480f4c 24 November, 2023, 09:19:19

Hi @user-186965! Our Cloud team has looked into your enrichment and it should run fine now - could you please have a look and confirm that it has been completed?

user-186965 25 November, 2023, 13:51:19

Hey, yes it worked, many thanks πŸ™‚

nmt 27 November, 2023, 04:47:37

Neon with Core pipeline and Pupil Capture

user-37a2bd 27 November, 2023, 10:59:16

Hi team. I am currently working on a project in the cloud. I have 11 RIMs in the enrichment section. None of the RIMs have been processed so far; they have been running for almost 3 hours now. Any suggestions? I've tried refreshing the page multiple times, but that doesn't seem to work. The data is 2 days old.

user-c6dc34 27 November, 2023, 11:39:48

Yes, we have tried different settings. I think the autoexposure doesn't work "correctly" for our use case because the windscreen covers just a small fraction of the image, so the exposure gets adjusted to the dark cockpit. It would be really helpful if I could adjust the area the autoexposure attends to, so as to focus on the windscreen. Isn't there any possibility? Thanks! :)

nmt 27 November, 2023, 11:50:39

Autoexposure + backlight 0 would favour the windscreen rather than the dark cockpit. Have you tried this?

user-a64a96 27 November, 2023, 13:54:26

Hey Pupil Labs, I'm interested in generating a gaze + scene video similar to the Cloud player on my local machine. I'm aware that some sensors may start a few seconds after the video begins. Could you please direct me to the synchronization documentation or the respective Neon Player GitHub code? Unfortunately, I couldn't find either, and I thought there might be documentation on your website. Thank you!

user-480f4c 27 November, 2023, 14:02:14

Hi @user-a64a96 πŸ‘‹πŸ½ ! In Neon Player you can easily export the scene video with the gaze overlay. You can download it here: https://docs.pupil-labs.com/neon/neon-player/

user-c6dc34 27 November, 2023, 14:49:34

Backlight 0 works fine, but we have to adjust the exposure manually. Autoexposure doesn't work well.

nmt 27 November, 2023, 14:51:40

For context, the backlight compensation adjustment only works/takes effect in auto exposure mode. So with manual exposure, that's probably why you can't see out of the window.

user-b55ba6 27 November, 2023, 17:02:31

Hi. I am looking at the "Neon Sensor" eye video. I am interested in getting the timestamps of the eye frames. The eye video has 19476 frames, but gaze.csv has 19473 data points. The first 40 frames, however, are saturated and very white, which raises the question of whether the 19473 data points align with the 19476 frames. Do they?

user-b55ba6 27 November, 2023, 19:01:40

Hey. Does anybody have a suggestion on the best way to calculate eye movement velocity (to detect saccades)? I know there are some algorithms that are pixel-based, but I am interested in applying a threshold of 65 deg/s. I have seen suggestions to use the azimuth and elevation angles. There are also the 3D cartesian coordinates from the 3d_eye_states.csv file. Anybody?

nmt 28 November, 2023, 11:34:48

If you want to implement a saccade filter, elevation and azimuth are reasonable variables to work with - they describe gaze shifts in degrees, so you can compute deg/s
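
A minimal sketch of that computation from the gaze.csv export, applying the 65 deg/s threshold mentioned above. Column names follow the Neon time-series format; verify them against your files, and consider smoothing the signal before thresholding:

import pandas as pd
import numpy as np

gaze = pd.read_csv("gaze.csv")
t = gaze["timestamp [ns]"].to_numpy() / 1e9  # seconds
az = np.radians(gaze["azimuth [deg]"].to_numpy())
el = np.radians(gaze["elevation [deg]"].to_numpy())

# Great-circle distance between consecutive gaze directions (radians)
d = np.arccos(
    np.clip(
        np.sin(el[:-1]) * np.sin(el[1:])
        + np.cos(el[:-1]) * np.cos(el[1:]) * np.cos(np.diff(az)),
        -1.0, 1.0,
    )
)

velocity_deg_s = np.degrees(d) / np.diff(t)
is_saccade_sample = velocity_deg_s > 65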

user-01e7b4 27 November, 2023, 22:58:07

Hi - I am doing data collection with children, so the Neon is not always worn properly during the study. The exported 3D eye states and gaze files have data for every frame, even when the eye is not in view. Similar to @user-b55ba6, I want to use the eye video recordings that are in the export folder, but the number of frames in this video does not match the number of frames in the 3D eye state and gaze files (off by ~10 frames). I have a few questions: 1) Is there a way to get the timestamps of the eye videos? 2) Do you have any recommendations for other ways to identify frames where the 3D eye state and gaze data are wrong (because there is no eye in view)? 3) Will there be a way in the future to export the eye videos as an overlay on the world video? Thanks!

user-480f4c 28 November, 2023, 12:22:28

Hi @user-01e7b4 πŸ‘‹πŸ½ ! Regarding your questions, see my points below:

1) To access the timestamps of the eye videos you can read the .time file that comes with the Raw Android Data: np.fromfile('Neon Sensor Module v1 ps1.time', '<u8')
2) I'm not sure I fully understand your question. Do you mean that you want to filter out the gaze samples where the user is not wearing Neon? I'd appreciate it if you could elaborate on that.
3) You can export the world video with gaze overlay + eye video overlay using Neon Player: https://docs.pupil-labs.com/neon/neon-player/. Please make sure to use the Raw Android Data folder from Pupil Cloud. If this option doesn't show, you will have to enable it in your workspace. Please see this relevant message: https://discord.com/channels/285728493612957698/1047111711230009405/1123238795815428166

Regarding the frame mismatch, I've created a relevant thread to discuss about that in more detail: https://discord.com/channels/285728493612957698/1178742667060969544 Could you also reply to my questions there? That will help us identify the root cause of the issue.

user-480f4c 28 November, 2023, 10:59:48

Eye frames timestamps

user-b4e139 28 November, 2023, 11:22:29

Hi, I have a question regarding the pupilometry update. After updating the companion app, I can see the 3d states file when I export the data after uploading it to the cloud, but not the pupil diameter. Is the output a csv file as well?

user-480f4c 28 November, 2023, 11:26:10

Hi @user-b4e139 πŸ‘‹πŸ½ ! You can find the pupil data in the 3d_eye_states.csv file (column: pupil diameter [mm]). Please have a look at our docs for the full list of the data available in this file: https://docs.pupil-labs.com/neon/data-collection/data-format/#_3d-eye-states-csv

user-01e7b4 28 November, 2023, 15:19:35

Thank you for the quick reply! Neon Player is great, and I can use it to identify frames where the tracker is not being worn. Our main concern is that there is gaze and eye state data for all frames of the recording (and a vis circle being generated), even when the eye is not in view of the camera. For example, the glasses were pushed so that the participant's forehead was in view instead.

nmt 29 November, 2023, 05:59:16

Hi @user-01e7b4. Thanks for sharing your concern. Currently, we do not have an automated method to determine whether the gaze and eye state measurements are of high quality, or not when the glasses are not being worn or are far from the wearer's eyes. At present, this process requires manual review. However, this is certainly on our radar and we are exploring options.

user-042d90 28 November, 2023, 16:46:17

Hi, Pupil Labs! I have a problem running the analysis through an enrichment using the Reference Image Mapper. I tried it with two recordings I made, but both gave the error: "The enrichment could not be computed based on the selected scanning video and reference image. Please refer to the setup instructions and try again with a different video or image." Can you please tell me where I went wrong?

Chat image Chat image

nmt 29 November, 2023, 06:03:31

Hi @user-042d90 πŸ‘‹. It seems like you might be using the wrong enrichment, but just to be sure, can you describe what your intention is with those QR code markers?

user-a5c587 28 November, 2023, 19:48:48

Hi there! A while back I asked a question about the Cloud software and our new Neon glasses, and one of the staff mentioned I could get a free tutorial session since we purchased the Neon glasses. How do I go about scheduling that?

nmt 29 November, 2023, 06:04:40

Hi @user-a5c587 πŸ‘‹. Please contact info@pupil-labs.com to schedule an onboarding session, and mention your original order ID in the email πŸ™‚

user-44c93c 28 November, 2023, 21:30:45

Hello. Regarding the new pupil diameter data streams in Cloud, you state that "The algorithm does not provide independent measurements per eye but reports a single value for both eyes." A few questions: 1) How does the algorithm arrive at a single value? Are you taking the average of both eyes to compute this value, or something else? 2) Are there plans to eventually provide separate values for left and right eye diameter? If so, is there an ETA for that?

user-480f4c 01 December, 2023, 11:21:38

Hi @user-44c93c! Neon's pupil-size measurements are provided as a single value, reported in millimetres, quantifying the diameter of the 3D opening in the center of the iris. Currently, our approach assumes that the left and right pupil diameter is the same. According to physiological data, this is a good approximation for about 90% of wearers. Note, however, we do not assume that pupil images necessarily appear the same in the left and right eye image, as apparent 2D pupil size strongly depends on gaze angle. We are working on an approach that will robustly provide independent pupil sizes in the future. We cannot give a concrete ETA for that though.

user-44c93c 28 November, 2023, 22:01:10

Also, for Pupil Core data collection through Pupil Capture, the CSV outputs for pupil diameter included "confidence" and "model confidence" values. I don't see these in the Pupil Cloud pupil data CSV files. Is a confidence value being computed somewhere? Is bad data automatically trimmed, or is it included? Thanks!

user-480f4c 01 December, 2023, 11:22:32

@user-44c93c, regarding your second question, currently, we do not provide a quality metric, like confidence, for Neon's eye state or gaze outputs. While they are robust to things like headset slippage, accuracy will degrade at a certain point, e.g. when the glasses are too far from the eyes. This is certainly on our radar and we are exploring options.

End of November archive