Hi Team, I ran 6 RIMs for a project I am working on and the heatmaps were generated for 5 out of 6 of them. Can anyone help me identify why the heatmap wasn't generated for that particular RIM? Thanks.
We are still able to start the recording manually during this issue.
Hello! Has a new version of the Neon Companion app been released? I cannot update it through the store.
Hi @user-d714ca 👋! We recently released a new version of the Neon Companion App (2.7.4-prod). You should be able to download the latest version from the Google Play Store. Please let us know if you run into any issues.
Hey Team, the code for the scanpath image and video seems to be taking a very long time for one single RIM. Is there a workaround, or is this how it is going to be? I ran a single RIM which took 7 hours to complete in Python. The RIM had 5 viewers in total. Does anyone know how long it took for the demo project to run? I am talking about the example shown in the Scanpath Image and Video section on the Alpha Labs page.
Hi @user-37a2bd 👋 the time it takes to compute the scanpath script can vary depending on the duration of the recordings included in the RIM. It's not uncommon for it to take a significant amount of time, especially if the RIM contains lengthy recordings. Additionally, the processing time can vary based on the computational resources of your computer.
Does 7 hours sound about right?
Hi, how can I get saccade data (duration, amplitude, etc.)?
Hi @user-670152 👋! Although we have not confirmed experimentally that Neon can capture saccades, saccades can be derived implicitly as the inter-fixation intervals (i.e. the gaps between the fixations reported by our fixation detector). For more details about our fixation detector, please see our documentation: https://docs.pupil-labs.com/neon/basic-concepts/data-streams/#fixations
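To make that concrete, here is a minimal, unofficial sketch that treats the gaps between consecutive fixations as saccades and computes their duration and (pixel) amplitude from the fixations.csv timeseries export. The column names are assumptions based on the standard export, and this gives amplitude in scene-camera pixels rather than degrees:

```python
import numpy as np
import pandas as pd

# Hedged sketch: derive inter-fixation "saccade" metrics from a Neon fixations.csv export.
# Column names below are assumed from the standard timeseries export; adjust if yours differ.
fixations = pd.read_csv("fixations.csv")

# Duration of each gap: end of fixation n -> start of fixation n+1 (nanoseconds -> milliseconds)
gap_duration_ms = (
    fixations["start timestamp [ns]"].values[1:]
    - fixations["end timestamp [ns]"].values[:-1]
) / 1e6

# Amplitude in scene-camera pixels: displacement between consecutive fixation centroids
dx = np.diff(fixations["fixation x [px]"].values)
dy = np.diff(fixations["fixation y [px]"].values)
gap_amplitude_px = np.hypot(dx, dy)

print(gap_duration_ms[:5], gap_amplitude_px[:5])
```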
Thank you. That's why I wrote. I'm thinking about buying a Neon eye tracker. Currently, in the absence of saccade parameters, I need to pass.
Hey Pupil Labs, I need a high-resolution scene camera video. The video resolution is 1600x1200 and it is around 700 MB, but the scene video in the "raw android data" is 1.1 GB. I was wondering why there is this difference. Is it because of different encoding? I was also wondering if 1600x1200 is the max resolution for Neon. Thank you in advance for your response.
Hi @user-a64a96 ! Do you mean the scene video from Cloud vs. the scene video in the raw data? Yes! The scene camera can't record at a greater resolution; if you need more resolution, you could attach something like an Insta360 or a GoPro and sync it.
Hi Pupil Labs, I wanted to ask about the "worn" bool that's returned with the realtime API for Neon glasses. It seems to always be true, even when the user is not wearing the glasses. Is this still in development? Should we just ignore it then?
Hi @user-ccf2f6 ! This is still under development, so yes, you can ignore it for now. Apologies for the inconvenience.
Hi Team, I tried running the following code from the Alpha Labs page to map gaze onto a screen recording - https://docs.pupil-labs.com/alpha-lab/map-your-gaze-to-a-2d-screen/ The output I got was at 0.5x speed. Is that the default? The example shown plays at normal speed, but my output was at 0.5x speed.
Hi! How did you record the screen? Was it with OBS? What settings did you use there? Could the screen recording be at a different frame rate?
I will try doing that! Thanks Miguel!
Hi! I asked a question some time ago that I did not follow up on (sorry), but as it turns out my colleague almost immediately found where the problem was. I have another question (more of a clarification): the docs say Pupil Cloud accounts for distortion, but is that only when you explicitly choose to undistort, and only for video? If I download the raw data, there is no option to get the undistorted gaze directly; do I have to undistort it with OpenCV? Thanks!
Hi @user-858e7e ! The downloaded data is not undistorted. If you would like to undistort the raw data, this guide shows how to do so: https://docs.pupil-labs.com/neon/how-tos/advance-analysis/undistort/
To add on to @user-d407c1 's response: What we mean when we say Pupil Cloud compensates for camera distortion is that all algorithms correctly work around it. So e.g. the Marker Mapper explicitly takes the distortion into account when mapping gaze data onto a surface. When you download a recording from Pupil Cloud, the data is not explicitly undistorted, though; it is in the original video's format.
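For anyone who wants a starting point for undistorting the downloaded gaze data themselves, here is a rough sketch of what that can look like with OpenCV, in the spirit of the linked guide. The scene_camera.json key names and the output column names used here are assumptions, so check your own file and the guide:

```python
import json

import cv2
import numpy as np
import pandas as pd

# Hedged sketch of undistorting downloaded gaze points with OpenCV.
# The JSON keys ("camera_matrix", "distortion_coefficients") are assumptions; verify them
# against your own scene_camera.json.
with open("scene_camera.json") as f:
    calib = json.load(f)
K = np.array(calib["camera_matrix"], dtype=np.float64)
D = np.array(calib["distortion_coefficients"], dtype=np.float64)

gaze = pd.read_csv("gaze.csv")
pts = gaze[["gaze x [px]", "gaze y [px]"]].to_numpy(dtype=np.float32).reshape(-1, 1, 2)

# Undistort and re-project back into pixel coordinates (P=K keeps the same camera matrix)
undistorted = cv2.undistortPoints(pts, K, D, P=K).reshape(-1, 2)
gaze["gaze x undistorted [px]"] = undistorted[:, 0]
gaze["gaze y undistorted [px]"] = undistorted[:, 1]
```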
Hello! When I finished uploading the recordings and went to check the cloud, I found an error in one of the recordings. Should I keep waiting, or should I export the footage directly from my phone to my computer myself? As I want to deal with this footage as soon as possible, if I need to export it to my computer before analysing it on the cloud, how should I do it?
My OnePlus 10 Pro upgraded to Android 13. Instructions to rollback or downgrade to the previous version are only available for OnePlus 8 and 8T. Can someone provide me the instructions and official Rollback APK for the OnePlus 10 Pro?
Hi @user-7e5cb9 ! We do support A13 on the One Plus 10 Pro, so there is no need to rollback. That said, if for any reason you would like to roll back, we can follow up with instructions.
Hey Pupil Labs team, Is there a method to perform offline drift correction for Neon? To the best of my knowledge, there is no specific calibration for Neon. However, when we examined the scene video with gaze overlay, we noticed that there is some drift between the "target" and gaze. Thank you in advance for your help.
Hi there, We are conducting a study that involves getting participants to read text displayed on a smartphone while wearing the Neon eye tracker.
We'd like to know how to extract two parameters from the timeseries data (.csv files) that's downloaded from Pupil Cloud. 1) How fast the eyes are moving/scanning while reading (speed). 2) How much the eyes have moved (amplitude) when reading.
Could you provide some information on the above? Any code or methods would be helpful. Thanks.
Hey @user-d2d759! Those metrics aren't computed explicitly, although they could be calculated post-hoc, assuming a few conditions are met. However, before we delve into that, I'm curious about how much accuracy you require. For example, do you need to discriminate between words, characters, or even smaller eye movements?
Hello. Regarding the units of yaw (-180,180), would it be right to say 0 is the magnetic north?
If you have calibrated the IMU as described here: https://docs.pupil-labs.com/neon/how-tos/data-collection-with-the-companion-app/calibrate-the-imu-for-accurate-yaw-orientation/#calibrate-the-imu-for-accurate-yaw-orientation, then that is correct 🙂
During a session with a subject, the eye tracker stopped recording because I ran out of storage on my phone. If I delete video data on the phone, will it still exist on the cloud? I'm assuming yes, but I also wanted to make sure. Additionally, how much cloud storage do we have?
Hi @user-328c63. Have your recordings already uploaded to Cloud? That is, do they show as fully uploaded and playable in Cloud?
Face mapper enrichment to data
From Pupil Cloud, I downloaded the "raw android data" of a recording. In that data folder, there is an eye video named "Neon Sensor Module v1 ps1.mp4", and also a file named "Neon Sensor Module v1 ps1.time", which I assume is the timestamp file of the eye video. But how do I open this .time file?
Hi @user-613324 ! If you do not mind, is there any specific reason you would like to access the .time files from the RAW format over the CSV files we offer?
That said, you can open them with Python and NumPy, like this:
import numpy as np
# .time files are flat arrays of unsigned 64-bit integers: one UTC timestamp in nanoseconds per frame
time_file = np.fromfile(str(path), dtype="<u8")
I experience difficulties with the world cam. I see a red dot (gaze) on a black screen.
Hi @user-e40297 ! Is this happening in Cloud or on the Companion device? And if the latter, does it also happen in the preview?
Hey, a short question: is it possible to process Neon recordings with Pupil Player? That is, just copying them from the mobile device to the computer and opening them in Pupil Player, e.g., to get the gaze overlay. Or is this only possible by using the Cloud?
Hi @user-fb5b59 ! Neon recordings made with the Companion App are not compatible with Pupil Player. That said, we are working on an offline app similar to Pupil Player that will allow you to do exactly what you describe. We are aiming to have this ready within this month.
We previously integrated the Pupil Invisible device by using the "zyre" library in C++, so we were able to directly receive the eye streams, gaze data, world video, etc. in our C++-based system. I assume this is no longer possible, so we have to switch to using the HTTP REST API, websocket connections, etc.?
Sorry to keep bugging you about this, but any updates on the availability of pupillometry? Thanks!
Hey @user-44c93c 👋. It's now available in Cloud 🙂
Hi,
I'm having issues using the cloud to produce heatmaps via the Marker Mapper. When it runs, the progress doesn't go above 0%, and I also keep getting Internal Server Errors. Enrichment ID: 9f2aa5f1-a695-48dc-b5e9-166f1ccec08b
Hi,
I'm currently working on my thesis on eye tracking, where we use Pupil Cloud. The 'Face Mapper' enrichment is very interesting for our research. I was wondering whether it is possible to determine where on the face a fixation falls (eye region, mouth region, nose region). When downloading the enrichment data to an Excel file, coordinates of the eyes, mouth and nose are given, but not whether the fixation falls within these regions.
Hi @user-6592a5 👋! To determine whether the wearer's fixation falls in the eyes/mouth/nose regions, you could use the coordinates given in face_positions.csv together with those given in fixations_on_face.csv. For example, you can filter fixation on face == TRUE in fixations_on_face.csv to keep only the fixations that were detected on the face, and then check whether the fixation coordinates (fixation x/y [px]) for a given timepoint match the coordinates given for the eyes/mouth/nose in face_positions.csv.
Hey @&288503824266690561 , quick question about the Marker Mapper. When computing surface coordinates, does Pupil Labs apply an M-estimator Sample Consensus (MSAC) or similar algorithm that filters out spurious measurements? In that case, can I assume that any time a 'gaze detected on surface' row reads true, spurious measurements have already been filtered out, or do I need to do my own filtering? I have noticed that in Pupil Cloud the surface occasionally spazes out and seems way off, but I am unsure whether each time this happens it is associated with a "false" value, as the website seems to think the surface is detected even when the estimated surface varies wildly in size and shape from the true surface. I would send a screenshot to show you what I am talking about, but I cannot seem to log in to Pupil Cloud right now. I ordinarily log in with a Google account.
Please could you share a screenshot to demonstrate the point? Cloud login is working now 🙂
Hi @user-28ebe6 ! We do not apply any additional filters to the marker detector; we report the markers as given by the detector. Could it be that the size of the markers or the illumination prevents them from being properly detected?
Other than a screenshot, would you be willing to share a recording?
Hello! I can't sign in to my Pupil Cloud with Google. What can I do?
Thanks for the report. I see the issue and have elevated this to my team. Will update when this has been resolved.
me neither
Yes, it works. Thank you very much for your help
Hi, I'm Ken. I want to know how to split long recordings into 3-minute parts to run the Reference Image Mapper.
Hi @user-b03a4c! Currently, you can not use events to define the scanning recording. The scanning recording has to be an independent (less than 3 minutes long) recording that is added to the project. In simple terms, this recording will be used to determine the "3D relationship" of the reference image and then try to locate it on the other recordings.
If you would like to use events to determine a section of your recording as a scanning recording, feel free to suggest it at https://feedback.pupil-labs.com/pupil-cloud
Hi, I did some recordings which are now fully uploaded to the cloud. I am trying to watch the video rendering with the fixation and gaze overlay, but these keep dropping out (although both are selected) or don't show up at all. A notice for 'internal server error' keeps popping up. Is there anything I could be doing to avoid this problem?
Hello! My Pupil Cloud always reports the error '{"code":500,"message":"internal server error","request_id":"3b8df27c19c0ca4a89d2334d1ff18f1b","status":"error"}'. What should I do?
Hi @user-4724d0 and @user-d714ca ! We have experienced some issues with our servers, but our team is working on this. I will let you know as soon as they fix it. Apologies for the inconvenience.
@user-4724d0 @user-d714ca @user-3c0775 this has been solved, if you are still experiencing issues please let us know.
Working great, thank you!
Hi, has anyone experienced an error when using the OpenCV package to adjust the RGB mode? I was trying to define an Area of Interest.
Is the file referenced on line 4 an actual image file (png, jpg, etc)?
Oh, my bad. It's a folder. But when I change it to the actual image file, this happens (it's loading and won't run).
Let's move this convo to 💻 software-dev as it's not really a Neon discussion
Okay. Sorry
No need to apologise 🙂
Error found - the Neon module was not properly connected to the frame. Hi, I have just received my "Neon - Just Act Natural" package with the OnePlus 10 Pro, but the glasses will not connect when plugged in.
Hi @user-6c4345 ! Could you please use a 1.5 mm hex Allen key to gently tighten the screws connecting the module to the nest? This ensures a secure connection; a loose connection might be the reason the module is not recognised.
If after doing so, you are still experiencing issues, please contact info@pupil-labs.com
Hey @&288503824266690561 , question: for the enrichment files, how come the gaze.csv and the surface_positions.csv files are of different lengths? Should they not have the same number of data points?
What about frames where the surface is out of view or otherwise undetected? Or frames where the gaze is not on the surface?
Hi Pupil Labs, I had a question about the async and sync methods in pupil-labs-realtime-api. Does the sample rate of gaze differ between the two, i.e., would the async method provide a higher-resolution gaze data stream than sync? I can see in the API implementation that the sync method pops the last frame from a queue, which might theoretically result in frame drops. Does the async method ensure that we stream every frame from the queue to the connection?
Hi @user-ccf2f6 ! Here you have a high level description of both APIs. https://pupil-labs-realtime-api.readthedocs.io/en/stable/guides/simple-vs-async-api.html
Please also have a look at https://discord.com/channels/285728493612957698/1004382042340991076/1004716382430167081. As you mention, the simple API grabs the last frame, meaning that if you are not fast enough consuming it, you may miss frames, while the async API queues them, so you won't miss any.
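As a small illustration of the difference (a sketch based on the docs linked above, not a complete example), the simple API usage looks like this; the async API instead exposes the streams as async generators, so every queued sample is delivered as long as you keep consuming them:

```python
from pupil_labs.realtime_api.simple import discover_one_device

# Simple (blocking) API: each call returns the most recent available sample, so if this
# loop runs slower than the gaze stream, intermediate samples are skipped.
device = discover_one_device()
try:
    for _ in range(100):
        gaze = device.receive_gaze_datum()  # blocks until a sample is available
        print(gaze.timestamp_unix_seconds, gaze.x, gaze.y)
finally:
    device.close()
```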
Hi PL, could you please help me with this
Some data points are missing corner locations in the surface_positions.csv, and in the case when the gaze is not detected on the surface there is a "false" in the "gazeDetectedOnSurface" field, so I doubt that would explain the disparity.
I don't know the exact conditions, but I don't think a record is guaranteed in these cases. Someone who's more familiar may chime in, but I'd examine your data to see if the missing records correspond to time ranges where the surface is not visible/detected and/or the gaze is off-surface.
Hi @user-6c4345 ! We have received the email and we will contact you promptly with a solution.
Thank You
@marc why is it that the frames are flickery? Sometimes it works, but immediately after a few seconds it becomes flickery.
Hey @user-25fb27! Could you please contact info@pupil-labs.com with your original order ID? Mention that I sent you. Thanks!
Hello, I have been trying to use the Pupil Neon with Pupil Capture v3.6.0 (my experiment needs real-time head tracking), but I can't manage to get a gaze position. I have tried setting the ROI for my eyes and the Neon calibration. The gaze is really shaky, and I often observe a low confidence level for one eye. Any tips or documentation I could follow? Thanks.
Hello! Currently I am not able to see the eye cameras in the app. Is there any setting to reactivate them?
Hi @user-9d72d0 ! Could you try unplugging and replugging Neon, tightening the screws on the module, and reinstalling the Companion app, checking that the permissions are properly set? If this does not solve the issue, please contact info@pupil-labs.com
Ahhhh I think I see what's going on: when I inverted the time difference to get the update frequency, it appears that the gaze updates as fast as the infrared eye tracking does (150-400 Hz), while the surface_positions update at the camera frame rate (30 Hz and under).
Ah, good eye. If you wanted/needed those surface gazes at 200Hz, you could interpolate. That's about the best you could do anyway given the frame rate limit of the scene camera
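If you go down that route, a hedged sketch of the interpolation idea might look like the following; the exact column names in the Marker Mapper exports (timestamps and corner coordinates) are assumptions here, so adapt them to your files:

```python
import numpy as np
import pandas as pd

# Hedged sketch: upsample ~30 Hz surface corner positions to the ~200 Hz gaze timestamps.
gaze = pd.read_csv("gaze.csv")
surface = pd.read_csv("surface_positions.csv").dropna(subset=["tl x [px]"])  # assumed column name

t_gaze = gaze["timestamp [ns]"].to_numpy()
t_surf = surface["timestamp [ns]"].to_numpy()

upsampled = pd.DataFrame({"timestamp [ns]": t_gaze})
corner_cols = ["tl x [px]", "tl y [px]", "tr x [px]", "tr y [px]",
               "br x [px]", "br y [px]", "bl x [px]", "bl y [px]"]  # assumed names
for col in corner_cols:
    # Linear interpolation of each corner coordinate at every gaze timestamp
    upsampled[col] = np.interp(t_gaze, t_surf, surface[col].to_numpy())
```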
Hi Nadia, thank you for the help! I have tried to apply and use the tips, but I still stumble over one aspect. The fixation data in 'fixations_on_face' (fixation in world camera pixel coordinates) are different from the fixation data in 'face_positions' (fixation in image coordinates in pixels). I don't really understand how I can compare these to determine which fixations fall in the mouth/eye/nose region. My apologies for the many questions; is there by chance a detailed manual or more information somewhere? The documentation on Pupil Labs Neon is fairly general.
Hi @user-6592a5 👋! The fixations_on_face.csv provides the fixations of the Neon user on faces that are detected in the scene camera. It does not specify whether the user fixates on the mouth or nose of the detected face, but it gives you the x,y coordinates of the fixations. In contrast, the face_positions.csv provides the coordinates of the eyes/mouth/nose (e.g., the x,y coordinates where the nose is detected, in image coordinates in pixels), rather than the fixations of the user on these facial landmarks. Note that both the fixation coordinates and the eye/mouth/nose coordinates share the same coordinate system; they are given in scene camera coordinates.

Therefore you can follow the approach mentioned in my previous message (https://discord.com/channels/285728493612957698/1047111711230009405/1173911864288227429) by combining the information of:
a) Where did the user fixate? This is reflected in the fixation coordinates on the face (from the fixations_on_face.csv).
b) What appeared in the scene camera when the user fixated at this specific area and timepoint? This is reflected in the coordinates at which the eyes/mouth/nose appear in camera space (from the face_positions.csv).

Some additional considerations:
- The coordinates of the eyes/mouth/nose are represented as a single point in scene camera space. It will probably make sense to define a bigger area of interest around those points and check whether the fixation coordinates fall within the defined area.
- Accurately detecting the individual facial landmarks depends strongly on the distance. Keep that in mind, as the distance also affects the size of the area of interest (see the previous point) that you might want to define around the points representing the eyes/mouth/nose positions.

I hope this helps!
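If it helps as a starting point, below is a rough, hypothetical sketch of that workflow in pandas. The column names (e.g. fixation on face, the landmark columns, the timestamp columns) and the 60 px AOI radius are assumptions for illustration; match them to your actual CSV headers and typical face distances:

```python
import numpy as np
import pandas as pd

fixations = pd.read_csv("fixations_on_face.csv")
faces = pd.read_csv("face_positions.csv")

# Keep only fixations that were detected on a face (column name assumed)
fixations = fixations[fixations["fixation on face"] == True].copy()

AOI_RADIUS_PX = 60  # assumed radius around each landmark point; tune for your face distances


def label_region(fix):
    # Pair the fixation with the face sample closest in time (timestamp columns assumed)
    face = faces.loc[(faces["timestamp [ns]"] - fix["start timestamp [ns]"]).abs().idxmin()]
    landmarks = {
        "left eye": (face["eye left x [px]"], face["eye left y [px]"]),
        "right eye": (face["eye right x [px]"], face["eye right y [px]"]),
        "nose": (face["nose x [px]"], face["nose y [px]"]),
        "mouth": (face["mouth left x [px]"], face["mouth left y [px]"]),
    }
    for name, (x, y) in landmarks.items():
        if np.hypot(fix["fixation x [px]"] - x, fix["fixation y [px]"] - y) <= AOI_RADIUS_PX:
            return name
    return "other"


fixations["region"] = fixations.apply(label_region, axis=1)
```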
Hi @user-91a92d 👋! Do you happen to have a recording you can share? We'll be better equipped to provide you with feedback if you share a recording that includes the calibration choreography.
Here is the video with my pupils. I am using natural feature calibration because the screen markers never turn green (maybe due to screen luminosity), and for my experiment (outdoors, inside a car) it will be more practical. I hope you can notice that the gaze is "jittery" and the pupils are often lost. I have noticed a difference in my eyes' "lighting" (from the infrared).
Hi @user-8825ab ! I am answering to you here, since it seems like you are using Neon or Invisible, could you please try to log out and log in again in the app and see if this resolves your issues? https://docs.pupil-labs.com/neon/troubleshooting/#recordings-are-not-uploading-to-pupil-cloud-successfully
thank you
it works now
Hi team again, I uploaded my recordings to the cloud, but it shows an error on one recording. I wonder what I should do to make sure it is fine.
Hi @user-8825ab ! Could you contact info@pupil-labs.com with the recording IDs? You can find those in Cloud by right clicking over the recording and selecting view recording info.
Thank you
Hi Miguel, it's me again. I wonder if we can export the recordings to our personal storage, due to our data confidentiality policy. Also, we don't have a gaze overlay function in our app; I wonder if this is something we need to pay for separately?
Hi @user-8825ab ! Pupil Cloud is secure and GDPR compliant. You can read more about the privacy policies and security measures in this document https://docs.google.com/document/d/18yaGOFfIbCeIj-3_GSin3GoXhYwwgORu9_7Z-grZ-2U/export?format=pdf
As of now, we don't offer localised versions of Cloud to be run on your servers, but we do have a basic offline app that will allow you to explore the data https://github.com/pupil-labs/neon-player.
OK, thank you. Can we download the video with the gaze information (I mean the red circle) on it?
Sure! In Cloud you can download the video with gaze overlaid by adding a recording to the project, then navigating to Analysis in the project view and clicking on New Visualisation > New Video Renderer.
When I try to add it, I don't see the overlay function here.
Is there some way we can add this function?
Hi! We will soon make it available there, but until we modify the UI, you need to click on "Analysis" at the bottom left, and then on the "+ New Visualisation".
OK, then can we download the recording?
Yes, after the new visualisation with the gaze overlay has been computed, you will find it on the analysis tab as "Completed" and you will be able to download it, either by right-clicking on it and selecting "Download" or by normally clicking on it and pressing the "Download" button at the bottom left.
then can we download the recording with the gaze overlay?
ok great, thank you very much for your support
Hello! Where can I find the option to play back the recorded session and stream it using Python? The Neon player can handle playback but lacks a network API. Additionally, why does the recorded data downloaded from the cloud not include eye videos?
Hello! I have a recording with a failed fixation pipeline, can we fix it?
Hi @user-d714ca! Please reach out to [email removed] and a member of the team will assist you as soon as possible.
Hi @user-9c53ac ! Regarding your first question, there is currently no way to stream back a recorded session. Could you please elaborate on what the use case for it would be?
Regarding your second question, currently, if you want to download the eye videos, you need to download the original files.
To do so in Cloud, you will need to enable the RAW sensor data download in the workspace.
See this message on how to enable it: https://discord.com/channels/285728493612957698/1047111711230009405/1108647823173484615 Kindly note that this RAW sensor data is the same that you will obtain by exporting the recordings directly from the phone.
A playback API is essential and useful, since it's not always possible to be on site while developing. It's much easier to do the development with a recorded session.
Hello, I'm trying to use Neon Monitor, but when I try to access the Monitor app through the URL it stops and doesn't show me the page, which is flagged as 'Not Secure'. It says: "The site doesn't use a private connection. Someone may be able to view and change the information you send and get through this site. To resolve this issue, the site owner must secure the site and your data with HTTPS."
Can you please help?
Hi @user-7413e1 ! That's probably your firewall, antivirus or some setting in your network. Try using the IP address shown below neon.local:8080, or try a different network.
Thank you Miguel. Sorry I wasn't accurate earlier: I used both the URL and the IP Address. Still doesn't work. The network I'm using is eduroam from my academic institution.
Unfortunately, Eduroam is known to be quite restrictive, I would recommend using a different network if possible or setting up a hotspot https://support.microsoft.com/en-us/windows/use-your-windows-pc-as-a-mobile-hotspot-c89b0fad-72d5-41e8-f7ea-406ad9036b85#:~:text=your%20data%20plan.-,Select%20Start%20%2C%20then%20select%20Settings%20%3E%20Network%20%26%20Internet%20%3E%20Mobile,Internet%20connection%20with%20other%20devices.
If using a different network, these are the requirements https://docs.pupil-labs.com/neon/how-tos/data-collection/monitor-your-data-collection-in-real-time.html#connection-problems
Also - false alarm, just refreshed and it doesn't work again
I'm using my personal phone hotspot - now the page opens, but it is not showing the video (grey screen). How can I solve this? Also, how secure would this be?
Hello everyone, we use the eye tracking glasses in an autonomous driving setup. We have big problems with the exposure. Is there a way to control the camera parameters via the Python API? Is it possible to tell the Backlight Compensation which image area it should pay attention to?
Hey @user-c6dc34 👋. The backlight compensation setting changes the auto exposure target value: 0 is darker, 1 is balanced, 2 is brighter. Depending on the specific use case, there's usually a setting that favours a given area of the environment, but we'd need to learn more to make a recommendation. Could you elaborate a bit on your use case, e.g. is it a driver in a real car or an operator sat at a monitor? Note that there's no way to control this setting via the API.
Hi, I'm working with the Neon glasses and just made some footage. But right now the upload has been going on for a few hours and it's still 0% uploaded to the server. The loading bar is turning, but not doing much more. Is this normal? Or is there maybe another way to upload the footage to Pupil Cloud?
Please visit https://speedtest.cloud.pupil-labs.com/ on the phone and execute the speedtest. This will tell us whether the phone can communicate with our servers properly.
Hi @user-7413e1! What device are you using Neon Monitor with?
I've tried with a Windows laptop and with my personal phone (Apple iPhone). Either way it is not working reliably. I've tried using the Edge browser and it seems to work sometimes, but when I refresh it usually stops working. Also, I keep receiving a 'Not secure' message, which is concerning.
Hi there, I'm wanting to quantify the x,y gaze/fixation coordinates in cm units; the raw data files downloaded from Pupil Cloud give these coordinates in pixels. I was wondering what's the best way for me to convert the px units generated by the Neon world camera into physical units? Thanks
Hi @user-d2d759 👋. May I ask what the goal is here? Do you mean to map gaze to some object in physical space, such that you can quantify gaze shifts in terms of cm relative to the object?
Hi @nmt, the goal is to get the distance the eyes have moved when reading each line of text and the time taken by the eyes to read each line. So I'm wanting to know the best way to convert px into cm in order to get these parameters. Thanks.
or maybe share a gist for playback
Hi @user-9c53ac! What sort of development work do you have in mind? If you just need to playback the recording, why not download the scene camera + gaze overlay footage (https://docs.pupil-labs.com/enrichments/gaze-overlay/#gaze-overlay) and play that back using any one of the various command-line tools that are currently available?
Thanks, but I want finer access to individual frames and their timestamps locally, without having to use Cloud. But it's OK, I will develop it myself.
We can recommend using Chrome Browser on your PC for the best experience. The connection is local - no internet is needed. Other devices on the local network can also view the stream, so that's likely why you got that message.
Also - what do you mean no internet is needed? I thought you need the two systems to be on the same network for them to connect?
I switched to Edge because Chrome is not working at all (ever).
Hi Neil, thanks for the quick reply. We work with real vehicles on real roads. More specifically, we do image stitching between the vehicle camera and the scene camera of the eye tracking glasses. For this we do feature matching and if the image is overexposed or underexposed then unfortunately this does not work.
Backlight 0 would be the best setting for a driver with bright light entering through the windscreen. In my experience, you can see out of the windscreen, but the car's interior would be darker. Have you already tried that?
Very interesting. We've experimented with this ourselves, I.e. feature matching to combine an external video feed with Neon. Is your main focus what's going on outside of the car, from the driver's perspective?
Yes, we combine it with semantic segmentation, depth estimation, etc. to answer the question of what the driver was looking at.
Monitor app
Hello! I am also having a problem during live-stream recordings. I am using a hotspot from my iphone to connect both my macbook and the neon phone companion. This works fine to launch the recording in the monitor app, but I sometimes lose the live gaze tracking: meaning that I still have the live world camera feed, but the live gaze indicator (red circle) is stuck in the center of the screen. Is there anything I can do to be better able to monitor the recordings live?
Hi there. I tried to use the basic offline app by dragging the folder directory over. However, when I dropped it onto the program, it loads for a few seconds then crashes. Why would this be?
Hey @user-db5254! Can you please confirm which eye tracker you're using?
Monitor App 2
Hi everyone
I want to buy the Neon model. However, I have a few questions. When will the eye state and pupil diameter feature arrive? What is the ease of access to data? It is important that I have access to raw data. Also, will there be any problems with updates?
Hi @user-a4067e 👋. Regarding your questions:
Hello everybody! We are currently doing a gaze tracking study with the Neon glasses. In our experiments, we want to use the Marker Mapper to map the gaze data onto the surface we define. For that, we have fixed AprilTag markers from the tag36h11 family in the four corners of the screen (like the instance in the screenshot), in static positions. After we did some recordings and then wanted to run the Marker Mapper enrichment in Pupil Cloud, it doesn't work. It is not able to define a surface at all, although all the markers are visible on the screen. Could you please help with what the possible reason behind it could be? Could it be because of the size or position of the markers, or something else we are missing? Thanks. I am providing one recording ID as well if that helps - 2975987a-d462-4981-86f0-5b3251dbc51c
Hi @user-186965 👋! Could you please share more information on what happens when you try to create and run the enrichment? When creating the enrichment in your project, you should be able to see whether the markers are detected or not. If they are detected, they are displayed in green colour (see the example I'm attaching). Is that the case? If not, do you see any error?
Hi @user-480f4c, thanks for your reply! So I tried to create the enrichment again and found that sometimes it worked and was able to detect the markers, but sometimes it did not work and gave an internal server error, even with the same setup. I am attaching screenshots for both scenarios. Could it be because of some kind of processing error? And is there any way to avoid this in the future?
Hi. I have been looking at the 3D eye states ".csv" file. If I understand correctly, optical axis x/y/z are the x, y and z components of the direction vector of the eye in the coordinate system described in the documentation, but there is no torsional information. Am I right? Is there a way to get information on torsional eye movements?
Hi @user-b55ba6! Torsional eye movements as in rotation about the line of sight are not reported. To be able to get information about that, one would need a really detailed view of the Iris, such that you can track some landmark, then define the baseline and track it.
Thanks!
Could you please share the ID of the affected enrichment? You can get it by locating the enrichment in Pupil Cloud, right-clicking on it, and selecting 'Copy enrichment ID'.
These are the three enrichments I currently have - 457f2c86-bdb6-49de-8840-7e3fa26ea91f, ab474f9b-47bb-45aa-9e75-a7771cb1a583, 3918949b-0618-4745-a699-59f288e2f145. For the first one, there is sometimes an internal server error while detecting, but the other two work fine.
Hi @user-186965! Our Cloud team has looked into your enrichment and it should run fine now - could you please have a look and confirm that it has been completed?
Hey, yes it worked, many thanks 🙂
Neon with Core pipeline and Pupil Capture
Hi team. I am currently working on a project in the cloud. I have 11 RIMs in the enrichment sections. None of the RIMs have been processed till now. They have been running for almost 3 hours now. Any suggestions? I've tried refreshing the page multiple times. Doesn't seem to work. The data is 2 days old.
Yes, we have tried different settings. I think the autoexposure doesn't work "correctly" for our use case because the windscreen covers only a small fraction of the image, so the exposure gets adjusted to the dark cockpit. I think it would be really helpful if I could adjust the area that the autoexposure attends to, i.e. to focus on the windscreen. Isn't there any possibility? Thanks! :)
Autoexposure + backlight 0 would favour the windscreen rather than the dark cockpit. Have you tried this?
Hey Pupil Labs, I'm interested in generating a gaze + scene video similar to the Cloud player on my local machine. I'm aware that some sensors may start a few seconds after the video begins. Could you please direct me to the synchronization documentation or the relevant Neon Player GitHub code? Unfortunately, I couldn't find either, and I thought there might be documentation on your website. Thank you!
Hi @user-a64a96 👋! In Neon Player you can easily export the scene video with the gaze overlay. You can download it here: https://docs.pupil-labs.com/neon/neon-player/
Backlight 0 works fine, but we have to adjust the exposure manually. Autoexposure doesn't work well.
For context, the backlight compensation adjustment only works/takes effect in auto exposure mode. So with manual exposure, that's probably why you can't see out of the window.
Hi. I am looking at the "Neon Sensor" eye video. I am interested in getting the timestamps of the eye frames. The eye video has 19476 frames, but gaze.csv has 19473 data points. The first 40 frames, however, are saturated and very white, which raises the question of whether the 19473 data points align with the 19476 frames. Do they?
Hey. Does anybody have a suggestion on what would be best for calculating eye movement velocity (to detect saccades)? I know there are some algorithms that are pixel based, but I am interested in applying a threshold of 65 deg/s. I see some suggestions to use azimuth and elevation angles. There are also the 3D cartesian coordinates from the 3d_eye_states.csv file. Anybody?
If you want to implement a saccade filter, elevation and azimuth are reasonable variables to work with - they describe gaze shifts in degrees, so you can compute deg/s
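A minimal sketch of that calculation, assuming the standard gaze.csv columns (timestamp [ns], azimuth [deg], elevation [deg]); note that raw sample-to-sample velocities are noisy, so you will likely want to smooth before thresholding:

```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")

t = gaze["timestamp [ns]"].to_numpy() / 1e9  # seconds
az = np.radians(gaze["azimuth [deg]"].to_numpy())
el = np.radians(gaze["elevation [deg]"].to_numpy())

# Angular distance between consecutive gaze directions (spherical law of cosines)
cos_d = (np.sin(el[:-1]) * np.sin(el[1:])
         + np.cos(el[:-1]) * np.cos(el[1:]) * np.cos(np.diff(az)))
ang_dist_deg = np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

velocity_deg_s = ang_dist_deg / np.diff(t)
is_saccade_sample = velocity_deg_s > 65  # simple I-VT style threshold
```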
Hi - I am doing data collection with children, so the Neon is not always worn properly during the study. The exported 3d eye states and gaze files have data for every frame, even when the eye is not in view. Similar to @user-b55ba6, I want to use the eye video recordings that are in the export folder - but the number of frames in this video does not match the number of frames in the 3d eye state and gaze files (by ~10 frames). I have a few questions: 1) Is there a way to get the timestamps of the eye videos? 2) Do you have any recommendations for other ways to identify frames when the 3d eye state and gaze data are wrong (because there is no eye in view)? and 3) Will there be a way in the future to export the eye videos as an overlay on the world video? Thanks!
Hi @user-01e7b4 👋! Regarding your questions, see my points below:
1) To access the timestamps of the eye videos you can read the .time file that comes with the Raw Android Data: np.fromfile('Neon Sensor Module v1 ps1.time', '<u8')
2) I'm not sure I fully understand your question. Do you mean that you want to filter those gaze samples when the user is wearing Neon vs. when they are not? I'd appreciate if you could elaborate on that.
3) You can export the world video with gaze overlay + eye video overlay using the Neon Player: https://docs.pupil-labs.com/neon/neon-player/. Please make sure to use the Raw Data Android folder from Pupil Cloud. If this option doesn't show, you will have to enable it in your workspace. Please see this relevant message: https://discord.com/channels/285728493612957698/1047111711230009405/1123238795815428166
Regarding the frame mismatch, I've created a relevant thread to discuss about that in more detail: https://discord.com/channels/285728493612957698/1178742667060969544 Could you also reply to my questions there? That will help us identify the root cause of the issue.
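Building on point 1, here is a hedged sketch of reading those .time timestamps and pairing each eye-video frame with the nearest gaze sample; this is just an illustration of nearest-timestamp matching, not an official alignment recipe:

```python
import numpy as np
import pandas as pd

# Frame timestamps are unsigned 64-bit UTC nanoseconds, one per eye-video frame
frame_ts = np.fromfile("Neon Sensor Module v1 ps1.time", dtype="<u8").astype(np.int64)

gaze = pd.read_csv("gaze.csv").sort_values("timestamp [ns]").reset_index(drop=True)
gaze_ts = gaze["timestamp [ns]"].to_numpy()

# Nearest-neighbour match of each frame timestamp to a gaze timestamp
idx = np.clip(np.searchsorted(gaze_ts, frame_ts), 1, len(gaze_ts) - 1)
prev_closer = (frame_ts - gaze_ts[idx - 1]) < (gaze_ts[idx] - frame_ts)
idx = np.where(prev_closer, idx - 1, idx)

gaze_per_frame = gaze.iloc[idx].reset_index(drop=True)
```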
Eye frames timestamps
Hi, I have a question regarding the pupillometry update. After updating the Companion app, I can see the 3d eye states file when I export the data after uploading it to the cloud, but not the pupil diameter. Is the output a csv file as well?
Hi @user-b4e139 👋! You can find the pupil data in the 3d_eye_states.csv file (column: pupil diameter [mm]). Please have a look at our docs for the full list of the data available in this file: https://docs.pupil-labs.com/neon/data-collection/data-format/#_3d-eye-states-csv
Thank you for the quick reply! The Neon player is great, and I can use that to identify frames where the tracker is not being worn. Our main concern is that there is gaze and eye state data for all frames of the recording (and a vis circle being generated) - even when the eye is not in view of the camera. For example, the glasses were pushed so that the participant's forehead was in view instead.
Hi @user-01e7b4. Thanks for sharing your concern. Currently, we do not have an automated method to determine whether the gaze and eye state measurements are of high quality, or not when the glasses are not being worn or are far from the wearer's eyes. At present, this process requires manual review. However, this is certainly on our radar and we are exploring options.
Hi, Pupil Labs! I have a problem running an analysis with the Reference Image Mapper enrichment. I tried it with two recordings I made, but both gave the error "The enrichment could not be computed based on the selected scanning video and reference image. Please refer to the setup instructions and try again with a different video or image." Can you please tell me where I went wrong?
Hi @user-042d90 👋. It seems like you might be using the wrong enrichment, but just to be sure, can you describe what your intention is with those QR code markers?
Hi there! A while back I asked a question about the Cloud software and our new Neon glasses, and one of the staff messaged back and mentioned I could get a free tutorial session since we purchased the Neon glasses. How do I go about scheduling that?
Hi @user-a5c587 👋. Please contact info@pupil-labs.com to schedule an onboarding session, and mention your original order ID in the email 🙂
Hello. Regarding the new pupil diameter data streams in cloud, you state that "The algorithm does not provide independent measurements per eye but reports a single value for both eyes." A few questions. 1) How does the algorithm arrive at a single value? Are you taking the average of both eyes to compute this value? Something else? 2) Are there plans to eventually provide different values for left and right eye diameter? If so, is there an ETA for that?
Hi @user-44c93c! Neon's pupil-size measurements are provided as a single value, reported in millimetres, quantifying the diameter of the 3D opening in the center of the iris. Currently, our approach assumes that the left and right pupil diameter is the same. According to physiological data, this is a good approximation for about 90% of wearers. Note, however, we do not assume that pupil images necessarily appear the same in the left and right eye image, as apparent 2D pupil size strongly depends on gaze angle. We are working on an approach that will robustly provide independent pupil sizes in the future. We cannot give a concrete ETA for that though.
Also, for pupil core data collection through Pupil Capture, the CSV outputs for pupil diameter included "confidence" and "model confidence" values. I don't see these in the pupil cloud pupil data csv files. Is there a confidence value being computed somewhere? Is it automatically trimming bad data or including it? Thanks!
@user-44c93c, regarding your second question, currently, we do not provide a quality metric, like confidence, for Neon's eye state or gaze outputs. While they are robust to things like headset slippage, accuracy will degrade at a certain point, e.g. when the glasses are too far from the eyes. This is certainly on our radar and we are exploring options.