@user-839751 Here are the instructions for more recent versions of macOS, such as 14.5. Note that there is an order to connecting the cables:
1. Select the Ethernet adapter (AX88179A) and all other Ethernet adapters under "To devices using".
2. Run ifconfig. This will probe the Ethernet adapter in a way that establishes a Bridge to share the connection and assigns it an IP.
3. After running the ifconfig command and waiting about a minute, the QR code and IP address should be displayed in the stream section of the Neon Companion app. You might have to wait longer than you think.
4. The device can then be found with the discover_one_device command of our real-time API.
Let us know how it goes.
Hi there, I am trying to stream the scene video over RTSP to another app on the companion device in order to do some image processing locally. This is possible, but it seems to only work when connected to a WiFi network. However, I assume that the network is not used to actually transmit the video (since I am using eduroam, which shouldn't allow it). Is there any way to stream the video without any WiFi network? A cellular network is available.
Hi @user-151827 , the streaming only happens over WiFi (including hotspots) or via an Ethernet connection. It happens over standard ports that are usually allowed, so I do not think that eduroam blocks such data in general, but it could also be that your institution has configured its own eduroam deployment to allow it. However, do note that you will not have the best data transmission over eduroam. The quality will be quite variable.
We recommend a local, dedicated WiFi router, but you can also use Ethernet over the tested USB hub if that's necessary for your use case.
Streaming via the real-time API over a cellular network is not possible, but you can live stream the display by joining a Google Meet session with the phone and sharing the screen.
@user-151827 , my apologies, my colleague just gave me a tip and upon re-reading your message, I see that you are running an additional app on the Companion device.
Hi @user-f43a29 , thanks for your response! My issue is that I would like to access the video stream in an Android app on the Companion Device itself. This app would take the video stream as an input, process the images (with some OpenCV operations), and output the results. A very minimal proof of concept of this works, as I'm able to watch the RTSP video stream in the other app, but only when connected to a WiFi-network.
I am wondering if there is any way to work around this? The reason I assume that the video isn't actually streamed over eduroam is that I can access the stream in the other app when connected to eduroam. However, when trying to stream video to a laptop over eduroam, that doesn't work (I checked this with @user-d407c1 before and he confirmed that eduroam sometimes doesn't allow the streaming).
@user-151827 I'll coordinate with my colleagues and will likely have a response by next week.
Yes, indeed, @user-d407c1 just pointed out your use case to me π
Thank you @user-f43a29 , I appreciate your help!
Hey there! I'm trying to get the heatmap from my recordings but it's not working. Any help?
Hi @user-1e53d0. How are you trying to obtain the heatmap? Heatmaps are generated based on enrichments. Have you checked our documentation? For a general introduction to Pupil Cloud, I'd recommend having a look at our onboarding video.
@user-d407c1 Some weeks ago, you suggested using ExoPlayer to handle the incoming RTSP stream in our custom Android App. However, I noticed large delays (>1 second) when doing so and found similar issues online. Is this something you are aware of and have a solution/workaround for, or are there better alternatives? I have found a user-made RTSP client that does not buffer and that has minimal latency (<10 ms), but maybe there is a way to do it with an officially supported package.
Hi @user-151827 ! Unfortunately I have no experience with those packages, so I cannot make any recommendation. I suggested ExoPlayer only because it is the one in the official Android documentation, nothing else; but that package looks quite promising, so if you implement it, let us know how it goes.
Hi, what is the best way to measure inter-eye distance? Is there a specific procedure to ensure accuracy?
Hi @user-baddae ! In clinical and optometric environments, it's common to use a pupilometer, but a ruler works just as well. Here's how you can measure: 1. Ask the wearer to look at a distant object (to avoid eye convergence). 2. Position the ruler in front of their eyes and measure from the center of one pupil to the center of the other.
You can use a small light to help locate the center of each pupil, but ensure they are looking at a distant object.
Neon hardware robustness
Hi, I have an issue: when I use the real-time API to get IMU info, I want to get the Euler angles (yaw/pitch/roll), but I can only get the quaternion. How can I convert the quaternion to (or get) the Euler angles in real time?
Hi @user-a2d00f , are you using the real-time API via Python?
@user-a2d00f Ok, then you can use the SciPy package to transform the quaternion values to Euler angles with the following code:
from scipy.spatial.transform import Rotation as R
euler_angles = R.from_quat([quat_x, quat_y, quat_z, quat_w]).as_euler(seq="XZY", degrees=True)
It takes into account the appropriate axis labels for the IMU. The angles (in degrees) will be returned in an array in the order [pitch, yaw, roll]. Note that the real-time API does not provide the Euler angles in real-time, only the quaternion. The Euler angles are computed for you on Pupil Cloud and via Neon Player.
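In case it helps, here is a rough sketch of how that conversion could look when streaming IMU data with the simple real-time API. Note that the way the quaternion is exposed on the IMU datum (imu.quaternion with x/y/z/w components) is an assumption on my part - print the datum to check the exact field names in your installed version.

from pupil_labs.realtime_api.simple import discover_one_device
from scipy.spatial.transform import Rotation as R

device = discover_one_device(max_search_duration_seconds=10)
if device is None:
    raise SystemExit("No device found.")

imu = device.receive_imu_datum()  # blocks until the next IMU sample arrives
q = imu.quaternion  # assumed attribute; inspect the datum if this differs
pitch, yaw, roll = R.from_quat([q.x, q.y, q.z, q.w]).as_euler(seq="XZY", degrees=True)
print(f"pitch={pitch:.1f} deg, yaw={yaw:.1f} deg, roll={roll:.1f} deg")

device.close()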
Hey! Is there a way to know which frame of the scene camera video correlates to data points, to match up the frames of the video to the collected data? I have everything downloaded from pupil cloud but I only have the timestamps of the datapoints, not their relation to the frames of the video
Hey @user-baddae, may I ask what's your goal of correlating data points to the scene camera? For context, gaze is sampled at 200 Hz, and the scene camera is sampled only at 30 Hz. This means you're going to get a lot more gaze than scene camera data at any given frame.
We're trying to do some image processing with the data and the video feed
Okay. You can get the scene camera's timestamps from Pupil Cloud by downloading the timeseries data. The timestamps are found in the world_timestamps.csv. Each row is the timestamp per frame in ns.
Using this .csv, you can then correlate the gaze data from gaze.csv. You might need to first include/exclude data using timestamps as the criteria, to find gaze within a specific frame.
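As a sketch, matching each gaze sample to its nearest scene frame could then look like this with pandas (assuming the standard timeseries export, where both files share a "timestamp [ns]" column):

import pandas as pd

# Load the scene-camera frame timestamps and the gaze data
world = pd.read_csv("world_timestamps.csv").sort_values("timestamp [ns]").reset_index(drop=True)
world["frame_index"] = world.index  # scene video frame number
gaze = pd.read_csv("gaze.csv").sort_values("timestamp [ns]")

# Assign each gaze sample to the nearest scene-camera frame in time
matched = pd.merge_asof(
    gaze,
    world[["timestamp [ns]", "frame_index"]],
    on="timestamp [ns]",
    direction="nearest",
)
print(matched[["timestamp [ns]", "frame_index"]].head())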
Ah ok thank you!
Dear Neon Support,
I am experiencing an issue while using the real-time-screen-gaze feature and would like to ask for your assistance.
As per the instructions on GitHub, I have obtained gaze data from the Realtime API and passed it to the instance for GazeMapper processing. However, I am not receiving gaze positions mapped to the screen coordinates.
I have tried using 4 or 8 markers and increased their size, but the issue persists.
Could you please provide a solution?
Here are the versions of the packages I am using:
- pupil_labs_realtime_api 1.3.1
- real_time_screen_gaze 1.0.2
This is usually the result of poor marker visibility/detectability. In most environments, you will need to adjust the exposure time in the Companion device. If you open the scene camera preview in the companion app, you'll be able to see what the camera sees, and this is a good place to start. You'll also want to be sure that your markers are large enough and have large-enough, white margins around them. If you're unsure, share an image from your scene camera with your markers in view, and we can troubleshoot from there
Hi. I have a few queries about Neon output.
Hi @user-f6ec36, thanks for reaching out π Feel free to type your questions here.
We would like to have the following metrics for a particular study. Can we get them from the Cloud software?
The client would like them in Excel format.
These are the fixation metrics we currently provide on Pupil Cloud. They can be obtained by drawing areas of interest, see here. All the data that we provide can be downloaded as .csv files.
We would need some help with this. Can we write to support or sales requesting a session?
Hi @user-f6ec36 - we have received your email, and we will reply today to schedule the Onboarding Workshop you requested. We can then schedule a call and address any questions you might have regarding metrics and how to obtain them from Pupil Cloud.
Thanks Nadia . Appreciate the same.
Dear sir, excuse me. Could you please answer the following?
Q1 https://docs.pupil-labs.com/neon/neon-player/blink-detector/
Regarding blinks.csv: which of the following does "Duration [ms]: Duration of the blink in milliseconds" match? If neither exactly, which one is it close to?
(1) The time range between when the eyelid starts to close and when it finishes opening. This is the duration of the blink as described in general papers such as https://ieeexplore.ieee.org/document/8125844 or https://www.sciencedirect.com/science/article/abs/pii/S1746809421004808. (2) The time interval between the pupil beginning to be hidden by the closing eyelid and the pupil becoming fully visible again as the eyelid opens. This time range is calculated according to whether the pupil is hidden or not.
Q2 Is it safe to treat this Duration value as the correct data for blink duration in a paper? (Or is Neon's blink duration not suitable for use in a paper, because it does not necessarily indicate the duration of the blink?)
Q3 Is the above the same for both NEON and Invisible devices?
Hi @user-5c56d0, thanks for contacting us π. Blinks are detected via an eyelid opening and an eyelid closing event. This detection is the same between Neon and Invisible (Q3).
For Q1, I recommend reviewing the white paper from the link. The description from the white paper about how blinks are detected is similar to (1). There are also details on when false detections are rejected, as well as the minimum duration needed for a blink.
As for Q2, I'm not sure I understood what you mean by "safe to treat this duration value as the correct data". If you're unsure about the blink duration, you can manually process the raw binary data offline by downloading the "Native Recording data" folder. You can then also edit the blink detector script to suit your research needs, or employ an entirely different blink detector.
@user-5c56d0 to add to the information already provided by @user-07e923: You can run the blink detector by yourself and evaluate it for your use case, following this tutorial
Sorry for the confusing question. Thank you for your response.
I apologize for asking similar questions repeatedly. Note: I plan to use the Duration [ms] values obtained from Pupil Cloud as they are.
(1) Are the Duration [ms] values obtained from Neon in blinks.csv a reliable means of measuring the actual duration of blinks?
(2) Do the Duration [ms] values obtained from Neon closely match the actual duration of blinks?
Hi, would it be possible to extract the pupil ellipsis area (major and minor axes) from the Neon 3D eye-states measurements? Or can we only access the average pupil diameter? Thanks!
Hey @user-edb34b, thanks for reaching out π I've moved your message to this channel since it's concerning Neon, and not Core XR.
The pupil diameters provided by Neon aren't an average, but the physiological size in mm for each eye. You can learn more about the 3D eye states file here.
If you'd like to access the values in real-time, such as during recording, you can do that using the real-time API.
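As a rough sketch (assuming a Companion app and real-time API version recent enough to stream eye state), accessing the pupil diameters in real time could look like this:

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device(max_search_duration_seconds=10)
if device is None:
    raise SystemExit("No device found.")

# Gaze datums only carry eye-state fields (e.g. pupil_diameter_left) when eye
# state computation is available and enabled; otherwise plain gaze is returned.
sample = device.receive_gaze_datum()
if hasattr(sample, "pupil_diameter_left"):
    print(f"Pupil diameter left: {sample.pupil_diameter_left:.2f} mm, right: {sample.pupil_diameter_right:.2f} mm")
else:
    print("No eye-state fields in this sample - check app settings and versions.")

device.close()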
Hi all, I am starting a gaze tracking project with neon. Does anyone know where I can find existing open-source datasets or ML models? It would save me a lot of time. Thanks!
Hello, is there a way to extract raw gaze direction values from the device's Neon companion app that have been recorded?
Hi @user-435838 so that I am certain I fully understand the question, could you clarify what you mean by "raw gaze direction"? Do you want gaze in a 3D world reference frame or do you want it in the scene camera reference frame (similar to 3D eyestate)? If it helps, you could draw what you mean on a piece of paper and send a photo.
Hi @user-151827 , my apologies for not getting back to you.
The team knows about it. A fix and what form it will take is being determined.
If you want, you can open a ticket in π troubleshooting to make it easier for us to stay in contact.
In the meantime, you can connect the Companion device to the hotspot of a second phone to initiate a network connection and start the stream.
@user-f43a29 Thanks for looking into it. I'll open a ticket. The meantime solution should be sufficient for the nearest future, but a fix would be excellent.
Hi there! What does the percentage bar indicate in the fixations heatmap figures?
Hi @user-20657f , have you seen the heatmaps documentation, including the implementation details at the bottom? If that does not answer your question, then just let us know.
Regarding the second question, you can use events to mark the beginning and end of each stimulus, such as "start-picture-1" and "end-picture-1". The events can have whatever name you want, so long as the event names are consistently used across all recordings that you are studying. You can easily add events during a recording using the real-time API or you can manually add them post-hoc in Pupil Cloud. There is also a way to programmatically add them post-hoc via the Pupil Cloud API, but this will require that you have a record of when each stimulus was presented to each participant, especially since presentation order was randomized. Will you be needing the Pupil Cloud API?
Once you have the events in place, then you can use them to set an analysis window when running an Enrichment, also known as Enrichment Sections. You will find this under "Advanced Settings" when creating an Enrichment. Be sure to set the section before running the Enrichment.
Is it the % of recordings in which there was a fixation on that area?
Also, I was curious if someone could help me with the "time to first fixation" metric if I am randomizing the order of an image being shown to several individuals. I essentially want to know the time from when the picture was shown to when the first fixation was on certain AOIs. Right now, since my images are being presented in a randomized order, it is giving me the time to first fixation since the start of the images, which does not represent that metric. Thank you so much!
Would there be a way for me to set "Image 1 start" and apply that to all of the recordings for that image?
I'm just curious why that wouldn't work, given that Image 1, for example, despite being presented in a randomized order, still has the same barcodes/symbols in the corners across different recordings.
Hi @user-20657f , in Pupil Cloud, events are not tied to the output of the Enrichments, so creating an event does not link it to the detection of a specific set of AprilTags. This means an event cannot be propagated across recordings in that way.
Since you have collected your data, here are some ways to do what you are asking:
For future reference, it is easiest to add events during the experiment using the real-time API.
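For example, a minimal sketch with the simple real-time API (the event names here are just placeholders):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device(max_search_duration_seconds=10)
if device is None:
    raise SystemExit("No device found.")

# Annotate stimulus onsets/offsets as they happen; the names just need to be
# used consistently across recordings.
device.send_event("start-picture-1")
# ... present the stimulus ...
device.send_event("end-picture-1")

device.close()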
Hi Pupil Labs Team, I have the rare opportunity to refit a room specifically for eye-tracking using the Neon glasses. Estate management has asked about the ideal ceiling lighting conditions. Could you provide advice on the optimal levels of lighting warmth, luminance, or any other suggestions for room lighting?
Hi @user-bba4b8. Thanks to its deep learning approach, Neon provides robust gaze data in any environment (even in direct sunlight or complete darkness). Therefore, the room lighting will not affect your eye tracking data.
Room lighting can, however, impact the clarity of your environment as captured by Neon's scene camera. For instance, overly bright areas in your scene may appear overexposed in the camera feed. To ensure you have an optimal and clear view of your scene in the Neon scene camera video, we recommend reviewing our docs on the different exposure modes offered in the Neon Companion App settings. The scene camera's exposure can be adjusted to improve image quality in different lighting conditions.
Hello, we are in the market for eye-tracking goggles. My biggest concern is not the hardware but actually the software. I am worried that defining the AOIs will be extremely cumbersome. We want to study the usability of our devices using eye tracking. Our devices are combinations of hardware and a screen with a GUI. I remember when I was at university, we went through frame by frame to record the fixations. Extremely time-consuming. I know it is now possible to draw in AOIs, but if the head or participant moves, do I need to manually move the AOIs to realign everything? Is there some sort of AI that can be used to recognize different parts of the GUI so that AOIs do not have to be moved manually? How time-consuming is it to get useful information out of an eye-tracking video?
Hi @user-3ecd80! If your use case requires mapping gaze/fixation data onto AOIs, then I'd recommend considering the tools we offer on Pupil Cloud. For example, our Reference Image Mapper enrichment allows you to map gaze/fixations from your eye tracking recording onto a 2D image of your AOI. This is a powerful tool - it works with real-world AOIs (e.g., a table, a chair, even an entire building). You can see a demo in this tutorial where we used this tool to map gaze onto areas of interest throughout an entire room as the wearer was moving.
Following completion of this enrichment, you can define further AOIs on the reference image using our AOI Editor Tool.
If you'd like to learn more, I'd recommend scheduling a demo call with us to discuss about the software and how you can analyze your data using our tools. Feel free to reach out at info@pupil-labs.com and we can schedule a call.
Hello team, Is there any easy way to download specific time series data (e.g., saccades) from Pupil Cloud? I have long 10+ recordings and want to download only certain types of data, but there is no option to select them. As a result, I have to download the entire time series data, which is painfully slow and could be costly for you.
Hi @user-292135 - Selecting only one file for download is not currently available on Cloud. Feel free to add this feature to the π‘ features-requests π
However, getting the folders via the Pupil Cloud API would be less time-consuming. You can programmatically download a selected recording (or a set of recordings) via the API and work only with the saccades.csv file. You can find an example attached.
Hello
I am using the values of eyeball center and optical axis provided within the 3d_eye_states.csv file to cast an optical axis ray for each eye and visualise this using pyvista (I ultimately plan to correct this to the visual axis). I need to visualise this within a coordinate space of my choice. To do this I have used the tag aligner program provided by your Alpha Labs, which has given me an aligned_poses.csv file. I would like to use this file to get my optical axis rays into this calibrated space by using the values of translation_x/y/z (which will be converted to mm so they are in the same units as eyeball center) as the origin of the scene camera. However, I do not know how to match the timestamps within aligned_poses.csv with those from 3d_eye_states.csv (or any other file for that matter). For each row in 3d_eye_states.csv it gives timestamp [ns], which is a very large number (e.g. 1.72E+18). This timestamp seems to match up with those provided in other files such as gaze.csv and imu.csv. However, it does not match the values given for start_timestamp and end_timestamp in the aligned_poses.csv file, which are much smaller (e.g. 1.5).
1) Could you please tell me what the units are for start_timestamp and end_timestamp in aligned_poses.csv?
2) Crucially, how can I match the timestamps in aligned_poses.csv with the timestamps in 3d_eye_states.csv so that I know which rows in each file correspond with each other?
Hi, @user-a09f5d - I believe the timestamps of the poses are in seconds, measured from the recording start time, whereas other timestamps are measured in nanoseconds from the Unix epoch. To convert the pose timestamps, first multiply by 1e9, then add the result to the recording start time, which should be in the info.json file.
The aligned poses are calculated from scene camera images, which, as you probably know, are sampled at ~30 Hz, whereas eye states come from the eye camera images at ~200 Hz. Since the data come from different sensors, the timestamps will never match up perfectly - you will need to match them by finding the nearest values
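As a rough sketch, the conversion and nearest-neighbour matching could look like this with pandas. Note that reading the recording start time from a "start_time" key in info.json is an assumption on my part - check the key name in your export.

import json
import pandas as pd

# Recording start time in nanoseconds since the Unix epoch (assumed key name)
with open("info.json") as f:
    start_time_ns = json.load(f)["start_time"]

poses = pd.read_csv("aligned_poses.csv")
eye_states = pd.read_csv("3d_eye_states.csv")

# Convert pose times (seconds since recording start) to absolute nanosecond timestamps
poses["timestamp [ns]"] = (poses["start_timestamp"] * 1e9 + start_time_ns).astype("int64")

# Match each eye-state sample to the nearest scene-camera pose in time
matched = pd.merge_asof(
    eye_states.sort_values("timestamp [ns]"),
    poses.sort_values("timestamp [ns]"),
    on="timestamp [ns]",
    direction="nearest",
)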
Hey Pupil Labs, while collecting data during the daytime I am facing this issue of too much brightness. I am not able to see what he is exactly looking at from the scene video; it is exposing for what is inside the car rather than what is outside. How do I deal with this?
Hey @user-b6f43d, thanks for reaching out π The "brightness focus issue" you're describing is related to camera exposure. You can change the exposure modes before recording (or even during). However, it's not possible to modify a recording's exposure.
Hello Pupil Labs.
I followed the Real-Time API tutorial to obtain scene video, gaze data, pupil diameter, and eye state data, but I am unable to output the pupil diameter information.
The XY coordinates of the gaze, the positions of the centers of both eyeballs, and the optical axis information of both eyes were output without any issues.
The error output is as follows:
'GazeData' object has no attribute 'pupil_diameter_left'
File "C:\Users\key06\Documents\Python_UDP\neon_test1.py", line 12, in <module>
print(f"Pupil diameter in millimeters for the left eye: {gaze_sample.pupil_diameter_left} and the right eye: {gaze_sample.pupil_diameter_right}\n")
AttributeError: 'GazeData' object has no attribute 'pupil_diameter_left'
The versions of Python and the Real-Time API are as follows:
- Python 3.10.9
- pupil_labs_realtime_api 1.3.1
Could you please provide a solution to this issue?
Hi @user-dcc847 , may I ask what version of the Neon Companion App you are running on the Companion device?
Hi, I have a question regarding the calculation of the pupil diameter. I have read that it is only available in real-time and Pupil Cloud. I export the data from the app to my computer. When importing the data into Neon Player, I can apply the plugin "Eye State Timeline", which should calculate pupil diameters, eyeball centers and optical axes. However, when opening the csv file 3d_eye_states, there is no data (all the other data was available). So my question is: is there a way to calculate the diameter without using Pupil Cloud?
Hi @user-85dce8, thanks for reaching out. You can now get pupil diameter offline by exporting data from the Companion app onto your computer, or by downloading the "Native Recording data" from Pupil Cloud, and then dropping the folder into Neon Player.
Which operating system and Neon Player version are you currently using?
Hi there, Is it at all possible to access the rtsp stream of the camera remotely (i.e. not on the local network)? Will port forwarding work or is there an easier solution?
How to solve the problem shown in the picture when using Neon?
Hi @user-4bc389 - could you please open a ticket in the π troubleshooting channel? We will assist you there in a private chat
Hi, is possible to order a set of prescription lenses from -3.0dpt to 3.0 dpt, diopter in 0.5 steps? Thanks.
Hey @user-de8319, thanks for reaching out π Sure! Please contact our sales team --> sales@pupil-labs.com with your contact, telephone, shipping address, and your request.
Btw: we also sell a lens extension kit that ranges from -6 to +6 diopters. If you don't have the "I can see clearly now" frames, you can simply buy the frame + lens kit. Otherwise, you can buy the lens extension kit if you already have the frame. Check out the accessory section of our online store.
Hello! What happens if we accidentally update the Android software? It keeps popping up and I've been snoozing it every time since we got the Motorola phone. Today I noticed the update has been downloaded but it needs a restart to be installed officially. In the event that the phone restarts and updates to the new software, from my understanding the app won't work, as it works with this specific version of Android OS. How should I go about this?
Hi @user-0001be, thanks for reaching out. Firstly, don't panic. I understand that you might have unknowingly upgraded Android; Moto's system update is very aggressive. The good news is that we usually provide a way to roll back from the wrong Android version on our documentation page.
If you look at the compatible devices page, we do support Android 14 for the Moto Edge 40 Pro.
Why are some of my "time to first fixation" values equal to "0" when I have fixation metrics for this participant that weren't 0?
For instance, one observer looked at a part of an image for an average of 435 ms, 10 times, but then time to first fixation and total fixation duration = 0.
Fixation-based metrics on Pupil Cloud
Hey Pupil support team, I work at iMotions, and we have recently experienced issues with live recordings done with our software when using the Neon glasses. When the recording is being done, the gaze is fixated on the left side of the screen for the entire recording, and this results in bad eye tracking data. If anyone can DM me privately I can further explain the issue.
Hi @user-bef103 ! Could you open a ticket in the troubleshooting channel and describe the issue there? Btw, have you tried clearing the app's cache?
Yea i have.
Is there any way in which I can download the scene video with the gaze, fixations and saccades on it, just like how it shows in Pupil Cloud? If I download the scene video it just comes as a raw video.
Hi @user-b6f43d ! The video renderer is designed for this exact purpose. Just place all the recordings you want to export in a project, then create a new visualisation on the left panel and use the Video Renderer option.
You can define the start and end points of the segments you'd like to include using events (to trim the recordings, if necessary). Additionally, you can configure how the gaze circle is displayed, show or hide fixations, and undistort the video. Once you're ready, click "Run", and after processing, you'll be able to download the final video.
Also, I have a second inquiry: how can I change the final enrichment image that gets carried through to the visualizations? Some images have been coming out a bit faded or with strange lighting. We have been keeping recording settings and lighting consistent, so I am wondering if the final enrichment image can be enhanced or changed to a better-quality timestamp. Thank you!
Hi @user-20657f , let me see if I can clear it up.
The primary purpose of markers is to define the boundaries of surfaces. In some cases, they can mark the start and end of a stimulus presentation, but markers can also be used to define the boundary of a kitchen table, for example. Sometimes, users print the markers and attach them to the corners of their monitor, and in that case, there would be the same markers for all stimuli, including the inter-stimulus intervals. In such a case, marker detection cannot be used to determine beginning and end of a stimulus.
So, rather than impose restrictions on what the markers mean, we have events to denote analysis windows and "moments of interest". If you use the default recording.begin and recording.end events, then time to first fixation is measured from the start of the recording, not the start of marker detection. If you use events that are positioned at the start and end of the marker presentation, then that is one way to get what you want.
I understand the difficulty of doing it manually in your case, so one programmatic solution is method 1 from the previous message, by which you sidestep events and obtain the start and end timestamps of marker detection, which would correspond to the start and end of your stimuli. Then, you can compute the values you need. There is also method 2, which uses the Cloud API. If you click the link in method 2 (https://discord.com/channels/285728493612957698/1047111711230009405/1204088822892073033), then you will be taken to a message with example code and details of the Cloud API. Method 2 would require that you have saved when each stimulus was presented.
If you are looking for dedicated programming support, then we do offer support packages.
OK thank you! I will look into that. Also I am running some enrichments on a lot of recordings right now. They have been running for 2 hours and 0% progress. Is the site down or something?
All recordings have been stuck on 0% processing
without doing all 20 images with 160 recordings each at once
and once i run the enrichment can i close my laptop - i.e. will it run in background within the cloud?
Yes. None of the processing runs locally. The Enrichments will continue to process on the Pupil Cloud servers.
I'll wait until morning; however, it's been about 6 hours and every single recording is still on 0%. It doesn't seem like it's moving.
Hi @user-20657f! Can you try refreshing your browser window? Sometimes browsers can cache the page, making it appear like the processing is still ongoing.
5+ hours
Any thoughts? Thanks so much guys!
Hey @user-20657f, thanks for providing the screenshots. So it seems like you're trying to run the enrichment on 160 recordings x 6 times. While I don't know how long the average recordings are, or which segments of the recordings you're processing (in terms of recording length), I'd ask you to please remain patient. The enrichment won't be completed within only a few hours.
If I try to extract it using WinRAR, it's showing like this
Hi @user-b6f43d , could you give 7-zip a try?
Which video type are you looking at?
- Video renderer visualization
- Timeseries Data + Scene Video
- Native Recording Data video
When you say it "gets stuck", do you mean that the time in the video player keeps increasing (i.e., the play head keeps moving) but the video does not change? Or are the videos shorter in duration than expected?
What video player are you using? It should not be necessary, but just to be sure, does it work with VLC player?
I am looking at the video renderer visualization; the videos get stuck but the time keeps increasing. And I am using the VLC media player only.
I see that the page/code for scanpath generation on Alpha Lab has changed (is it a big change or not? I'm not sure).
Hi @user-bdc05d, indeed we have updated the scanpath generation on our Alpha Lab page. The output remains the same, that is, you can generate both static and dynamic scanpaths. With this updated version, we provide an easier and more user-friendly tool (leveraging Google Colab) for faster scanpath generation. Additionally, with the updated code, you can generate scanpaths using the output of both the Reference Image Mapper and the Manual Mapper. You simply need to open the Google Colab link and follow the instructions in the tutorial. Let me know if you have any further questions.
Is the use of Google Drive obligatory, even when running locally?
In case you don't want to use Google Colab, you can find the same code available for download here
If you still want to have access to the old scanpath generation code, you can still access it here
Okay, it's just that I have some trouble with Drive (even if I connect while running the code, it does not work). Do we have to download the Reference Image Mapper folder and put it on Drive in order to use the new version of the code?
The current version offers two options:
1. Provide the path to the enrichment folder on your Google Drive (see the Google Drive Path field in the attached image), and then run the cells to generate the scanpaths.
2. Retrieve the enrichment automatically (see image). You can use a Pupil Cloud API token to have the enrichment loaded into Google Drive automatically. You need to obtain a developer token from Pupil Cloud (click here to obtain yours). Then you simply need to copy-paste the enrichment URL and everything is done automatically.
Okay, thanks, I will try this! And thanks also for the old scanpath code. The change is just in the process, not in the result, so if I don't manage to do it I can use the old one.
correct, the changes concern only the process. The results are the same - you'll get static and dynamic scanpaths in either case. If you run into any issues while trying it out, let me know π
I don't have an issue, but the previous code was much better for my local use than always copy-pasting the URL etc. for each of my subjects. One question, though: was the code for generating the dynamic scanpath (video) updated? Because it was faster than what I currently have (seconds vs. several dozen minutes).
yes, the current version offers a faster generation of scanpaths.
Please note that you don't need to run the code for each subject separately. If you have an enrichment that includes the data from 5 participants, the current version allows you to select which participants will be included in the scanpath visualization (see the section Select The Wearers To Be Included in the Google Colab notebook).
No, I have a separate project for each subject; that's why I said that (for my case I needed to separate them).
right! thanks for clarifying. In that case, yes, you'd have to run the notebook separately for each enrichment folder. However, that was the case for the old scanpath code - you had to select one enrichment folder every time you'd run the code.
The decision is ultimately yours, feel free to use the option that best meets your needs. In terms of results, though, these are the same in either case π
Yes, but I had changed that to just run on all my folders, so that was quick. I'm not sure I can do that easily with this new one, but the processing speed is so much better!
I have a question about the data that comes out when performing the head pose tracker.
On the one hand we have the rotation in X/Y/Z and on the other hand the Pitch, Yaw, Roll.
What rotations do each one describe?
https://docs.pupil-labs.com/neon/neon-player/head-pose-tracker/#head-pose-tracker
Hi @user-934d4a , they are two different ways of specifying the exact same rotation: rotation_x/y/z give you the rotation in one representation, and pitch, yaw, roll give you the same rotation expressed as Euler angles. They both specify the rotation of the head.
Hello, I have a problem when preprocessing data with Neon Player. If I use the fixation plugin to extract the fixations and try to export the data, it crashes with the following traceback. Thanks in advance:
Hey @user-b3b1d3, thanks for reaching out π May I ask which Neon Player version are you using, and what's your operating system?
Also, did the fixation detector finish running (i.e., finish detecting all fixations) before you tried exporting the data?
Hi there, my Neon eye tracker is repeatedly having sensor failures, even after unplugging and plugging it back in. What shall I do to resolve this? Thanks
Hi @user-d2d759! Please open a ticket in https://discord.com/channels/285728493612957698/1203979608563650650 and we can do some debugging π
Is there any way to get the real time audio signal from the neon to see when an auditory signal was produced with a timestamp associated?
Hi @user-baddae ! The audio stream is currently not exposed in the realtime API, if you want to see this implemented feel free to upvote this feature request here https://discord.com/channels/285728493612957698/1226973526947266622
If I record the real time data and save the real-time video feed, can I analyze the audio stream from there offline?
Just to clarify, the audio stream would be encapsulated in the video stream as a channel, but this is currently not implemented, as I mentioned. So, you won't be able to obtain it from the streamed video.
With that said, if you have audio enabled on the phone and you start the recording by pressing the button or programmatically, the video stored on the device will include the audio signal.
From there, you can export the stored video or, if you have Cloud uploads enabled, you can download it from Cloud for further analysis.
If you need more guidance on how to access the audio channel, let us know
Or is it possible to collect data in real time but also save it to pupil cloud to look at the events and analyze further?
And I can have the recording on the phone while still doing real time collection correct?
Yes, you can start/stop the recording by including the following in your code.
import time
from pupil_labs.realtime_api.simple import discover_one_device
# Look for devices. Returns as soon as it has found the first device.
print("Looking for the next best device...")
device = discover_one_device(max_search_duration_seconds=10)
if device is None:
    print("No device found.")
    raise SystemExit(-1)
print(f"Starting recording")
recording_id = device.recording_start()
print(f"Started recording with id {recording_id}")
time.sleep(5)
device.recording_stop_and_save()
print("Recording stopped and saved")
# device.recording_cancel() # uncomment to cancel recording
device.close()
Check more here
In fact, I strongly recommend that you rely on the recorded data rather than only on the streamed data. This is mainly because your network can lose packets.
Thank you! Could you give me some insight on how to access the audio channels afterwards from this recording?
To access the audio channel, you would typically use ffmpeg or pyav (a Python binding for it). A straightforward way to work with these libraries is through our pl-neon-recording library, which simplifies their usage.
You can find an example of how to access the audio channel and detect the loudest sound in this example script.
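If you prefer to work with pyav directly, here is a rough sketch of reading the audio track and finding the loudest moment. The scene video filename below is an assumption - use whichever .mp4 in your recording contains the audio track.

import av
import numpy as np

container = av.open("Neon Scene Camera v1 ps1.mp4")  # assumed filename
audio_stream = container.streams.audio[0]

loudness = []
for frame in container.decode(audio_stream):
    samples = frame.to_ndarray()  # shape: (channels, n_samples) for planar formats
    rms = np.sqrt(np.mean(samples.astype(np.float64) ** 2))
    t = float(frame.pts * audio_stream.time_base)  # seconds from stream start
    loudness.append((t, rms))

peak_time, peak_rms = max(loudness, key=lambda x: x[1])
print(f"Loudest audio frame at {peak_time:.2f} s (RMS={peak_rms:.3f})")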
Hi team, I am about to downgrade my OnePlus 10 pro companion device from Android 14 to Android 13. Could you guys please tell me how to backup the data on the companion app? Is there a particular way I should do this or is it just a simple copy and paste? How do export all the recordings properly? I would like to keep an offline backup since some of the projects are still in progress. I know that the data is already on the cloud but just as a fail safe.
Hi @user-37a2bd - thanks for reaching out. If you want to save the recordings locally, you need to simply transfer the recordings from the phone to your computer. Please refer to these instructions.
Sure, if you have any issues feel free to reach out here or via email at [email removed]
Hi Nadia, I have a lot of recordings on the phone and when I tried to export all of them the phone gave me a message saying it has insufficient space to perform the export. What should I do?
Hello! I'm considering purchasing a new laptop for research. Are there any specific computing specs and requirements that I might want to look for if I'm conducting research with the Neon eye tracker?
Hi @user-dcc042! Neon connects to a phone (included in the full bundle we ship, regardless of the frame you choose), so a laptop is not needed for recording Neon data. In terms of analysis, Neon recordings can be analyzed on Pupil Cloud, our online platform, which is accessible from any browser and its usage is not dependent on laptop specs.
If you want to learn more about Neon's software ecosystem, feel free to explore our docs, or send us an email to info@pupil-labs.com and we can schedule a demo and Q&A call.
Hey Pupil Labs! π
I'm a university researcher, and we're looking to buy 4 or 5 pairs of Neon eye-tracking glasses for a project. Our budget is pretty tight, so I was wondering if there are any discounts available if we go for 5 pairs? Any help or advice would be greatly appreciated! π
Thanks!
Hi @user-79486f. Great to hear you're interested in Neon! For your query, please reach out to [email removed] and someone will get back to you. Have you already seen a demonstration of Neon? If not, we can also arrange a video call once we've got your email.
Hey Pupil Labs, what is the duration threshold for identifying gaze as a fixation?
Hey @user-b6f43d, our fixation detector is velocity based. In the first step, the detector computes how fast gaze changes (in pixels / sec) between gaze samples.
Once the change falls below a certain velocity threshold, the detector then determines how many samples stay below the velocity threshold. In this second step, the detector takes into consideration the time across samples, which is about 70 ms for Neon.
If you'd like to modify the velocity parameters, you'll have to do it offline. See https://discord.com/channels/285728493612957698/1047111711230009405/1253738317757812866
And there is also a time threshold? Why so?
Can you explain why this is happening?
As described in the whitepaper, the fixation detector is a velocity-based algorithm with some minimum threshold to be considered.
How are those thresholds applied? Well, if you exceed the velocity threshold for the minimum saccade duration, then you are (still) in a saccade, which lasts for as long as you are above the velocity threshold.
In contrast, for a movement to be classified as a fixation, the gaze velocity must be below a different threshold, indicating that the eyes are relatively stationary.
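For illustration only - this is not our implementation, and the velocity threshold below is a placeholder - a velocity-based classifier of this kind could be sketched like this:

import numpy as np

def label_fixations(x_px, y_px, t_ns, velocity_threshold=900.0, min_duration_s=0.07):
    # Mark samples as belonging to a fixation when gaze speed stays below a
    # velocity threshold (px/s) for at least a minimum duration.
    t_s = np.asarray(t_ns, dtype=np.float64) / 1e9
    dt = np.diff(t_s)
    speed = np.hypot(np.diff(x_px), np.diff(y_px)) / dt  # px/s between samples
    slow = speed < velocity_threshold

    labels = np.zeros(len(t_s), dtype=bool)
    start = None
    for i, is_slow in enumerate(slow):
        if is_slow and start is None:
            start = i
        elif not is_slow and start is not None:
            if t_s[i] - t_s[start] >= min_duration_s:
                labels[start:i + 1] = True
            start = None
    if start is not None and t_s[-1] - t_s[start] >= min_duration_s:
        labels[start:] = True
    return labels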
I have recently rerun computations on some of my earlier subjects to download saccade data, but now I realize that by doing so, their 3d_eye_states.csv is now empty. Would there be a way to recover this data again?
Hi @user-328c63, what do you mean by "rerun computations"? Do you mean that you've applied the gaze offset correction in Pupil Cloud?
Hi Pupil Labs team,
I have been working on implementing the Map Gaze Into a User-Supplied 3D Model functionality. However, I noticed that the gaze collision visualization in the 3D environment does not seem to be enabled. Is there any way to integrate this functionality?
Thank you in advance for your help.
Hi @user-934d4a , great to see you are using this!
May I ask what you mean by "not enabled"? Can you show the result for your scene? If it needs to remain private, you can send it to me in a DM.
Hi Rob,
I mean that after reviewing the Python scripts, I couldn't find any reference to this functionality. Additionally, in the assets, the 3D object that should represent the intersection with the scene is missing; only the glasses and the ray are present.
Hi, @user-934d4a - I think what you're looking for is in the Blender addon
Hi Pupil Labs team, I have a question regarding the Marker Mapper. In my project I will acquire experiment data from many participants performing tasks on a screen. I expect to end up with about 150 or more videos, each with a duration in the range of 45 minutes up to one hour. All these videos use the Marker Mapper, and my understanding was that I put them in the same project in Pupil Cloud and there create the enrichment with the Marker Mapper. However, testing this procedure I learned that the results of the Marker Mapper, e.g. gaze relative to surface, are extracted into only one file, including all the project data. Intuitively I expected to get one file for each video. Is there a possibility to change this, so I get one enrichment file for each video? I'm afraid that otherwise the data table would become very big and hard to handle. Thanks for your support
Hi @user-876d7f! Indeed, if you include e.g. 10 recordings to a project and run a Marker Mapper enrichment on these recordings, then your data export format will have the mapped gaze and fixations of all these recordings in one file. However, it is easy to handle the data since the csv file has a column with the recording id associated with each data point (please refer to the Marker Mapper Export Format).
Although it is possible to run an enrichment for each video, this wouldn't be an optimal way for analyzing your data. I recommend running the enrichment on all your recordings, download the data, and then you can handle the data using Python's pandas library to create individual csv files for every recording. Here's a snippet:
import pandas as pd
# Load the CSV file
df = pd.read_csv('your_marker_mapper_gaze_data.csv')

# Get unique recording ids
recording_ids = df['recording id'].unique()

# Loop through each unique recording id and save the corresponding rows to a new CSV
for recording_id in recording_ids:
    df_filtered = df[df['recording id'] == recording_id]
    df_filtered.to_csv(f'recording_{recording_id}.csv', index=False)
What does saccade amplitude in degrees mean? Degrees with respect to what?
Hi @user-b6f43d - amplitude [deg] is a float value representing the amplitude of the saccade in degrees of visual angle. You can find a detailed description of all the data provided for Neon recordings in our docs.
Hi everyone! We are working with Neon output data but found the samples are not equally spaced in time. We took the timestamps and computed the diff between subsequent timestamps, and it seems that usually the diff is ~5 ms (which makes sense given the 200 Hz sampling rate), but it could also be 10 ms (one missed sample?). Extreme values include 175 ms. If we want to convert it to a continuous signal (e.g. to be incorporated as an EEG channel), what would you recommend us to do?
Hi, @user-a55486 - are you using data pulled from the phone in its native recording format? It is true that a frame of data may occasionally be skipped during real-time processing on the device. Recordings that are uploaded to Pupil Cloud are re-processed there at the full 200 Hz, and those extreme values should not be present in that data.
Another option is to interpolate the data at whatever frequency/timestamps you want using pl-neon-recording:
import numpy as np
import pupil_labs.neon_recording as nr
from pupil_labs.neon_recording.stream import InterpolationMethod
recording = nr.load('path/to/recording/folder/')
sample_rate = 200
interpolated_times = np.arange(recording.gaze.ts[0], recording.gaze.ts[-1], 1 / sample_rate)
interpolated_eyestates = recording.gaze.sample(interpolated_times, InterpolationMethod.LINEAR)
timestamps = interpolated_eyestates.ts
diff = np.diff(timestamps) / 1e6
print(np.unique(diff))
Output:
[5.00011444e-09]
timestamps = eye_states['timestamp [ns]']
diff = timestamps.diff() / 1e6
print(diff.unique())
[ nan 5. 5.007 4.993 4.999 5.004 4.996 175.245 5.125 5.012 4.987 5.005 4.995 5.001 4.994 5.003 4.997 5.132 5.006 10. 4.883 5.117 5.009 4.991 5.002 5.119 4.998 5.122 5.011 4.988 5.129 4.992 5.013 4.986 5.01 4.99 5.124 5.008 4.989 5.015 5.11 5.116 9.999 4.985 4.891 5.109 5.121 5.016 4.984 5.111 5.12 10.125 5.123 10.011 5.112 9.997 4.888 5.113 5.128 10.007 5.13 5.126 10.005 5.118 5.114 5.014 5.139 5.017 4.887 5.115 10.002 4.882 4.983 4.878 5.134 5.136 4.885 5.127 4.89 4.881 4.884 10.008 5.135 5.137 4.886 9.993 5.142 9.998 10.004 5.133 5.131 10.01 5.141 9.995]
Hi @user-cdcab0 Thanks for the reply! FYI we are using the processed recordings from Pupil Cloud but we still see such great gaps. I wasn't aware of pl-neon-recording and it does seem like a sensible solution too!
You're welcome π
Do you notice those large gaps in timestamps in multiple recordings or just one of them?
in multiple ones
Actually I downloaded the 'Timeseries CSV and Scene Video' option instead of 'Native Recording Data'. Maybe that's the reason why I saw gaps?
The timeseries CSV is cloud-processed, so it shouldn't have those gaps. Do you mind opening a ticket in π troubleshooting?
For anyone who may encounter this in the future, it turns out that the large time deltas occur at the beginning of the recordings - which is expected to have some lag because sensors are starting up, files are being opened for writing, etc.
The reason these large gaps are not present in native recording data (and hence, not seen with pl-neon-recording) is because gaze computed on the Companion device doesn't start until after this lag has already cleared out. Many eye camera frames will be recorded without having gaze computed on the device, whereas Cloud computes gaze on every single frame of eye video from the very beginning.
Hi, I'm relaying this inquiry from one of our researchers. They noted that the Just act natural frame (the newer 2024 version) still gets hot after wearing for a prolonged time (30mins +). We recently also received a ready-set-go frame that has replaceable foam that fits over the module. Could this also be applied to the Just act natural frame?
Hi @user-23177e ! May I ask what is the ambient temperature during use and if you are using the heatsink version of the Just Act Natural frame? https://discord.com/channels/285728493612957698/733230031228370956/1200298162242465842
And, if so, would Pupil be able to provide these foam pads?
I will have to check if they made any temp measurements, if not, we will do so. The frames were ordered on 22-02-2024. I am not sure if they are the new version
It looks like you have the frames with the heatsink.
Please note that the module might get a bit warm with this frame - however this should not affect the Neon App functionality.
Thanks for checking. The main issue is the heat that does make it uncomfortable for participants. Would a foam piece work for this frame? The one that also is included with the ready-set-go frame?
Hey @user-23177e - There will be no issue adding a small piece of foam to the frame, if you want to see whether it improves comfort for your participants. Just make sure it doesn't obscure the eye cameras, but I doubt that would really be an issue.
Great thanks for the response!
Hey there! A quick question regarding Pupil cloud, is it possible to generate AOI heat maps for 'events' across a bunch of recordings rather than the whole recordings themselves? Thanks in advance
Hey @user-688047 π. Welcome to our Discord server!
Yes, you can choose to run enrichments and generate AOI heatmaps between specific 'events' across multiple recordings. We call these periods between events 'sections', and they're incredibly useful when you want to focus your analysis on certain parts of the recordings.
Read more about that here: https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/#enrichment-sections
Great thanks!
Hello. I am collecting data for an experiment using neon eyetrackers and wanted to do enrichments on the pupil cloud. However, I faced this error when I checked the recordings on pupil cloud.
Do I have to worry about my recordings not coming back?
Hi @user-46e202 π. Please open a ticket in π troubleshooting and we can take a look for you!
Could someone please help me define what a fixation is for Neon? I know it is a fixed x, y coordinate on a given location, but for how long (200ms?)
Hi @user-20657f ! You can find the fixations definition in our documentation under Data Collection > Data Streams > Fixations & Saccades. There you will also find a link to our whitepaper and a publication describing the methodology in more detail. The algorithm is velocity based, but the minimum fixation duration threshold is set to 70 ms; you can find the thresholds here https://discord.com/channels/285728493612957698/1047111711230009405/1276446120192507904
or the other pairs