Is it possible to stream audio data with the Python API? Something like device.receive_matched_scene_and_eyes_video_frames_**sound**_and_gaze()
Hi @user-ae2b7f ! The Python client for the Realtime API does not currently support audio streaming. If you would find this valuable, feel free to suggest it on the 💡 features-requests channel. Meanwhile, you can access the audio directly through the RTSP stream using other tools.
Hi there. I am new here. Is there any tool to connect Neon to Unity? I know there is Neon XR https://docs.pupil-labs.com/neon/neon-xr/neon-xr-core-package/. However, I am not developing for XR, and I also encounter the problems described here: https://chat-archive.pupil-labs.com/logs/neon-xr/2024/04/. My plan was to use the Python API and send the data to Unity using UDP.
Hi @user-6e3b6d 👋 ! If you are looking to use Neon with Unity, but without XR, then that package can still be a good reference. For example, the code here shows how to connect via RTSP over WebSockets to receive data from Neon. This is a standard part of the Real-time API protocol.
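For the Python-API-to-UDP route, a minimal sketch could look like the following (it assumes the pupil-labs-realtime-api package; the UDP port and the JSON payload layout are arbitrary choices you would mirror in your Unity UdpClient):
```python
# A minimal sketch (assumes the pupil-labs-realtime-api package; the UDP port and the
# JSON payload layout are arbitrary choices you would mirror in your Unity UdpClient).
import json
import socket

from pupil_labs.realtime_api.simple import discover_one_device

UNITY_ADDR = ("127.0.0.1", 5005)  # hypothetical address/port your Unity listener uses

device = discover_one_device()  # finds a Neon Companion device on the local network
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

try:
    while True:
        gaze = device.receive_gaze_datum()  # blocks until the next gaze sample arrives
        packet = {
            "x": gaze.x,  # gaze position in scene-camera pixels
            "y": gaze.y,
            "worn": gaze.worn,
            "ts": gaze.timestamp_unix_seconds,
        }
        sock.sendto(json.dumps(packet).encode(), UNITY_ADDR)
except KeyboardInterrupt:
    pass
finally:
    device.close()
```
On the Unity side, a UdpClient listening on the same port can parse the JSON and drive your scene.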
Hello. I want to do some analysis on the number of saccades in my data uploaded on Pupil Cloud. Where can I download the data for the saccades from the cloud? Is it true to say that the number of saccades equals number of fixations - 1? Or does each change of fixation count as a saccade?
Hi @user-46e202! You can get the saccade data within your recording folder (right click on the recording on Cloud > Download > Timeseries Data). You can read more about fixations and saccades here. The full list of saccade data is also available in our data format docs
Hi there! I'm looking for some guidance on how to best track body and feet movements alongside eye and head movements using external IMUs paired with the Neon. More specifically, I'd like to temporally sync eye movements, head rotation, torso rotation and feet movement to a common clock, so that for any given timestamp, I have the positions and orientations of each effector. The eye movements and head movements will obviously be recorded by the Neon and its integrated IMU. The torso and feet will be recorded with external IMUs (specifically https://mbientlab.com/metamotionrl). I'll most likely sample torso and feet movements at the frequency of the Neon IMU (110 Hz).
What I'd like to know is whether there's a straightforward way to merge the different data streams from the different IMU sources into the same time series, so that I can monitor their movements in real time and also log them for easy processing and analysis later on. I know this is a handful and would appreciate any guidance and advice! Thank you so much in advance.
Hello, is there a way to define an event using a single AprilTag on Pupil Cloud? Also, I understand that to define a surface, you need 2 or more AprilTags. Is this the case for events as well?
Hi @user-fea107! Just so I'm sure I understand, are you looking for a way to automatically create events in parts of your recording that correspond to the appearance of a single AprilTag marker?
Hi @user-81ea10 👋 ! Could you elaborate on what you are aiming for? Yes, you can wear them during daily activities, and the battery can last up to 4 h of continuous recording, if that's what you mean.
Can these glasses be used in daily life?
Hi, I'm currently testing out our Neon. When using the Companion app and Neon Player, the IMU reaches almost 110 Hz. But when accessing the data through the simple API, I only reach below 60 Hz for the IMU. The gaze data, for example, is fine, so I don't think it is a connection issue.
Hi @user-0e6279 👋! The recording data, as you mentioned, still achieves the expected sampling rate, which is great. The streaming functionality is primarily intended for monitoring, and the performance might also depend on the capabilities of the Companion Device, but 60Hz is definitely low, so we’d like to investigate this further. Could you please open a ticket in the 🛟 troubleshooting channel so we can follow up on this? Thanks!
Hi, looking into the raw data of the recorded video, I found that the file regarding saccade data in Pupil Cloud does not have any data regarding AOI or surface position. Is there any way already devised to calculate the saccades data based on AOI or surface coordinates? Your ideas and support are highly appreciated!
Hi @user-665dc2 , could you clarify which saccades specifically you are interested in? For example, do you want any saccades that cross over the AOI boundary or maybe only those that happen within the AOI boundary?
Hi! Is there a recommended USB hub I can get from Amazon US that supports both charging and data transfer at the same time for the companion Motorola Edge 40 pro that comes with Neon?
https://docs.pupil-labs.com/neon/hardware/using-a-usb-hub/ I saw this on the website but that one doesn't ship to the US. I have to buy it through the department so I want to get something that is for sure going to work so that I don’t have to worry about returning it
Hi @user-bda2e6! I think the same model is available on Amazon US 🙂
Thank you!
Automatic events with markers
That looks correct. Although we haven't ordered one ourselves from the US store.
Hello everyone,
Currently, we are preparing an experiment focusing on measuring smooth eye movements in patients with schizophrenia (SCHZ) and healthy controls (HC). Specifically, we have been measuring this task for several years using the stationary ET Eyelink 1000+. At this point, we need to determine whether we can detect differences in smooth eye movements between SCHZ patients and HC using the Pupil Labs Neon. The method for comparing the two ET systems is largely based on the following article: https://doi.org/10.7717/peerj.7086, although in our case, we are not comparing technical parameters, but rather trying to find out if the Neon system can detect disruptions in smooth eye movements in SCHZ patients.
However, in our case, we are using a head-mounted stand and the Pupil Labs Neon with the original sports frame (a piece of plastic with a rubber strap around the head—currently unavailable). We plan to synchronize both ET and stimuli using the LSL protocol. Our experimental setup with the stationary ET (Eyelink 1000+) has the following parameters: the width of the monitor is 74 cm, its height is 43 cm, and the distance from the eye to the top and bottom edges of the monitor is 109 cm.
What we currently need help with is the experimental setup. It seems that the monitor is too far from the participant's eye, and the ET Neon is unable to accurately track the moving target's path, or more specifically, the participants' smooth eye movements, at this distance. We are also curious to know if you see any other areas for improvement in our experimental design.
Thanks!
P.S. Your ET system seems absolutely great to us.
Hi @user-9e0d53 - Thanks for reaching out and for your feedback 🙂 I have a few follow-up questions to better understand your setup.
Regarding this:
It seems that the monitor is too far from the participant's eye, and the ET Neon is unable to accurately track the moving target's path, or more specifically, the participants' smooth eye movements, at this distance.
I understand that you use fixed distances between the monitor and the participant when recording data with the Eyelink. Are you planning to conduct recordings simultaneously with Neon and Eyelink, similar to the approach in Ehringer et al.? If so, is this why you need to maintain the same participant-monitor distance for the Neon recordings? May I also ask how you evaluate Neon's accuracy in tracking the moving targets on screen?
Hey everyone! Can someone tell me whether it is possible to load annotations in Neon Player from the (exported) annotations.csv instead of the annotation_player.pldata? I am talking about the exported annotations in <Video>/neon_player/exports/000/annotations.csv, because the annotation_player.pldata file was overwritten. Or whether there is a python script to recreate the annotation_player.pldata from the annotations.csv or another workaround? Have a nice weekend! 😀
Hi, @user-0cdc2d - something like this should work for you. Use it like so:
pip install msgpack
python recover_annotations.py "path/to/my/export/annotations.csv"
That will reconstruct annotation_player.pldata next to your exported (csv) annotations. You can copy it into the neon_player folder of your recording. Note that you may need to delete the existing annotation_player_timestamps.npy (Neon Player will automatically reconstruct it).
EDIT: Script removed from this post and a better version is available below.
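For reference, a rough sketch of the general idea behind such a recovery script (not the removed script itself). It assumes Pupil's msgpack-based .pldata layout, where each record is a packed (topic, packed-payload) pair, and an annotations.csv whose columns include a timestamp, a label, and optionally a duration; adjust the column names to match your export:
```python
# recover_annotations.py -- a sketch only; the CSV column names and the payload fields
# are assumptions and may need adjusting to your export.
import csv
import sys
from pathlib import Path

import msgpack


def first_present(row, candidates):
    """Return the value of the first column name that exists and is non-empty."""
    for name in candidates:
        if name in row and row[name] != "":
            return row[name]
    return None


def main(csv_path):
    csv_path = Path(csv_path)
    out_path = csv_path.parent / "annotation_player.pldata"

    with open(csv_path, newline="") as f, open(out_path, "wb") as out:
        for row in csv.DictReader(f):
            timestamp = float(first_present(row, ["timestamp", "timestamp [ns]"]))
            datum = {
                "topic": "annotation",
                "label": first_present(row, ["label", "annotation"]),
                "timestamp": timestamp,  # kept in whatever time base the CSV uses
                "duration": float(first_present(row, ["duration", "duration [ms]"]) or 0.0),
            }
            # .pldata is a stream of msgpack-packed (topic, packed-payload) pairs
            payload = msgpack.packb(datum, use_bin_type=True)
            out.write(msgpack.packb(("annotation", payload), use_bin_type=True))

    print(f"wrote {out_path}")


if __name__ == "__main__":
    main(sys.argv[1])
```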
Hello, may I know if I can get real-time eye-tracking data from Neon on the fly? Like, to a PC nearby or over the network? Do I need Cloud?
Hi, @user-aafe26 - this is certainly possible! To start, we offer a Realtime API that allows you to stream data from your Neon device over a network connection. This allows users to (among other things) stream the scene camera and gaze data to a PC. From there, you have many options for detecting the target, like using AprilTag markers, for example. If the wearer's gaze lies within the bounds of the target, an action can be triggered to do whatever you'd like.
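To make that concrete, here is a minimal sketch of the gaze-on-target idea (it assumes the pupil-labs-realtime-api, pupil-apriltags, and opencv-python packages; the tag ID and the trigger function are placeholders for your own setup):
```python
# A sketch of "trigger when gaze is on the target" (assumes the pupil-labs-realtime-api,
# pupil-apriltags, and opencv-python packages; TARGET_TAG_ID and trigger_action() are
# placeholders for your own setup).
import cv2
import numpy as np
from pupil_apriltags import Detector
from pupil_labs.realtime_api.simple import discover_one_device

TARGET_TAG_ID = 7  # hypothetical ID of the marker placed on/near your target


def trigger_action():
    print("Wearer is looking at the target!")  # e.g., send a command to your projector PC


device = discover_one_device()
detector = Detector(families="tag36h11")

while True:
    # Matched scene frame and gaze sample, so the gaze point refers to this exact frame
    frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
    gray = cv2.cvtColor(frame.bgr_pixels, cv2.COLOR_BGR2GRAY)
    for det in detector.detect(gray):
        if det.tag_id != TARGET_TAG_ID:
            continue
        # Use the tag's corner polygon as the target region and test the gaze point
        polygon = det.corners.astype(np.float32).reshape(-1, 1, 2)
        if cv2.pointPolygonTest(polygon, (float(gaze.x), float(gaze.y)), False) >= 0:
            trigger_action()
```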
Like, I want to know where my visitor is looking in a controlled environment; then, when they look at a particular spot, it triggers something on the visual display of my projector. Is it possible?
Lovely! Thanks for the detailed explanation and guidelines.
Is there any shipping fee to HK?
Hi @user-aafe26 👋! Our prices on the website don’t include shipping fees, as these depend on the shipping destination. You’re welcome to add items to your cart and request a non-binding quote that will include the shipping costs directly from our website. Let us know if you have any questions.
Hi there! We would like to customize a frame to print with you, is that possible?
Hi @user-0001be 👋! For custom order queries, please reach out to us at [email removed]. When doing so, including a brief description of the frame you have in mind will help us determine whether we can fulfill your request.
Hi @user-ebe453 ! 👋 One of the simplest ways to synchronize multiple sensors over a single clock is by leveraging Lab Streaming Layer (LSL). LSL is an open-source networked middleware ecosystem designed for streaming, receiving, synchronizing, and recording neural, physiological, and behavioral data streams from diverse sensor hardware.
We support LSL directly from the Companion Device, so I'd recommend reaching out to MbientLab to check whether they offer support for it, for example via a relay.
Alternatively, you could explore their Python API and integrate it with our Realtime API to send events triggered by specific IMU signals.
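As a rough illustration of that second route, the sketch below sends an event to Neon whenever an external IMU signal crosses a threshold (it assumes the pupil-labs-realtime-api package; read_external_imu() is a hypothetical placeholder for however you poll the MbientLab sensor with their SDK, and the threshold is arbitrary):
```python
# A rough sketch (assumes the pupil-labs-realtime-api package; read_external_imu() is a
# hypothetical placeholder for however you poll the MbientLab sensor with their SDK,
# and the threshold value is arbitrary).
import time

from pupil_labs.realtime_api.simple import discover_one_device

ACCEL_THRESHOLD = 2.0  # example threshold, in g


def read_external_imu():
    """Placeholder: return the latest acceleration magnitude from the external IMU."""
    raise NotImplementedError


device = discover_one_device()

while True:
    if read_external_imu() > ACCEL_THRESHOLD:
        # The event is timestamped on the Companion Device and stored with the Neon
        # recording, giving you a common anchor between the two data streams.
        device.send_event("external-imu-spike")
    time.sleep(1 / 110)  # poll at roughly the external IMU's sampling rate
```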
Hi, wanted to follow up on this to see if anyone had any suggestions.
Hello everyone, I am working on eye-tracking studies with surgeons in the operating room. During a recent procedure involving a microscope, the Neon scene camera, positioned against the microscope, recorded only a black screen. However, we have another video, captured by the operating room’s camera, which provides a clear view of the entire procedure. Is it possible to align the gaze data recorded by Neon with this external video? If so, could you guide me on the steps or tools required to achieve this mapping? Thank you !!
Hi @user-2b5d07! Would you be able to clarify what you mean by 'align'. Is your goal to essentially get a gaze circle, indicating where the Neon wearer was looking, on the external video?
Hi,
Do we get gaze coordinates + blinks? (How does the software count blinks: as positions / pixel coordinates, or maybe a Boolean true/false?)
Do we have the pixels of each of the AOIs, or just the overall combined pixels for the whole picture? Or do we just get a yes/no for which AOI they might be looking at?
What's the definition of surface true/false in the data output? (Is it the world camera or the Marker Mapper 2D area?)
What is gaze position on surface normalized to? E.g., the "fixation y [normalized]" column in "fixations.csv".
Is "timestamp [ns]" in gaze.csv in arithmetic progression or the interval is random
@user-a97d77 Since it sounds like you are making general use of Neon rather than Neon XR, I have moved the conversation to the appropriate channel.
As there are multiple questions, let's address them one at a time. But first, some general concepts:
Every data stream (eye video, scene video, IMU, etc.) is timestamped on the device using the Unix epoch, at the sampling rate of the respective sensor:
- Eye cameras: 200 Hz
- Scene camera: 30 Hz
- IMU: 110 Hz
When running the marker mapper, surfaces are detected in the scene camera, and gaze coordinates are translated into the corresponding surface coordinates.
What's the definition of surface true/false in the data output? (Is it the world camera or the Marker Mapper 2D area?)
This indicates whether your defined surface could be found in the scene camera frame (namely, the markers were detected and the surface defined) and, in the case of gaze/fixation samples, whether their coordinates fall within the surface boundaries. There can be instances where you look elsewhere and the surface is not in the scene camera's field of view, or where the surface is detected but gaze does not fall within it.
Hi,
How do we handle saccades? I only see fixations in the 2D Marker Mapper folder.
Currently, we do not offer saccade information over surfaces. If you think this would be useful, feel free to suggest it in the 💡 features-requests channel.
Meanwhile, you can apply the same coordinate translation to the saccades.csv file. However, keep in mind that while translating gaze and fixation coordinates to a 2D surface is relatively straightforward, applying this to saccades is more complex. Saccade data might not align well with traditional algorithms for 2D data.
Additionally, note that we do not use a dispersion-based algorithm for fixations. Instead, we employ an adaptive velocity-based approach. For more context, you can refer to this Discord message.
For further details, you can explore:
- Our fixation detector whitepaper (PDF)
- Our publication in Behavior Research Methods
With that in mind, you may want to employ your own saccade detector on top of the gaze surface data.
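As a starting point, a very naive velocity-threshold detector over the surface-mapped gaze could look like this (a sketch that assumes the Marker Mapper export's gaze.csv with "timestamp [ns]", "gaze detected on surface", and "gaze position on surface x/y [normalized]" columns; the threshold is arbitrary and would need tuning for your surface size and viewing distance):
```python
# A naive velocity-threshold saccade detector over surface-mapped gaze (a sketch only;
# column names assume the Marker Mapper export and the threshold is arbitrary).
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
gaze = gaze[gaze["gaze detected on surface"]]

t = gaze["timestamp [ns]"].to_numpy() * 1e-9  # seconds
x = gaze["gaze position on surface x [normalized]"].to_numpy()
y = gaze["gaze position on surface y [normalized]"].to_numpy()

# Speed in surface units per second (ignores gaps where the surface was not detected)
speed = np.hypot(np.gradient(x, t), np.gradient(y, t))
is_saccade = speed > 5.0  # fixed threshold; tune for your surface size and distance

# Count rising edges, i.e. contiguous above-threshold runs, as saccade candidates
n_candidates = np.count_nonzero(np.diff(is_saccade.astype(int)) == 1)
print(f"{n_candidates} saccade candidates detected")
```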
What is gaze position on surface normalized to? E.g., the "fixation y [normalized]" column in "fixations.csv".
As defined in the documentation. The Marker Mapper maps gaze points to a 2D surface and returns them in surface coordinates. The top left corner of the surface is defined as (0, 0), and the bottom right corner is defined as (1, 1).
The normalization does not consider the surface image’s pixel dimensions. If you multiply the normalized values by the image's width and height, you’ll get the exact pixel coordinates within the image.
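For example (assuming the normalized fixation columns in fixations.csv and the pixel dimensions of your own surface image):
```python
# Converting normalized surface coordinates to pixel coordinates of your surface image
# (assumes the fixations.csv columns from the Marker Mapper export and your own image size).
import pandas as pd

IMG_WIDTH, IMG_HEIGHT = 1920, 1080  # pixel size of your surface / reference image

fixations = pd.read_csv("fixations.csv")
fixations["fixation x [px]"] = fixations["fixation x [normalized]"] * IMG_WIDTH
fixations["fixation y [px]"] = fixations["fixation y [normalized]"] * IMG_HEIGHT
```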
Do we get gaze coordinates + blinks? (How does the software count blinks: as positions / pixel coordinates, or maybe a Boolean true/false?)
Blinks are detected using their own dedicated detector and are assigned unique IDs from the beginning of the recording. See the format here. Since both blinks and gaze samples are timestamped, you can easily determine whether a blink occurred during a gaze sample; to facilitate this, that association is already provided in the export.
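If you prefer to derive it yourself from the timestamps, a small sketch (assuming the Cloud Timeseries export's gaze.csv and blinks.csv with "timestamp [ns]", "start timestamp [ns]", and "end timestamp [ns]" columns):
```python
# Flagging gaze samples that fall inside a blink, using only timestamps (a sketch;
# column names assume the Cloud Timeseries export).
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
blinks = pd.read_csv("blinks.csv")

ts = gaze["timestamp [ns]"].to_numpy()
in_blink = np.zeros(len(ts), dtype=bool)
for _, blink in blinks.iterrows():
    in_blink |= (ts >= blink["start timestamp [ns]"]) & (ts <= blink["end timestamp [ns]"])

gaze["in blink"] = in_blink
```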
Do we have the pixels of each of the AOIs, or just the overall combined pixels for the whole picture? Or do we just get a yes/no for which AOI they might be looking at?
You get an aoi_fixations.csv file, which tells you which AOI, if any, each fixation fell on.
If you need a timeline visualisation, check out this message https://discord.com/channels/285728493612957698/1047111711230009405/1302955505169465444 and associated gist
gist #2
Hi,
Is "timestamp [ns]" in gaze.csv in arithmetic progression or the interval is random
@user-a97d77 The timestamp[ns] values are not in random intervals. As mentioned earlier, they represent the exact time at which the sensor captured the data locally on the device.
It is expressed in Unix time (nanoseconds) or in other words, the elapsed time from midnight UTC on 1 January 1970 in nanoseconds.
The sampling interval, i.e. the rate at which the data is captured, depends on the specific sensor being used and its settings.
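If you want to inspect the actual intervals yourself, a quick check (assuming the Cloud export's gaze.csv):
```python
# Inspecting the actual sampling intervals in gaze.csv
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
dt_ms = np.diff(gaze["timestamp [ns]"].to_numpy()) * 1e-6  # nanoseconds -> milliseconds
print(f"median interval: {np.median(dt_ms):.2f} ms (~{1000 / np.median(dt_ms):.0f} Hz), "
      f"min/max: {dt_ms.min():.2f}/{dt_ms.max():.2f} ms")
```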
Hey could you just confirm that Gyro X,Y,Z are angular velocities and yaw, pitch, roll are absolute angles?
So in theory if I take the first derivative of the angles I should get the velocity?
Hi @user-e74b04 , while the gyro_x/y/z values are angular velocities and the yaw/pitch/roll values are absolute angles, it is a general property of IMUs that you cannot directly integrate one of the streams to obtain an accurate representation of another. One reason is that they are not sampled with infinite resolution; you will quickly notice what is known as "exponential drift". Computing a derivative, however, is okay; e.g., the derivative of the yaw signal gives you gyro z.
May I ask why you want to derive the angular velocity, rather than use the values provided by the gyroscope?
Thanks for your response!
When running a marker mapper enrichment with AOIs in the Pupil Cloud browser, can I run other demanding programs, or does my computer need to focus solely on this?
You can run other programs or even close the browser entirely. The processing happens on our servers, so your device doesn’t need to handle the heavy lifting. In fact, you can initiate the enrichment from any device, including tablets or phones. The processing time depends on the number of recordings, the total duration of the enrichment tasks, and how many jobs are running concurrently on our Cloud.
Should I add the AOIs before I make the marker mapper enrichment "run"? Or is it better to add them after (or is it even possible)?
It doesn’t make a difference. AOI metrics are computed on the fly, so you can add them either before or after running the marker mapper enrichment.
What is the fastest way to download? I have fast internet, but downloads seem slow (my tested internet speed is about 90 Mbps for both download and upload).
At 90 Mbps, downloads should be fairly quick. If you're experiencing slowness, may I ask what you were trying to download? Are you noticing this issue consistently or sporadically?
Kindly note that our servers are located in Europe, so if you’re accessing them from another region, you might experience slight latency.
To test your connection to our servers, you can use this speed test tool.
Hi,
When running a Marker Mapper enrichment with AOIs in the Pupil Cloud browser, can I run other demanding programs, or does my computer have to focus on basically only this? (Is the heavy lifting done by a Pupil Labs server somewhere, or on my PC?) It took over two hours to do Marker Mapper + AOIs for a one-hour recording.
Should I add the AOIs before I make the marker mapper enrichment “run”? Or is it better to add them after (or even possible)?
What is the fastest way to download? I have fast internet, but downloads seem slow. (I have about 90 Mbps download and upload as my tested internet speed.)
Thank you!
Hello, I am working with a lab on a project where we are trying to synchronize multiple streams of data. We are trying to use LSL to synchronize the Pupil Labs Neon Glasses (we are mainly concerned with the audio and video recordings). Is there a way to synchronize these data streams?
Hi, @user-29a91e - yes, this can be accomplished. First, be sure your LSL inlet (e.g., LabRecorder) is configured to receive data from the ***_Neon Events stream. Then, you should see LSL-timestamped recording.begin and recording.end events in your LSL data, and corresponding events in your Neon recording data. You can match these events to synchronize the data streams with the audio/video from the recording.
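A small sketch of that matching step (assuming the pyxdf package for the LabRecorder file and the Neon export's events.csv with "name" and "timestamp [ns]" columns; the stream-name check is an assumption you may need to adjust):
```python
# Matching the recording.begin event between the LSL (XDF) file and the Neon export
# (a sketch; assumes the pyxdf package, an events.csv with "name" and "timestamp [ns]"
# columns, and that the event stream's name contains "Events").
import pandas as pd
import pyxdf

streams, _ = pyxdf.load_xdf("recording.xdf")
events_stream = next(s for s in streams if "Events" in s["info"]["name"][0])

# LSL timestamp of the recording.begin marker
labels = [sample[0] for sample in events_stream["time_series"]]
lsl_t0 = events_stream["time_stamps"][labels.index("recording.begin")]

# The same event in the Neon recording, expressed in Unix seconds
events = pd.read_csv("events.csv")
neon_t0 = events.loc[events["name"] == "recording.begin", "timestamp [ns]"].iloc[0] * 1e-9

offset = neon_t0 - lsl_t0  # add this to LSL timestamps to move them onto the Neon clock
print(f"clock offset: {offset:.6f} s")
```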
Hi, in order to calculate gaze entropy, the only data we can get is pretty much from "aoi name", "fixation duration [ms]" from the "aoi_fixations.csv", right?
Hi, for the previous question:
It took two hours to do the Marker Mapper enrichment with AOIs. Our challenge is, I guess, that we have a lot of these very long one-hour videos (about 40+).
I was trying to download using the method shown in the attached screenshot ("图片 1.png", i.e. "Image 1.png").
Hi @user-a97d77! Responses below:
1. Gaze entropy - I'm not fully sure I understand what you're asking here. Could you please elaborate on your goals?
2. Enrichment time - Two hours seems reasonable for 40 hours of recordings. Do you need to map the entire recordings? Don't forget you can choose smaller segments of recordings to enrich by using 'events'.
3. Download - Apologies, but I'm not fully sure I understand what you're asking here either. Can you elaborate here as well? If it helps, you can test the connection speed to our servers with this link: https://speedtest.cloud.pupil-labs.com/
Hi, I have two questions: 1. Which Neon is best for people to wear together with their own glasses? I see there is the frame "I can see clearly now" with -3 to +3 diopter lenses and the extended range lens kit up to -6/+6, but what if I am outside these ranges? I am -8 (short-sighted), and I feel more comfortable with my own lenses. 2. For "Every move you make" with motion capture, it seems I need to build another IR-based motion capture system for tracking the 6 balls, right? Any pointers for this?
Thanks!
Hi @user-aafe26 , wearing Neon under or over glasses is generally not recommended, as it will obscure the view of either the eye cameras (disturbing gaze estimation) or the scene camera. So, "I can see clearly now" is indeed the solution for people who must wear glasses. If you need a stronger prescription, an optometrist should be able to cut a fitting lens; the lens holders have a standard shape.
With respect to "Every move you make", yes, it is intended for use with motion capture systems, but you do not necessarily need to build such a system yourself. It is compatible with motion capture systems provided by other manufacturers.
One more question, indeed: can any frame version combine "I can see clearly now" & "Every move you make"?
Do I understand correctly that you would like a version of "I can see clearly now" with the motion capture attachment of "Every move you make"? If you are interested in custom frame design, then please send an email to info@pupil-labs.com and we can continue the discussion there. 🙂
Hello! When using PsychoPy with Neon, is there a way to report the information of the AprilTag markers in realtime? Like, how many of them there are in the scene, their IDs, etc.
By real-time, I meant something that I can print or visualize per frame. Like how I can visualize the screen based gaze location using eyetracker.getPosition() in Polygon component
Can I for example print “there are X tags in the scene, their ID’s are XXX, XXX, XXX….” per frame in a text component?
Hi @user-bda2e6 , this is in principle possible. Do I remember correctly that you are using the PsychoPy Builder?
Thank you!
I am going to use specific tags. What I really want to know is how many tags are detected in real time and, if some are missing, which ones they are.
I see. Then, this would require a modification to the internals of the PsychoPy plugin to expose that information. My colleague who is responsible for that package is currently on vacation.
Are you experiencing trouble with the gaze mapping? Typically, if all of the AprilTags are well illuminated with good contrast, have equal size, and are visible to you in the scene camera preview, then if one is detected, you can have a fair amount of confidence that they are all detected.
It is because our experiment is relatively long, so I wouldn't know if there are any problems until the end.
I see. I will confer with my colleagues tomorrow morning to see if there is an alternative solution in the meantime.
Thanks Rob. I'll have a look.
Hi @user-e74b04 , I'd like to update you. My colleague, @nmt , pointed out that you can differentiate the orientation values (pitch/yaw/roll) to get angular velocity and use that as another reference/validation method. It is integration where you need to be careful.
My apologies for any confusion!
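A sketch of that validation (assuming the exported imu.csv with "timestamp [ns]", "yaw [deg]" and "gyro z [deg/s]" columns; yaw wraps at +/-180 deg, so it needs unwrapping first, and the two signals only agree approximately when pitch and roll are small):
```python
# Comparing d(yaw)/dt against gyro z as a rough validation (a sketch; column names
# assume the exported imu.csv, and the two signals only agree approximately when
# pitch and roll are small).
import numpy as np
import pandas as pd

imu = pd.read_csv("imu.csv")
t = imu["timestamp [ns]"].to_numpy() * 1e-9  # seconds

# Unwrap yaw (it wraps at +/-180 deg) before differentiating
yaw = np.degrees(np.unwrap(np.radians(imu["yaw [deg]"].to_numpy())))
yaw_rate = np.gradient(yaw, t)  # deg/s, derived from orientation

gyro_z = imu["gyro z [deg/s]"].to_numpy()  # deg/s, directly from the gyroscope
print("correlation:", np.corrcoef(yaw_rate, gyro_z)[0, 1])
```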
Another question I have is: does the real-time gaze mapping algorithm always assume a rectangular surface?
For example, if for 10 seconds, the bottom right corner tag is blocked. Then would the surface still be rectangular during those 10 seconds, or would it be something like the other image?
Hi @user-bda2e6 , may I first briefly ask why AprilTags would be occluded during your experiment? Are your stimuli moving and would cover them?
Hi team, I have collected a data set using the Neon whereby participants view different objects in a fixed spot in front of them. In some of the data sets it's apparent there has been some drift, as we record in blocks of around 20 mins. Is there a specific way to correct for this? I have not found an obvious solution from my searching, but apologies if this has been missed. Many thanks!
Hi @user-3b4828 👋 ! Temporal drift shouldn’t occur with Neon since the gaze output is generated from single-eye images using deep learning, and there’s no calibration involved that could break.
Could you clarify if you’re referring to a constant offset in the gaze output? fixations being displaced over time? Peripheral offset? Besides clarifying, can you share also some recording with us? Either through Cloud inviting us or via any cloud provider such as GoogleDrive, if you invite us, we can examine what happened.
Hi👋, I’m trying to use the Reference Image Mapper on Pupil Cloud. I’ve taken the photo to use as the reference image and recorded the corresponding scanning video, but whenever I try to process it, I get this error:
"Error: The enrichment could not be computed based on the selected scanning video and reference image. Please refer to the setup instructions and try again with a different video or image."
Do you know how to fix this issue? I can provide the photo I’ve used if needed.
Hi @user-cc1b01! This error means that the reference image or scanning video were not optimal and the mapping failed. Please review the documentation, and this relevant message on how to run RIM: https://discord.com/channels/285728493612957698/1047111711230009405/1268083029411364927.
You can also find more details on our documentation: https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/
If you prefer, you can invite us to your workspace, such that we can review your data and provide more accurate feedback. Just let me know and I'll share with you the email you can use to invite us to your workspace. 🙂
Hello Everyone
I don't know whether this is the right channel, but I am wondering if anybody can help me out on following:
I am a Physical Education teacher and would like to provide my 14-15 year old students the opportunity to do a balance task while wearing a neon. In a following lesson, they can (visually) compare their personal heatmap with their classmates.
I would really like to do this, but am wondering how to handle the section in the Pupil Labs legal documents which states that "when collecting data of others, lawful permission should be obtained". In a secondary school, this usually (also) involves getting permission from their parents. I don't want to overwhelm or even scare their parents with long and complicated informational texts. I absolutely see the need to inform them, but want to keep it as short and simple as possible.
Who could advise me? Thanks a lot in advance.
Thanks for your questions; this is the right place to ask them! What you describe is feasible and sounds like an interesting teaching use-case. The statement about obtaining lawful permission before recording others' data is quite broad, so we'll try to provide further context.
The type of permissions you need will often depend on your location, institutional procedures, and how the data will be used. For instance, when data are collected and published, such as in scientific papers, rigorous procedures like ethics review panels are typically required. However, for general teaching, as described, it's likely to be far less stringent, e.g. just obtaining parental consent.
The best course of action is to consult your institution's data controller or equivalent for advice. If they need more information, they can check the data privacy FAQs, which explain how data are stored in Pupil Cloud, and so on. It's also true that recordings can be kept offline if you don't want to use Cloud. If there are any follow-up questions, just let us know!
Hello Pupil Labs team, I had a question regarding recordings and calibration with the Neon. If I were to make a recording with a specific wearer and gaze offset correction, but then after the recording find that some drift has occurred (even after physically resetting the glasses on the participant), and a new gaze offset correction is made, will that be reflected in the prior recordings of that participant? Does it work retroactively, or only on new recordings? Thank you.
Hi @user-ac085e 👋! Just to clarify a few things, unless you’re using Neon with Pupil Capture (in other words, traditional eye tracking pipeline), there’s no calibration involved. It would use NeonNet, our deep learning approach to estimate gaze. Since the gaze estimation is based on single-eye images, there’s no temporal drift either. Are you referring to some other type of drift?
Now, about the gaze offset—this is a linear correction used to improve accuracy by addressing a constant offset. There are a few ways to apply it:
Now, why might the offset differ for the same person?
Hope this clears things up! Let me know if you have any other questions. 😊
Is it possible to define two surfaces (using markers) in Pupil Cloud? I would like to analyze gaze directed at the screen and gaze directed at the hand (e.g., while writing)
Hi @user-6e3b6d , currently, it is not possible with a single Marker Mapper Enrichment to define multiple surfaces, although you can define multiple AOIs within a surface.
As alternatives, you could:
If you absolutely need two surfaces, then Neon Player's Surface Tracker plugin allows that.
As a final fallback, there is also Manual Mapper.
Hello, Pupil Labs.
I am currently using two Pupil Neon devices.
The pupil waveforms of the two Pupil Neon devices are different as shown in the images below.
One of the waveforms appears to be very noisy.
What is the cause of this noisy waveform?
Hi @user-dcc847! Could you please clarify what’s being plotted here?
Hi community:
I work for the government and we have strict rules about where we can store data. I don't think I'm allowed to use the cloud. I was wondering if your company has worked with the DOD and has security level IL4. Also, does your offline app have the capability to map gaze?
Hi @user-83d076 , since you asked about Pupil Cloud, I have moved your message to the 👓 neon channel.
With respect to data privacy, Pupil Cloud is GDPR compliant and passed an external compliance audit in June 2024. You can find the certification here.
If you would like more details, you can check out our Data Privacy FAQ.
By "offline app", do you mean Neon Player? Are you looking to map gaze in real-time?
I'm referring to Neon Player, and I don't plan on doing the gaze mapping in real time (I'm assuming real-time means during the recording). I saw that there was a tutorial on gaze mapping for dynamic screens and was wondering if I could do the gaze mapping in Neon Player instead of the Cloud.
Hi @user-83d076! I'll step in briefly here for Rob. I assume you mean this tutorial in the Alpha Lab section of our docs. That does build on top of Pupil Cloud exports. However, you can achieve something similar without Cloud – use Neon Player's Surface Tracker plugin. You essentially affix AprilTag markers to your screen, and these are used to map gaze from Neon to your screen.
I downloaded the data from Pupil Cloud, but the 3d_eye_states.csv contained no data except for the header row. I got this error for the recordings on Dec. 16th and 18th. The data seem fine for other recordings (on Dec. 20th, 23rd, and 25th). I don't think I changed any settings. Do you have any clues why? How can I (re)get the pupil diameter data?
Hi @user-d78df3 ! Could you kindly create a ticket on 🛟 troubleshooting and provide the affected recording IDs such that we can have a look at what happened and reprocess them (if needed)?
Hi @user-ebe453 ! You can stream all sensors from Neon, including the IMU, via our realtime API. However, the IMU is not currently available through the LSL integration.
If having that stream in LSL would be valuable to you, feel free to suggest it in the 💡 features-requests channel. We'd love to hear your feedback!