Neon Companion has not detected a disconnection with the glasses. Also, I can't see the image of the eye camera in the Neon Companion app. I'm not using a third-party USB cable to connect, and I haven't updated the Android version. Please let me know if you have any suggestions on how to deal with this.
Hi @user-50082f! I've seen you opened a ticket in the troubleshooting channel. We'll assist you there with some debugging steps to help you resolve this issue.
Is it possible with the player to get the visualisations available on Cloud (heatmap, AOI heatmap), even after correcting an offset, trimming a video, etc.?
Hi @user-bdc05d! Albeit different from those in Cloud, you can obtain heatmaps if you use the Surface Tracker in Neon Player. Regarding the AOIs, there are no AOIs like in Cloud for Neon Player or the Reference Image Mapper, but you can build custom plugins for your use case.
May I ask what is the reason for not using Cloud?
The custom plugin could be an idea, but I'm new to this and have never made one before, so I'm not sure I can manage it within my deadline :/ (I'm an intern and have never built a plugin before)
Currently I have an offset on my recording data, but I would happily use Cloud if I could modify this offset. It's too bad, as it is just a transform to apply to x and y.
Hi @user-bdc05d! Regarding https://discord.com/channels/285728493612957698/1247125277440741397, I recommend upvoting and following that post.
I think rather than building a plugin, it would be much easier to apply the offset yourself manually over the data and build the heatmap yourself.
I already tried the Surface Tracker in Neon Player, but it does not work well on my data. I would have preferred mapping onto a static reference image, as in Cloud :/
Applying an offset over the data.
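A minimal sketch of that approach, assuming the Cloud Timeseries export and its gaze.csv columns (verify the exact column names against your export); the offset values here are hypothetical:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical constant offset (in scene-camera pixels) measured from the recording
OFFSET_X, OFFSET_Y = 12.0, -8.0

gaze = pd.read_csv("gaze.csv")  # from the Cloud Timeseries export

# Apply the offset to every gaze sample
gaze["gaze x [px]"] += OFFSET_X
gaze["gaze y [px]"] += OFFSET_Y

# Simple 2D-histogram heatmap over the scene camera frame (Neon: 1600x1200 px)
heatmap, _, _ = np.histogram2d(
    gaze["gaze y [px]"], gaze["gaze x [px]"],
    bins=(120, 160), range=[[0, 1200], [0, 1600]],
)

plt.imshow(heatmap, cmap="hot", extent=[0, 1600, 1200, 0])
plt.colorbar(label="gaze samples")
plt.title("Offset-corrected gaze heatmap")
plt.show()
```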
Hello, I stumbled upon your website. The Neon module seems amazing, but I couldn't find where to purchase one. Is there any shop page that I'm missing?
Hi @user-fcc31d, thanks for showing interest in our products! Click on this link to access our shop page.
Thanks, I'll have a look.
What headsets is it compatible with? I can't find a clear answer on the website. I understand that a mount is offered just for the Pico at the moment, but I can build my own for others. Just to clarify, I'm interested in the Neon XR module.
Hello, is there a way to change the view from the scene camera to the eye camera in Pupil Cloud to set events?
And another question came up: is there any way (a boolean, e.g.) to tell whether the test person has their eyes closed or open, to make sure the data we collect (e.g. pupil size) is valid? During our experiment the test person might have to close their eyes for a bit.
We currently only sell mounts for the PICO 4 headset. For other mounts, you'll have to 3D-print and build them yourself. We're working on adding more mounts in the future, but I don't have the estimated time for it.
Just out of curiosity, which XR headset are you using? You might also want to check out the neon-xr channel for all things related to Neon and VR/AR.
I think I phrased the question wrong: I meant which headsets (or devices) is it compatible with, not considering the mount (the mount is not a problem, I can design and 3D print it). Or does it work with any system? Android, Windows, Linux etc., anything that can run MRTK or OpenXR? In my day-to-day, I mainly work with Quest 2-3, HoloLens, and soon with Vision Pro. Also Android and iOS AR, XR standalone, and WebXR.
Hi @user-95d0dd, thanks for reaching out! Eye videos are currently not available on Pupil Cloud. Feel free to post a request for this feature in the features-requests channel! As for eye openness, we're currently working on this metric. Unfortunately, I don't have a timeline for when it will be finished.
Regarding the eye openness: we did some tests yesterday to check the blink detection. We were hoping that a closed eye would be detected as a blink, so we could make sure to only use data when the eyes are open. However, it seems that the intentional blinks were not detected as blinks at all, maybe due to their longer duration compared to non-intentional blinks. As our test persons will work on a screen, it is very likely that they also blink intentionally from time to time to moisten their eyes. Therefore we are afraid that our data will be compromised, so we are wondering if the team can provide some support with our videos. Thanks for your attention.
Do you mean like this Alpha Lab article?
But this is for 2D images, right? Can I do the same over a video? I can see here (https://docs.pupil-labs.com/alpha-lab/multiple-rim/) that this is relevant to what I want to do. But how are the fixations depicted in this example calculated?
Yes, like this! I can see the script available, thank you!
Oh! I thought you wanted them over the reference image.
I don't think I have a snippet that plots fixations over a video, but the fixations CSV file contains the x and y coordinates of those fixations, and both Neon Player and Pupil Cloud (that's what you see in the link you shared) offer visualisations like the one you are looking for.
Could you develop what you are trying to achieve?
Yes, I will try to develop it. Thank you for your help!
Hi @user-42cb18! With "develop" I did not mean to ask whether you can achieve your goal, but whether you could explain in more detail what your ultimate goal is, so that I can better assist you. But, of course, feel free to develop the code for it.
Sorry for the misunderstanding! What I am trying to achieve is to isolate the fixation area in an image frame.
Hi there, my enrichment for qr code tracking looks like this and I'm not sure why - can you help? It seems completely misplaced (this is the case over all frames of the video and different videos, including videos where it used to look fine). Thanks
Hi @user-7413e1! Can you please open a ticket in the troubleshooting channel and share the enrichment ID there?
of course yes! sorry for posting in the wrong place
No worries!
Hi all, we have a user in our team that is using the neon via a shared gmail account. We want to have them create their own account for Pupil instead. Is it possible to migrate the workspace they created using the shared gmail account to their personal account? I noticed that you can invite collaborators to a workspace, and make them admin, but is it possible to make them owner too? Furthermore, would this have any impact on the companion app? Can they login with their personal account and still have access to the previously recorded data (from the shared account)?
Hi @user-23177e, thanks for reaching out! It's not possible to migrate the workspace. Also, you can't move files across workspaces. If you'd like this feature, feel free to upvote it: https://discord.com/channels/285728493612957698/1212410344400486481
Yes, you can invite this person to the workspace as a collaborator, and the person will get access to the recordings in that workspace.
Recordings are uploaded to the workspace of whichever account that's signed into the Neon Companion app. Importantly, recordings will only upload to the chosen workspace. For example, if the account that's signed into the app has access to many workspaces, please choose the desired workspace before starting recording. Once you enable Cloud upload on the app, the recordings are uploaded to the selected workspace.
Ok great, thanks for your reply. I guess creating the personal account and giving them access to the existing workspace would be the answer.
Hi @user-876d7f, let me clarify how the blink detection works. Our blink detection is based on duration thresholds: the minimum blink duration has to be 100 ms, while the maximum duration between blink onset and offset is 30 ms. You can read more in the blink detection white paper here.
In your case, your participants aren't blinking but intentionally closing their eyes. This means that the time between eyes closed and eyes open is atypical and exceeds the duration thresholds for being considered a blink. Thus, I'd recommend that you first look through the blink detection offline guide. Then, you'll be hacking into the blink detection script, and things will get quite experimental as you customize it to your specific needs.
For example, certain lines of our blink detector pipeline would be useful, since they help define the thresholds described in the white paper.
Hi, I am about to use the Pupil Neon for my multimodal brain imaging study, which also includes fNIRS. I am an electronics engineer. To mitigate potential interference, what I really need to know is a) the wavelength of the NIR LEDs in Neon's frame, b) the illumination pattern, if there is one (constant or PWM; if PWM, at roughly what frequency?), and c) if possible, a ballpark estimate of the intensities used.
Would it be possible to tell me any or all of these details?
An independent question is what synchronization mechanisms with other modalities exist when it comes to higher precision (e.g. with eeg in the ms scale)
Hello! I'm trying to build a nice UI using the Neon headset and the Python real-time API. I want to use the gaze data to make nice dynamic graphs. After some attempts, I came here to ask a few questions about the gaze data variables in the async API, from eyeball_center_left_x to optical_axis_right_z. Here is the point: what do all of these represent exactly, and in which units? Thanks a lot for your reply.
Hi @user-a23b90, thanks for reaching out! The eyeball center is given relative to the scene camera of the Neon Module. The eyeball center variables are all in mm, and the optical axis variables are directional vectors, d. You can learn more about them here.
By the way, directional vectors are unit vectors that represent a spatial direction. See here.
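To make the geometry concrete, here is a small illustrative sketch (the numbers are made up): the eyeball center is a 3D point in millimetres in scene-camera coordinates, and the optical axis is a unit direction vector, so points along the eye's optical axis are center + t · direction.

```python
import numpy as np

# Values as they arrive in a gaze/eye-state sample (hypothetical numbers)
eyeball_center_left = np.array([-30.0, 10.0, -35.0])  # mm, scene-camera coordinates
optical_axis_left = np.array([0.05, -0.02, 0.9986])   # unit direction vector

# Sanity check: the optical axis should have (approximately) unit length
assert np.isclose(np.linalg.norm(optical_axis_left), 1.0, atol=1e-2)

# A point 500 mm in front of the eye along its optical axis
point_on_axis = eyeball_center_left + 500.0 * optical_axis_left
print(point_on_axis)
```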
I look forward to what you do with the real-time API!
Hi Pupil Labs,
We've found that Neon recordings don't have any audio when the phone is locked and we use the recording start/stop API, but the audio is there when recording is triggered with the screen turned on and the Companion app active. This behaviour defeats the purpose of remote-controlling the recordings via the API. Is there a known workaround for this?
Hi @user-ccf2f6! Thanks for the report. We've identified the cause and a fix will be included in the next Companion App update. If you need something sooner, please reach out to us via email.
Dear support team, is this question too technically detailed? It would be very important to me to be able to do joint eyetracking with fnirs
Hi @user-fcdbb6! It is not too technical. I hadn't replied to you yet because I am double-checking the radiant intensity; please allow me to get back to you on this later.
The IR LEDs have a wavelength of ~860 nm (centroid 850 nm ± 42 nm Δλ) and the pattern should be constant.
Regarding how to sync with other modalities, we have support for Lab Streaming Layer and our realtime API has a time offset estimator. So, it really comes down to what capabilities your other sensors have or how/which kind of master clock would you like to use.
Hi Pupil Labs, I'd like to know whether the blink detection works well with the async API? Thanks for the help and thanks for your amazing work.
Hi @user-a23b90! Do you mean the realtime blink detector? If so, it should work, but besides modifying the code shown in the notebook, you will need to modify the helper.py script within the blink_detector folder to use the async version of the realtime API.
Thank you @user-d407c1!! Constant would mean constantly turned on, yes?
Yes, but please allow me to get back to you with more details on the IR LEDs.
(LSL is great and solves a lot)
Thank you
To do so, using the Timeseries CSV files from Cloud or pl-rec-export would simplify things a lot.
I don't have a polished snippet to share at the moment, but here's a basic outline of the process (a rough sketch follows after the list):
1. Read the timestamps of the world frames.
2. Load the scene video into a PyAV container.
3. For each timestamp, read the frame.
4. Identify if there is a fixation using the start and end timestamps.
5. Use the x and y positions of the fixation to plot it.
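A rough, untested sketch of that outline, assuming the Cloud Timeseries export (world_timestamps.csv, fixations.csv) and the scene video file; column names, resolution, and frame rate should be checked against your own data:

```python
import av
import cv2
import pandas as pd

world_ts = pd.read_csv("world_timestamps.csv")["timestamp [ns]"].to_numpy()
fixations = pd.read_csv("fixations.csv")

container = av.open("scene_video.mp4")
writer = cv2.VideoWriter("fixations_overlay.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), 30, (1600, 1200))

# Steps 1-5: iterate over frames, look up the fixation active at each timestamp
for frame, ts in zip(container.decode(video=0), world_ts):
    img = frame.to_ndarray(format="bgr24")

    active = fixations[(fixations["start timestamp [ns]"] <= ts) &
                       (fixations["end timestamp [ns]"] >= ts)]
    for _, fix in active.iterrows():
        center = (int(fix["fixation x [px]"]), int(fix["fixation y [px]"]))
        cv2.circle(img, center, 30, (0, 0, 255), 4)

    writer.write(img)

writer.release()
```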
Let me know if you need more details or any help!
Thank you for the feedback.
It stands for presentation timestamps, in the pyav/ffmpeg context you can find more info here
Good morning :) For our setup we intend to use the Marker Mapper and therefore show markers in the corners of a screen. Unfortunately I couldn't find at what size I have to present the markers for them to work without occupying more of the screen than actually needed. In this setup our test person will not be further away from the screen than 1 m. Thank you in advance for your support.
Hi @user-876d7f! Please have a look at this message regarding marker size and other considerations for on-screen markers: https://discord.com/channels/285728493612957698/285728493612957698/1220676625604153416
Hello, I'm using Neon Player 4.1.4 to annotate videos and track surfaces. Upon export, the player crashes. The log says:
2024-06-10 11:21:26,900 - player - [INFO] launchables.player: Created export dir at "XXXXX\neon_player\exports\001"
2024-06-10 11:21:29,760 - player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "launchables\player.py", line 670, in player
  File "observable.py", line 367, in call
  File "surface_tracker\surface_tracker_offline.py", line 597, in on_notify
TypeError: 'NoneType' object is not subscriptable
2024-06-10 11:21:29,780 - player - [INFO] launchables.player: Process shutting down.
Can someone look into that please? Thanks in advance.
Hi, @user-688acf - I'm unable to reproduce this error. Would it be possible for you to share your recording folder with us? If you aren't comfortable sharing it publicly, you can email it to data@pupil-labs.com
Hello, thanks for the quick reply. Unfortunately I'm unable to share the recording at all because it contains third-party data. However, I was able to resolve the issue by deleting the entire export folder and re-running everything. But I think someone should take a look into that uncaught NoneType exception; it's pretty annoying when the whole player crashes and you essentially lose all the annotations etc.
Hello, we are currently experiencing immediate app crashes after opening the Companion app (v8.15) on two separate devices. Our third device has version 8.10 installed and works. Is there a problem with the new version?
Hi @user-23cf38! Do you mean 2.8.15? Could you create a ticket in the troubleshooting channel so that we can better assist you?
Hello everyone, how are you? I'm trying to use the new Map Gaze Into a User-Supplied 3D Model feature, but I'm having some problems with the 3D model, since it's the first time I've used this type of data. I used the 3D Scanner app to create the model of my room and now I'm looking for the tag information. Any tips on how I can get the position and size in the output space units? Should I use Blender or another kind of 3D software, for example? If so, maybe someone has a tutorial link they can send me?
Hello, how are saccade amplitudes in degrees calculated in Neon given that there is no object distance information?
Hi @user-068eb9, we would also like to ask what you mean by "object distance information" and how that factors into saccade amplitude in your case.
Hi @user-068eb9, thanks for reaching out! We consider the gaps between fixations as saccades, and assume that there are no smooth pursuit eye movements. You can learn more here.
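In other words, the amplitude is purely angular (the angle between the gaze directions at the saccade's start and end), so no object distance is needed. A sketch under the assumption that azimuth/elevation are in degrees relative to the scene camera:

```python
import numpy as np

def saccade_amplitude_deg(az0, el0, az1, el1):
    """Angular distance between two gaze directions given as azimuth/elevation
    in degrees (great-circle distance / spherical law of cosines)."""
    az0, el0, az1, el1 = np.deg2rad([az0, el0, az1, el1])
    cos_amp = np.sin(el0) * np.sin(el1) + np.cos(el0) * np.cos(el1) * np.cos(az1 - az0)
    return np.rad2deg(np.arccos(np.clip(cos_amp, -1.0, 1.0)))

# Hypothetical azimuth/elevation at the saccade's start and end
print(saccade_amplitude_deg(-5.0, 2.0, 7.0, -1.0))
```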
Hi @user-ff2367 , great to hear that you are trying this! Am I correct that you put an AprilTag on the wall of your room and made a recording with it?
Instructions can be found in the repo. Please also see @user-cdcab0's post here (https://discord.com/channels/285728493612957698/1250066927980642395/1250232074611200061), which shows an easy and effective way to get the position, size, and rotation values of your AprilTag.
For context, @user-cdcab0 programmed Tag Aligner.
Tag Aligner is independent of 3D software. The units and coordinate system are completely up to you. You can place the origin of your coordinate system at the bottom left of the door or at the center of your desk. You then measure the center of the AprilTag with respect to this origin in your 3D software's units, meters, feet, etc. The choice is yours. For example, @user-cdcab0's post shows an easy way to specify everything in Blender's system.
In the case of the Alpha Lab article, meters were the units and a ruler was used (see attached image). The foot of the statue was chosen as the origin, and the AprilTag was 0.705 m upwards from that point. The size of the tag's black side was measured as 0.135 m. This was put into the tag's json file (see step 0.3 of the repo). If you decide to do it this way, then you should scale your model to the same dimensions. For the Alpha Lab article, Blender's measure tool was used as a guide. The top of the statue podium was about ~1.27 m above the foot of the statue.
Please let us know if you have other questions
Hello @user-f43a29, thank you very much for the explanation!! It was quite enlightening. Also, do you have any tips for checking the tag orientation? This part also left me a little confused. I believe I'm using the wrong orientation, because when I ran the model, the motion and gaze within the 3D model were incorrect.
Hi @user-fcdbb6! The continuous radiance values measured at 0.2 m are 205 W sr⁻¹ m⁻². Let us know if you need anything else.
Thank you @user-d407c1 this is helpful. Just to make really sure: you are also confirming the illumination scheme is continuous, and not multiplexed (PWM/TDM,...)?
Hi @user-ff2367 , you are welcome!
The tag orientation needs to be specified in the json file as a quaternion in the OpenCV coordinate system.
How did you hang and orient your AprilTag?
If you have made a 3D model of your scene in Blender, then you can make everything much easier by following @user-cdcab0 's instructions here (https://discord.com/channels/285728493612957698/1250066927980642395/1250232074611200061). The process should be similar for other 3D modelling software.
Hello, @user-f43a29 thank you very much again!
[email removed] , you are welcome!
Hi I am new to Neon and I'm trying to use real-time-screen-gaze. I generated april tag markers on the four corners with marker size 128x128 and 16-pixel white padding. When I specify coordinates in marker_verts, should I include the white padding, or only list the corners of the markers?
Hi, @user-b7c82e - do not include the white padding. Only the black square of the markers is considered.
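A hedged sketch of what that looks like for 128×128 px marker images with 16 px of white padding drawn flush with the corners of a 1920×1080 screen: the black square spans the inner 96×96 px, so each corner is inset by 16 px. The class and method names follow the real-time-screen-gaze README, but double-check them against the version you installed; marker IDs and screen size are just examples.

```python
from pupil_labs.realtime_api.simple import discover_one_device
from pupil_labs.real_time_screen_gaze.gaze_mapper import GazeMapper

device = discover_one_device()
calibration = device.get_calibration()   # scene-camera intrinsics from the module
gaze_mapper = GazeMapper(calibration)

screen_size = (1920, 1080)  # example monitor resolution in pixels

# Corners of the *black* square of each marker, in screen pixels, assuming each
# 128x128 marker image (16 px white padding included) sits flush in a screen corner.
# Corner order here: top-left, top-right, bottom-right, bottom-left (verify vs. README).
marker_verts = {
    0: [(16, 16), (112, 16), (112, 112), (16, 112)],            # top-left marker
    1: [(1808, 16), (1904, 16), (1904, 112), (1808, 112)],      # top-right marker
    2: [(16, 968), (112, 968), (112, 1064), (16, 1064)],        # bottom-left marker
    3: [(1808, 968), (1904, 968), (1904, 1064), (1808, 1064)],  # bottom-right marker
}

screen_surface = gaze_mapper.add_surface(marker_verts, screen_size)
```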
Thanks!!
@user-cdcab0 Thank you for providing those Tag Aligner + Blender instructions!
For anyone who referenced my previous posts, I recommend to also follow @user-cdcab0 's instructions here: https://discord.com/channels/285728493612957698/1250066927980642395/1250232074611200061
You will have a much easier time determining the rotation and position of your AprilTag.
Hello! My name is Ryan. I am using Neon to track where on the spectacle lens a subject is looking through. I've used the optical axis vector in 3d_eye_states.csv with good success. However, it's come to my attention that, from a physiological standpoint, it makes more sense to track the visual axis. This is typically a few degrees off from the optical axis of the eye.
Based on what I've read on Discord and on the Neon website, the gaze information provided by Neon Player is the visual axis. The only problem is that it's in terms of the world camera coordinate system. Is it possible to get the gaze of each eye as a unit vector, like how the optical axis is provided?
"gaze_point_3d_x (and y, z)" in gaze_positions.csv is the closest I've found, but it is still in world camera coordinates. Thank you in advance!
Hi @user-429d6a! You are correct that we provide the "optical axis" in the eye states and not the visual axis per eye, and that the latter would be more appropriate for your application.
Technically, you should be able to compute the "visual axis", since you have two vectors.
To get the "visual axis" vector for the eye, you could convert the gaze vector from spherical coordinates to Cartesian; then, as both vectors are normalised, compute the dot product and finally the angle between them.
But it can get a bit more complicated, and it would depend on your definition of visual axis; take for example the attached diagram, based on Bennett & Rabbetts' Clinical Visual Optics.
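Not an official recipe, but a minimal sketch of that computation, assuming gaze azimuth/elevation in degrees relative to the scene camera (x right, y down, z forward; verify the sign convention against your data) and the optical axis vector from 3d_eye_states.csv:

```python
import numpy as np

def gaze_to_cartesian(azimuth_deg, elevation_deg):
    """Unit gaze direction from azimuth/elevation.
    Assumed convention: x right, y down, z forward in the scene camera."""
    az, el = np.deg2rad(azimuth_deg), np.deg2rad(elevation_deg)
    return np.array([np.cos(el) * np.sin(az), -np.sin(el), np.cos(el) * np.cos(az)])

gaze_vec = gaze_to_cartesian(azimuth_deg=3.2, elevation_deg=-1.5)   # hypothetical sample
optical_axis_right = np.array([0.04, 0.01, 0.9991])                 # from 3d_eye_states.csv

# Both are (approximately) unit vectors, so the angle between them
# follows directly from the dot product.
cos_angle = np.clip(np.dot(gaze_vec, optical_axis_right), -1.0, 1.0)
print(f"Angle between gaze and optical axis: {np.rad2deg(np.arccos(cos_angle)):.2f} deg")
```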
Also, I think the pantoscopic angle in Just Act Natural is a bit more than the average, but don't quote me on that.
Hi @user-d407c1, thank you for your helpful response. I was able to convert the elevation/azimuth to x,y,z. After thinking more about my problem, I've come across another question that I think you could help answer. Is the gaze data in "gaze.csv" calculated from the optical axis data? If so, perhaps there is some code I could reference for my own project.
Visual axis
Hello, I have a question about saving events with custom fields over the Neon API. I have just started experimenting with our lab's new Neon and its API. I have been using the Pupil Core for many years and rely heavily on the API's ability to save annotations with custom fields, enabling me to save data such as stimulus condition, order, and other settings within multiple columns alongside the standard 'label' column. Is it possible to also save multiple columns when saving events with the Pupil Neon? I had assumed it would be possible given how easy it was with the Core, but I haven't been able to find any information in the Neon documentation on how to do this.
Hi @user-a09f5d. To send events using Neon's Real-time API, you can use the send_event function that takes as input arguments the event name and the current timestamp (see here for some examples).
Although more columns cannot be added, you can still create custom names (incl. experimental variables) within your experiment (e.g., as in this tutorial). For example, assuming you have a matrix with your experimental conditions and a script that loops over your experimental trials based on this matrix, you can easily assign the stimulus condition or order to your event's name.
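For example, a minimal sketch with the simple realtime API (device discovery and event names are just illustrative):

```python
import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# Timestamped on arrival at the Companion device
device.send_event("block_1.start")

# Or with an explicit timestamp from this computer's clock, in UNIX nanoseconds
device.send_event("stimulus_A.onset", event_timestamp_unix_ns=time.time_ns())

device.close()
```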
Hi @user-480f4c, I have been playing with the send_event function and have gotten it working (including the example you shared). However, what I was hoping to do is save multiple columns alongside the event label in the same way as I do with the Pupil Core API. For instance, when I export the annotations from the Pupil Core, I get a csv file that looks like this example (see attached screenshot), which is an efficient and easily analysed way of saving all of the experiment parameters alongside the event (label) of interest.
However, based on your response this isn't possible with the Neon, which is a shame! Is there a reason why this feature was removed or is it just not compatible when using a companion device?
I guess one workaround is for me to save all of the parameters within a single event label (so they all share the same timestamp), e.g. device.send_event(f"description.of.event_{ID}_{age}_{sex}_{block}_{TargetDistance}_{TargetPosition}"), and then use a post hoc script to separate the parameters into different columns using the '_' separator. Do you see any reason why this wouldn't work? Is there a character limit on how long an event label can be?
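A sketch of that workaround, with purely illustrative parameter names and fields; the events.csv column names should be verified against your export:

```python
import pandas as pd
from pupil_labs.realtime_api.simple import discover_one_device

# During the experiment: pack the dynamic trial parameters into the event name
device = discover_one_device()
params = {"block": 2, "target_distance": 60, "target_position": "left"}  # illustrative
device.send_event("trial.start_" + "_".join(str(v) for v in params.values()))
# -> sends "trial.start_2_60_left"
device.close()

# Post hoc: split the packed names back into columns from the events export
events = pd.read_csv("events.csv")  # from the Cloud / Neon Player export
trials = events[events["name"].str.startswith("trial.start_")].copy()
parts = trials["name"].str.split("_", expand=True)  # "trial.start" itself has no "_"
trials[["block", "target_distance", "target_position"]] = parts[[1, 2, 3]].to_numpy()
```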
If I'm wearing the module in a sound booth (Faraday cage) after I calibrate the IMU outside, will it be okay for it to detect magnetic north?
Hi @user-0001be, for accurate yaw orientation, the IMU requires continuous readings of magnetic north, even after the calibration. If these are disrupted, yaw orientation will be inaccurate. You can try it to see how it works out for you, but it is likely that the Faraday cage will interfere with the IMU. We would recommend you validate any measurements with a real compass, used outside the faraday cage.
Your approach would work; however, I am wondering why you would want to code the subject ID, age, and gender in every event. You could simply save the final csv file with this information in its name (e.g., ID1_33_F.csv). To keep things tidy and clean for your data collection, I'd simply add the event-related information to the event name, that is: {block}_{TargetDistance}_{TargetPosition}
I can't see a reason why the number of characters would be an issue, but I can test it out on Monday and let you know.
So the main reason that this fixed information cannot be saved within the file name of the csv file is simply because the file name would be too long (there is a LOT of additional fixed information I'd need to save that is not included in the provided example). This would be a problem because Windows can't open files when the file path is too long.
You are right that fixed information doesn't need to be included in every event label though. What I would probably end up doing is including the fixed information in a single event at the start of the recording and then just including the dynamic information within each event thereafter. I think in this example I was just trying to replicate the output I'm currently able to get from the core (in the past the redundancy never caused any issues and just made sub-setting easier).
Thanks for the help.
For all the info that doesn't change within a recording, you could send a single event at the beginning of the recording that contains all of that info. You could also send one event per piece of info (then you don't have to worry so much about the length)
For sure! My current plan is to include the fixed information in a single event at the start of the recording and then just including the dynamic information within each event thereafter. Thanks!
How is missing data coded within the gaze.csv exported file? I am looking at a sample file of 10 minutes of recording and there are zero NA's or empty rows for the gaze position (i.e., gaze x [px]); it seems likely that there would be missing data for a few of the samples.
Hi @user-309101, thanks for reaching out! May I know what you mean by "there would be missing data for a few samples"? What is the assumption behind this?
Are you looking at blinks, or data where the glasses aren't being worn, etc?
Hi @user-07e923. Thanks for the response. I work in clinical research with children, and in typical eye-tracking experiments there is data loss due to movement, closing eyes, rubbing hands on eyes, jumping out of the chair, etc. Using Neon is newer for me; I guess I just assumed there would be times when the eye tracker could not find the eyes (above and beyond blinks). Ultimately, I am trying to calculate some type of data quality metric to demonstrate feasibility in my intended participant sample.
Thanks for the clarification!
Data loss due to movements like jumping out of a chair, or eyelid occlusion, is likely when using more traditional eye trackers. This is because they often rely on tracking features of the eye, like the pupil or corneal reflections, and have a calibration process prior to recording. In conditions where these features aren't visible, the eye trackers will output empty data because nothing can be estimated. In addition, headset slippage, often induced by dynamic movements, can degrade the accuracy of the calibration.
Neon is different. It uses deep learning to estimate gaze, which is essentially calibration-free and very robust to slippage. This makes factors like glasses slippage and head or body movements non-issues for Neon.
We don't provide a per-user data quality metric alongside the gaze output. However, we have a thorough accuracy evaluation of Neon that shows its average performance across a large cohort of wearers during various activities. This evaluation will give you a better sense of how it works and the feasibility of using it with your intended participant sample in a given scenario. I highly recommend reading it. We also have a blink detector, so you can easily filter out periods of gaze data during blinks in the exports.
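For example, a sketch of that blink-filtering step using the exported CSVs (column names follow the Timeseries export; verify them against your files):

```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
blinks = pd.read_csv("blinks.csv")

ts = gaze["timestamp [ns]"].to_numpy()
in_blink = np.zeros(len(gaze), dtype=bool)
for _, blink in blinks.iterrows():
    in_blink |= (ts >= blink["start timestamp [ns]"]) & (ts <= blink["end timestamp [ns]"])

gaze_clean = gaze[~in_blink]
print(f"Removed {in_blink.mean():.1%} of gaze samples that fell within blinks")
```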
If you do want a data quality metric to be output in Neon's data streams, please add it as a feature request in the features-requests channel.
Hi Neon team, I'm using device = Device(address=ip, port=8080) to locate the Neon eye tracker. However, this code returns the OnePlus phone that the eye tracker is supposed to connect to. When the eye tracker is not connected to this phone, the code still returns the Device object as long as the phone is on the network. Is there any way to use the Device() function to tell whether the eye tracker is properly connected and ready to send_event or stream the eye video? I'd like to achieve the same effect as discover_one_device(), namely that when the eye tracker is disconnected it returns None, but I need to use the IP address.
To continue with this issue, I tried to use device.serial_number_glasses to see if the device is connected to an eye tracker. However, when I tested it, this function always returns -1, no matter whether the eye tracker is connected or not. Even when the computer and the eye tracker were communicating well (streaming the eye video and send_event() worked fine), device.serial_number_glasses still returned -1, where it should return the serial number of the eye tracker. Did I do something wrong, or is it a bug that needs to be fixed? I'm using the latest version of the realtime API (1.2.1).
Hi Neon team, I was testing out the real-time-screen-gaze package and generated 256x256 markers with white borders (48 pixels). Visual stim were drawn using PsychoPy. gaze_mapper was able to create a surface, but it failed to produce surface_gaze coordinates. I tried to tag the markers offline using marker mapper on pupil cloud, no luck as well.
Your tags are flipped. If you're generating the marker images using the marker_generator from the real_time_screen_gaze package, you have to flip them manually for PsychoPy:
from pupil_labs.real_time_screen_gaze import marker_generator
marker_data = marker_generator.generate_marker(marker_id=0, flip_x=True)
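For completeness, drawing the flipped marker in PsychoPy could look roughly like the sketch below. The exact array format returned by generate_marker may differ between versions, so treat this as an illustration rather than a reference implementation; position and size values are arbitrary.

```python
import numpy as np
from PIL import Image
from psychopy import visual
from pupil_labs.real_time_screen_gaze import marker_generator

win = visual.Window(fullscr=True, units="pix", color=[1, 1, 1])  # white background

marker_pixels = np.asarray(marker_generator.generate_marker(marker_id=0, flip_x=True))
if marker_pixels.max() <= 1:            # defensively handle 0/1 output (assumption)
    marker_pixels = marker_pixels * 255
marker_image = Image.fromarray(marker_pixels.astype(np.uint8))

marker = visual.ImageStim(
    win,
    image=marker_image,
    size=(128, 128),                                    # on-screen size in px
    pos=(-win.size[0] / 2 + 96, win.size[1] / 2 - 96),  # near the top-left corner
    interpolate=False,                                  # keep the tag edges crisp
)
marker.draw()
win.flip()
```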
Hi @user-613324 , device.serial_number_glasses is for Pupil Invisible. For the Neon module, you want device.module_serial.
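Building on that, a connection check along these lines might work; the exact value module_serial reports when no module is attached is an assumption, so please verify it on your own device:

```python
from pupil_labs.realtime_api.simple import Device

device = Device(address="192.168.1.42", port="8080")  # placeholder IP

# module_serial should hold the Neon module's serial number when the module is
# physically attached to the phone. The sentinel returned when it is missing is
# an assumption here; check what your device actually reports when unplugged.
if device.module_serial in (None, "", "-1"):
    print("Companion phone reachable, but no Neon module detected")
else:
    print(f"Neon module {device.module_serial} is connected and ready")
device.close()
```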
Hi Rob, I have two follow-up questions: 1) For Device(address=ip, port=8080), is there a timeout argument we can specify, like in discover_one_device(max_search_duration_seconds=XX)? 2) For device.receive_eyes_video_frame(), I saw there's a timeout argument for this function in the manual, but I tried it and nothing changed. Sometimes, when the network is unstable or the connection is off, this function returns repeated messages in the console ("no RTP received for 10 seconds: closing", "unable to create requested socket pair"). I want to handle this exception in a better way, rather than letting it freeze and crash. For example, I'd like to set a timeout duration, and if no connection is found beyond this duration, guide the program to do something else. Can you give me a hint how to do it? Thanks!
thank you Rob for the swift reply! Yes, that's what I'm looking for. Thanks!
Hi @user-613324 , the timeout for receive_eye_video_frame only applies when there is no cached data. If it is not timing out, then it should be because you are actually receiving data, or at least have not yet drained the queue of cached data. Those messages are not exceptions but diagnostics, and are in principle harmless. The "unable to create requested socket pair" message is typically only encountered on Windows. If you are receiving data and are able to remotely control the device, then you have a working connection. Regarding a timeout parameter for the Device constructor, I will check with the team and get back to you.
Hi, is there a csv file or another way to get back the timestamps of the events we annotated in Cloud?
Hi @user-bdc05d! All your events are available in the events.csv file that is part of the Cloud export.
Ah, thanks!
Hey Pupil Labs, is there any way I can set a threshold for fixations and only identify fixations that are longer than a certain duration?
Hi @user-b6f43d, thanks for reaching out! You can do that by processing the raw binary files (i.e., the raw recording) using this GitHub repo. You'll need to edit the fixation detector script to use whichever threshold you're looking for. Our fixation detector is velocity-based, and you can learn more about it here.
It just occurred to me that if you're looking to only include fixations based on a minimum fixation duration, then you can simply filter the data in fixations.csv so that fixation duration > X, where X is your criterion (e.g., 2000 ms).
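For example, with pandas (assuming the 'duration [ms]' column of the Timeseries export):

```python
import pandas as pd

fixations = pd.read_csv("fixations.csv")

MIN_DURATION_MS = 2000  # keep only fixations longer than 2 s (your criterion)
long_fixations = fixations[fixations["duration [ms]"] > MIN_DURATION_MS]

print(f"{len(long_fixations)} of {len(fixations)} fixations exceed the threshold")
```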
My described method is perhaps way too much work for almost similar results (although it includes some sanity-checks). Specifically, I was under the impression that you wanted to manually specify how a fixation is computed by the fixation detector, i.e., two consecutive gaze samples that don't exceed certain velocity. Then, use this velocity threshold to look at all subsequent gaze samples until the n+1 sample exceeds the threshold. The fixation duration would then be the total number of gaze samples across time that stayed within the velocity threshold for this one fixation.
Dear support team, I'd like to know more about the "worn" attribute of the gaze data in the real-time API. I've been using the Neon headset for 3 weeks now and the worn attribute has always been True, even when the headset was on the desk. Do you have any clue for me? Thanks in advance.
Hi @user-a23b90, thanks for reaching out. The worn detector is not yet implemented for Neon. If you'd like to see this feature, please upvote it in the features-requests channel. See: https://discord.com/channels/285728493612957698/1047111711230009405/1243176459128930344
Hello Neon Team, I am new to using Neon and exploring the real-time-screen-gaze package. I am wondering how to connect the device to my local machine (not remotely), because I want to do the same task with a single-monitor setup.
Hi @user-23a0f0, thanks for reaching out! If your computer has a hotspot option, you can connect Neon to the hotspot. Otherwise, if you're trying to connect Neon directly via USB, you can use this USB hub.
Hello, wanted to ask if it was possible to make multiple Marker Mapper enrichments on the same workspace with the same April Tags used per visual stimulus, but different defined surfaces so that I can analyse areas of interest across different stimuli all in one workspace. As of now, the same image appears as the defined surface for all the marker mapper enrichments. The only solutions I can think of are using Reference Image mapper or adding all the projects to new Workspaces each time I want to do a MM enrichment (I have to make 56 of these), thanks.
Hi @user-324374, thanks for getting in touch! For clarification: the visualized surface beside your recording does not correspond to only that specific image. Rather, it's only a visualization of what the mapped surface looks like, not what its content is.
In my example, I am mapping the entire recording (as shown by the row named Neon Ad). The current surface visualized next to my recording only happens around 1 min into my recording (see Surface_1.png). If I refresh my page a few times, a new surface is visualized (see Surface_2.png).
To answer your question: yes, you can have multiple Marker Mapper enrichments using the same AprilTags. It's probably helpful to use events to mark the recording sections that you want to map, e.g., "Image X appears" -- "Image X disappears". Then, run the enrichment for all your recordings that have these events.
Edit: If I didn't address your question, could you please elaborate on what I missed?
Here is a screenshot of the Workspace after I ran the enrichment. As you can see, the image corresponding to the selected event space (left) is not the same as the image on the right. Whenever I try to visualise a Marker Mapper enrichment with an AoI heatmap it consequently gives me the image that was on the right instead of the one actually observed on the left. It has done the same for all subsequent Marker Mapper enrichments, but I will try again now and let you know the outcome. I have also just tried to run a Reference IM enrichment instead and it has given me an error, maybe because of mismatched reference images or the lack of white spaces on the april tags of some recordings (here's the RIM Enrichment ID: 2929a437-9760-41be-ba0a-8e0e002b0d0e)
Hi we have just carried out a research project eyetracking Regional Anaesthesiologists performing regional blocks where we had a marker on each corner of an ultrasound screen but we had different machines from different manufacturers so the shape and size of the screens would be different.
If I'm carrying out a Marker Mapper enrichment for the project, can it be applied to all the recordings with the different types of screens in the usual batch form, or do I have to do it individually for each specific machine type?
Hi @user-057596, may I clarify if you're trying to map several different screens (of different sizes)? If so, you'll need to run the enrichment on Pupil Cloud per screen separately.
Edit: Let me clarify - you just need to run 4 enrichments, one for each screen. Since each screen has unique AprilTags, the algorithm knows which tag ID it is (assuming the tags are always well detected).
Hi @user-07e923 they are different sized ultrasound screens 4 of them but they all had different combinations of markers
The next question regarding this project is we have 42 recordings which we want to download from Video Renderer but the size of the processed video is 4.7GB and we will have to remove things from our laptop to do so. Is it possible to download them to an external drive connected to the laptop without having to remove anything from our computer
Yes. You can absolutely download the videos to an external drive. You'll have to connect an external drive first before clicking the download icon. After you click download, you'll be prompted where to save the downloaded files.
Thank you @user-07e923, that was really helpful.
Hi @user-324374 ! The Marker Mapper should take the reference image you see on the right from one of the frames within the defined scope. If that is the case, would you mind inviting me to your workspace for further investigation?
Great, just sent an invite. Also, if I wanted to calibrate gaze data from native recordings using Pupil Player, how would I do this after adding Enrichments to the recordings on Pupil Cloud?
Hi @user-324374! We are checking the Marker Mapper issue and will get back to you soon. Feel free to open a ticket in the troubleshooting channel if you want to keep track of the progress there. Regarding calibrating the gaze data, do you mean modifying the gaze offset correction?
Yes, the gaze offset would need to be corrected individually on Pupil Player, but how would applying an enrichment to the modified data work?
Unfortunately, you can not re-upload recordings to the Cloud, which leaves you with three options:
1) Wait for https://discord.com/channels/285728493612957698/1247125277440741397 to be implemented. Soon™
2) Use the Surface Tracker in Neon Player (not Pupil Player, as Neon recordings are only compatible with Neon Player). That would be pretty similar to the Marker Mapper, although you cannot create AOIs or aggregate data there.
3) Compute the translated gaze coordinates yourself.
Personally, if you are not under a super urgent deadline, I would wait for 1. Otherwise, opt for 2.
Hi all, I have the following problem: I want to synchronize the recordings from two Neons using the UTC timestamps. I tested it by tapping a plastic token on a table and then plotting the audio recorded by the microphones, corrected for the difference in recording start times. Still, there was an offset of about a second. I then used Pupil Cloud to set a very precise event for when the token touches the table, and indeed the UTC timestamps for exactly that event differ by about a second. I already turned "automatic time" off and on in the phones' settings. Do you have an idea what might be the reason for the world timestamps differing so greatly? And is there a better way to align the recordings of the two Neons? Thanks in advance!
Hi @user-037674. Syncing two devices using their UTC timestamps is a valid approach. However, note that phone clocks can drift, and depending on when the sync-up to the NTP server was initiated, the offset can be up to about 1 second. A fresh sync-up might get you down to about 20 milliseconds for a few hours. This is all overviewed in this section of the docs, along with a way to further improve sync accuracy if required.
Tapping the token on the table, of course, provides a common event that will appear in the data streams of both phones. I've used similar approaches in the past when collecting data from multiple devices as a fail-safe.
Hi @nmt thanks for your response! I synced the clocks as described in your docs directly before I did the test, so unfortunately this did not solve the problem...
Hi @user-037674! Just to clarify, both phones were synced to the same NTP server, correct? If so, it's difficult to predict exactly why the offset is that big. If you do need more accuracy though, you can calculate the temporal offset using our real-time API - we provide a tutorial on how to do that here: https://docs.pupil-labs.com/neon/data-collection/time-synchronization/#improving-synchronization-further
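A sketch of how that per-phone offset estimation could look with the simple realtime API; the attribute names follow the docs but should be double-checked, and the IP addresses are placeholders:

```python
from pupil_labs.realtime_api.simple import Device

# One Companion phone per Neon; IP addresses are placeholders
for ip in ("192.168.1.21", "192.168.1.22"):
    device = Device(address=ip, port="8080")
    estimate = device.estimate_time_offset()
    # Mean offset between this computer's clock and the phone's clock (ms);
    # subtracting each phone's offset puts both recordings on a common clock.
    print(ip, estimate.time_offset_ms.mean, "ms")
    device.close()
```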
@nmt Hi, My Neon suddenly cannot get the eye picture, but the front RGB is good. What's going on? The image of the eye captured in the app on the phone is black.
Hi @user-a2d00f! Please open a ticket in the troubleshooting channel.
Since this morning, I have problems to upload a recording to pupil cloud. The recording is indicated in the workspace, but the video is black and further actions, such as moving to a project or downloading, are unavailable. What could be the case here? In an attempt to resolve the problem, I deleted the recordings from the cloud and tried to upload them again. However, they don't get uploaded a second time. Not automatically and I can't find a way to do this manually. Any suggestions would be appreciated.
Hello @user-817cfd! Sorry to hear you're having trouble with a recording. Could you please create a ticket in the troubleshooting channel so we can assist you better?
When creating the ticket, please include:
- Whether the Companion App shows the recording as fully uploaded.
- The recording ID: you can find it in the Cloud trash unless it was completely deleted. Please restore it, then right-click on it, select "Recording Information," and share the ID in the ticket.
For future reference, please do not delete recordings before confirming uploads, as they cannot be re-uploaded.
Hi! I am trying to download the enrichment from my workspace in the cloud, but I get this error:
{ "code": 500, "message": "internal server error", "request_id": "6d4465a4c397e53a8e943e4a8861ea3b", "status": "error" }
How can I download my data?
Hi @user-dd0489 - I saw that you already opened a ticket. Let's keep the conversation there.
Hi, we have just carried out research on regional anaesthesia last week in the cadaver lab at Dundee University, using a combination of three Neon glasses and one Invisible, all controlled by the Monitor app. All the devices were checked before we carried out the recordings to ensure that we were picking up sound, as it's an important aspect of our medical training research, but of the 42 recordings only 15 have sound on them. I wonder if this is a common issue.
Hi @user-057596! Thanks for reporting this. Can you please open a ticket in the troubleshooting channel and we can assist you there?
Hello, we are trying to use MATLAB's "pupil_labs_realtime_api" function in order to send event markers using an ethernet connection from an Ubuntu machine (Ubuntu 18.04.6 LTS)
We are having two issues, the first is that we're unsure whether the ethernet connection is successful since Ubuntu 18 does not have an option to select "shared to other computers" (as we were advised to select).
The second issue (assuming that the connection is established) is that calling the pupil_labs_realtime_api function in MATLAB produces the error: "Could not access server. Host not found: pi.local."
Any advice on setting up an ethernet connection and sending triggers via MATLAB would be much appreciated.
Hi @user-8f55ac , we will need to look into the solution for Ubuntu 18.04. I have time for this tomorrow. In the meantime, I at least want to point out that the current Matlab integration can be found in the pl-neon-matlab repository. Since you are planning to use Ethernet, then latency is likely of importance to you, so you will want to use that version, which takes advantage of Matlab's Python interface.
Could you try again with that version? It could be that the connection is working. The previous integration connects to pi.local by default, which is the automatic name given to Pupil Invisible devices, whereas Neon uses neon.local.
Also, just a note on the terminology in our ecosystem. With Neon, you do not exactly send triggers. Rather, you take one recording for a whole experimental session and send events that are logged in the recording timeline, such as "trial_1.started", "trial_1.ended", "trial_2.started", "trial_2.ended", etc. The names are up to you. Then, during analysis of that recording, you can filter the data based on these events.
Thank you so much - this information has been helpful.
We have now downloaded the pl-neon-matlab repository and we are using MATLAB version 2020b with a python environment 3.8. With these settings, calling pupil_labs_realtime_api now returns the error "could not access server. http://pi.local:8080/api/status". Calling the "Device" command returns an error "could not find any device".
Thank you for looking into any changes we may need to make given our version of Ubuntu. Hopefully it is just an issue with setting up the ethernet connection. Thanks!
Hi @user-8f55ac , I will look into the setup on Ubuntu 18.04 today.
Please note that you are mixing two versions of the Matlab integration:
- The previous version is the single function, pupil_labs_realtime_api.
- The current integration is contained in the pl-neon-matlab repository.
You can simply remove the pupil_labs_realtime_api file from your system and use the Device command of the current integration, as shown in this example.
Hi again @user-8f55ac , are you using the tested USB hub to connect Neon via Ethernet to your computer?
Hi @user-8f55ac , I was able to test it here with Ubuntu 18.04. Indeed, there are some extra steps on that version. I leave notes here for others and I have attached images to help show the steps.
On 18.04 the shared connection is configured with nm-connection-editor. Once you see the QR code in the stream section of the Companion app, then try running this example of the Matlab integration and let us know how it goes.
Hi everyone,
I have a question about Pupil Neon. In the analysis section, when I need to upload a photo to generate the heatmap, does this photo have to be taken with a camera, or can I extract it from my recording? Thank you!
Hi @user-2cc535! The heatmap is part of our visualization tools on Pupil Cloud. You can find more details on our docs.
Heatmaps can be generated once a Reference Image Mapper or a Marker Mapper enrichment has been completed.
- In the case of the Reference Image Mapper, the heatmap is generated over the reference image you selected for your enrichment. The reference image can be taken with any phone, or you can simply take a screenshot of your area of interest from your Neon recording and use it as the reference image.
- For the Marker Mapper, the heatmap is generated over the surface defined by your AprilTag markers. In that case, you don't need to take a photo.
I hope this helps, but let me know if you have any further questions.
Will do @nmt
Hi! I need to extend the USB-C cable of the Neon. I bought the UGREEN USB-C cable extender but that's not working. Any tips on how to get it to work, or any recommendations for the right type of extender cable?
Hi @user-5543ca , may I first ask why you need to extend Neon's cable? What are you planning to do with it?