👓 neon


Year

user-50082f 03 June, 2024, 05:28:03

The Neon Companion app does not report a disconnection from the glasses, yet I can't see the image from the eye cameras in the app. I'm not using a third-party USB cable when connecting, and I haven't updated the Android version. Please let me know if you have any suggestions on how to deal with it.

user-480f4c 03 June, 2024, 06:45:07

Hi @user-50082f! I see you've opened a ticket in 🛟 troubleshooting. We'll assist you there with some debugging steps to help you resolve this issue.

user-bdc05d 04 June, 2024, 07:36:37

Is it possible with the player to get the visualisations available on Cloud (heatmap, AOI heatmap), even after correcting an offset, trimming a video, etc.?

user-d407c1 04 June, 2024, 07:48:06

Hi @user-bdc05d 👋! Albeit different from those in Cloud, if you use the Surface Tracker in Neon Player, you can obtain heatmaps. Regarding the AOIs: Neon Player has no AOIs like in Cloud, nor a Reference Image Mapper, but you can build custom plugins for your use case.

May I ask what is the reason for not using Cloud?

user-bdc05d 04 June, 2024, 08:00:39

The custom plugin could be an idea, but I'm new to this, and as I have never done it before, I'm not sure I can manage within my deadline :/ (I'm on an internship 😅 and have never made a plugin before)

user-bdc05d 04 June, 2024, 07:49:48

Currently I have an offset on my recording data, but I would gladly use Cloud if I could modify this offset. It's too bad, as it is just a transform to apply to x and y 😦

user-d407c1 04 June, 2024, 08:09:25

Hi @user-bdc05d! Regarding https://discord.com/channels/285728493612957698/1247125277440741397, I recommend upvoting and following that post.

I think that rather than building a plugin, it would be much easier to apply the offset manually to the data and build a heatmap yourself.

user-bdc05d 04 June, 2024, 07:51:13

I already saw the Surface Tracker in Neon Player, but it does not work well on my data. I would have preferred mapping onto a static reference image, as in Cloud :/

user-d407c1 04 June, 2024, 08:10:20

Applying an offset over the data.

user-fcc31d 04 June, 2024, 10:29:49

Hello, I stumbled upon your website. The Neon module seems amazing, but I couldn't find where to purchase one. Is there any shop page that I'm missing?

user-07e923 04 June, 2024, 10:30:48

Hi @user-fcc31d, thanks for showing interest in our products! 🙂 Click on this link to access our shop page.

user-fcc31d 04 June, 2024, 10:31:40

Thanks, I'll have a look 😄

user-fcc31d 04 June, 2024, 10:39:28

What headsets is it compatible with? I can't find a clear answer on the website. I understand that a mount is offered just for the Pico at the moment, but I can build my own for others. Just to clarify, I'm interested in the Neon XR module.

user-95d0dd 04 June, 2024, 10:46:39

Hello, is there a way to change the view from the scene camera to the eye camera in Pupil Cloud to set events?

user-95d0dd 04 June, 2024, 10:51:09

And another question came up: is there any way (e.g., a boolean) to tell whether the test person has their eyes closed or open, to make sure the data we collect (e.g., pupil size) is valid? During our experiment the test person might have to close their eyes for a bit.

user-07e923 04 June, 2024, 11:19:22

We currently only sell mounts for the PICO 4 headset. For other mounts, you'll have to 3D-print and build them yourself. We're working on adding more mounts in the future, but I don't have the estimated time for it.

Just out of curiosity, which XR headset are you using? You might also want to check out the 🤿 neon-xr channel for all things related to Neon and VR/AR.

user-fcc31d 04 June, 2024, 11:39:39

I think I phrased the question wrong. I meant: what headsets (or devices) is it compatible with, not considering the mount (the mount is not a problem, I can design and 3D print it)? Or does it work with any system? Android, Windows, Linux, etc., anything that can run MRTK or OpenXR? In my day-to-day, I mainly work with Quest 2/3, HoloLens, and soon with Vision Pro. Also Android and iOS AR, standalone XR, and WebXR.

user-07e923 04 June, 2024, 11:21:31

Hi @user-95d0dd, thanks for reaching out! Eye videos are currently not available on Pupil Cloud. Feel free to post a request for this feature in the 💡 features-requests channel! As for eye openness, we're currently working on this metric. Unfortunately, I don't have a timeline for when it will be finished.

user-876d7f 05 June, 2024, 09:00:32

Regarding the eye openness: we did some tests yesterday to test the blink detection. We were hoping that a closed eye would be detected as a blink, so we can make sure to only use data when the eyes are open. However, it seems that the intentional blinks were not detected as blinks at all, maybe due to their longer duration compared to non-intentional blinks. As our test persons will work on a screen, it is very likely that they will also blink intentionally from time to time to keep their eyes moist. Therefore we are afraid that our data will be compromised. So we are wondering if the team can provide some support with our videos. Thanks for your attention.

user-07e923 04 June, 2024, 11:42:02

Could you re-post the question in 🤿 neon-xr? Thanks 🙂

user-fcc31d 04 June, 2024, 11:42:19

Sure

user-d407c1 04 June, 2024, 12:57:23

Do you mean like this alpha lab article ?

user-42cb18 04 June, 2024, 13:05:24

But this is for 2D images, right? Can I do the same over a video? I can see here (https://docs.pupil-labs.com/alpha-lab/multiple-rim/) that this is relevant to what I want to do. But how are the fixations depicted in this example calculated?

user-42cb18 04 June, 2024, 13:03:15

Yes, like this! πŸ™‚ I can see the script available, thank you!

user-d407c1 04 June, 2024, 13:48:13

Oh! I thought you wanted them over the reference image.

I don't think I have a snippet that plots fixations over a video, but the fixations CSV file contains the x and y coordinates for those fixations, and both Neon Player and Pupil Cloud (that's what you see in the link you shared) offer visualisations like the one you are looking for.

Could you develop what you are trying to achieve?

user-42cb18 04 June, 2024, 13:57:01

Yes, I will try develop it. Thank you for your help! 🙂

user-d407c1 04 June, 2024, 15:41:00

Hi @user-42cb18! With "develop" I did not mean to ask whether you can achieve your goal, but whether you could explain your ultimate goal in more detail so that I can better assist you. But, of course, feel free to develop the code for it.

user-42cb18 07 June, 2024, 10:22:00

Sorry for the misunderstanding! What I am trying to achieve is isolate the fixation area in an image frame.

user-7413e1 04 June, 2024, 16:27:42

Hi there, my enrichment for qr code tracking looks like this and I'm not sure why - can you help? It seems completely misplaced (this is the case over all frames of the video and different videos, including videos where it used to look fine). Thanks

Chat image

user-480f4c 04 June, 2024, 16:54:23

Hi @user-7413e1! Can you please open a ticket in 🛟 troubleshooting and share the enrichment ID there?

user-7413e1 04 June, 2024, 17:34:23

of course yes! sorry for posting in the wrong place

user-480f4c 05 June, 2024, 06:51:23

no worries πŸ™‚

user-23177e 05 June, 2024, 08:36:44

Hi all, we have a user in our team that is using the neon via a shared gmail account. We want to have them create their own account for Pupil instead. Is it possible to migrate the workspace they created using the shared gmail account to their personal account? I noticed that you can invite collaborators to a workspace, and make them admin, but is it possible to make them owner too? Furthermore, would this have any impact on the companion app? Can they login with their personal account and still have access to the previously recorded data (from the shared account)?

user-07e923 05 June, 2024, 09:10:02

Hi @user-23177e, thanks for reaching out! πŸ™‚ It's not possible to migrate the workspace. Also, you can't move files across workspaces. If you'd like this feature, feel free to upvote it: https://discord.com/channels/285728493612957698/1212410344400486481 !

Yes, you can invite this person to the workspace as a collaborator, and the person will get access to the recordings in that workspace.

Recordings are uploaded to the workspace of whichever account that's signed into the Neon Companion app. Importantly, recordings will only upload to the chosen workspace. For example, if the account that's signed into the app has access to many workspaces, please choose the desired workspace before starting recording. Once you enable Cloud upload on the app, the recordings are uploaded to the selected workspace.

user-23177e 05 June, 2024, 09:23:29

Ok great, thanks for your reply. I guess creating the personal account and giving them access to the existing workspace would be the answer.

user-07e923 05 June, 2024, 09:36:56

Hi @user-876d7f, let me clarify how the blink detection works. Our blink detection is based on duration thresholds. The minimum blink duration has to be 100 ms, but the maximum duration between blink onset and offset is 30 ms. You can read more in the blink detection white paper here.

In your case, your participants aren't blinking but intentionally closing their eyes. This means that the time between eyes closing and eyes opening is atypical and exceeds the duration threshold to be considered a blink. Thus, I'd recommend that you first look through the blink detection offline guide. Then, you'll be hacking into the blink detection script, and things will get quite experimental as you customize the script to your specific needs 😉.

For example, certain lines of our blink detector pipeline would be useful, since they help define the thresholds described in the white paper.

user-fcdbb6 05 June, 2024, 11:35:07

Hi, I am about to use the Pupil Neon for my multimodal brain imaging study, which also includes fNIRS. I am an electronics engineer. What I really need to know to mitigate potential interference is a) the wavelength of the NIR LEDs in the Neon's frame, b) the illumination pattern, if there is one (constant or PWM; if PWM, at roughly what frequency), and c) if possible, a ballpark estimate of the intensities used.

Would it be possible to tell me any or all of these details?

user-fcdbb6 05 June, 2024, 11:36:20

An independent question is what synchronization mechanisms with other modalities exist when it comes to higher precision (e.g., with EEG at the millisecond scale).

user-a23b90 05 June, 2024, 13:15:16

Hello! I'm trying to build a nice UI using the Neon headset and the Python real-time API. I want to use the gaze data to make nice dynamic graphs. After some attempts, I came here to ask a few questions about the gaze data variables in the async API, from eyeball_center_left_x to optical_axis_right_z. Here is the point: what do all of these represent exactly, and in which units? Thanks a lot for your reply 🙂

user-07e923 05 June, 2024, 14:38:44

Hi @user-a23b90, thanks for reaching out! 🙂 The eyeball center is relative to the scene camera of the Neon Module. The eyeball center variables are all in mm, and the optical axis variables are directional vectors, d. You can learn more about them here.

Btw, directional vectors are unit vectors that represent spatial direction and relative direction. See here.

I look forward to what you do with the real-time API!
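As a tiny, made-up illustration of how these fit together (all values hypothetical; everything is expressed in the scene-camera coordinate system, eyeball centers in mm, optical axes as unit direction vectors):

import numpy as np

# Hypothetical values for the left eye, in scene-camera coordinates
eyeball_center_left = np.array([-30.0, 10.0, -25.0])   # mm
optical_axis_left = np.array([0.10, -0.05, 0.99])       # direction vector d

# A point 1 m (1000 mm) in front of the eye along its optical axis:
d = optical_axis_left / np.linalg.norm(optical_axis_left)
point_on_axis = eyeball_center_left + 1000 * d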

user-ccf2f6 05 June, 2024, 21:23:18

Hi Pupil Labs,

We've found that the Neon recordings don't have any audio when the phone is locked and we use the recording start/stop API, but the audio is there when recording is triggered with the screen turned on and the Companion app active. This behaviour defeats the purpose of remotely controlling the recordings via the API. Is there a known workaround for this?

nmt 06 June, 2024, 00:39:28

Hi @user-ccf2f6! Thanks for the report. We've identified the cause and a fix will be in the next Companion App update. If you need something immediately, please reach out to us via email πŸ™‚

user-fcdbb6 06 June, 2024, 07:52:51

Dear support team, is this question too technically detailed? It would be very important to me to be able to do joint eyetracking with fnirs

user-d407c1 06 June, 2024, 08:11:03

Hi @user-fcdbb6 👋! It is not too technical 😅. I hadn't replied to you yet because I am double-checking the radiant intensity; please allow me to get back to you on this later.

The IR LEDs have a wavelength of ~860 nm (centroid 850 nm ± 42 nm Δλ) and the pattern should be constant.

Regarding how to sync with other modalities, we have support for Lab Streaming Layer and our real-time API has a time offset estimator. So, it really comes down to what capabilities your other sensors have and which kind of master clock you would like to use.

user-a23b90 06 June, 2024, 08:03:38

Hi Pupil Labs, I'd like to know whether the blink detection works well with the async API. Thanks for the help and thanks for your amazing work.

user-d407c1 06 June, 2024, 08:42:50

Hi @user-a23b90 👋! Do you mean the real-time blink detector? If so, it should work, but besides modifying the code shown in the notebook, you will need to modify the helper.py script within the blink_detector folder to use the async version of the real-time API.

user-fcdbb6 06 June, 2024, 08:13:27

Thank you @user-d407c1!! Constant would mean constantly turned on, yes?

user-d407c1 06 June, 2024, 08:43:28

Yes, but please allow me to get back to you with more details on the IR LEDs.

user-fcdbb6 06 June, 2024, 08:14:05

(LSL is great and solves a lot)

user-fcdbb6 06 June, 2024, 08:49:16

Thank you

user-d407c1 07 June, 2024, 10:34:26

To do so, using the Timeseries CSV files from Cloud or pl-rec-export would simplify things a lot.

I don’t have a snippet to share at the moment, but here’s a basic outline of the process:

1. Read the timestamps of the world frames.
2. Load the scene video into a PyAV container.
3. For each timestamp, read the frame.
4. Identify if there is a fixation using the start and end timestamps.
5. Use the x and y positions of the fixation to plot it.

Let me know if you need more details or any help!
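If it helps, here is a rough, untested sketch of that outline in Python. It assumes the Cloud Timeseries file and column names (world_timestamps.csv with 'timestamp [ns]', fixations.csv with 'start timestamp [ns]', 'end timestamp [ns]', 'fixation x [px]', 'fixation y [px]') and one timestamp row per scene-video frame; adjust to your own export:

import av
import cv2
import pandas as pd

# 1. Read the timestamps of the world frames
world_ts = pd.read_csv("world_timestamps.csv")["timestamp [ns]"].to_numpy()
fixations = pd.read_csv("fixations.csv")

# 2. Load the scene video into a PyAV container
container = av.open("scene_video.mp4")

# 3. For each frame, look up its timestamp
for i, frame in enumerate(container.decode(video=0)):
    ts = world_ts[i]
    # 4. Identify fixations whose start/end timestamps bracket this frame
    active = fixations[(fixations["start timestamp [ns]"] <= ts) & (fixations["end timestamp [ns]"] >= ts)]
    img = frame.to_ndarray(format="bgr24")
    # 5. Plot each active fixation at its x/y position (scene-camera pixels)
    for _, fix in active.iterrows():
        center = (int(fix["fixation x [px]"]), int(fix["fixation y [px]"]))
        cv2.circle(img, center, 20, (0, 0, 255), 3)
    cv2.imshow("fixations", img)
    if cv2.waitKey(1) == 27:  # press Esc to stop
        break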

user-42cb18 07 June, 2024, 10:40:42

Thank you for the feedback. 🙏

user-d407c1 07 June, 2024, 10:55:39

It stands for presentation timestamps; in the PyAV/FFmpeg context, you can find more info here.

user-876d7f 10 June, 2024, 06:53:00

Good morning :) For our setup we intend to use the Marker Mapper and therefore show the markers in the corners of a screen. Unfortunately, I couldn't find at which size I have to present the markers so that they work without occupying more screen space than actually needed. In this setup our test person will not be further away from the screen than 1 m. Thank you in advance for your support.

user-d407c1 10 June, 2024, 07:29:10

Hi @user-876d7f! Please have a look at this message regarding marker size and other considerations for on-screen markers: https://discord.com/channels/285728493612957698/285728493612957698/1220676625604153416

user-688acf 10 June, 2024, 09:23:15

Hello, I'm using Neon Player 4.1.4 to annotate videos and track surfaces; upon export, the player crashes. The log says:

2024-06-10 11:21:26,900 - player - [INFO] launchables.player: Created export dir at "XXXXX\neon_player\exports\001"
2024-06-10 11:21:29,760 - player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "launchables\player.py", line 670, in player
  File "observable.py", line 367, in call
  File "surface_tracker\surface_tracker_offline.py", line 597, in on_notify
TypeError: 'NoneType' object is not subscriptable

2024-06-10 11:21:29,780 - player - [INFO] launchables.player: Process shutting down.

Can someone look into that, please? Thanks in advance.

user-cdcab0 10 June, 2024, 09:49:57

Hi, @user-688acf - I'm unable to reproduce this error. Would it be possible for you to share your recording folder with us? If you aren't comfortable sharing it publicly, you can email it to data@pupil-labs.com

user-688acf 10 June, 2024, 10:30:24

Hello, thanks for the quick reply. Unfortunately, I'm unable to share the recording at all because it contains 3rd-party data. However, I was able to resolve the issue by deleting the entire export folder and re-running everything. But I think someone should take a look at that uncaught NoneType exception; it's pretty annoying when the whole player crashes and you essentially lose all the annotations etc.

user-23cf38 10 June, 2024, 10:21:28

Hello, we are currently experiencing immediate app crashes after opening the Companion app (v8.15) on two separate devices. Our 3rd device has version 8.10 installed and works. Is there a problem with the new version?

user-d407c1 10 June, 2024, 10:33:01

Hi @user-23cf38! Do you mean 2.8.15? Could you create a ticket in 🛟 troubleshooting so that we can better assist you?

user-ff2367 10 June, 2024, 13:22:11

Hello everyone, how are you? I'm trying to use the new Map Gaze Into a User-Supplied 3D Model feature, but I'm having some problems with the 3D model, since it's the first time I've used this type of data. I used the 3D Scanner app to create the model of my room and now I'm looking for the tag information. Any tips on how I can get the position and size in the output space units? Should I use Blender or another kind of 3D software, for example? If so, maybe someone has a tutorial link they can send me?

user-068eb9 10 June, 2024, 13:38:02

Hello, how are saccade amplitudes in degrees calculated in Neon given that there is no object distance information?

user-f43a29 10 June, 2024, 14:45:02

Hi @user-068eb9 , we would also like to ask what you mean by β€œobject distance information” and how that factors into saccade amplitude in your case?

user-07e923 10 June, 2024, 14:19:11

Hi @user-068eb9, thanks for reaching out πŸ™‚ We consider the gaps between fixations as saccades, and assume that there are no smooth pursuit eye movements. You can learn more here.

user-f43a29 10 June, 2024, 16:47:38

Hi @user-ff2367 , great to hear that you are trying this! Am I correct that you put an AprilTag on the wall of your room and made a recording with it?

Instructions can be found in the repo. Please also see @user-cdcab0 's post here (https://discord.com/channels/285728493612957698/1250066927980642395/1250232074611200061), which shows an easy and effective way to get the position, size, and rotation values of your AprilTag.

For context, @user-cdcab0 programmed Tag Aligner πŸš€ ❀️

Tag Aligner is independent of 3D software. The units and coordinate system are completely up to you. You can place the origin of your coordinate system at the bottom left of the door or at the center of your desk. You measure the center of the AprilTag with respect to this origin in your 3D software's units, meters, feet, etc. The choice is yours. For example, @user-cdcab0 's post shows an easy way to specifying everything in Blender's system.

In the case of the the Alpha Labs article, meters were the units and a ruler was used (see attached image). The foot of the statue was chosen as the origin and the AprilTag was 0.705 m upwards from that point. The size of the tag's black side was measured as 0.135m. This was put into the tag's json file (see step 0.3 of the repo). If you decide to do it this way, then you should scale your model to the same dimensions. For the Alpha Lab article, Blender's measure tool was used as a guide. The top of the statue podium was about ~1.27m above the foot of the statue.

Please let us know if you have other questions

Chat image

user-ff2367 11 June, 2024, 11:19:01

Hello @user-f43a29, thank you very much for the explanation!! It was quite enlightening. Also, do you have any tips for checking the tag orientation? This part also left me a little confused. I believe I'm specifying the wrong orientation, because when I ran the model, the motion and gaze within the 3D model were incorrect.

user-d407c1 11 June, 2024, 09:35:58

Hi @user-fcdbb6! The continuous radiance values measured at 0.2m are 205 W sr⁻¹ m⁻². Let us know if you need anything else.

user-fcdbb6 11 June, 2024, 16:10:21

Thank you @user-d407c1 this is helpful. Just to make really sure: you are also confirming the illumination scheme is continuous, and not multiplexed (PWM/TDM,...)?

user-f43a29 11 June, 2024, 12:39:40

Hi @user-ff2367 , you are welcome!

The tag orientation needs to be specified in the json file as a quaternion in the OpenCV coordinate system. How did you hang and orient your AprilTag?

If you have made a 3D model of your scene in Blender, then you can make everything much easier by following @user-cdcab0 's instructions here (https://discord.com/channels/285728493612957698/1250066927980642395/1250232074611200061). The process should be similar for other 3D modelling software.

user-ff2367 11 June, 2024, 16:07:47

Hello, @user-f43a29 thank you very much again!

user-cdcab0 11 June, 2024, 23:35:54

[email removed] , you are welcome!

user-b7c82e 12 June, 2024, 00:02:04

Hi I am new to Neon and I'm trying to use real-time-screen-gaze. I generated april tag markers on the four corners with marker size 128x128 and 16-pixel white padding. When I specify coordinates in marker_verts, should I include the white padding, or only list the corners of the markers?

user-cdcab0 12 June, 2024, 00:22:05

Hi, @user-b7c82e - do not include the white padding. Only the black square of the markers is considered.
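Purely as a hypothetical illustration (numbers made up): if the black square of the top-left marker spans screen pixels (16, 16) to (112, 112), its entry in marker_verts would list just those four corners, e.g.:

# Corners of the black square only, in screen pixels; the white padding is excluded
marker_verts = {
    0: [(16, 16), (112, 16), (112, 112), (16, 112)],  # marker id 0, top-left of the screen
    # ... the other three marker ids follow the same pattern
}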

user-b7c82e 12 June, 2024, 00:31:50

Thanks!!

user-f43a29 12 June, 2024, 07:37:09

@user-cdcab0 Thank you for providing those Tag Aligner + Blender instructions!

For anyone who referenced my previous posts, I recommend to also follow @user-cdcab0 's instructions here: https://discord.com/channels/285728493612957698/1250066927980642395/1250232074611200061

You will have a much easier time determining the rotation and position of your AprilTag πŸ˜„

user-429d6a 13 June, 2024, 14:38:56

Hello! My name is Ryan. I am using Neon to track where on the spectacle lens a subject is looking. I've used the optical axis vector in 3d_eye_states.csv with good success. However, it's come to my attention that, from a physiological standpoint, it makes more sense to track the visual axis. This is typically a few degrees off from the optical axis of the eye.

Based on what I've read on the Discord and on the Neon website, the gaze information provided by Neon Player is the visual axis. The only problem is that it's in terms of the world camera coordinate system. Is it possible to get the gaze of each eye as a unit vector, like how the optical axis is provided?

"gaze_point_3d_x (and y, z)" in gaze_positions.csv is the closest I've found, but it is still in world camera coordinates. Thank you in advance!

user-d407c1 13 June, 2024, 15:54:39

Hi @user-429d6a 👁️‍🗨️! You are correct that we provide the "optical axis" in the eye states and not the visual axis per eye, and that the latter would be more correct for your application.

Technically, you should be able to compute the "visual axis"; you have two vectors:

  • [gaze.csv]: V1 is defined by its azimuth and elevation angles with its origin at (0, 0, 0) (scene camera origin).
  • [eye_state.csv]: V2 has a direction given by coordinates (x_1, y_1, z_1) and its origin at (x_0, y_0, z_0) .

To get the "visual axis" vector for the eye, you could convert the gaze vector from spherical coordinates to Cartesian; then, as both vectors are normalised, compute the dot product and finally the angle between them.
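As a rough sketch of that computation (the example values and the spherical-to-Cartesian sign convention are assumptions to verify against your own export):

import numpy as np

def gaze_to_cartesian(azimuth_deg, elevation_deg):
    # Convert gaze azimuth/elevation (scene-camera frame) to a unit vector;
    # check the sign convention against your data before trusting the result.
    az, el = np.deg2rad(azimuth_deg), np.deg2rad(elevation_deg)
    return np.array([np.cos(el) * np.sin(az), -np.sin(el), np.cos(el) * np.cos(az)])

# V1: gaze ("visual axis") direction from gaze.csv, example values
v1 = gaze_to_cartesian(azimuth_deg=5.0, elevation_deg=-2.0)

# V2: optical axis of one eye from the eye-state export, example values
v2 = np.array([0.05, -0.03, 0.998])
v2 /= np.linalg.norm(v2)

# Angle between the two axes
angle_deg = np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))
print(f"angle between gaze and optical axis: {angle_deg:.2f} deg")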

But it can get a bit more complicated, and it will depend on your definition of the visual axis; take for example the attached diagram, based on Bennett & Rabbetts' Clinical Visual Optics.

Also, I think the pantoscopic angle on Just Act Natural is a bit more than average, but don't quote me on that.

Chat image

user-429d6a 13 June, 2024, 20:08:17

Hi @user-d407c1, thank you for your helpful response. I was able to convert the elevation/azimuth to x,y,z. After thinking more about my problem, I've come across another question that I think you could help answer. Is the gaze data in "gaze.csv" calculated from the optical axis data? If so, perhaps there is some code I could reference for my own project.

user-d407c1 14 June, 2024, 06:37:41

Visual axis

user-a09f5d 14 June, 2024, 14:06:51

Hello, I have a question about saving events with custom fields over the Neon API. I have just started experimenting with our lab's new Neon and its API. I have been using the Pupil Core for many years and rely heavily on the API's ability to save annotations with custom fields, enabling me to save data such as stimulus condition, order, and other settings in multiple columns alongside the standard 'label' column. Is it possible to also save multiple columns when saving events with the Neon? I had assumed it would be possible given how easy it was with the Core, but I haven't been able to find any information in the Neon documentation on how to do this.

user-480f4c 14 June, 2024, 14:34:50

Hi @user-a09f5d. To send events using Neon's Real-time API, you can use the send_event function that takes as input arguments the event name and the current timestamp (see here for some examples).

Although more columns cannot be added, you can still create custom names (incl. experimental variables) within your experiment (e.g., like in this tutorial). For example, assuming you have a matrix with your experimental conditions and a script that loops over your experimental trials based on this matrix, you can easily encode the stimulus condition or order in your event's name.
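A minimal sketch of that idea (the condition names and values are made up; send_event is from pupil_labs.realtime_api.simple):

import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# Hypothetical condition matrix: one dict per trial
trials = [
    {"block": 1, "distance": 0.5, "position": "left"},
    {"block": 1, "distance": 1.0, "position": "right"},
]

for i, trial in enumerate(trials):
    # Encode the experimental variables directly in the event name
    name = f"trial{i}.start_block{trial['block']}_dist{trial['distance']}_pos{trial['position']}"
    device.send_event(name, event_timestamp_unix_ns=time.time_ns())
    # ... present the stimulus ...
    device.send_event(f"trial{i}.end")

device.close()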

user-a09f5d 14 June, 2024, 15:13:01

Hi @user-480f4c, I have been playing with the send_event function and have gotten it working (including the example you shared). However, what I was hoping to do is save multiple columns alongside the event label, in the same way as I do with the Pupil Core API. For instance, when I export the annotations from the Pupil Core, I get a csv file that looks like this example (see attached screenshot), which is an efficient and easily analysed way of saving all of the experiment parameters alongside the event (label) of interest.

However, based on your response this isn't possible with the Neon, which is a shame! Is there a reason why this feature was removed or is it just not compatible when using a companion device?

I guess one workaround is for me to save all of the parameters within a single event label (so they all share the same timestamp), e.g. device.send_event(f"description.of.event_{ID}_{age}_{sex}_{block}_{TargetDistance}_{TargetPosition}"), and then use a post-hoc script to separate the parameters into different columns using the '_' separator. Do you see any reason why this wouldn't work? Is there a character limit on how long an event label can be?
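That post-hoc step could look roughly like this (assuming the Cloud export's events.csv with its 'name' column and events that all contain the same number of '_'-separated fields):

import pandas as pd

events = pd.read_csv("events.csv")

# Split "description.of.event_ID_age_sex_block_dist_pos" into separate columns
fields = ["label", "ID", "age", "sex", "block", "TargetDistance", "TargetPosition"]
events[fields] = events["name"].str.split("_", expand=True)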

Chat image

user-0001be 14 June, 2024, 15:16:28

If I’m wearing the module in a sound booth (faraday cage) after I calibrate the IMU outside, will it be okay for it to detect the magnetic north?

user-f43a29 18 June, 2024, 10:40:00

Hi @user-0001be, for accurate yaw orientation, the IMU requires continuous readings of magnetic north, even after the calibration. If these are disrupted, yaw orientation will be inaccurate. You can try it to see how it works out for you, but it is likely that the Faraday cage will interfere with the IMU. We would recommend you validate any measurements with a real compass, used outside the faraday cage.

user-480f4c 14 June, 2024, 15:19:16

Your approach would work; however, I am wondering: why would you want to code the subject ID, age, and gender in every event? You could simply save the final csv file with this information in its name (e.g., ID1_33_F.csv). To keep things tidy and clean for your data collection, I'd simply add the event-related information to the event name, that is: {block}_{TargetDistance}_{TargetPosition}

I can't see a reason why the number of characters would be an issue, but I can test it out on Monday and let you know 🙂

user-a09f5d 14 June, 2024, 15:42:35

So the main reason that this fixed information cannot be saved within the file name of the csv file is simply because the file name would be too long (there is a LOT of additional fixed information I'd need to save that is not included in the provided example). This would be a problem because Windows can't open files when the file path is too long.

You are right that fixed information doesn't need to be included in every event label though. What I would probably end up doing is including the fixed information in a single event at the start of the recording and then just including the dynamic information within each event thereafter. I think in this example I was just trying to replicate the output I'm currently able to get from the core (in the past the redundancy never caused any issues and just made sub-setting easier).

Thanks for the help.

user-cdcab0 14 June, 2024, 16:16:32

For all the info that doesn't change within a recording, you could send a single event at the beginning of the recording that contains all of that info. You could also send one event per piece of info (then you don't have to worry so much about the length)

user-a09f5d 14 June, 2024, 19:54:56

For sure! My current plan is to include the fixed information in a single event at the start of the recording and then just including the dynamic information within each event thereafter. Thanks!

user-309101 14 June, 2024, 17:16:20

How is missing data coded within the gaze.csv exported file? I am looking at a sample file of 10 minutes of recording and there are zero NA's or empty rows for the gaze position (i.e., gaze x [px]); it seems likely that there would be missing data for a few of the samples.

user-07e923 15 June, 2024, 04:50:54

HI @user-309101, thanks for reaching out πŸ™‚ May I know what you mean by "there would be missing data for a few samples"? What is the assumption behind this?

Are you looking at blinks, or data where the glasses aren't being worn, etc?

user-309101 17 June, 2024, 13:35:06

HI @user-07e923. Thanks for the response. I work in clinical research with children and in typical eye-tracking experiments there is data loss due to movement, closing eyes, rubbing hands on eyes, jumping out of the chair, etc. Using Neon is newer for me; I guess I just assumed there would be times when the eye-tracker could not find the eyes (above and beyond blinks). Ultimately, I am trying to calculate some type of data quality metric to demonstrate feasibility in my intended participant sample.

user-07e923 17 June, 2024, 14:26:35

Thanks for the clarification!

Data loss due to movements like jumping out of a chair, or eyelid occlusion, is likely when using more traditional eye trackers. This is because they often rely on tracking features of the eye, like the pupil or corneal reflections, and have a calibration process prior to recording. In conditions where these features aren’t visible, the eye trackers will output empty data because nothing can be estimated. In addition, headset slippage, often induced by dynamic movements, can degrade the accuracy of the calibration.

Neon is different. It uses deep learning to estimate gaze, which is essentially calibration-free and very robust to slippage. This makes factors like glasses slippage and head or body movements non-issues for Neon.

We don't provide a data quality metric on a per-user level with the gaze produced. However, we have a thorough accuracy evaluation of Neon that shows its average performance across a large cohort of wearers during various activities. This evaluation will give you a better sense of how it works and the feasibility of using it for your intended participant sample in a given scenario. I highly recommend reading it. We also have a blink detector, so you can easily filter out periods of gaze data during blinks in the exports.
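A small sketch of that blink filtering (assuming the Cloud export's gaze.csv and blinks.csv and their 'timestamp [ns]' / 'start timestamp [ns]' / 'end timestamp [ns]' columns):

import pandas as pd

gaze = pd.read_csv("gaze.csv")
blinks = pd.read_csv("blinks.csv")

# Flag gaze samples that fall inside any blink interval
in_blink = pd.Series(False, index=gaze.index)
for _, blink in blinks.iterrows():
    in_blink |= gaze["timestamp [ns]"].between(blink["start timestamp [ns]"], blink["end timestamp [ns]"])

gaze_without_blinks = gaze[~in_blink]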

If you did want a data quality metric that can be outputted in Neon's data streams, please add that as a feature request in 💡 features-requests 🙂

user-613324 17 June, 2024, 15:28:33

Hi Neon team, I'm using device = Device(address=ip, port=8080) to locate the Neon eye tracker. However, this code returns the OnePlus phone that the eye tracker is supposed to connect to. When the eye tracker is not connected to this phone, the code still returns the device object as long as the phone is on the network. Is there any way to use the Device() function to tell whether the eye tracker is properly connected and ready to send_event or stream the eye video? I'd like to achieve the same effect as discover_one_device(), namely returning None when the eye tracker is disconnected, but I need to use the IP address.

user-613324 17 June, 2024, 17:49:29

To continue with this issue, I tried to use device.serial_number_glasses to see if the device is connected to an eye tracker. However, when I tested it, this function always returns -1, no matter whether the eye tracker is connected or not. Even when the computer and the eye tracker were communicating well (e.g., streaming the eye video and send_event() worked fine), device.serial_number_glasses still returned -1, where it should return the serial number of the eye tracker. Did I do something wrong, or is it a bug that needs to be fixed? I'm using the updated version of the realtime api (1.2.1).

user-b7c82e 17 June, 2024, 17:31:31

Hi Neon team, I was testing out the real-time-screen-gaze package and generated 256x256 markers with white borders (48 pixels). Visual stimuli were drawn using PsychoPy. gaze_mapper was able to create a surface, but it failed to produce surface_gaze coordinates. I tried to tag the markers offline using Marker Mapper on Pupil Cloud, with no luck either.

Chat image

user-cdcab0 17 June, 2024, 21:50:26

Your tags are flipped. If you're generating the marker images using the marker_generator from the real_time_screen_gaze package, you have to flip them manually for PsychoPy:

from pupil_labs.real_time_screen_gaze import marker_generator
marker_data = marker_generator.generate_marker(marker_id=0, flip_x=True)
user-f43a29 17 June, 2024, 21:45:24

Hi @user-613324 , device.serial_number_glasses is for Pupil Invisible. For the Neon module, you want device.module_serial.

user-613324 18 June, 2024, 00:33:44

Hi Rob, I have two follow-up questions: 1) For Device(address=ip, port=8080), is there a timeout argument we can specify, like in discover_one_device(max_search_duration_seconds=XX)? 2) For device.receive_eyes_video_frame(), I saw there's a timeout argument for this function in the manual, but I tried it and nothing changed. Sometimes, when the network is unstable or the connection is off, this function will print repeated messages in the console ("no RTP received for 10 seconds: closing", "unable to create requested socket pair"). I want to handle this exception in a better way, rather than letting it freeze and crash. So, for example, I want to set a timeout duration, and if no connection is found beyond this duration, guide the program to do something else. Can you give me a hint how to do it? Thanks!

user-613324 17 June, 2024, 23:19:26

thank you Rob for the swift reply! Yes, that's what I'm looking for. Thanks!

user-f43a29 18 June, 2024, 10:03:35

Hi @user-613324, the timeout for 'receive_eyes_video_frame' only applies when there is no cached data. If it is not timing out, then it should be because you are actually receiving data, or at least have not yet drained the queue of cached data. Those messages are not exceptions but diagnostics, and are in principle harmless. The "unable to create requested socket pair" message is typically only encountered on Windows. If you are receiving data and are able to remotely control the device, then you have a working connection. Regarding a timeout parameter for the Device constructor, I will check with the team and get back to you.
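One way to handle this on the client side, sketched under the assumption (per the API docs) that the simple-API receive calls return None when the timeout expires without data:

from pupil_labs.realtime_api.simple import Device

device = Device(address="192.168.1.42", port=8080)  # example IP

frame = device.receive_eyes_video_frame(timeout_seconds=5)
if frame is None:
    # Nothing arrived within 5 s: fall back instead of blocking indefinitely
    print("No eye video received; check the connection and the Companion app")
else:
    print("Received an eye-video frame")

device.close()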

user-bdc05d 21 June, 2024, 07:22:38

Hi, is there a csv file or another way to get back the timestamps of the events we annotated in the Cloud? 🙂

user-480f4c 21 June, 2024, 07:26:17

Hi @user-bdc05d! All your events are available in the events.csv file that is part of the Cloud export.

user-bdc05d 21 June, 2024, 07:32:26

Ah, thanks!

user-b6f43d 21 June, 2024, 15:24:32

Hey Pupil Labs, is there any way I can set a threshold for fixations and only identify fixations that are longer than a certain number of seconds?

user-07e923 21 June, 2024, 15:48:28

Hi @user-b6f43d, thanks for reaching out 🙂! You can do that by processing the raw binary files (i.e., the raw recording) using this GitHub repo. You'll need to edit the fixation detector script to whichever threshold you're looking for. Our fixation detector is velocity-based, and you can learn more about it here.

user-07e923 22 June, 2024, 14:26:18

It just occurred to me that if you're looking to only include fixations based on a minimal fixation duration, then you can simply filter out the data in the fixations.csv so that the fixation duration > X, where X is your criteria (e.g., 2000 ms).

My described method is perhaps way too much work for almost similar results (although it includes some sanity-checks). Specifically, I was under the impression that you wanted to manually specify how a fixation is computed by the fixation detector, i.e., two consecutive gaze samples that don't exceed certain velocity. Then, use this velocity threshold to look at all subsequent gaze samples until the n+1 sample exceeds the threshold. The fixation duration would then be the total number of gaze samples across time that stayed within the velocity threshold for this one fixation.
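A sketch of that simpler filtering approach (assuming fixations.csv from the Cloud export and its 'duration [ms]' column):

import pandas as pd

fixations = pd.read_csv("fixations.csv")

# Keep only fixations longer than 2000 ms; adjust the threshold to your needs
long_fixations = fixations[fixations["duration [ms]"] > 2000]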

user-a23b90 24 June, 2024, 12:39:06

Dear support team, I'd like to know more about the "worn" attribute of the gaze data within the real-time API data. I've been using the Neon headset for 3 weeks now and the worn attribute has always been True, even when the headset was on the desk. Do you have any clue for me? Thanks in advance.

user-07e923 24 June, 2024, 12:57:04

Hi @user-a23b90, thanks for reaching out 🙂. The worn detector is not yet implemented for Neon. If you'd like to see this feature, please upvote it in 💡 features-requests. See: https://discord.com/channels/285728493612957698/1047111711230009405/1243176459128930344

user-23a0f0 24 June, 2024, 13:19:50

Hello Neon Team, I am new to using Neon. I am exploring the real-time-screen-gaze package. I am wondering how to connect the device to my local machine (not remotely), because I want to run the same task with a single-monitor setup.

user-07e923 24 June, 2024, 14:26:55

Hi @user-23a0f0, thanks for reaching out! If your computer has a hotspot option, you can connect Neon to the hotspot. Otherwise, if you're trying to connect Neon directly via USB, you can use this USB hub.

user-324374 24 June, 2024, 14:57:08

Hello, I wanted to ask if it is possible to make multiple Marker Mapper enrichments in the same workspace, with the same AprilTags used per visual stimulus but different defined surfaces, so that I can analyse areas of interest across different stimuli all in one workspace. As of now, the same image appears as the defined surface for all the Marker Mapper enrichments. The only solutions I can think of are using Reference Image Mapper or adding all the projects to new workspaces each time I want to do a MM enrichment (I have to make 56 of these). Thanks.

user-07e923 24 June, 2024, 15:03:54

Hi @user-324374, thanks for getting in touch 🙂 For clarification: the visualized surface beside your recording does not correspond to only that specific image. Rather, it's only a visualization of what the mapped surface looks like, not what its content is.

In my example, I am mapping the entire recording (as shown by the row named Neon Ad). The current surface visualized next to my recording only happens around 1 min into my recording (see Surface_1.png). If I refresh my page a few times, a new surface is visualized (see Surface_2.png).

Chat image Chat image Chat image

user-07e923 24 June, 2024, 15:07:20

To answer your question: yes, you can have multiple Marker Mapper enrichments using the same AprilTags. It's probably helpful to use events to mark the recording section that you want to map, e.g., "Image X appears" / "Image X disappears". Then, run the enrichment for all your recordings that have these events.

Edit: If I didn't address your question, could you please elaborate on what I missed? 😅

user-324374 24 June, 2024, 15:28:35

Here is a screenshot of the workspace after I ran the enrichment. As you can see, the image corresponding to the selected event space (left) is not the same as the image on the right. Whenever I try to visualise a Marker Mapper enrichment with an AOI heatmap, it consequently gives me the image that was on the right instead of the one actually observed on the left. It has done the same for all subsequent Marker Mapper enrichments, but I will try again now and let you know the outcome. I have also just tried to run a Reference Image Mapper enrichment instead and it has given me an error, maybe because of mismatched reference images or the lack of white space around the AprilTags in some recordings (here's the RIM enrichment ID: 2929a437-9760-41be-ba0a-8e0e002b0d0e).

Chat image Chat image

user-057596 24 June, 2024, 15:35:22

Hi, we have just carried out a research project eye tracking regional anaesthesiologists performing regional blocks, where we had a marker on each corner of an ultrasound screen. However, we had different machines from different manufacturers, so the shape and size of the screens differ.

If I'm carrying out a Marker Mapper enrichment for the project, can it be applied to all the recordings with the different types of screens in the usual batch form, or do I have to do it individually for each specific machine type?

user-07e923 24 June, 2024, 15:37:42

Hi @user-057596, may I clarify if you're trying to map several different screens (of different sizes)? If so, you'll need to run the enrichment on Pupil Cloud per screen separately.

Edit: Let me clarify - you just need to run 4 enrichments, one for each screen. Since each screen has unique AprilTags, the algorithm knows which tag ID it is (assuming the tags are always well detected).

user-057596 24 June, 2024, 15:40:10

Hi @user-07e923, they are different-sized ultrasound screens, 4 of them, but they all had different combinations of markers.

user-057596 24 June, 2024, 15:44:49

The next question regarding this project: we have 42 recordings which we want to download from the Video Renderer, but the size of the processed video is 4.7 GB and we would have to remove things from our laptop to do so. Is it possible to download them to an external drive connected to the laptop without having to remove anything from our computer?

user-07e923 24 June, 2024, 15:47:41

Yes. You can absolutely download the videos to an external drive. You'll have to connect an external drive first before clicking the download icon. After you click download, you'll be prompted where to save the downloaded files.

user-057596 24 June, 2024, 16:01:57

Thank you @user-07e923, that was really helpful. 💪🏻

user-d407c1 25 June, 2024, 07:33:35

Hi @user-324374 ! The Marker Mapper should take the reference image you see on the right from one of the frames within the defined scope. If that is the case, would you mind inviting me to your workspace for further investigation?

user-324374 25 June, 2024, 12:49:06

Great, just sent an invite. Also, if I wanted to calibrate gaze data from native recordings using Pupil Player, how would I do this after adding Enrichments to the recordings on Pupil Cloud?

user-d407c1 25 June, 2024, 16:04:04

Hi @user-324374! We are checking the Marker Mapper issue and will get back to you soon. Feel free to open a ticket in 🛟 troubleshooting if you want to keep track of the progress there. Regarding calibrating the gaze data, do you mean modifying the gaze offset correction?

user-324374 25 June, 2024, 16:06:21

Yes, the gaze offset would need to be corrected individually on Pupil Player, but how would applying an enrichment to the modified data work?

user-d407c1 25 June, 2024, 16:15:39

Unfortunately, you cannot re-upload recordings to the Cloud, which leaves you with three options:

1) Wait for https://discord.com/channels/285728493612957698/1247125277440741397 to be implemented 😉. Soon™️
2) Use the Surface Tracker in Neon Player (not Pupil Player, as Neon recordings are only compatible with Neon Player). That would be pretty similar to the Marker Mapper, although you cannot create AOIs or aggregate data there.
3) Compute the translated gaze coordinates yourself.

Personally, if you are not under a super urgent deadline, I would wait for 1. Otherwise, opt for 2.

user-037674 25 June, 2024, 18:38:57

Hi all, I have the following problem: I want to synchronize the recordings from two Neons using the UTC timestamps. I tested it by tapping a plastic token on a table and then plotting the audio recorded by the microphones, corrected for the difference in recording start times. Still, there was an offset of about a second. I then used Pupil Cloud to set a very precise event for when the token touches the table, and indeed the UTC timestamps for exactly that event differ by about a second. I have already turned "automatic time" off and on in the phones' settings. Do you have an idea what might be the reason for the world timestamps differing so greatly? And is there a better way to align the recordings of the two Neons? Thanks in advance 🙂

nmt 26 June, 2024, 02:46:19

Hi @user-037674 πŸ‘‹. Syncing two devices using their UTC timestamps is a valid approach. However, note that phone clocks can experience drift, and depending on when the sync-up to the NTP server was initiated, it can be up to about 1 second. A fresh sync-up might get you down to about 20 milliseconds for a few hours. This is all overviewed in this section of the docs, along with a way to further improve sync accuracy if required.

Tapping the token on the table, of course, provides a common event that will appear in the data streams of both phones. I've used similar approaches in the past when collecting data from multiple devices as a fail-safe πŸ™‚

user-037674 26 June, 2024, 02:49:02

Hi @nmt thanks for your response! I synced the clocks as described in your docs directly before I did the test, so unfortunately this did not solve the problem...

nmt 26 June, 2024, 03:15:48

Hi @user-037674! Just to clarify, both phones were synced to the same NTP server, correct? If so, it's difficult to predict exactly why the offset is that big. If you do need more accuracy though, you can calculate the temporal offset using our real-time API - we provide a tutorial on how to do that here: https://docs.pupil-labs.com/neon/data-collection/time-synchronization/#improving-synchronization-further
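For reference, a small sketch of that offset measurement with the real-time API (based on the linked tutorial; treat the exact attribute names as something to verify against your installed version):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# Measure the clock offset between this computer and the Companion phone
estimate = device.estimate_time_offset()
if estimate is not None:
    print(f"Mean clock offset: {estimate.time_offset_ms.mean} ms")
    print(f"Mean roundtrip duration: {estimate.roundtrip_duration_ms.mean} ms")

device.close()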

user-a2d00f 26 June, 2024, 04:32:41

@nmt Hi, my Neon suddenly cannot capture the eye image, but the front RGB camera is fine. What's going on? The eye image shown in the app on the phone is black.

nmt 26 June, 2024, 04:33:27

Hi @user-a2d00f! Please open a ticket in 🛟 troubleshooting 🙂

user-817cfd 26 June, 2024, 13:55:11

Since this morning, I have had problems uploading a recording to Pupil Cloud. The recording shows up in the workspace, but the video is black and further actions, such as moving it to a project or downloading it, are unavailable. What could be the cause here? In an attempt to resolve the problem, I deleted the recordings from the Cloud and tried to upload them again. However, they don't get uploaded a second time: not automatically, and I can't find a way to do this manually. Any suggestions would be appreciated.

user-d407c1 26 June, 2024, 14:01:42

Hello @user-817cfd 👋! Sorry to hear you're having trouble with a recording. Could you please create a ticket in 🛟 troubleshooting so we can assist you better?

When creating the ticket, please include:
- Whether the Companion App shows the recording as fully uploaded.
- The recording ID: you can find it in the Cloud trash unless it was completely deleted. Please restore it, then right-click on it, select "Recording Information", and finally share the ID in the ticket.

For future reference, please do not delete recordings before confirming uploads, as they cannot be re-uploaded.

user-dd0489 27 June, 2024, 10:47:56

Hi! I am trying to download the enrichment from my workspace in the cloud, but I get this error:

{ "code": 500, "message": "internal server error", "request_id": "6d4465a4c397e53a8e943e4a8861ea3b", "status": "error" }

How can I download my data?

user-480f4c 27 June, 2024, 10:52:30

Hi @user-dd0489 - I saw that you already opened a ticket. Let's keep the conversation there πŸ™‚

user-057596 27 June, 2024, 15:50:20

Hi, we carried out research on regional anaesthesia last week in the cadaver lab at Dundee University, using a combination of three Neon glasses and one Invisible, all controlled by the Monitor app. All the devices were checked before we carried out the recordings to ensure that we were picking up sound, as it's an important aspect of our medical training research, but of the 42 recordings only 15 have sound on them. I wonder if this is a common issue.

nmt 28 June, 2024, 03:20:35

Hi @user-057596! Thanks for reporting this. Can you please open a ticket in 🛟 troubleshooting and we can assist you there?

user-8f55ac 27 June, 2024, 22:24:56

Hello, we are trying to use MATLAB's "pupil_labs_realtime_api" function in order to send event markers using an ethernet connection from an Ubuntu machine (Ubuntu 18.04.6 LTS)

We are having two issues, the first is that we're unsure whether the ethernet connection is successful since Ubuntu 18 does not have an option to select "shared to other computers" (as we were advised to select).

The second issue (assuming that the connection is established) is that calling the pupil_labs_realtime_api function in MATLAB produces the error: "Could not access server. Host not found: pi.local."

Any advice on setting up an ethernet connection and sending triggers via MATLAB would be much appreciated.

user-f43a29 27 June, 2024, 23:08:31

Hi @user-8f55ac , we will need to look into the solution for Ubuntu 18.04. I have time for this tomorrow. In the meantime, I at least want to point out that the current Matlab integration can be found in the pl-neon-matlab repository. Since you are planning to use Ethernet, then latency is likely of importance to you, so you will want to use that version, which takes advantage of Matlab's Python interface.

Could you try again with that version? It could be that the connection is working. The previous integration connects to pi.local by default, which is the automatic name given to Pupil Invisible devices, whereas Neon uses neon.local.

Also, just a note on the terminology in our ecosystem. With Neon, you do not exactly send triggers. Rather, you take one recording for a whole experimental session and send events that are logged in the recording timeline, such as "trial_1.started", "trial_1.ended", "trial_2.started", "trial_2.ended", etc. The names are up to you. Then, during analysis of that recording, you can filter the data based on these events.

user-8f55ac 28 June, 2024, 00:59:07

Thank you so much - this information has been helpful.

We have now downloaded the pl-neon-matlab repository and we are using MATLAB version 2020b with a python environment 3.8. With these settings, calling pupil_labs_realtime_api now returns the error "could not access server. http://pi.local:8080/api/status". Calling the "Device" command returns an error "could not find any device".

Thank you for looking into any changes we may need to make given our version of Ubuntu. Hopefully it is just an issue with setting up the ethernet connection. Thanks!

user-f43a29 28 June, 2024, 11:50:45

Hi @user-8f55ac , I will look into the setup on Ubuntu 18.04 today.

Please note that you are mixing two versions of the Matlab integration:
- The previous version is the single function, pupil_labs_realtime_api.
- The current integration is contained in the pl-neon-matlab repository.

You can simply remove the pupil_labs_realtime_api file from your system and use the Device command of the current integration, as shown in this example.

user-f43a29 28 June, 2024, 13:48:11

Hi again @user-8f55ac , are you using the tested USB hub to connect Neon via Ethernet to your computer?

user-f43a29 28 June, 2024, 18:34:47

Hi @user-8f55ac , I was able to test it here with Ubuntu 18.04. Indeed, there are some extra steps on that version. I leave notes here for others and I have attached images to help show the steps.

  • Let's say you have created a wired connection in Settings -> Network with the name "neon".
  • Now, open the terminal and run the command nm-connection-editor.
  • Select 'neon' and click the settings icon in the bottom left.
  • Switch to the 'IPv4 Settings' tab of the window that opens.
  • Under 'Method' select 'Shared to other computers'.
  • Click 'Save' and close the nm-connection-editor window.
  • Make sure the Neon and Companion device are properly connected to the USB hub and that the Companion app is started and has recognized your Neon.
  • Now, connect the Ethernet cable to your computer and the Neon's USB hub, if you've not already done so.
  • Return to Settings -> Network and select a different wired connection and then select the 'neon' wired connection again to disable and then re-enable it.
  • Open the stream section of the Neon Companion app (the icon in the top right) and wait a bit. Eventually, you should see the QR code and IP address appear. You might need to wait longer than seems necessary.
  • If not, leave all settings and connections as they are (do not unplug anything) and simply restart the Companion device and then start the Companion app again.

Once you see the QR code in the stream section of the Companion app, then try running this example of the Matlab integration and let us know how it goes.

Chat image Chat image Chat image

user-2cc535 28 June, 2024, 08:22:23

Hi everyone,

I have a question about Pupil Neon. In the analysis section, when I need to upload a photo to generate the heatmap, does this photo have to be taken by the camera? Or can I extract it from my film? Thank you!

user-480f4c 28 June, 2024, 09:37:11

Hi @user-2cc535! The heatmap is part of our visualization tools on Pupil Cloud. You can find more details on our docs.

Heatmaps can be generated once a Reference Image Mapper or a Marker Mapper enrichment has been completed.
- In the case of the Reference Image Mapper, the heatmap is generated over the reference image you selected for your enrichment. The reference image can be taken with any phone, or you can simply take a screenshot of your area of interest from your Neon recording and use it as the reference image.
- For the Marker Mapper, the heatmap is generated over the surface defined by your AprilTag markers. In that case, you don't need to take a photo.

I hope this helps, but let me know if you have any further questions. 🙂

user-057596 28 June, 2024, 10:13:35

Will do @nmt

user-5543ca 29 June, 2024, 20:03:48

Hi! I need to extend the USB-C cable of the Neon. I bought the UGREEN USB-C cable extender, but that's not working. Any tips on how to get it to work, or any recommendations for the right type of extender cable?

user-f43a29 01 July, 2024, 08:10:45

Hi @user-5543ca , may I first ask why you need to extend Neon's cable? What are you planning to do with it?

End of June archive