πŸ‘“ neon


user-057596 01 August, 2023, 12:47:57

Hi, regarding using the Neon module with Pupil Capture, is there a recommended adapter cable to connect the module to the laptop?

user-d407c1 01 August, 2023, 14:57:26

Hi @user-057596 ! I'd recommend not using any adapter but connecting it directly through a USB-C port if you have the chance. An adapter can be a source of signal deterioration and could end up giving you some headaches...

user-057596 01 August, 2023, 14:58:43

Thanks Miguel

user-48526c 02 August, 2023, 06:18:53

Hello Nadia... Thanks for the support. It's working well now.

user-480f4c 02 August, 2023, 06:19:47

Hi @user-48526c πŸ‘‹ ! Happy to hear that everything works fine now. Thanks for letting us know.

user-48526c 02 August, 2023, 06:49:33

How do I connect to Neon Monitor? I opened the RTSP PI monitor screen but couldn't live stream.

user-480f4c 02 August, 2023, 06:51:36

To access the Monitor app make sure the Neon Companion app is running and visit the page neon.local:8080 on your monitoring device. Please have a look at our documentation for more details regarding the Monitor App: https://docs.pupil-labs.com/neon/how-tos/data-collection/monitor-your-data-collection-in-real-time.html#the-app-s-user-interface

user-48526c 02 August, 2023, 06:53:15

OK...thanks.

user-5203df 02 August, 2023, 08:05:11

Dear Pupil Labs team, we enjoy working with the new Neon, just ordered a second one.

One question came up: is there a possibility to also record a confidence value (as previously in the core)? We use the API to record the gaze_datum which contains only x[float], y[float], worn[bool], unix_timestamp[float]

as described in https://pupil-labs-realtime-api.readthedocs.io/en/stable/api/async.html#pupil_labs.realtime_api.streaming.gaze.GazeData

Thanks and best, Daniel
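
For reference, a minimal sketch of reading those fields with the simple flavour of the realtime API (method and attribute names as in the linked documentation; please verify them against the version you have installed):

from pupil_labs.realtime_api.simple import discover_one_device

# Find the Companion device on the local network and pull a single gaze datum
device = discover_one_device(max_search_duration_seconds=10)
gaze = device.receive_gaze_datum()

# Fields as documented for GazeData: scene-camera pixel coordinates, worn flag, Unix timestamp
print(gaze.x, gaze.y, gaze.worn, gaze.timestamp_unix_seconds)

device.close()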

marc 02 August, 2023, 08:45:05

Hi @user-5203df! I am glad to hear you enjoy working with Neon! There is no confidence value available for the gaze signal produced by Neon. With a machine learning approach, a confidence value is difficult to realize. Upcoming changes to the gaze estimation pipeline, like the planned addition of pupil measurements and eye state, might change what is possible in the future, but currently the addition of confidence is not concretely planned.

user-839466 02 August, 2023, 18:22:17

Hi!! thank you so much for your continuous assistance! I'm currently looking at the 'Marker Mapper' tutorial provided by Pupil Labs, and I'm curious whether it would be OK to use a different marker at every corner of the boundary, or should I use the same marker for every edge of the boundary? Thank you so much in advance!

user-d407c1 02 August, 2023, 18:27:09

Hi @user-839466 ! They have to be unique markers at each corner. Also keep in mind that you need to leave a white margin around them of at least the width of the smallest white square.

user-839466 02 August, 2023, 18:27:40

thank you so much!

user-eebae9 02 August, 2023, 19:51:43

Hi PL team, I've just started working with the Neon glasses and was trying to pull the data directly from the phone (without going through the cloud) and importing it into the Pupil Player (so I can add the gaze overlay to the video). However, I get a warning in the Pupil Player saying that "This version of player is too old! Please upgrade." - I can't find a newer version of the Player than the v3.5 anywhere. The one I tried is from this Github repo: https://github.com/pupil-labs/pupil/releases/tag/v3.5

user-cdcab0 02 August, 2023, 20:17:46

Hi, @user-eebae9 - it seems that error message is unclear. Recordings made with the Neon companion app cannot be opened with any version of Pupil Player. We are working on separate desktop software that will allow you to create gaze overlay videos without using Cloud, but don't have an ETA just yet

user-292135 03 August, 2023, 00:06:39

Hi Pupil Labs team, I sent an email to info@pupil-labs.com with an inquiry on 28th July and have had no response. Please check on it.

user-d407c1 03 August, 2023, 08:47:28

Hi @user-292135 πŸ‘‹ ! Thank you for reaching out. We have received your questions and have forwarded them to our Cloud team. They are currently working on gathering the necessary information to provide you with accurate answers. We appreciate your patience and we will get back to you as soon as possible.

user-057596 03 August, 2023, 08:33:35

Is it possible to connect the Neon module to a laptop to run with the Core software if the laptop doesn’t have a USB-C port?

user-c2d375 03 August, 2023, 10:52:33

Hi @user-ae0f24 πŸ‘‹ I'm replying to your questions (https://discord.com/channels/285728493612957698/633564003846717444/1136327339706437642) in this channel to ensure it's in the right place.

Yes, Neon is a good fit for recording eye tracking data in such dynamic and high-arousal environments. It uses deep learning to provide accurate, calibration-free, and slippage-invariant gaze estimation in all kinds of settings. You can wear and remove the glasses without needing to re-calibrate.

Neon comes with a Companion device (a OnePlus 10 Pro Android smartphone) running the Neon Companion App, where the deep-learning gaze estimation takes place. Neon needs to be connected via USB cable to the Companion device, which still allows health professionals to move freely.

Pupil Cloud offers several tools to facilitate the analysis of complex data. For example, you can use tools like the Reference Image Mapper enrichment (https://docs.pupil-labs.com/enrichments/reference-image-mapper/) to map gaze and fixations, creating heatmaps on images of medical instrument monitors or operating table regions.

Just to note, while Pupil Cloud is GDPR compliant, it's not HIPAA compliant due to some incompatibilities between the frameworks. You can find more information about our security measures here (https://docs.google.com/document/d/18yaGOFfIbCeIj-3_GSin3GoXhYwwgORu9_7Z-grZ-2U/export?format=pdf)

If you want to learn more about Neon and schedule a demo call with our Product Specialists, shoot us an email at [email removed]

Hope this helps! Let us know if you have any more questions. 😊

user-057596 03 August, 2023, 14:30:11

Is it possible to connect the Neon module to a laptop to run with the Core software if the laptop doesn’t have a USB-C port?

mpk 03 August, 2023, 17:51:43

Yes, you can use a USB-A-to-C adapter.

user-057596 03 August, 2023, 18:40:23

Hi @mpk I have, but the Neon module is not being registered when you click Active Device on Video Source.

user-cdcab0 03 August, 2023, 19:56:00

Hi, @user-057596 - sounds like you're using the Pupil Core software with Neon. A couple of compatibility notes:

  1. You need to run Pupil Capture from source, as an official release with Neon support hasn't been made yet
  2. That functionality is only supported on Linux and Mac at this time
  3. If you're running MacOS, you need to run Pupil Capture as an admin. See: https://docs.pupil-labs.com/core/software/pupil-capture/#macos-12-monterey-and-newer

user-057596 03 August, 2023, 18:40:29

Chat image

user-057596 03 August, 2023, 20:05:12

Hi @user-cdcab0 That explains it. I had read in the instructions about running Pupil Capture from source and the part about Linux/Mac, but I made the mistake of assuming that it would also automatically run with Windows. Thanks for that.

user-48526c 04 August, 2023, 07:11:35

Hi Pupil Labs team, I am having connection issues with my device again. The phone recognizes the Neon glasses, but the tabs for video and eye tracker will not show up. I can push the red button to record, but I'm unable to play it back, and it won't upload to the cloud either. Everything was working fine yesterday.

nmt 04 August, 2023, 07:48:08

Thanks for the update, @user-48526c! We're currently investigating this behaviour and will follow up ASAP!

user-48526c 04 August, 2023, 07:48:31

Thanks Neil..

nmt 04 August, 2023, 10:24:20

@user-594678, in answer to your specific questions: (1) Is there a way that I can retrieve the raw magnetometer values? No, but the heading is encoded in the orientation quaternion, and as the 'yaw' component of the Euler angles.

(2) How is the accelerometer computed to implement the magnetometer reading? Because on the manufacturer's website, it seems like there are separate accelerometers that only capture acceleration values. It isn't. Remember, the fusion algorithm computes orientation using three different sensors. Essentially, it comes down to correcting gyroscope drift errors. The gyroscope should 'conceptually' be able to measure angular rotations by integrating angular velocity over time. However, due to the finite sampling frequency, drift errors accumulate over time. Thus, accelerometer and magnetometer readings are needed to correct the drift and provide absolute orientation. This all happens in a fusion algorithm implemented on-chip.

(3) Do you have any ideas or suggestions on how I can compute velocity [and translation from your first message] from the accelerometer readings? Getting translation from an IMU alone is nigh-on impossible unless you have additional sources of localisation data. Similar to the gyroscope, if you just integrate accelerometer data, you will get drift errors that inflate exponentially over time. Thus, you'll probably want to fuse it with something like visual odometry or GPS. This is a common problem in robotics and navigation systems.

That doesn't mean you can't pull useful metrics from the linear accelerations. E.g. you could examine things like head movement variability in a given plane.
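
As an illustration of getting heading out of the recorded quaternions (a sketch, not an official snippet), here is one way using scipy; the imu.csv column names and the Euler axis order are assumptions that should be checked against the IMU documentation and your own export:

import pandas as pd
from scipy.spatial.transform import Rotation

# Assumed column names from the imu.csv export; verify against your file
imu = pd.read_csv("imu.csv")
quats = imu[["quaternion x", "quaternion y", "quaternion z", "quaternion w"]].to_numpy()

# scipy expects quaternions in (x, y, z, w) order.
# "ZYX" (intrinsic yaw-pitch-roll) is a common aerospace convention; match the axis
# order and signs to the IMU's documented coordinate system before trusting the values.
yaw, pitch, roll = Rotation.from_quat(quats).as_euler("ZYX", degrees=True).T
print(yaw[:5])  # heading, aligned with magnetic north once the IMU has been calibrated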

nmt 04 August, 2023, 11:52:48

Thanks Neil

user-7ad436 07 August, 2023, 12:31:07

Hi! I would like to ask whether the acquisitions require an internet connection or not, or if the device is connected to the Android app in another way.

user-d407c1 07 August, 2023, 12:46:01

Hi @user-7ad436 ! Neon's frames are connected to the phone directly through a cable. An internet connection is only required to set up the phone, download the app, and log in. Then you can record completely offline, but note that to automatically upload recordings to Cloud or sync workspaces and wearer profiles, you would still need to connect the phone to the internet at some point. If you would like to know more or request a demo, do not hesitate to contact info@pupil-labs.com

user-7ad436 07 August, 2023, 13:14:48

Thanks! Is there anything to keep in consideration if an android tablet is used instead?

user-d407c1 07 August, 2023, 13:27:50

Hi @user-7ad436 ! I did not explain myself properly, apologies. Neon has to be tethered to the Companion Device. This Companion Device is an Android phone (One Plus 8, 8T or 10 Pro), that is already included in the bundle.

If you would like to get the data on a tablet, you could either use the real-time API to stream it through WiFi or use the monitor app to control the Companion Device from the tablet (assuming you are on the same network).

You can read more about it here: https://docs.pupil-labs.com/neon/getting-started/understand-the-ecosystem/#understand-the-ecosystem

user-d407c1 07 August, 2023, 13:39:39

Hi @user-292135 ! You should find the answer to your questions in an email reply, please check your inbox.

user-292135 08 August, 2023, 05:52:35

Great! Thank you.

user-a64a96 08 August, 2023, 09:13:40

Hey Pupil Labs team. I have a question regarding head tracking. We are running a study in which we need to know the location of the head. I was checking the documentation for the imu.csv file. For the pitch, roll, and yaw it is mentioned that the output is between -180 and +180 degrees, but the values are between -2 and 2. I am wondering if I misunderstood something. I also did a very simple task and recorded only doing yaw, pitch, and roll. I will attach the screenshot to this message. Thank you in advance.

Chat image

user-d407c1 08 August, 2023, 11:13:13

Hi @user-a64a96 ! That's in radians. A quick fix has been deployed to Cloud to report the data in degrees, as stated in the documentation; please download the data again if you require it in degrees. πŸ˜‰

user-07e923 09 August, 2023, 07:37:32

Hello again, I am experiencing some errors when running the surface marker enrichments on Pupil Cloud. There are 2 errors which have occurred, and the messages say I should contact you for help.

Chat image

user-d407c1 09 August, 2023, 07:44:12

Hi @user-07e923 ! I'm sorry to hear that you are experiencing issues with the surface marker mapper. Could you send us an email to info@pupil-labs.com with the UID of the workspace and the enrichment that prompted that error, such that we can further investigate what happened?

For the workspace ID, you can find it on the URL https://cloud.pupil-labs.com/workspaces/{UID}/recordings and for the enrichment you can find it after right-clicking it - Copy Enrichment ID.

Thank you for your cooperation

user-068eb9 09 August, 2023, 13:55:03

Hello, I would like to map gaze over a cockpit during a flight simulator experiment. How should I go about it given the small space of the cockpit? Also I would like to develop an XR system for freely behaving macaques. Would it be possible to schedule a call? Thank you

user-d407c1 09 August, 2023, 14:03:38

Hi @user-068eb9! Please write to us at info@pupil-labs.com to request a demo & Q&A call with one of our Product Specialists. For the first use case, it should not be a problem. Regarding macaques, Neon was neither designed for nor tested with macaques.

user-839466 09 August, 2023, 20:31:16

Hi! I'm currently suffering some issues and I would like to ask a few quick questions: 1) Where could I find the 'blink.csv' file? (image 1) I found all files but 'blink.csv'. 2) What does each column refer to (tl, tr, br, bl) in 'surface_position.csv'? (image 2) 3) If I would like to segment the eye fixations within viewpoints 1, 2, and 3 on a single screen (as in image 3), how should I analyze this data with the marker enrichment? Or would there be a better way of analyzing the fixations on each area of the screen? Thank you in advance for your assistance!!

Chat image Chat image Chat image

user-480f4c 10 August, 2023, 06:39:09

Hi pepsi πŸ‘‹ - Thanks for reaching out. Regarding your questions, please find some points below:

1) You can get the blinks.csv file through the Timeseries Data (right-click on the recording > Download).

2) Please find all the information about the exported data included in this file here: https://docs.pupil-labs.com/export-formats/enrichment-data/marker-mapper/#aoi-positions-csv

3) You could consider creating one enrichment for each viewpoint (I understand that each viewpoint is an AOI, is that right?). Using the Marker Mapper enrichment, you can drag the corners of the to-be-defined surface such that the surface will include only the viewpoint of interest (e.g., https://docs.pupil-labs.com/enrichments/marker-mapper/#selecting-markers-in-the-cloud). This way you will end up having 3 enrichments for viewpoints 1, 2, and 3, respectively, and you will be able to analyze gaze and fixation data within each AOI. Would this help with the analysis you had in mind?

user-07e923 10 August, 2023, 08:51:48

Hi Wee Kiat I'm sorry to hear that you

user-839466 10 August, 2023, 14:45:25

thank you so much Nadia!

user-839466 10 August, 2023, 16:54:36

Sorry for bothering you again 😒 would it be possible to analyze the area of interest for data collected without markers?? Thank you so much for your help in advance!

nmt 11 August, 2023, 07:28:27

Hi @user-839466! If you did want to use the Marker Mapper enrichment specifically, that does require AprilTags – no way around that, unfortunately. We do have another enrichment that is able to map gaze to environments without requiring markers. But there are certain constraints associated with running it successfully. What exactly are your participants looking at, is it a computer monitor, simulator screen etc?

user-839466 11 August, 2023, 15:24:42

Thanks Neil for your reply!! They are actually watching the display! This is an example image of what they're looking at!

Chat image

nmt 14 August, 2023, 08:53:27

The photo of your setup is helpful!

It is feasible, as you've suggested, to utilise the Reference Image Mapper enrichment. This tool essentially allows you to map gaze data onto a 2D image of a 3D environment without the need for AprilTags. However, you need to ensure that the enrichment is properly configured, so I recommend reading the instructions on how to do so here: https://docs.pupil-labs.com/enrichments/reference-image-mapper/#reference-image-mapper and then giving it a try!

Note that we currently do not have a feature to generate Areas of Interest (AOIs) in Cloud after using the Reference Image Mapper, although this is planned for the future.

Thus, if you wish to generate AOIs, you will need to execute some Python code. This tutorial will guide you through the process: https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/#define-aois-and-calculate-gaze-metrics

user-839466 11 August, 2023, 15:30:24

maybe I could use reference image marker...?

user-33607d 11 August, 2023, 23:17:25

Hi Pupil labs, the neon companion app no longer recognizes the eye tracker when plugged-in. It's a relatively new device (we have been using it for less than a month), and never had this issue before. Would appreciate if you can help us diagnose the issue. Thanks!

nmt 13 August, 2023, 09:09:25

Neon Companion App + Connection

user-057596 13 August, 2023, 15:00:37

Previously with Pupil Cloud, when you added the Gaze Overlay enhancement in a project, you could have it applied in the cloud to each recording in that project whilst you played the recording in the cloud. However, with the new analysis format, you can only see the video render in the cloud on the recording that was selected to set the parameters, and you have to download the file to see it applied to all the other recordings in the project. Is this correct?

nmt 14 August, 2023, 13:28:42

Hi @user-057596! Thanks for reporting your observation. That's actually not how it should be. The visualisation settings should be persistent when you select a new recording πŸ˜…. I'll pass this information to the Cloud Team!!

user-c16926 14 August, 2023, 06:29:50

Hi Pupil Labs, I am trying to get the gaze positions offline. Previously, when I used Pupil Invisible, I could do this with Pupil Player, but now it says a new version is needed. Is there any plan for this new version, and is there an alternative way that I can do this on my side (without using Pupil Cloud)? Thanks a lot.

nmt 14 August, 2023, 08:56:07

@user-839466, to throw in a wildcard, we also have a guide that shows how to map and record screen-based gaze coordinates using the Reference Image Mapper. I'm not sure how useful that'd be, but you can read about it here: https://docs.pupil-labs.com/alpha-lab/map-your-gaze-to-a-2d-screen/#map-and-visualise-gaze-onto-a-display-content-using-the-reference-image-mapper

nmt 14 August, 2023, 09:14:19

Neon exporter

user-057596 14 August, 2023, 13:30:09

Thanks @nmt glad to help out.

user-839466 14 August, 2023, 15:19:13

Thank you so much!! With the Neon device, could pupil diameter changes (size, to check dilation) be exported?

nmt 14 August, 2023, 15:28:36

Pupil size measurements are not yet available for Neon, although they will be added as a free update. You can read more about that here: https://feedback.pupil-labs.com/neon/p/pupillometry-eye-state-estimation πŸ™‚

user-839466 14 August, 2023, 16:45:59

Thank you Neil! It will be released in Q3 2023, and it will be available without any changes needed on the device's end, accessible via Pupil Cloud, right? πŸ™Œ thank you!

user-44c93c 14 August, 2023, 20:24:04

Hello. We are collecting gaze data from the Neon, attached to the bare metal kit, via your real-time API/Python script. I am wondering if the Neon Companion app and/or anything in the API is already applying a smoothing algorithm to the gaze data, or if what we are getting is the raw gaze data? It looks like raw data, but wanted to be sure. We have found that applying a smoothing algorithm (using a running average over a set number of frames to display the gaze point) in our Python script improves the usability of our gaze data, but wanted to know if there are validated and/or recommended approaches to a smoothing algorithm. Our current algorithm was chosen without significant supporting data. Thanks!

user-d407c1 16 August, 2023, 09:53:52

Hi @user-44c93c ! A kind of 1€ filter (https://gery.casiez.net/1euro/) is applied to the data to smooth the signal, although there are plans to change this in the future.
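
For anyone who wants to experiment with similar smoothing on the client side, here is a minimal One Euro filter sketch following the description on the linked page; it is not the exact filter or parameter set used inside the Companion app:

import math

class OneEuroFilter:
    # Minimal One Euro filter after Casiez et al.; parameter values are illustrative only
    def __init__(self, min_cutoff=1.0, beta=0.01, d_cutoff=1.0):
        self.min_cutoff = min_cutoff  # baseline cutoff frequency (Hz): lower = smoother
        self.beta = beta              # speed coefficient: higher = less lag on fast movements
        self.d_cutoff = d_cutoff      # cutoff used when smoothing the derivative
        self.x_prev = None
        self.dx_prev = 0.0
        self.t_prev = None

    @staticmethod
    def _alpha(cutoff, dt):
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / dt)

    def __call__(self, t, x):
        if self.x_prev is None:
            self.x_prev, self.t_prev = x, t
            return x
        dt = t - self.t_prev
        # Smooth the derivative, then adapt the cutoff to how fast the signal is moving
        dx = (x - self.x_prev) / dt
        a_d = self._alpha(self.d_cutoff, dt)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff, dt)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.x_prev, self.dx_prev, self.t_prev = x_hat, dx_hat, t
        return x_hat

# One filter instance per gaze coordinate, fed with (timestamp_seconds, value), e.g.:
# smoothed_x = filter_x(gaze.timestamp_unix_seconds, gaze.x)
filter_x, filter_y = OneEuroFilter(), OneEuroFilter()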

user-07e923 15 August, 2023, 08:17:50

Hi Pupil Labs Team, quick question: Is it possible to extract raw data between certain events on Pupil Cloud? I know that we can download the full raw data, but I would like to extract only data within specific events which I've tagged in each video. Thanks!

user-d407c1 15 August, 2023, 08:46:56

Hi @user-07e923 ! There is currently no such feature. If you would like to see it implemented, you can propose it here: https://feedback.pupil-labs.com/

user-a64a96 15 August, 2023, 12:09:32

Hey Pupil labs team. I can't access the cloud. Yesterday, the same thing happened. I wrote a message here, but since I could sign in again, I deleted my message. Will the problem persist today? (Apologies in advance if this is not the right place to ask this question; I didn't know where to ask it.) Thank you in advance for your response.

user-d407c1 15 August, 2023, 12:39:08

Hi @user-a64a96 ! We are experiencing some issues with our login system; I would like to apologise for the inconvenience this might be causing you. We will let you know as soon as we have the situation handled.

user-d407c1 15 August, 2023, 14:45:19

Hi @user-a64a96! I am glad to say that Pupil Cloud login issues have been resolved, apologies again for the inconvenience, and let us know if there is anything else we can do for you.

user-ccf2f6 15 August, 2023, 15:24:33

hi PL, I wanted to ask if it's possible to get the gaze azimuth and elevation with the realtime APIs?

nmt 16 August, 2023, 10:30:16

Hi @user-ccf2f6! Just so it's clear, are you referring to gaze or head position?

user-d407c1 16 August, 2023, 10:30:28

Hi @user-ccf2f6 ! The gaze azimuth and elevation are not reported in the realtime API.

That said, the azimuth and elevation of gaze reported on Cloud do not use the IMU; they are simply the angles of the gaze point measured from the center of the scene camera.

Have a look at this gist https://gist.github.com/papr/164e310047f7a73130485694d037abad to better understand how it is converted for PI. The gist can also serve for Neon, and you won't need to download the intrinsics as they are already there.
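
To illustrate the idea behind that conversion (a rough sketch, not the code from the gist): unproject the gaze pixel through the scene camera intrinsics and read the angles off the resulting ray. The camera matrix and distortion coefficients below are placeholders; use the ones shipped with your recording.

import numpy as np
import cv2

# Placeholder intrinsics; replace with the scene camera calibration of your device
camera_matrix = np.array([[890.0, 0.0, 800.0],
                          [0.0, 890.0, 600.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(8)

def gaze_px_to_angles(gx, gy):
    # Undistort and normalize: the result (x, y) lies on the z = 1 image plane,
    # so the gaze ray in camera coordinates is (x, y, 1)
    pts = np.array([[[gx, gy]]], dtype=np.float64)
    x, y = cv2.undistortPoints(pts, camera_matrix, dist_coeffs)[0, 0]
    azimuth = np.degrees(np.arctan2(x, 1.0))                  # positive = right of the camera axis
    elevation = np.degrees(np.arctan2(-y, np.hypot(x, 1.0)))  # positive = above the camera axis
    return azimuth, elevation

print(gaze_px_to_angles(800.0, 600.0))  # a gaze at the principal point maps to (0, 0)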

user-ccf2f6 15 August, 2023, 15:25:49

if not, is it possible to calculate it with the IMU data?

user-839466 15 August, 2023, 22:02:24

Hi, I have a quick question regarding the data which could be analyzed in pupil cloud! if there are several AOIs in the same project, would it be possible to calculate 1) revisit count to each AOI, and 2) fixation count in each AOI, 3) dwell time or duration on each fixation ??

I believe https://docs.pupil-labs.com/export-formats/enrichment-data/marker-mapper/#aoi-positions-csv doesn't include this info, so any comments would be greatly appreciated! Thanks

user-d407c1 16 August, 2023, 10:32:59

Hi @user-839466 ! This data is not reported by default in Cloud yet. We have plans to improve the metrics we offer out of the box, but in the meantime you will have to compute them yourself.

Please find here a tutorial based on the Reference Image Mapper that you can adapt to your use case: https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/
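
As a rough starting point, separate from that tutorial, here is a small sketch computing fixation count and dwell time per AOI from Marker Mapper fixations.csv exports; the column names ("fixation detected on surface", "duration [ms]") and the per-viewpoint folder layout are assumptions to verify against your own download:

import pandas as pd

def aoi_metrics(path, aoi_name):
    # Keep only the fixations that landed on the surface defined for this AOI
    df = pd.read_csv(path)
    on_aoi = df[df["fixation detected on surface"]]
    return {
        "AOI": aoi_name,
        "fixation count": len(on_aoi),
        "total dwell time [ms]": on_aoi["duration [ms]"].sum(),
        "mean fixation duration [ms]": on_aoi["duration [ms]"].mean(),
    }

# One Marker Mapper enrichment (and thus one fixations.csv) per viewpoint, as suggested earlier
metrics = [aoi_metrics(f"viewpoint_{i}/fixations.csv", f"viewpoint {i}") for i in (1, 2, 3)]
print(pd.DataFrame(metrics))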

user-a64a96 16 August, 2023, 13:58:42

Hey Pupil lab team. Our experiment needs to know the head position; however, from time to time when we run the experiment the imu.csv file is empty, but other times it records data. Could you please help me figure out the issue? Is it something on my end or something else? Thank you in advance.

user-88386c 17 August, 2023, 00:33:56

Hello all, apologies in advance for a long list of questions. I am a user researcher working at a public transportation service agency. I have been using Pupil Invisible and the add-on for Vive before, and we are thinking about getting another eye tracker for in-lab studies. We are debating between Core and Neon and would like to get some input from the community.

  1. If connected to a computer, does Neon support running with Pupil Capture and storing/processing all data locally?
  2. (connected) Are there any differences in the data we can get? (I saw pupillometry is not available?)
  3. (connected) Can I use all existing Core plugins? Such as using markers to calibrate, streaming data through LabStreamingLayer, manual offset correction, or integrating with PsychoPy or other experimental programs?
  4. Regarding the tracking accuracy, how does it compare to the Core for in-lab studies (UI and content-reading related studies)?
  5. And given Neon's form factor, can it be used with a VR headset, such as the Meta Quest Pro or HTC Vive 2?
  6. As Neon's cameras are fixed, I am worried that for people with a greater IPD their eyes cannot be picked up (I had a colleague with a 70 mm IPD for whom none of the glasses-type eye trackers worked except Pupil Core).

Thanks a lot for the input.

marc 17 August, 2023, 06:12:33

Hi @user-88386c! No worries about sending too many questions, we're happy to help! In almost all cases we'd recommend Neon over Pupil Core, because it is much easier to use and more robust. Let me give you some details in the responses to your questions:

1) Yes, you can connect Neon to a computer and record data with it via Pupil Capture. But this implies that Pupil Core's algorithms will be used to make the gaze predictions, which will be less robust and require a calibration. The experience will be somewhat similar to using a Pupil Core headset or the HTC Vive VR add-on. This setup is only partially supported though: you'd have to run Pupil Capture from source, and it's only compatible with MacOS and Linux. Using Neon like this would not be the smoothest experience.

2) The data you get from Neon is similar to what you get from Pupil Invisible, with the following differences:

- It is more accurate. Final evaluation numbers are not finished yet, but the improvement is around 50%. Specifically, the parallax error Pupil Invisible had is gone for Neon too.
- Pupillometry data will become available for Neon in an update in late September!
- The IMU of Neon has 9 degrees of freedom, while Pupil Invisible's had 6. Thus you get estimates for all of pitch, yaw, and roll now.

marc 17 August, 2023, 05:48:14

Hi @user-a64a96 and @user-594678! Sounds like you have a similar problem with the IMU not recording data reliably. Recent updates to the Companion app should have increased the IMU's robustness. Could you let me know what app versions you have been using? You'd find this info in the info.json file of a recording or in the "App info" menu you can access by pressing and holding the app's icon on the home screen.

user-a64a96 17 August, 2023, 07:08:19

Hey @marc! Thanks for getting back to me. This is my app version: "2.6.33-prod".

user-594678 17 August, 2023, 16:46:33

Hey Marc, we're currently using 2.6.33-prod. Is there a more recent version? Also, for the data already collected, is there anything we can do to recover the missing IMU data?

marc 17 August, 2023, 06:12:40

3) Unless you run Neon through Pupil Capture, you will not be able to use Pupil Capture's plugins easily. But there are solutions for all the things you listed:

- LSL: There is a dedicated LSL relay for Neon, which you could use in combination with e.g. LabRecorder. https://pupil-labs-lsl-relay.readthedocs.io/en/stable/
- Calibration: with Neon's accuracy boost you don't really need to do offset correction most of the time. But offset correction is possible via the Companion app, just like with Pupil Invisible.
- Neon is about to get officially supported in PsychoPy.

4) If your subjects are using things like a headrest and you put a lot of effort into controlling everything, Pupil Core might provide higher accuracy for short periods after calibration. In more practical settings the performance is expected to be the same. During long recordings Neon might perform better, because Pupil Core can suffer from the calibration deteriorating over time. Consider e.g. these two examples of screen-related applications with Neon: https://docs.pupil-labs.com/alpha-lab/phone-screens/ https://docs.pupil-labs.com/alpha-lab/gaze-contingency-assistive/

5) Yes, Neon can be used within a VR headset. We are currently in the process of implementing the software support to make this easier. A first release of this can be expected in the next couple of weeks. The first headset we will support is the Pico 4 and more will follow after that. Given the software support, anyone could design a 3D-printed hardware mount themselves for whichever headset they need though.

marc 17 August, 2023, 06:12:42

6) We are evaluating Neon on a very large population covering the full range of IPD values. 70 mm should be no problem at all! Small IPD values can be an issue in some cases because the glasses are not sitting firmly on the head anymore (e.g. when working with children). To solve that we offer child-sized frames and headbands to hold the glasses in place.

I hope this gives you enough input for your decision! Let me know if you have more questions!

user-057596 17 August, 2023, 16:27:22

What is the cost of a companion device for Neon and head straps?

user-d407c1 18 August, 2023, 09:08:33

Hi @user-057596 ! The Companion Device is 750€, and head straps are 50€

user-057596 18 August, 2023, 11:07:04

Thanks @user-d407c1 πŸ’ͺ🏻

user-a64a96 18 August, 2023, 12:58:24

Hey Pupil labs team. I have two questions:

  1. We did some recordings and checked the imu file. The recordings were short, standing in each direction (north, west, east, and south). The imu file showed the same values for yaw for all of them, although we did the yaw calibration before this experiment. Seeing the data like that, we checked the glasses with the IMU visualisation tool plus a compass, and it seemed that the sensor was "stuck" in the west direction; after doing 360 degrees with the glasses, they went back to "normal", and as we moved the glasses the coordinates showed the right direction. Is there a reason why this would happen?

  2. There is always a slight drift in the roll, pitch, and yaw even when the glasses are stationary. I am wondering why.

nmt 18 August, 2023, 13:33:47

Hey @user-a64a96!

Q1 This sounds like a classic "yaw was not calibrated" scenario (when I say classic, we've spent a lot of time testing out the IMU and calibrating the yaw component in various situations πŸ˜…).

Rotating the glasses through 360 degrees probably had the effect of calibrating the IMU, hence the yaw orientation became aligned with your compass, when you were using the IMU visualisation tool.

I wonder, did you start a recording prior to initiating the calibration choreography when you did your testing in the morning?

Q2 Have you checked this after calibration with the IMU visualisation tool? What are you using to validate the measurements?

user-ac085e 18 August, 2023, 17:11:41

Is the world view camera on the Neon capable of receiving IR light? Or can IR light help illuminate surfaces for the worldview camera?

mpk 18 August, 2023, 18:48:02

There is an IR-blocking filter in the lens. IR should not be visible in the scene camera; extra illumination will not help in this case.

user-ac085e 18 August, 2023, 22:52:13

@mpk Even for the purposes of using the reference image mapper?

mpk 18 August, 2023, 22:52:51

@user-ac085e in that case, just decent visible illumination will help in getting good, non-blurred images and improve tracking.

user-ac085e 18 August, 2023, 22:53:59

@mpk We are attempting to use the Neon in a low ambient light setting (as needed for the purpose of the study) and were hoping to improve tracking by utilizing some ambient IR light. So this will not work as we intend it?

mpk 18 August, 2023, 22:54:57

I don't think so out of the box, but some sample footage would help us give useful feedback.

mpk 18 August, 2023, 22:55:44

Illuminating the markers would help, of course. It also depends on how fast the subjects move. Maybe the Reference Image Mapper will still work.

user-ac085e 18 August, 2023, 22:57:30

It is for use in a simulator

Chat image

mpk 18 August, 2023, 23:01:18

This one is tricky for the Reference Image Mapper because the big screens have content that changes, but adding illuminated markers should work.

user-ac085e 18 August, 2023, 23:01:58

By illuminated markers do you mean illuminated AprilTags? Or just unique recognizable features in the environment?

mpk 18 August, 2023, 23:02:40

Ideally illuminated AprilTags. Adding other markers would also work but would require custom tracking code.

user-ac085e 18 August, 2023, 23:03:52

Ah OK. AprilTags only work within the Marker Mapper enrichment though, correct? Or can they also assist the Reference Image Mapper?

mpk 18 August, 2023, 23:04:55

That would be for the Marker Mapper enrichment. But there might be ways of adding other illuminated landmarks that will make the Reference Image Mapper work in your use case. That would simply have to be tested.

user-ac085e 18 August, 2023, 23:06:06

Thank you for your assistance.

mpk 18 August, 2023, 23:06:35

You are welcome, good luck with making this work!

user-d407c1 21 August, 2023, 07:16:42

Hi @user-d569ee πŸ‘‹ ! Are you accessing the stream through your browser? VLC? There might be some buffering setting or codec processing causing this delay. While there are things you can do, like enabling hardware acceleration to improve codec processing performance, my recommendation would be to use our realtime API. https://docs.pupil-labs.com/neon/real-time-api/introduction/

https://pupil-labs-realtime-api.readthedocs.io/en/stable/examples/index.html
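
For reference, a minimal sketch of low-latency streaming with the simple realtime API, roughly following the gaze-overlay example from the docs linked above (attribute and method names should be checked against the installed version):

import cv2
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
try:
    while True:
        # Blocks until a scene frame and the temporally closest gaze sample are available
        frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
        img = frame.bgr_pixels
        cv2.circle(img, (int(gaze.x), int(gaze.y)), 30, (0, 0, 255), 5)
        cv2.imshow("Neon scene + gaze", img)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
finally:
    device.close()
    cv2.destroyAllWindows()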

user-d569ee 21 August, 2023, 07:49:12

Thank you for your response. In my testing using VLC, I experienced a 2s latency, which was quite unsatisfactory. However, with FFPLAY and the low_latency settings (-fflags nobuffer -flags low_delay -framedrop), I managed to achieve a more satisfactory result of approximately 0.9s latency. Yet, this is still slower than what I'm aiming for.

I've observed that the video on neon.local:8080 has an impressive latency of just around 0.1s. Does your local app use RTSP? This is the kind of latency I am hoping to achieve.

I also found that the Python code provides a favorable latency of about 0.5s, which fits my criteria. However, translating this to Android using Flutter has presented a challenge for me. I've been trying to locate the RTSP settings in the code, but to no avail yet. Could you guide me to their location? Your assistance would be greatly appreciated.

Thanks!

user-5543ca 28 August, 2023, 17:41:21

Hello, what is the delivery time for Neon eye tracker now?

mpk 28 August, 2023, 17:42:15

For most frames it's 4-8 days. I think the only one with a longer wait is "I can see clearly now".

user-5543ca 28 August, 2023, 18:44:40

Thank you, Moritz.

user-328c63 29 August, 2023, 19:11:58

Hello! If I'm showing every subject the same stimulus and starting the eye tracker when the stimulus starts and ending it when the stimulus ends, should every subject have the same number of rows? That is, the same number of gaze outputs?

user-cdcab0 29 August, 2023, 19:42:22

The short answer is no, essentially because the timing is imperfect, but the more important answer is that you will want to start your recording before your first stimulus and end it after your last one.

To synchronize your stimulus presentations with your gaze data, you can annotate your gaze recording with events. See: https://docs.pupil-labs.com/neon/real-time-api/track-your-experiment-progress-using-events/#how-to-use-events-to-keep-track
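
As a rough sketch of that pattern with the simple realtime API (the event names and the placeholder stimulus loop are illustrative, not part of any particular experiment framework):

import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
device.recording_start()

for stimulus in ["stim_01", "stim_02"]:          # hypothetical stimulus list
    device.send_event(f"{stimulus}_onset")       # timestamped when it reaches the Companion device
    time.sleep(2.0)                              # present the stimulus here instead of sleeping
    device.send_event(f"{stimulus}_offset")

device.recording_stop_and_save()
device.close()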

user-28ebe6 30 August, 2023, 20:40:56

I have a question regarding the enrichment data. If there are multiple recordings with enrichments shared between them, when you download the enrichment data it puts all of the data across recordings into the same .csv. Is there a way to download a separate folder that contains only the enrichment data for a specific recording?

user-480f4c 31 August, 2023, 07:13:11

Hey @user-28ebe6 πŸ‘‹πŸ½ ! Filtering which subset of data is downloaded from the enrichment results is not possible. However, you could create an enrichment for the recording of your interest only by using events (e.g., create events "start-rec" and "end-rec" and put these events as the start/end of your enrichment in Advanced Settings > Temporal Selection). This way, the enrichment that you run will only be applied to recordings that have these events. But also note that the enrichment CSV files you download contain a recording id column, which also allows you to split the data by recording. I hope this helps!

user-28ebe6 31 August, 2023, 13:41:40

Hmm, so for our use case we want to use the same enrichments (marker mappers) across multiple participants as they will be looking at the same surfaces but because we are doing additional analysis in MATLAB it makes it hard to automate the importing of the files and associating it with our naming convention for participants, experiments and sources of other physiological data. Hence for our use case it would be more useful to have one folder per recording so that we can automate the navigation through directories to read in the appropriate data and will not have to redundantly load the entire data set each time we analyze the data for new participants. If we used events and each recording was associated with its own enrichment is there a way to keep the same origin so that we do not have to drag the corners of the surface for each enrichment we make?

user-480f4c 01 September, 2023, 09:37:25

Hey @user-28ebe6, thanks for clarifying.

Currently, it is not possible to reuse the same surface definitions across Marker Mapper enrichments in Cloud. In your case, indeed, it might be time-consuming to run the enrichment for each specific recording by filtering out recordings using events, having to re-define surfaces, etc.

Instead, what you could do to have one folder for each recording would be to download the enrichment data with all recordings, use the recording id value from the enrichment data to filter out recording-specific gaze/fixation data, and write this to a new table in your recording-specific folder. For example, something like the following in Python:

import os
import pandas as pd

df = pd.read_csv("fixations.csv")

for rec_id in df["recording id"].unique():
    # Create one folder per recording id
    os.makedirs(rec_id, exist_ok=True)

    # Write only the rows belonging to this recording
    d = df[df["recording id"] == rec_id]
    d.to_csv(os.path.join(rec_id, "fixations.csv"), index=False)

or the same but in MATLAB:

% Load the CSV file into a table, keeping the original column names (with spaces)
df = readtable('fixations.csv', 'VariableNamingRule', 'preserve');

% Get unique recording IDs
recording_ids = unique(df{:, 'recording id'});

% Loop through each unique recording ID
for i = 1:length(recording_ids)
    rec_id = recording_ids{i};

    % Create a directory with the recording ID as the name
    mkdir(rec_id);

    % Select data for the current recording ID
    d = df(strcmp(df{:, 'recording id'}, rec_id), :);

    % Save the selected data to a CSV file in the created directory
    writetable(d, fullfile(rec_id, 'fixations.csv'));
end

I hope this helps!

user-594678 31 August, 2023, 18:04:20

Hi Pupil team, I'm trying to define a surface using marker mapper enrichment. I tried to detect the surface of the monitor in the picture, but when I move the lines to align with the monitor, it twisted weirdly and I couldn't make it fit to the monitor - and that happened no matter how many markers I selected. The problematic one is the right bottom corner. Do you have any idea why is this the case and how I can solve it?

Chat image

user-cdcab0 31 August, 2023, 21:17:31

Hmm, that's interesting! Have you successfully used the Marker Mapper enrichment on any recordings from this Neon device or is this your first time trying? Can you tell us the serial number of your glasses (open the app, click the β“˜ in the top right corner)?

user-594678 31 August, 2023, 21:24:18

Yes I have. But previously I didn't set the surface area manually. For this recording, the enrichment still works if I don't define the surface area. It just makes the boundary based on the markers' outermost corners, which is a bigger area than the monitor, so I need to adjust it. The serial number is 917651. Actually, our device has some other issue and your team is going to send us a new device. But the problem we have is related to the timestamps, and I'm not sure whether this would be a relevant issue.

user-594678 31 August, 2023, 21:26:58

In addition to that, I wonder whether you are aware of the issue with the app. When I open the app, it pretty frequently shows a white blank screen and then shuts down. Sometimes it says that the app is not responding and asks whether I want to wait; if I wait, about half the time it works, but the other half the app just stays frozen and does not respond at all.

user-cdcab0 31 August, 2023, 22:51:07

Let me check with some colleagues regarding the surface issue. If there's a problem with your hardware, it could be causing the app to have that issue; for that, I'd wait to see if it still happens with your new device.

End of August archive