πŸ‘“ neon



user-83d076 02 January, 2025, 20:33:23

Hi community. I'm having an issue using the Neon Player plugin "surface tracker". I see that there is an option to "edit surface" and "edit markers", but it seems like they are inactive buttons on the screen. Is this a bug or is there a specific setting I need to activate? Thank you for your help in advance!

Additionally, is it possible to add a surface without the markers? For example, if I want to specify the chessboard as a surface, would I need to add more markers on each corner of the chessboard?

Chat image

nmt 03 January, 2025, 02:13:28

Hi @user-83d076! It's absolutely possible to edit a surface in Neon Player, e.g. such that it covers only the chessboard. You'll need to click once on the circle where it says, 'edit surface'. You might need to give it a little time, but you should then be able to drag and drop the corners of the surface to new desired locations. It's also very easy to add another surface. Just click, 'add surface' in the Plugin settings. If you haven't already, check out the Neon Player Surface Tracker docs that cover all these points. Let me know how you get on!

user-83d076 03 January, 2025, 03:43:43

@nmt Thanks for getting back to me. It seems that clicking on the pink circle still isn't working for me so I might just do a reinstall. Out of curiosity, does the Neon player have an "Area of Interest" feature like the pupil cloud has?

nmt 03 January, 2025, 04:03:01

Before trying a fresh install, it's worth clicking 'restart with default settings', which can be found in the Neon main settings. Whilst you can create multiple surfaces in Neon Player, and drag and drop the corners, this does not fully replicate the Area of Interest feature of Pupil Cloud, which also computes useful metrics. Pupil Cloud is the recommended workflow if that's what you need

user-83d076 03 January, 2025, 04:16:38

@nmt Hmm, it still does not work for me. Maybe its my machine.

One of the main reasons I'm utilizing the Neon Player is that the government agency I work for is very particular about where we store data. Platforms we store data on have to be vetted intensely. I would like to get by using the offline software.

However, if the AOI features in Pupil Cloud are the best way to go, then I will make the argument to my leadership and see where we can go.

nmt 03 January, 2025, 04:47:51

Hey @user-83d076! Pupil Cloud indeed has several benefits when compared to Neon Player - you can draw free hand AOIs, generate metrics, and moreover, these can be done in aggregate, whereas the offline software only processes single recordings. I see you've opened a ticket. I'll respond there about the Neon Player behaviour!

user-75d5ea 06 January, 2025, 11:25:27

Hi, we use the NEON glasses to record eye movements while watching images and videos on a screen. For the images we use the Reference Image Mapper to map the fixations onto the image (it's super helpful). Is there a way to map the fixations onto the screen during the videos that is as simple as the one for the images? Thanks for your help in advance! Kind regards from Luebeck

user-f43a29 06 January, 2025, 12:34:46

Hi @user-75d5ea , I believe you are referring to something like our Map Gaze Onto Dynamic Screen Content Alpha Lab guide? If not, just let me know and I can provide further tips!

user-2618c1 08 January, 2025, 07:02:47

I want to try using the Neon Eye Tracker while reading a book, and I would like to use the Surface Tracker in Neon Player to track my gaze on the paper. Is it possible to modify the Surface Tracker in Neon Player from marker detection to document detection through coding?

user-d407c1 08 January, 2025, 08:38:44

Hi @user-2618c1 πŸ‘‹ !
There isn’t a document detection plugin by default. That said, Neon Player is open-source, so you can create your own plugin or modify the source code as you mention.

Check out the Plugin System API here.

user-3ecd80 08 January, 2025, 10:48:52

Hi, I am having trouble with Pupil Cloud. I want to analyze how someone interacts with a device. The device has a screen with a GUI and hardware components. There are buttons on the GUI that the user can choose, which take them to another screen. I want to analyze each screen they go into in terms of what they look at. I want to create different enrichments and different areas of interest depending on where I am in the video. Is that possible? Or do you have a better suggestion of how I would set up and analyze something like this?

user-f43a29 08 January, 2025, 13:21:04

Hi @user-3ecd80 πŸ‘‹ ! Yes, this is indeed possible.

May I first ask if you have had your free 30-minute Onboarding session yet? We can cover some of these details in real-time that way.

In the meantime, do you plan to use the Marker Mapper or Reference Image Mapper? May I also ask what kind of screen you are using (monitor, smartphone, tablet)?

At least when it comes to the general approach for analyzing sections of recordings, the process is the same for both Enrichment types:

  • Events: You can add Events with the same names to each recording, either manually in the Pupil Cloud interface or programmatically during the experiment with the Real-time API. You can give the Events whatever name you want, such as screen_1.start, screen_1.end, etc.
  • Enrichment Setup: If you have added these events to the respective recordings and organized them in a Project, then you can create an Enrichment and use the Temporal Selection to limit the Enrichment analysis window to the section where a specific screen is displayed. For example, by choosing screen_1.start and screen_1.end.
  • AOI definition: Once you have an Enrichment ready, you can click Edit next to Areas of Interest on the right and draw your AOIs. After the Enrichment has finished processing, you can then visualize an AOI Heatmap.

I recommend also taking a look at the Pupil Cloud tutorial video.
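To illustrate the programmatic route for the Events step above, here is a minimal sketch using the Python Real-time API (the event names are placeholders; adapt them to your own screens):

from pupil_labs.realtime_api.simple import discover_one_device

# Connect to the Neon Companion device on the local network
device = discover_one_device()

# Send events at the moments the GUI switches screens; the names below are
# placeholders and should match what you later use in the Temporal Selection
device.send_event("screen_1.start")
# ... participant interacts with screen 1 ...
device.send_event("screen_1.end")

device.close()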

user-3ecd80 09 January, 2025, 06:35:40

Hi @user-f43a29, thanks for your fast answer. No, we have not had the onboarding; we just got the glasses and I was playing around with them. How do I schedule the onboarding? As to your other questions: we don't have a plan yet in terms of Marker Mapper or Reference Image Mapper. I have been playing around with the Marker Mapper, mainly because the Reference Image Mapper failed when I tried it. From my understanding, the Reference Image Mapper might not be a good option since the information on the screen can change.

As to screen, we are analysing the usability of medical ventilators. These ventilators have a built in touchscreen monitor.

user-3ecd80 09 January, 2025, 14:21:00

Hi @user-f43a29, when I change the temporal selection, it keeps the old reference image and AOIs that I drew for the first temporal selection. What am I doing wrong?

user-98b2a9 09 January, 2025, 13:12:41

Hi! I have a question regarding the files uploaded to the pupil cloud. Is it possible to upload a file that has already been uploaded?

user-f43a29 09 January, 2025, 13:35:05

Hi @user-98b2a9 , it is not possible for users to re-upload recordings/files to Pupil Cloud.

May I ask why you need to do this?

user-b8c945 09 January, 2025, 17:07:41

Hi! I have a few questions about API implementation and data handling: 1. Is there a C++ real-time API implementation available? (I've successfully used the Python API, but our multi-sensor integration project is written in C++) 2. For LSL streaming in Neon - I see it streams gaze data, but are videos and IMU data also included in the stream? 3. When saving data using the real-time API, is there a way to import these recordings into Neon Player later for post processing? I noticed Neon Player expects specific file formats from Neon Companion. Thanks in advance for any help!

user-f43a29 10 January, 2025, 11:37:06

Hi @user-b8c945 πŸ‘‹ !

  1. We do not yet have a supported C++ implementation, but it is certainly possible. The Real-time API can essentially be used from any language that speaks the respective network protocols. When developing such an implementation, it will be helpful to reference the respective Under the Hood documentation.

  2. Videos and IMU data are currently not streamed via LSL. However, you can run a recording in parallel in the Neon Companion app and post-hoc synchronize the data using Events.

  3. Since the Real-time API is designed with flexibility in mind, collection of the streamed data will be dependent on how the client receives and formats it. This means that it is not in general possible to load such data into Neon Player. If you run a recording in parallel, then, as you point out, the recorded data can be loaded into Neon Player.

user-b8c945 10 January, 2025, 15:36:21

Hi @user-f43a29 , thank you for your detailed response! I have a few follow-up questions: 1. I noticed there are ZeroMQ documents on the website - could you clarify if ZeroMQ is only used with Pupil Core devices, while Neon devices use HTTP REST API instead? 2. Regarding data formats compatible with Neon Player, I'm curious about which data formats are compatible with Neon Player for uploading. Is there documentation available about the file formats for uploads or Neon Companion's exports, or is this information proprietary? I understand we can use parallel recordings with event synchronization as a solution, but I'm exploring if there might be alternative approaches. Thank you again for your help!

user-f43a29 10 January, 2025, 16:23:25

Hi @user-b8c945 , you are welcome!

  1. ZeroMQ is for Pupil Core and technically underlies the Legacy API of Pupil Invisible. Neon's Real-time API is based on HTTP (REST) and RTSP, which can be used via UDP or WebSockets.

  2. The data formats are completely open, not proprietary. For example, we offer a reference Python implementation, pl-neon-recording, for loading & working with the binary data directly from the Neon Companion device. I would surmise though, that you're likely to save more time by using the Native Recording Data directly from the device, rather than trying to replicate the full format when saving the real-time streamed data.

What tools from Neon Player do you plan to use?

May I also ask, when you say "uploads", do you mean uploading data to Pupil Cloud, or solely for loading into Neon Player?

user-b8c945 10 January, 2025, 17:13:25

Hi @user-f43a29, Thank you for confirming that ZeroMQ is for Pupil Core and for sharing the reference for working with binary data from the Neon Companion device.

We plan to use tools like the fixation and blink detector. Although we could define and perform fixation and blink analysis ourselves using the data from the Real-time API, we appreciate the convenience and user-friendly interface provided by Neon Player for this purpose. I understand that using native data from Neon Companion would likely be the most practical approach here.

Regarding "uploads," I was specifically referring to uploading data into Neon Player. I understand that uploading data to Pupil Cloud is currently only possible through the Neon Companion app.

Thanks again for your help!

user-d407c1 13 January, 2025, 08:04:18

Hi @user-b8c945 πŸ‘‹ ! As my colleague mentioned, saving streamed data directly into a Neon Player format is not a supported workflow at the moment.

In fact, we strongly recommend using recorded data from the Companion Device rather than relying on the stream alone, to ensure the best sampling rate and to avoid potential network issues.

That said, if you still want to explore this path despite our recommendations, you could consider reverse-engineering the pl-rec-export or pl-neon-recording tools to mimic our recording format and load it onto Neon Player. However, please note that we won’t be able to assist you on this endeavor.

If your goal is simply to have an easier way to load recordings from the companion device into Neon Player, we encourage you to suggest this feature in the πŸ’‘ features-requests channel.

user-a5c587 11 January, 2025, 20:23:48

Hi! Quick question: we're doing a recording using Neon and a flight simulator. One issue we keep having is glare from the screen, making it hard to see in the post-processing the specifics of what an individual is fixating/gazing at within the cockpit. Any suggestions? Thank you!

user-d407c1 13 January, 2025, 07:41:27

Hi @user-a5c587 πŸ‘‹! Do you mean glare or overexposure? And if the former, is the glare appearing on the scene camera? If so, you can try placing a polarizer filter in front of it. This can help reduce glare, but keep in mind that if your monitors already have a polarized filter oriented in one direction, adding another filter in the opposite direction will block light transmission entirely, making the screen content invisible.

An alternative approach, though a bit more complex, would be to use a third-party camera to map gaze onto that camera. You can learn more about this option here. However, I assume you're currently using Marker Mapper or Reference Image Mapper to remap gaze onto the screen, which would be trickier if using a 3rd party camera.

Are you able instead to record the screen content of the simulator and remap gaze onto that? You can find something like that here.

user-2618c1 13 January, 2025, 03:01:02

Hi, I wonder whether there is a description of the files exported by Neon Player, including the .csv files from the Surface Tracker.

user-d407c1 13 January, 2025, 07:34:03

Hi @user-2618c1 πŸ‘‹, are you referring to something like this?
Let me know if this is what you were looking for or if you need further clarification!

user-2618c1 13 January, 2025, 07:40:12

Yes. Could you provide more details about the .csv file surf_positions_<surface_name>? Specifically, I would like to understand the meaning of dist_img_to_surf_trans. Thank you!

user-d407c1 13 January, 2025, 07:52:19

@user-2618c1 The dist_img_to_surf_trans in the .csv file represents the transformation matrix that maps coordinates from the distorted image space to the surface coordinate system.

This matrix is used to map gaze from the camera's distorted image into surface coordinates. You can find its implementation in the surface.py file of the Neon Player repository.
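To make the idea concrete, here is a minimal sketch of applying such a transform to a point, assuming it is a row-major 3x3 homography (the matrix values below are made up; check surface.py for the exact conventions):

import numpy as np

# Placeholder 3x3 homography standing in for dist_img_to_surf_trans
M = np.array([
    [1.2e-03, -4.0e-05, -0.35],
    [3.0e-05,  1.1e-03, -0.20],
    [1.0e-06,  2.0e-06,  1.00],
])

# A gaze point in the distorted scene image, in pixels (homogeneous coordinates)
point_px = np.array([640.0, 480.0, 1.0])

# Apply the homography and divide by the homogeneous coordinate
mapped = M @ point_px
surface_xy = mapped[:2] / mapped[2]  # normalized surface coordinates
print(surface_xy)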

user-24f54b 13 January, 2025, 08:26:16

Hi all, I'm having some trouble accessing the companion app in a browser despite the phone and computer being on the same network. Could someone give me some troubleshooting tips? Thank you.

user-480f4c 13 January, 2025, 08:31:15

Hi @user-24f54b! Have you checked our troubleshooting section for connection problems? https://docs.pupil-labs.com/neon/data-collection/monitor-app/#connection-problems

Please make sure the phone and laptop are connected to the same network. Are you maybe using an institutional network?

user-24f54b 13 January, 2025, 08:31:45

Yes it's institutional

user-480f4c 13 January, 2025, 08:33:26

Institutional networks usually block mDNS; I would strongly recommend using a dedicated router if possible, or using a hotspot from a third-party device. You'll need to connect both the phone and the laptop/PC to the hotspot or dedicated network to be able to stream the data using the Neon Monitor App.

Once both devices are connected, can you try typing the IP in the browser rather than neon.local:8080 and see if it works?

user-24f54b 13 January, 2025, 08:32:25

The phone isn't on any kind of service

user-24f54b 13 January, 2025, 08:34:42

I've connected the PC to the hotspot produced by the phone but I'm not sure what to do about the phone itself

user-480f4c 13 January, 2025, 08:38:29

Work & university WiFi networks can pose issues, as they might either block the ports needed for streaming, or are congested with traffic from other users, which then interferes with streaming.

You can use a mobile hotspot from your personal phone (or a dedicated router as mentioned in my previous message). You can then connect the Companion Device and the laptop to this mobile hotspot, and you should be able to stream the data without issues.

user-24f54b 13 January, 2025, 08:41:33

@user-480f4c That worked thank you.

user-d407c1 13 January, 2025, 08:44:40

Hi @user-24f54b! To add to my colleague's answer, if you're using Windows 10 or 11, you can create a hotspot directly on your PC and connect your Companion Device to that network.

This setup can help free up resources on your Companion Device, especially if you plan to use 200Hz streaming with eye states.

user-24f54b 13 January, 2025, 08:51:22

@user-d407c1 That's a good option thanks. It isn't working at the moment presumably because the source of the connection is still the institutional wifi but the fact that it works with my personal phone ought to help. We're going to be doing our experiments at a hospital, if I were to ask the hospital's IT department to set something up that works better do you think they would be able to?

user-d407c1 13 January, 2025, 08:55:17

To clarify, I wasn’t referring to forwarding the institutional network connection, but rather to using a local hotspot without any internet access.

In many hospitals, institutional networks are either isolated from external devices or highly safeguarded. However, they might allow you to use your own router or local hotspot without internet access. This way, you can create your own local network without restrictions, providing a stable connection for your Companion Device.

Let me know if you have any questions or need further clarification!

user-24f54b 13 January, 2025, 08:59:23

Thank you. I don't seem to be able to produce the hotspot at all without wifi. Do you know why that would be?

user-d407c1 13 January, 2025, 12:30:29

Unfortunately it seems not all Windows machines support hostednetwork. Frankly, I believe a local router might be the easiest solution other than mobile hotspot.

user-2b5d07 13 January, 2025, 11:15:46

Hi, I’m looking to calculate eye velocity and acceleration. Can someone advise me on which metrics I should use for this purpose? There are many options, such as gaze_x, gaze_y, or eyeball centers, and I’m not sure which are the most suitable. Thank you !

user-d407c1 13 January, 2025, 12:44:47

Hi @user-2b5d07 πŸ‘‹! Are you looking to obtain eye rotational velocities or relative velocities?

Gaze X and Y are provided in the scene camera coordinates, so it would be relatively straightforward to compute velocity or acceleration in this 2D space as pixels per second.

Additionally, for gaze, we also report azimuth and elevation, which you could use for similar calculations in angular space, but these have their origin on the scene camera.

If you are looking purely for the movement speed of the eye itself, you can use the optical axis vectors reported for each eye in the eye states file.
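As a rough illustration of the angular-space option, here is a minimal sketch using finite differences on an exported gaze file (the column names assume the Timeseries CSV export and a small-angle approximation is used; verify both against your data):

import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")  # path is a placeholder

t = gaze["timestamp [ns]"].to_numpy() * 1e-9  # seconds
az = gaze["azimuth [deg]"].to_numpy()
el = gaze["elevation [deg]"].to_numpy()

dt = np.diff(t)
# Angular displacement between consecutive samples (small-angle approximation)
d_angle = np.sqrt(np.diff(az) ** 2 + np.diff(el) ** 2)

velocity = d_angle / dt  # deg/s
acceleration = np.diff(velocity) / dt[1:]  # deg/s^2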

user-2b5d07 13 January, 2025, 13:17:21

Thanks for your answer! In fact, I am working on eye movement detection, so I might indeed need the relative velocity. I assume this is also what you base fixation detection on ? I plan to use this to determine microsaccades and smooth pursuits !!

user-f43a29 16 January, 2025, 23:20:52

Hi @user-2b5d07 , I'm stepping in briefly for my colleague, @user-d407c1 .

The fixation detector is detailed in the associated whitepaper. You might also be interested in the related published article on fixation detection strategies.

The open-source implementation is available in pl-rec-export.

user-83d076 13 January, 2025, 19:24:10

Hi. I have a question about the exported output data for the visualizations in the pupil lab cloud. I read in the documentation for AOI that I can download the aoi_fixations.csv and aoi_metric.csv.

I can’t find those in the download tab in the cloud. I’m only able to download the .png of the aoi heatmap which isn’t what I’m looking for

Let me know if I’m overlooking something!

user-f43a29 13 January, 2025, 22:27:27

Hi @user-83d076 , when you have the respective Project open and are in the Downloads tab, then you want to download the green elements with corresponding Enrichment type & name. In there, you will find aoi_fixations.csv and aoi_metrics.csv. More details about those files can be found in the Documentation.

user-83d076 14 January, 2025, 02:21:26

Thank you for answering! I was looking in the wrong place this whole time. Another question regarding features: is there a feature that lists how often the user leaves one region of interest for another? I'm doing research on a laptop screen and I want to know if the user bounces around one area or mostly stays in place.

user-f43a29 14 January, 2025, 11:03:11

Hi @user-83d076 , you might be interested in either of these tools from my colleagues:

Otherwise, if you'd like to see this as part of a Pupil Cloud pipeline, then you can suggest it in πŸ’‘ features-requests .

user-0c254f 14 January, 2025, 06:18:38

Hello everybody! I have a question about statistical analysis of data - which software can I use for this? I heard about "Neon Player" but I didn't find instruments for data analysis there... Maybe I missed something. I would really appreciate any help.

user-f43a29 14 January, 2025, 10:50:48

Hi @user-0c254f , the different analysis plugins of Neon Player are listed in the menu bar on the left here. If you are looking for something like a statistical report, then you can take the exported CSV files from Neon Player and process them in an analysis environment of your choice, such as Python or R, or tools like SPSS or JASP, or even Excel. We enjoy Python, but there is no specific recommendation that Python must or should be used. Ultimately, whatever is easiest for you.

To clarify, we don't provide outputs like reports in order to provide users with flexibility and to always be open to new developments in the field. Having said that, you might be interested in one of our Support Packages.
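As a tiny example of that workflow, here is a sketch of loading one exported file into Python and computing a summary statistic (the file and column names assume the fixations export; check the headers of your own CSVs):

import pandas as pd

# Any Neon Player / Pupil Cloud CSV export can be loaded the same way
fixations = pd.read_csv("fixations.csv")
print("number of fixations:", len(fixations))
print("mean fixation duration [ms]:", fixations["duration [ms]"].mean())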

user-f76dbc 14 January, 2025, 10:24:47

Hi, I have a question about the surface coordinates for the marker mapper. How are the positions normalised? by what formula? I've been trying for some time to map the bindings to the reference image (after Marker Mapper enrichment) in matlab. Nothing works, there is always an offset of between 50 and 200 pixels (after reprojection of the data in pixels) between the mapped data and the reference image. Thanks,

user-f43a29 14 January, 2025, 10:43:22

Hi @user-f76dbc , may I ask some clarifying questions to be sure I can help you optimally:

  • I'm not sure I understand "map the bindings to the reference image in Matlab", do you mean you have downloaded the Enriched data and are trying to visualize it in Matlab?
  • When you say 'reprojection' are you referring to the result of Marker Mapper?
  • Are you seeing this across many recordings or just specific recordings for a specific participant?

If you'd like, you can invite us to the Workspace as Collaborators [email removed] and we can take a look and provide more direct feedback. Or, you can share a screenshot or video with us at [email removed] referencing this conversation.

user-f7408d 15 January, 2025, 00:08:16

Hi, If this is in the wrong spot please let me know where I should be posting. I am looking at the Anonymization Add-on for Pupil Cloud. Can I confirm that this does not modify the source file on the companion device?

nmt 15 January, 2025, 04:08:17

Hi @user-f7408d! Welcome. This is the right place to ask questions. You're correct - this does not modify the source file on the Companion Device.

user-2618c1 15 January, 2025, 02:39:42

Hi, I tried to run Neon Player from source code. Then, I dragged the folder of a Neon recording onto the program, and I got this:

AttributeError: module 'av' has no attribute 'AVError'

       INFO     player - launchables.player: Process shutting down.

I followed the instructions on GitHub to download the source code and install all libraries into a virtual environment.

nmt 15 January, 2025, 04:09:59

Hi @user-2618c1! Is there a reason you're trying to run Neon Player from source? I'd highly recommend trying the pre-compiled bundle in the first instance - you can download it from the releases section of the repo, or simply click the download link from here: https://docs.pupil-labs.com/neon/neon-player/#neon-player

user-42cb18 15 January, 2025, 12:12:52

Hi all! I am recording data with Neon glasses. In my exports from neon player, I find the two following files: gaze_positions.csv (in exports folder) and gaze.csv (in offline_data folder). What is the difference between them when it comes to the gaze x and gaze y values? And which one should I be using?

nmt 16 January, 2025, 02:17:03

Hi @user-42cb18! Thanks for your question. Please use gaze_positions.csv that's found in the export folder. This data will respect both the slider values in the Player timeline (if you want to crop the data export at given times) and the post-hoc gaze offset correction that's possible in Neon Player. There are additional data, such as spherical gaze coordinates, currently found in the offline data folder - we will aim to update Neon Player next week such that the full range of data are available in the export directory!

user-13d297 15 January, 2025, 17:19:22

hi! I wanted to ask whether there is a way to manually upload a recording to pupil cloud from a hard drive or backup location. I would prefer to avoid automatic upload from the companion device.

nmt 16 January, 2025, 02:19:02

Hi @user-13d297! We only support recording uploads from the Companion Device to ensure data integrity. May I ask why you'd prefer not to use auto-uploads?

user-3bdb49 16 January, 2025, 14:07:40

Hi! I use NEON in a driving simulator. I would like to send Pupil Core-style UDP messages to the NEON smartphone, e.g. "BlinkerRechtsAktiv", according to actions of the driver. I can configure clients/servers with UDP/HTTP and send messages to a specific IP:port, but I cannot use the Python real-time API in this simulator. Is it possible to add events via basic UDP/HTTP messages, using only the IP:port address of the NEON?

user-f43a29 16 January, 2025, 23:26:26

Hi @user-3bdb49 , while Neon does not understand messages for Pupil Core, it does actually use UDP and HTTP, and there is no need to use the Python package if that is not feasible in your case. The Real-time API is programming-language agnostic.

In your case, you will want to reference the Under the Hood documentation, in particular this section on sending Events to Neon.
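For illustration only, here is a minimal sketch of such an HTTP request, written in Python but sendable from any HTTP client; the /api/event path and payload are assumptions that should be verified against the Under the Hood documentation:

import requests  # any HTTP client works; Python is used here only for illustration

phone_ip = "192.168.1.42"  # placeholder: the IP address shown in the Companion app

# Assumed endpoint and payload; confirm the exact path and fields in the docs
response = requests.post(
    f"http://{phone_ip}:8080/api/event",
    json={"name": "BlinkerRechtsAktiv"},
    timeout=1.0,
)
print(response.status_code, response.text)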

user-d6a352 17 January, 2025, 09:45:24

Context: I have a Neon (is this thing on) and I use the Neon Player (desktop) to process the recording.

  1. I am running an experiment where we are recording close to an hour long session with each test subject, and there are close to 25 people in a batch.
  2. We plan to run similar batches from time to time.

Currently, to process the recording, on an i7 machine with about 64GB of RAM (there is a GPU but the software doesn't use it), it takes about 45 mins to an hour per person. That's roughly about 25 hours of processing time, if one were to do it back to back.

My questions:

  1. Is there a way to process all these recordings as a batch and on the command line?
  2. Let's say I were to invest in higher compute infrastructure - say a server with a Threadripper, for example - would it cut down the processing time, and has anyone here taken advantage of it and know what the time savings would be (to ensure it would be significant and not marginal)?

user-f43a29 17 January, 2025, 10:25:01

Hi @user-d6a352 , batch processing is a default feature of Pupil Cloud, which also offers helpful data logistics. This helps save group effort, for example.

When using Neon Player, are you mainly exporting the raw data to CSV files or are you also using a specific plugin or Event marking protocol?

user-24f54b 17 January, 2025, 19:43:59

Hi all, I'm sometimes running into this problem even when the computer is producing a hotspot and the phone is connected to it and the app is open. Is there a troubleshooting step for this?

Chat image

user-f43a29 17 January, 2025, 22:19:56

Hi @user-24f54b , have you already tried the Troubleshooting steps here ?

You could also try clearing the cache of the app. This will not delete your recordings:

  • Long press the Neon Companion app icon in the main Android launcher screen
  • Click on App info
  • Click on Storage & cache
  • Click on Clear cache

user-f43a29 17 January, 2025, 22:22:21

Could you also test with the hotspot of your personal cellphone or with a router? Just to be sure that the hotspot in the computer is not causing the connection issue.

user-f43a29 17 January, 2025, 22:27:33

If you again reach a white screen in Neon Monitor, then you can also try this:

  • Leave that browser tab open. Leave the Neon Companion app open on the phone.
  • Briefly unplug Neon from the phone and plug it back in.

user-24f54b 17 January, 2025, 22:16:50

Also, is it possible to bypass the phone altogether? Using only the API maybe?

user-f43a29 17 January, 2025, 22:21:01

No, it is not possible to bypass the phone altogether. NeonNet runs in the Neon Companion app and the API is also hosted by the app.

user-2b5d07 19 January, 2025, 21:24:29

Hi, I wanted to confirm if the calculation shown below is correct for determining angular velocity, where Δθ represents the azimuth change and Δψ represents the elevation change. Thanks!

nmt 20 January, 2025, 03:10:56

Hi @user-2b5d07! This looks correct. Check out this notebook for a Python implementation: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb

user-2b5d07 19 January, 2025, 21:24:36

Chat image

user-337880 20 January, 2025, 08:52:59

Is it possible to prepare a visualization for only some of the recordings in a particular project, rather than all of them? Otherwise, you have to download unnecessary files.

user-d407c1 20 January, 2025, 09:07:05

Hi @user-337880 πŸ‘‹ ! Which visualisation are you referring to? Is it the video renderer?

user-337880 20 January, 2025, 09:07:29

Yes, video renderer.

user-d407c1 20 January, 2025, 09:14:58

Yes, you can define which recordings should generate the visualization through the enrichment sections.

Essentially, it uses events to determine which recordings to export. By default, it relies on recording.begin and recording.end, which are included in all recordings. However, you can adjust this in the temporal selection to use different events if needed.

Chat image

user-337880 20 January, 2025, 09:19:21

Thank you for the clarification. So, in the individual recording, I should define a new event (start and finish) and then clarify which event from which recording I'm choosing to create a visualization?

user-d407c1 20 January, 2025, 09:33:28

Yes, kind of. Let’s assume you want the entire recording length for 5 out of 15 recordings. For the recordings you’re interested in, you can create a new start.event at the same point as recording.begin or any other point throughout the recording. In this specific case, there’s no need to create a corresponding finish event; just set the start event to be start.event. This way, only those 5 recordings matching the start.event will be included.

user-337880 20 January, 2025, 13:03:56

Thank you very much. Worked perfectly.

user-6e3b6d 21 January, 2025, 11:07:19

Hi everyone,

I have a hardware-related question regarding the Neon glasses from Pupil Labs. Can participants wear their own glasses underneath the Neon glasses? We have a set of corrective lenses ranging from -6 to +6. Would not using these corrective lenses and allowing participants to use their own glasses affect the data?

Additionally, if I don’t need glasses, is it necessary to use the 0-correction lenses, or can the Neon glasses be used without any lenses?

Thanks in advance!

nmt 22 January, 2025, 01:16:39

Hi @user-6e3b6d πŸ‘‹. It's sometimes possible to wear the Neon frames first and then place third-party glasses over the top, or vice versa, while still achieving good-quality recordings. This largely depends on the form factor of the glasses and whether they block Neon's eye or scene cameras. However, we typically don't recommend this approach as it's not usually ideal. For those who require vision correction, the prescription lens kit is the recommended solution. Of note, Neon works perfectly fine with contact lenses, as they do not affect Neon's measurements! No, it's not necessary to use plano lenses (0-optical correction) or any lenses with Neon, so you can simply leave them out if you prefer.

user-2b5d07 21 January, 2025, 12:37:32

Hello, I want to plot the gaze movement curves over time and separately for each eye to observe coordination. However, the gaze coordinates are in pixels, while the coordinates of the eyeball center are in millimeters. Could you explain how to convert millimeters to pixels? Thank you!

nmt 22 January, 2025, 01:38:32

Hello, @user-2b5d07! Reflecting back on what you've described, it seems you're considering how to map from each eyeball centre (measured in millimetres: 3D scene camera space) to Neon's 2D gaze coordinates (measured in pixels: 2D camera space), with the goal of calculating each eye's movements separately. To achieve this, you're thinking about converting the eyeball centres from millimetres to pixels. Is that correct?

user-2b5d07 22 January, 2025, 10:20:39

Yes, that's exactly what I want to do

nmt 22 January, 2025, 12:12:25

Alright, thanks for confirming. Technically, this is possible, but I'm not sure that it makes conceptual sense. Just so I understand your requirements, can you elaborate on your overall goal - instead of describing what you want to compute, can you describe your overall research question?

user-24f54b 23 January, 2025, 02:34:58

I was able to get it to work once using a computer connected to the password-protected version of our university's wifi to produce a hotspot which I then connected the phone to. This took a long time and many attempts to plug the neon back into the phone. Were you saying that a dedicated router not connected to the internet would be more consistent than this?

nmt 23 January, 2025, 02:48:43

Hi, @user-24f54b! I'll step in for Rob here. It's quite common for institutional wifi networks to have firewalls that block the connection. For that reason, we do recommend using a dedicated, standalone router. It's also possible to set up a local hotspot from a computer or even the Companion Device. I think it would be worth trying with your Companion Device now:

  1. Set up the hotspot and connect your computer to it
  2. Connect Neon to the Companion Device and click the streaming icon in the top right of the app's home screen
  3. Instead of typing neon.local:8080 in your computer browser, type the full IP address

Does that solve it?

user-24f54b 23 January, 2025, 02:50:32

Do you mean setting up the hotspot from the phone that has the companion app or another one?

nmt 23 January, 2025, 02:54:30

Set up the hotspot from the Companion Device

user-24f54b 23 January, 2025, 03:00:46

Thank you. That works more often. It's still a bit inconsistent but if I tinker with disconnecting and reconnecting the neon and the wifi on the phone it works.

What should I tell the IT department at the hospital we are working with? To arrange a router but not connect it to the broader network?

nmt 23 January, 2025, 03:15:32

If you think the hotspot is working sufficiently for your testing, then it would be fine to continue using it in my view. With that said, if your participants are going to be moving around in a large area, it might be preferable to use a more powerful standalone router to ensure a robust connection. We have had a lot of success with the Archer BE550 model!

user-24f54b 23 January, 2025, 03:31:28

They aren't going to move around very much but I'd still like to go with whatever is the most reliable option. If you think a powerful standalone router is the best option I will ask the hospital's IT department for that.

user-2b5d07 23 January, 2025, 11:22:57

I’m working on eye movement detection, focusing on smooth pursuit movements. To analyze binocular coordination and detect subtle discrepancies, I need to plot gaze features and monocular coordinates together. Since these data are on different scales (mm vs. pixels), I need to convert millimeters to pixels for accurate visualization and comparison.

user-d407c1 24 January, 2025, 08:01:53

Hi @user-2b5d07 ! πŸ‘‹ Quickly stepping in here for my colleague @nmt .

By default, the gaze point data is calculated using both eye images, meaning the data is derived from both eyes.

To analyze binocular coordination more accurately, we recommend using the 3D Eye State measurements. These measurements provide independent optical axes for the left and right eyes, originating at their respective eyeball centers and passing through the center of the pupils. The default outputs are vectors in Cartesian coordinates, which you can work with directly or convert into spherical coordinates if needed.

Regarding your described approach: transforming eyeball centers from millimeters to pixels and mapping vectors to the 2D gaze point in image space does not make much sense. One of the main reasons is that this method doesn’t yield technically independent eye movements; since both vectors would be tied to the 2D gaze point, they would essentially move in similar coordination.

Instead, the 3D Eye State measurements provide a more physiologically accurate representation of eye poses and better reflect the independent coordination of eye movements. Let me know if you’d like further clarification!
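As a rough sketch of working with those per-eye optical axes, here is one way to compute each eye's angular velocity from the exported eye-states file (the column names are assumptions based on the Timeseries export; verify them against your own 3d_eye_states.csv):

import numpy as np
import pandas as pd

es = pd.read_csv("3d_eye_states.csv")  # path is a placeholder
t = es["timestamp [ns]"].to_numpy() * 1e-9  # seconds

def eye_angular_velocity(df, side, t):
    # Optical axis vector of one eye, normalized to unit length
    v = df[[f"optical axis {side} x",
            f"optical axis {side} y",
            f"optical axis {side} z"]].to_numpy()
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    # Angle between consecutive samples, converted to degrees per second
    cos = np.clip(np.einsum("ij,ij->i", v[:-1], v[1:]), -1.0, 1.0)
    return np.degrees(np.arccos(cos)) / np.diff(t)

vel_left = eye_angular_velocity(es, "left", t)
vel_right = eye_angular_velocity(es, "right", t)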

user-13d297 23 January, 2025, 19:27:19

Thank you for your response! I would like to be able to back up the eye tracking files on a hard drive for later processing. I tried this but was unable to upload the data to pupil cloud

nmt 24 January, 2025, 02:13:58

Yes that's expected. Recordings can be uploaded to Pupil Cloud from the Companion Device. You can also download recordings from Pupil Cloud to store in an offline context. But they can't then be re-uploaded. I hope this helps!

user-24f54b 24 January, 2025, 20:18:11

Hi all, what are some troubleshooting methods if the API is not discovering the Neon even though the monitoring app is working? E.g., the following set of commands in MATLAB's Python integration isn't working:

% Import the necessary Python modules
simple = py.importlib.import_module('pupil_labs.realtime_api.simple');
time = py.importlib.import_module('time');

% Start eye tracker recording
device = simple.discover_one_device();
recording_id = device.recording_start();
disp(['Started eye tracker recording with id ', char(recording_id)]);

user-f43a29 24 January, 2025, 20:29:42

Hi @user-24f54b , while putting together our pl-neon-matlab integration, it was found that the py.importlib method is not generally applicable to all packages.

I recommend checking out the existing integration, which has been tested on a variety of systems and MATLAB + Python versions. It should save you significant effort in getting set up. You might find the examples and this tutorial helpful.

If you continue to experience issues or have any questions about it, feel free to ask!

user-24f54b 24 January, 2025, 20:36:51

I see, thank you!

user-42cb18 25 January, 2025, 16:49:19

Hi, all! In some of my recordings, when I replay the video on Pupil Cloud, I see that gaze data isn't tracked from the beginning (I only see gaze data from around the 15th second of my recording). Is it a setting that I have to fix, or could the conditions of my recording environment be responsible for gaze not being tracked right away?

nmt 25 January, 2025, 17:13:16

Hi @user-42cb18! In this case, can you please first ensure your Companion app is up-to-date in Google Play Store. Secondly, can you try clearing the Companion app cache, as per this message: https://discord.com/channels/285728493612957698/1047111711230009405/1329938288286502973 Does that solve the situation for new recordings?

user-42cb18 27 January, 2025, 09:21:30

Thanks, Neil! It worked.

user-486277 27 January, 2025, 11:55:21

Dear support, is the neon hardware besides the smartphone explosion-proof? Thanks

user-d407c1 27 January, 2025, 12:29:10

Hi @user-486277 ! Could you clarify what you mean by "explosion-proof"? Are you looking for standards focused on equipment designed to operate safely in potentially explosive atmospheres, such as areas with flammable gases or dust?

While Neon does not have any specific certification for that, note that it adheres to a variety of standards to ensure safety compliance; see below:

  • EMC Emission Standards: EN 55032:2015, AC2016, A11:2020, A1:2020 Class B, and FCC Part 15, Class B - regulating electromagnetic emissions to minimize interference with other devices.
  • EMC Immunity Standards: EN 55035:2017 - ensuring reliable operation in the presence of external electromagnetic disturbances.
  • Low Voltage Directive: Directive 2014/35/EU - guaranteeing electrical safety for devices operating within specific voltage ranges.
  • EMC Directive: Directive 2014/30/EU - ensuring the device neither causes nor is affected by excessive electromagnetic interference.
  • Radio and Telecommunications Terminal Equipment Directive (R&TTE): Directive 1999/5/EC - ensuring safe and efficient operation of radio and telecommunication equipment.

Additionally, kindly note that neither the module nor the frame has a battery.

user-486277 27 January, 2025, 12:44:54

I meant the certification for areas with potentially explosive atmospheres. I needed to know in which areas we can measure. Thanks for the fast answer.

user-80c70d 28 January, 2025, 10:36:45

Hi, we are having an issue with our recording. We are recording two 30-minute sessions. While everything works perfectly fine for some recordings, others (and we could not find an error on our end) do not have any events in the recordings. This is most often but not always the first out of the two 30-minute recordings. Specifically, we do not get the events 'recording.start' or 'recording.end'. Additionally, we are sending synchronization triggers via LSL and also do not get any of these.

user-f43a29 28 January, 2025, 10:54:30

Hi @user-80c70d , do you mean those events are not appearing in the Neon recording data or in the LSL output? For example recording.start and recording.end are missing on Pupil Cloud?

user-80c70d 28 January, 2025, 10:55:31

Hi, yes exactly, these events are sometimes missing on Pupil Cloud. Other times they appear as expected

user-f43a29 28 January, 2025, 10:55:51

Could you open a ticket in πŸ›Ÿ troubleshooting ? We can continue the discussion there. Thanks!

user-24f54b 29 January, 2025, 03:17:04

Hi all, if I were to use a USB C cable extender to allow me to keep the companion phone further away from the bare metal neon during experiments, do you think that would cause slowing or degradation of the signal?

user-d407c1 29 January, 2025, 14:29:48

Hi @user-24f54b ! πŸ‘‹ While you could try a cable extender, it is prone to signal degradation as you mentioned, so we can’t recommend it. Could you share more details about your study? We might be able to suggest alternative solutions, such as using the real-time API to control the device.

user-13f7bc 29 January, 2025, 13:56:02

Our lab is aiming to have multiple accounts for the Neon eye tracker because some of the data will be collected just during public engagement events and thus not worth keeping, while the rest will be from our actual data collection. Is it okay to have multiple Gmail accounts associated with us? Also, how much storage does the cloud have?

user-d407c1 29 January, 2025, 14:44:17

Hi @user-13f7bc ! There is a 2-hour quota on Cloud unless you have the Unlimited Storage Add-on or are part of the Early Adopters Club. You can find more details in this announcement: https://discord.com/channels/285728493612957698/733230031228370956/1292784609938898954.

You definitely can have multiple accounts, but I’d recommend using an owner account to set up workspaces and add add-ons. Then it is up to you whether to use one or multiple accounts, although for this purpose, instead of using multiple accounts, you can create different workspaces for various projects/needs and later invite collaborators to them as needed. This minimizes the need to switch accounts on the phone/app, which can be more cumbersome.

user-13f7bc 29 January, 2025, 14:57:01

@user-d407c1 Thank you, this is very helpful

user-13f7bc 29 January, 2025, 15:43:09

how can we export the video directly from the app with the cross-hair as displayed in the app?

user-d407c1 29 January, 2025, 16:20:54

Scene video with gaze overlay is not directly exportable from the Companion App. However, you can add gaze overlay either in Pupil Cloud using the video renderer to generate the export or in Neon Player using the world video exporter for local export.

Let me know if you need further details!

user-13f7bc 29 January, 2025, 16:50:18

Thank you

user-24f54b 30 January, 2025, 01:26:54

We are using the API to control the eye tracker but I'd like there to be a lot of slack in the cable because we're working with infants and the cable needs to go around their heads a bit. I'd like to have more slack in the line so that there's less of a chance of causing discomfort to the infant or pulling the phone off the table if they move in a way we didn't expect.

user-d407c1 30 January, 2025, 08:25:01

Did you consider placing the Companion Device (phone) in a pouch or backpack after it is locked? To me, that seems like the most comfortable way to carry it. Using a longer cable could not only degrade the signal but also increase the risk of tangling.

user-13f7bc 30 January, 2025, 10:57:24

What is the link to download android file transfer? The link you shared doesn't include the "android file transfer", unless I am being very daft

user-d407c1 30 January, 2025, 14:27:28

Hi @user-13f7bc πŸ‘‹ ! That should work; alternatively, you can also try OpenMTP to transfer from your Android device to a Mac.

user-13f7bc 30 January, 2025, 10:58:40

I found this one: https://android.p2hp.com/filetransfer/index.html Is this it

user-13f7bc 30 January, 2025, 10:58:41

?

user-edb34b 30 January, 2025, 14:18:16

Hello team! I have an experiment with some events defined, and I can see them correctly in the timeline of the recordings on the Cloud. However, after exporting, when I want to access them from the "events" csv file, it appears that all the events have the same timestamp of 1730000000000000000 (cf. screenshot). So I lose all the time components, which makes the events impossible to use. Do you have any idea how to fix that? Thanks!

user-edb34b 30 January, 2025, 14:18:19

Chat image

user-d5a41b 30 January, 2025, 14:20:32

Hi! Do you have recommendations on how to cut gaze and world video data in batch processing? I am using the pl-rec-export tool to extract the data and I want to cut the gaze and video file while keeping them synced.

user-d407c1 30 January, 2025, 14:32:16

@user-edb34b That sounds like an Excel issue when importing .csv files.

Could you try opening the original files with a text editor (e.g., Notepad) to check if they look correct? If the data appears fine, you might want to review how to properly import CSV files into Excel to avoid formatting issues.

This guide might help: Prevent Scientific Notation on Import

user-edb34b 30 January, 2025, 14:58:37

Thanks for your reply!

user-d407c1 30 January, 2025, 14:35:56

Hi @user-d5a41b πŸ‘‹! Just to clarify - are you looking to generate a video with a gaze overlay for specific sections in batches? If so, using enrichment sections and the video renderer would be the easiest.

Let me know if I understood correctly!

user-d5a41b 30 January, 2025, 14:51:46

I'm trying to generate a csv file with gaze coordinates and the corresponding world video file based on certain onset and offset times to use this for further analysis. Unfortunately our data is not on Pupil Cloud. I have used this logic: https://gist.github.com/mikelgg93/8ea0ef4bba62a2fa9c0a924cf019ec18 to assign gaze data to frame numbers and pts. I have attempted to cut the video and the gaze data using the onset and offset times and I also tried using frame numbers (both with ffmpeg), but looking at the results I don’t feel confident that this produces accurate results. I either end up with a mismatch of frame numbers in the gaze data and video data or a mismatch in length (based off pts and video length).

user-a55486 30 January, 2025, 14:41:37

Dear tech, I'm using a hub to achieve PC-Neon communication through an Ethernet connection. We want to use the real-time API to interact with Neon at low latency in this mode. Now we are faced with the problem of which IP address and port of the Ethernet connection to specify in the real-time API, as the default IP address and port shown on the Companion Device seem to be for a WiFi connection, not Ethernet.

user-a55486 30 January, 2025, 14:42:09

Would you be able to advise on the means to specify the IP and port under this setting?

user-f43a29 30 January, 2025, 14:47:40

Hi @user-a55486 , may I ask what OS you are on? The instructions for a direct connection vary with OS.

Also, have you considered connecting the computer and Neon via Ethernet with an intermediary router? Setup and configuration are significantly easier that way.

user-a55486 30 January, 2025, 14:48:32

Hi Rob, we are using Windows 11

user-f43a29 30 January, 2025, 14:50:41

Ok, and just to confirm, you want a direct Ethernet connection? In other words, using an intermediary router is out of the question?

user-a55486 30 January, 2025, 14:51:12

Correct, a router is not possible in the current setup

user-f43a29 30 January, 2025, 14:53:49

Ok, then please reference this document for Windows 11 and this Discord message (https://discord.com/channels/285728493612957698/1047111711230009405/1272483345137139732). Note that these are not official instructions, but have worked consistently in several tests.

user-d407c1 30 January, 2025, 15:04:31

@user-d5a41b Have you seen the pl-neon-rec library? I think it is better suited for this task.
It includes examples on how to use gaze and scene video from raw recordings, which should be helpful for your workflow.

Let me know if you need any further guidance! πŸš€

user-d5a41b 30 January, 2025, 15:17:11

We have data from Pupil Invisible and Neon and I have to use the same pipeline for all the data. We used dyadic eye tracking and we are working on eye contact.

user-a55486 30 January, 2025, 15:17:58

Thank you so much! We were able to follow everything until the network device sharing part. We have 2 network devices, one Ethernet and one WiFi; we followed the instructions to open the properties of the WiFi device but found no Sharing tab:

Chat image

user-f43a29 30 January, 2025, 15:38:06

This might require assistance from Microsoft or Intel then. The Sharing properties of network devices potentially vary between manufacturers and models.

Otherwise, connecting Neon and the PC over Ethernet via a small router is known to work and can make the process significantly easier. The router does not need WiFi capability, nor does it need to be connected to the Internet.

user-f389a1 30 January, 2025, 15:19:34

I have collected data using Pupil Neon - I want to analyse continuous pupil size throughout the duration of the recording - how can I do this? (and is it possible within Neon player?)

user-d407c1 30 January, 2025, 15:41:42

Hi @user-f389a1 ! As in, track/visualize pupil size in real time or post-hoc next to the video, or simply get the values?

user-d407c1 30 January, 2025, 15:39:42

Neon & Pupil Invisible Gaze overlay

user-b43477 30 January, 2025, 15:41:30

We're trying to track the pupil response to light. It would be great to just be able to use the built in scene camera, or perhaps the eye cameras, to make reasonable assumptions on the luminance that the eyes are exposed to.

We have however observed that the scene camera seems to have a dynamic gain, which very clearly changes as the light conditions are changed. This of course makes sense when trying to record the best possible video, but is less than ideal when trying to measure light intensity.

Would it be possible to somehow access raw camera values, or something close to that? Or perhaps log the gain settings for each frame, so we may be able to infer the luminosity based on the gain and the measured pixel values?

user-f43a29 30 January, 2025, 18:16:59

Hi, @user-b43477 , interesting use!

For context, raw values aren't read out by the app - the images are saved as .mjpeg so aren't available in their raw format. But with that said it should be possible to turn off auto gain and exposure by using manual exposure in the Companion app settings. Do you think that would suffice? Note that auto white balance is in operation on the camera.
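If manual exposure turns out to be workable, a crude per-frame brightness proxy could then be computed from the scene video. A minimal sketch (the file name is a placeholder, and mean grey level is only a rough stand-in for luminance, meaningful only with fixed exposure/gain):

import cv2
import numpy as np

cap = cv2.VideoCapture("scene_video.mp4")  # placeholder path to the scene video
brightness = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    brightness.append(float(gray.mean()))
cap.release()

print(len(brightness), "frames; mean grey level:", np.mean(brightness))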

user-a55486 30 January, 2025, 15:48:37

Thanks Rob for the quick reply! I see, in this case we will talk to the facilities team and see if we can fit a router in for this purpose. Would you mind sending relevant info for a router so that they may try out that solution too? Thanks

user-f43a29 30 January, 2025, 15:55:01

Ah, a recommendation for a router I do not have, but you should have noticeably less difficulty with the router approach. An inexpensive router should probably be fine, for example.

user-f43a29 30 January, 2025, 15:53:39

@user-a55486 to be clear, if that is a computer where you could swap in a different, compatible network card, then that is also an option.

user-f43a29 30 January, 2025, 16:58:39

@user-a55486 my colleague, @nmt , has pointed out that we have had success with the Archer BE550 at least.

user-a55486 30 January, 2025, 17:47:01

Thanks for the recommendation!

user-b43477 30 January, 2025, 22:09:27

I think it could be worth a shot! I've been thinking about whether it could be possible to log the gain settings for each frame. If it were, we may just be able to calculate backwards to figure out the light intensity. It doesn't have to be perfect, but the current method is basically unworkable, as the dynamic gain makes two frames impossible to compare.

user-f43a29 31 January, 2025, 11:24:02

Hi @user-b43477 , we received your email and will continue communication there!

user-f389a1 31 January, 2025, 11:38:09

Hi @user-d407c1 - I would just like the continuous values of pupil size throughout the recording!

user-d407c1 31 January, 2025, 11:54:37

Regardless of whether you want to access the data in real time (as the recording is happening) or post-hoc (after it has been recorded), you can do so.

To access pupil size values for each eye:
- Post-hoc: Use the 3d_eye_states.csv file.
- Real-time: Retrieve the data through the Real-time API.

Since you asked about Neon Player, you can also enable the Eye State Plugin to visualize pupil size throughout the recording.

Let me know if you need any further details!
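For the post-hoc route, here is a minimal sketch of plotting continuous pupil size over time (the column names assume the Timeseries export; check the header of your own 3d_eye_states.csv):

import pandas as pd
import matplotlib.pyplot as plt

es = pd.read_csv("3d_eye_states.csv")  # placeholder path

t = (es["timestamp [ns]"] - es["timestamp [ns]"].iloc[0]) * 1e-9  # seconds from start
plt.plot(t, es["pupil diameter left [mm]"], label="left eye")
plt.plot(t, es["pupil diameter right [mm]"], label="right eye")
plt.xlabel("time [s]")
plt.ylabel("pupil diameter [mm]")
plt.legend()
plt.show()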

user-f389a1 31 January, 2025, 12:01:04

Hi @user-d407c1 thank you for your help!! Where can I find the 3d_eye_states.csv file from my recordings?

user-d407c1 31 January, 2025, 12:06:37

If you're using Cloud, you should find it when you download the Timeseries format.

If, on the other hand, you exported the native format from your device to your laptop, you will find it after exporting the data from Neon Player.

Keep in mind that Neon Player does not reprocess recordings. So, if Eye States were not enabled in the Companion App settings at the time of recording, that data stream will not be available.

user-a55486 31 January, 2025, 14:46:17

Hi, we got a router and are testing it in the lab space now. We connected the Ethernet cable from the hub to the router (TP Link TLSG1005D) and have the other side connected to the PC. Now we still can't discover one device over the real-time API. Any suggestions or extra operations needed on the PC to configure the connection? Thanks!

user-f43a29 31 January, 2025, 14:47:45

Hi @user-a55486 , do the Connection Troubleshooting steps here help to resolve the issue? Some routers might block mDNS traffic by default.

user-a55486 31 January, 2025, 14:53:59

I see, but may I ask what would be the expected behavior? Should the companion device give another set of IP address and port? Because right now it still gives the same IP address as if we were connecting through WIFI, although we have turned off WIFI and data connection

user-f43a29 31 January, 2025, 14:57:06

Ah, then, you might need to restart the Neon Companion app. The app requests an IP address whenever it starts.

You can do so by "force stopping" the app and then starting it anew:

  • Long press the Neon Companion app icon in the main Android launcher screen
  • Choose App info
  • Click Force stop
  • With WiFi disabled and the Ethernet cable plugged into Neon and the router, restart the Neon Companion app and wait for it to display an IP address

Just to clarify, the IP address can be found in the Stream section of the app (the button in the top right, under the Settings icon)

If this does not resolve it, then I recommend trying the steps in the Connection Troubleshooting section of the Documentation.

user-a55486 31 January, 2025, 15:18:39

After restart it shows waiting for mDNS, no streaming info available

user-f43a29 31 January, 2025, 15:19:43

Then, this could indicate that the router is currently blocking mDNS traffic. Do the steps here help? Note also that port 5353 should be open, especially for UDP communication.

user-e13d09 31 January, 2025, 17:42:12

Hello, I am having issues transferring my recordings to PupilLabs Cloud from the Companion device. I have 8 recordings and only one of them is attempting to upload to the Cloud (it is at 0%). The others are still waiting to start transferring as well. Is there a reason this is taking so long? I have not had this issue previously, most of my recordings uploaded to the Cloud in a matter of minutes.

nmt 31 January, 2025, 18:20:39

Hello, @user-e13d09! Can you please double-check the Companion device has internet connectivity, and then try logging out and back into the Companion app. That should trigger the upload. Let me know if it works!

user-24f54b 31 January, 2025, 19:14:51

Hi all, is this the router you recommend for using the monitoring app? https://www.amazon.com/TP-Link-Tri-Band-Archer-BE550-HomeShield/dp/B0CJSNSVMR?tag=googhydr-20&source=dsa&hvcampaign=electronics&gclid=Cj0KCQiAhvK8BhDfARIsABsPy4ikQPcTomv-QmFLZHyHKLNpcLNX1X_35jdwZdx4LRca4ukLGywGI2caAgvFEALw_wcB&th=1

nmt 31 January, 2025, 19:48:57

This is quite a powerful router, we've had a lot of success using it. It provides decent range and performs well in wifi-busy environments in our experience.

End of January archive