Hi community. I'm having an issue using the Neon Player plugin "Surface Tracker". I see that there are options to "edit surface" and "edit markers", but they seem to be inactive buttons on the screen. Is this a bug or is there a specific setting I need to activate? Thank you for your help in advance!
Additionally, is it possible to add a surface without the markers? For example, if I want to specify the chessboard as a surface, would I need to add more markers on each corner of the chessboard?
Hi @user-83d076! It's absolutely possible to edit a surface in Neon Player, e.g. such that it covers only the chessboard. You'll need to click once on the circle where it says, 'edit surface'. You might need to give it a little time, but you should then be able to drag and drop the corners of the surface to new desired locations. It's also very easy to add another surface. Just click, 'add surface' in the Plugin settings. If you haven't already, check out the Neon Player Surface Tracker docs that cover all these points. Let me know how you get on!
@nmt Thanks for getting back to me. It seems that clicking on the pink circle still isn't working for me so I might just do a reinstall. Out of curiosity, does the Neon player have an "Area of Interest" feature like the pupil cloud has?
Before trying a fresh install, it's worth clicking 'restart with default settings', which can be found in the Neon main settings. Whilst you can create multiple surfaces in Neon Player, and drag and drop the corners, this does not fully replicate the Area of Interest feature of Pupil Cloud, which also computes useful metrics. Pupil Cloud is the recommended workflow if that's what you need
@nmt Hmm, it still does not work for me. Maybe it's my machine.
One of the main reasons I'm utilizing Neon Player is because the government agency I work for is very particular about where we store data. Platforms we store data on have to be vetted intensely. I would like to stick to using the offline software.
However, if the AOI features in Pupil Cloud are the best way to go, then I will make the argument to my leadership and see where we can go
Hey @user-83d076! Pupil Cloud indeed has several benefits when compared to Neon Player - you can draw freehand AOIs, generate metrics, and moreover, these can be done on aggregate, whereas the offline software only processes single recordings. I see you've opened a ticket. I'll respond there about the Neon Player behaviour!
Hi, we use the NEON glasses to record eye movements while watching images and videos on a screen. For the images we use the reference image mapper to map the fixations on the image (it's super helpful). Is there a way for the videos that is as simple as the one for the images to map the fixations during the videos on the screen? Thanks for your help in advance! Kind regards from Luebeck
Hi @user-75d5ea , I believe you are referring to something like our Map Gaze Onto Dynamic Screen Content Alpha Lab guide? If not, just let me know and I can provide further tips!
I want to try using the Neon Eye Tracker while reading a book, and I would like to use the Surface Tracker in Neon Player to track my gaze on the paper. Is it possible to modify the Surface Tracker in Neon Player from marker detection to document detection through coding?
Hi @user-2618c1!
There isn't a document detection plugin by default. That said, Neon Player is open-source, so you can create your own plugin or modify the source code as you mention.
Check out the Plugin System API here.
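As a rough illustration only: Neon Player inherits the Pupil Player plugin architecture, where a plugin is a Python class deriving from the Plugin base class and hooking into recent_events. The sketch below is an assumption of that shape (class and method names should be verified against the Plugin System API docs), with the actual document detection left as a placeholder:

```python
# Hypothetical sketch of a custom Neon Player plugin (names based on the
# Pupil Player plugin API; verify against the Plugin System docs).
from plugin import Plugin


class DocumentTrackerSketch(Plugin):
    """Placeholder plugin that would detect a document (e.g. a book page)
    in each scene frame instead of fiducial markers."""

    uniqueness = "by_class"

    def __init__(self, g_pool):
        super().__init__(g_pool)

    def recent_events(self, events):
        frame = events.get("frame")
        if frame is None:
            return
        # frame.img is the current scene image as a numpy array. Here you
        # would run your own page/document detector (e.g. contour detection
        # or a learned model) and hand the detected corners to your own
        # gaze-mapping logic, analogous to what the Surface Tracker does
        # with marker corners.
        pass
```

Dropping a file like this into the user plugin directory should make it selectable in the plugin manager, but treat it as a starting sketch rather than a tested recipe.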
Hi, I am having trouble with Pupil Cloud. I want to analyze how someone interacts with a device. The device has a screen with a GUI and hardware components. There are buttons on the GUI that the user can choose, which take them to another screen. I want to analyze each screen they go into in terms of what they look at. I want to create different enrichments and different areas of interest depending on where I am in the video. Is that possible? Or do you have a better suggestion of how I would set up and analyze something like this?
Hi @user-3ecd80! Yes, this is indeed possible.
May I first ask if you have had your free 30-minute Onboarding session yet? We can cover some of these details in real-time that way.
In the meantime, do you plan to use the Marker Mapper or Reference Image Mapper? May I also ask what kind of screen you are using (monitor, smartphone, tablet)?
At least when it comes to the general approach for analyzing sections of recordings, the process is the same for both Enrichment types:
1. Add events to your recordings that mark the start and end of each section, e.g. screen_1.start, screen_1.end, etc.
2. When creating the Enrichment, set its temporal selection to run from screen_1.start to screen_1.end.
3. Click Edit next to Areas of Interest on the right and draw your AOIs. After the Enrichment has finished processing, you can then visualize an AOI Heatmap.

I recommend also taking a look at the Pupil Cloud tutorial video.
Hi @user-f43a29, thanks for your fast answer. No, we have not had the onboarding; we just got the glasses and I was playing around with them. How do I schedule the onboarding? As to your other questions: we don't have a plan yet in terms of Marker Mapper or Reference Image Mapper. I have been playing around with the Marker Mapper, mainly because the Reference Image Mapper failed when I tried it. From my understanding, the Reference Image Mapper might not be a good option since the information on the screen can change.
As to the screen, we are analysing the usability of medical ventilators. These ventilators have a built-in touchscreen monitor.
Hi @user-f43a29, when I change the temporal selection, it keeps the old reference image and the AOIs that I drew for the first temporal selection. What am I doing wrong?
Hi! I have a question regarding the files uploaded to the pupil cloud. Is it possible to upload a file that has already been uploaded?
Hi @user-98b2a9 , it is not possible for users to re-upload recordings/files to Pupil Cloud.
May I ask why you need to do this?
Hi! I have a few questions about API implementation and data handling: 1. Is there a C++ real-time API implementation available? (I've successfully used the Python API, but our multi-sensor integration project is written in C++) 2. For LSL streaming in Neon - I see it streams gaze data, but are videos and IMU data also included in the stream? 3. When saving data using the real-time API, is there a way to import these recordings into Neon Player later for post processing? I noticed Neon Player expects specific file formats from Neon Companion. Thanks in advance for any help!
Hi @user-b8c945!
We do not yet have a supported C++ implementation, but it is certainly possible. The Real-time API can essentially be used from any language that speaks the respective network protocols. When developing such an implementation, it will be helpful to reference the respective Under the Hood documentation.
Videos and IMU data are currently not streamed via LSL. However, you can run a recording in parallel in the Neon Companion app and post-hoc synchronize the data using Events.
Since the Real-time API is designed with flexibility in mind, collection of the streamed data will be dependent on how the client receives and formats it. This means that it is not in general possible to load such data into Neon Player. If you run a recording in parallel, then, as you point out, the recorded data can be loaded into Neon Player.
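To illustrate the point about the Real-time API being language-agnostic: it is plain HTTP under the hood, so any HTTP client can talk to the device. Here is a minimal Python sketch, assuming the /api/status endpoint on port 8080 described in the Under the Hood docs and a placeholder IP address; the equivalent request could be made from C++ with libcurl, cpr, Boost.Beast, etc.:

```python
# Minimal sketch: query Neon's Real-time API with a plain HTTP request,
# no client library required.
import json
import urllib.request

DEVICE_IP = "192.168.1.42"  # placeholder - use the IP shown in the Companion app

with urllib.request.urlopen(f"http://{DEVICE_IP}:8080/api/status", timeout=5) as resp:
    status = json.load(resp)

# The response is a JSON document describing the phone, connected sensors, etc.
print(json.dumps(status, indent=2))
```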
Hi @user-f43a29 , thank you for your detailed response! I have a few follow-up questions: 1. I noticed there are ZeroMQ documents on the website - could you clarify if ZeroMQ is only used with Pupil Core devices, while Neon devices use HTTP REST API instead? 2. Regarding data formats compatible with Neon Player, I'm curious about which data formats are compatible with Neon Player for uploading. Is there documentation available about the file formats for uploads or Neon Companion's exports, or is this information proprietary? I understand we can use parallel recordings with event synchronization as a solution, but I'm exploring if there might be alternative approaches. Thank you again for your help!
Hi @user-b8c945 , you are welcome!
ZeroMQ is for Pupil Core and technically underlies the Legacy API of Pupil Invisible. Neon's Real-time API is based on HTTP (REST) and RTSP, which can be used via UDP or WebSockets.
The data formats are completely open, not proprietary. For example, we offer a reference Python implementation, pl-neon-recording, for loading & working with the binary data directly from the Neon Companion device. I would surmise though, that you're likely to save more time by using the Native Recording Data directly from the device, rather than trying to replicate the full format when saving the real-time streamed data.
What tools from Neon Player do you plan to use?
May I also ask, when you say "uploads", do you mean uploading data to Pupil Cloud, or solely for loading into Neon Player?
Hi @user-f43a29, Thank you for confirming that ZeroMQ is for Pupil Core and for sharing the reference for working with binary data from the Neon Companion device.
We plan to use tools like the fixation and blink detector. Although we could define and perform fixation and blink analysis ourselves using the data from the Real-time API, we appreciate the convenience and user-friendly interface provided by Neon Player for this purpose. I understand that using native data from Neon Companion would likely be the most practical approach here.
Regarding "uploads," I was specifically referring to uploading data into Neon Player. I understand that uploading data to Pupil Cloud is currently only possible through the Neon Companion app.
Thanks again for your help!
Hi @user-b8c945! As my colleague mentioned, saving streamed data directly into a Neon Player format is not a supported workflow at the moment.
In fact, we strongly recommend using recorded data from the Companion Device rather than relying on the stream alone, to ensure the best sampling rate and to avoid potential network issues.
That said, if you still want to explore this path despite our recommendations, you could consider reverse-engineering the pl-rec-export
or pl-neon-recording
tools to mimic our recording format and load it onto Neon Player. However, please note that we wonβt be able to assist you on this endeavor.
If your goal is simply to have an easier way to load recordings from the Companion Device into Neon Player, we encourage you to suggest this feature in the features-requests channel.
Hi! Quick question: we're doing a recording using Neon and a flight simulator. One issue we keep having is glare from the screen making it hard to see, in post-processing, the specifics of what an individual is fixating/gazing at within the cockpit. Any suggestions? Thank you!
Hi @user-a5c587! Do you mean glare or overexposure? And if the former, is the glare appearing in the scene camera? If so, you can try placing a polarizer filter in front of it. This can help reduce glare, but keep in mind that if your monitors already have a polarized filter oriented in one direction, adding another filter in the opposite direction will block light transmission entirely, making the screen content invisible.
An alternative approach, though a bit more complex, would be to use a third-party camera to map gaze onto that camera. You can learn more about this option here. However, I assume you're currently using Marker Mapper or Reference Image Mapper to remap gaze onto the screen, which would be trickier if using a 3rd party camera.
Are you able instead to record the screen content of the simulator and remap gaze onto that? You can find something like that here.
Hi, I wonder whether there is a description of the files exported by Neon Player, including the .csv files from the Surface Tracker.
Hi @user-2618c1, are you referring to something like this?
Let me know if this is what you were looking for or if you need further clarification!
Yes, Could you provide more details about the .csv file surf_positions_<surface_name>? Specifically, I would like to understand the meaning of dist_img_to_surf_trans. Thank you!
@user-2618c1 The dist_img_to_surf_trans in the .csv file represents the transformation matrix that maps coordinates from the distorted image space to the surface coordinate system. This matrix is used to map gaze from the camera's distorted image into the surface. You can find its implementation in the surface.py file of the Neon Player repository.
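If it helps to see how such a matrix is typically applied: treating dist_img_to_surf_trans as a 3x3 homography, a pixel in the distorted scene image is mapped to surface coordinates via homogeneous coordinates. A rough sketch, where parsing the CSV cell into a 3x3 numpy array is left to you and the function name is just illustrative:

```python
# Sketch: apply a 3x3 image-to-surface homography to a pixel coordinate.
# `trans` is assumed to be the dist_img_to_surf_trans value parsed from
# surf_positions_<surface_name>.csv and reshaped to (3, 3).
import numpy as np


def distorted_px_to_surface(trans: np.ndarray, x_px: float, y_px: float) -> np.ndarray:
    """Map a point from distorted scene-image pixels to surface coordinates."""
    point_h = np.array([x_px, y_px, 1.0])   # homogeneous coordinates
    mapped = trans @ point_h
    return mapped[:2] / mapped[2]           # de-homogenize


# Example with a dummy identity transform:
print(distorted_px_to_surface(np.eye(3), 320.0, 240.0))
```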
Hi all, I'm having some trouble accessing the companion app in a browser despite the phone and computer being on the same network. Could someone give me some troubleshooting tips? Thank you.
Hi @user-24f54b! Have you checked our troubleshooting section for connection problems? https://docs.pupil-labs.com/neon/data-collection/monitor-app/#connection-problems
Please make sure the phone and laptop are connected to the same network. Are you maybe using an institutional network?
Yes it's institutional
Institutional networks usually block mDNS; I strongly recommend using a dedicated router if possible, or a hotspot from a third-party device. You'll need to connect both the phone and the laptop/PC to the hotspot or dedicated network to be able to stream the data using the Neon Monitor App
Once both devices are connected, can you try typing the IP in the browser rather than neon.local:8080 and see if it works?
The phone isn't on any kind of service
I've connected the PC to the hotspot produced by the phone but I'm not sure what to do about the phone itself
Work & university WiFi networks can pose issues, as they might either block the ports needed for streaming, or are congested with traffic from other users, which then interferes with streaming.
You can use a mobile hotspot from your personal phone (or a dedicated router as mentioned in my previous message). You can then connect the Companion Device and the laptop to this mobile hotspot, and you should be able to stream the data without issues.
@user-480f4c That worked thank you.
Hi @user-24f54b! To add to my colleague's answer, if you're using Windows 10 or 11, you can create a hotspot directly on your PC and connect your Companion Device to that network.
This setup can help free up resources on your Companion Device, especially if you plan to use 200Hz streaming with eye states.
@user-d407c1 That's a good option, thanks. It isn't working at the moment, presumably because the source of the connection is still the institutional wifi, but the fact that it works with my personal phone ought to help. We're going to be doing our experiments at a hospital. If I were to ask the hospital's IT department to set something up that works better, do you think they would be able to?
To clarify, I wasn't referring to forwarding the institutional network connection, but rather to using a local hotspot without any internet access.
In many hospitals, institutional networks are either isolated from external devices or highly safeguarded. However, they might allow you to use your own router or local hotspot without internet access. This way, you can create your own local network without restrictions, providing a stable connection for your Companion Device.
Let me know if you have any questions or need further clarification!
Thank you. I don't seem to be able to produce the hotspot at all without wifi. Do you know why that would be?
Unfortunately, it seems not all Windows machines support hostednetwork. Frankly, I believe a local router might be the easiest solution other than a mobile hotspot.
Hi, I'm looking to calculate eye velocity and acceleration. Can someone advise me on which metrics I should use for this purpose? There are many options, such as gaze_x, gaze_y, or eyeball centers, and I'm not sure which are the most suitable. Thank you!
Hi @user-2b5d07! Are you looking to obtain eye rotational velocities or relative velocities?
Gaze X and Y are provided in the scene camera coordinates, so it would be relatively straightforward to compute velocity or acceleration in this 2D space as pixels per second.
Additionally, for gaze, we also report azimuth and elevation, which you could use for similar calculations in angular space, but these have their origin on the scene camera.
If you are looking purely for the movement speed of the eye itself, you can use the optical axis vectors reported for each eye in the eye states file.
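For the 2D case, here is a minimal sketch of how velocity and acceleration could be computed from an exported gaze file. The column names (timestamp [ns], gaze x [px], gaze y [px]) are assumed from the Pupil Cloud Timeseries export; adjust them to whatever your export actually contains:

```python
# Sketch: gaze speed (px/s) and acceleration (px/s^2) from an exported gaze CSV.
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")  # placeholder path

t = gaze["timestamp [ns]"].to_numpy() * 1e-9   # convert nanoseconds to seconds
x = gaze["gaze x [px]"].to_numpy()
y = gaze["gaze y [px]"].to_numpy()

dt = np.diff(t)
speed = np.hypot(np.diff(x), np.diff(y)) / dt  # px/s, one value per sample pair
accel = np.diff(speed) / dt[1:]                # px/s^2

print(f"median speed: {np.median(speed):.1f} px/s")
```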
Thanks for your answer! In fact, I am working on eye movement detection, so I might indeed need the relative velocity. I assume this is also what you base fixation detection on ? I plan to use this to determine microsaccades and smooth pursuits !!
Hi @user-2b5d07 , I'm stepping in briefly for my colleague, @user-d407c1 .
The fixation detector is detailed in the associated whitepaper. You might also be interested in the related published article on fixation detection strategies.
The open-source implementation is available in pl-rec-export.
Hi. I have a question about the exported output data for the visualizations in the pupil lab cloud. I read in the documentation for AOI that I can download the aoi_fixations.csv and aoi_metric.csv.
I can't find those in the download tab in the cloud. I'm only able to download the .png of the AOI heatmap, which isn't what I'm looking for.
Let me know if I'm overlooking something!
Hi @user-83d076, when you have the respective Project open and are in the Downloads tab, you want to download the green elements with the corresponding Enrichment type & name. In there, you will find aoi_fixations.csv and aoi_metrics.csv. More details about those files can be found in the Documentation.
Thank you for answering! I was looking in the wrong place this whole time. Another question regarding features: is there a feature that lists how often they leave a region of interest for another? I'm doing research on a laptop screen and I want to know if the user bounces around one area or mostly stays in place.
Hi @user-83d076 , you might be interested in either of these tools from my colleagues:
Otherwise, if you'd like to see this as part of a Pupil Cloud pipeline, then you can suggest it in the features-requests channel.
Hello everybody! I have a question about statistical analysis of data - which software can I use for this? I heard about "Neon Player" but I didn't find instruments for data analysis there... Maybe I missed something. I would really appreciate any help.
Hi @user-0c254f , the different analysis plugins of Neon Player are listed in the menu bar on the left here. If you are looking for something like a statistical report, then you can take the exported CSV files from Neon Player and process them in an analysis environment of your choice, such as Python or R, or tools like SPSS or JASP, or even Excel. We enjoy Python, but there is no specific recommendation that Python must or should be used. Ultimately, whatever is easiest for you.
To clarify, we don't provide outputs like reports in order to provide users with flexibility and to always be open to new developments in the field. Having said that, you might be interested in one of our Support Packages.
Hi, I have a question about the surface coordinates for the Marker Mapper. How are the positions normalised? By what formula? I've been trying for some time to map the fixations to the reference image (after Marker Mapper enrichment) in MATLAB. Nothing works; there is always an offset of between 50 and 200 pixels (after reprojection of the data in pixels) between the mapped data and the reference image. Thanks,
Hi @user-f76dbc , may I ask some clarifying questions to be sure I can help you optimally:
If you'd like, you can invite us to the Workspace as Collaborators [email removed] and we can take a look and provide more direct feedback. Or, you can share a screenshot or video with us at [email removed] referencing this conversation.
Hi, If this is in the wrong spot please let me know where I should be posting. I am looking at the Anonymization Add-on for Pupil Cloud. Can I confirm that this does not modify the source file on the companion device?
Hi @user-f7408d! Welcome. This is the right place to ask questions. You're correct - this does not modify the source file on the Companion Device.
Hi, I tried to run Neon Player from source code. Then, I dragged the folder of a Neon recording onto the program, and I got this:
AttributeError: module 'av' has no attribute 'AVError'
INFO player - launchables.player: Process shutting down.
I followed the instructions on GitHub to download the source code and install all libraries into a virtual environment.
Hi @user-2618c1! Is there a reason you're trying to run Neon Player from source? I'd highly recommend trying the pre-compiled bundle in the first instance - you can download it from the releases section of the repo, or simply click the download link from here: https://docs.pupil-labs.com/neon/neon-player/#neon-player
Hi all! I am recording data with Neon glasses. In my exports from neon player, I find the two following files: gaze_positions.csv (in exports folder) and gaze.csv (in offline_data folder). What is the difference between them when it comes to the gaze x and gaze y values? And which one should I be using?
Hi @user-42cb18! Thanks for your question.
Please use gaze_positions.csv, which is found in the export folder. This data will respect both the slider values in the Player timeline (if you want to crop the data export at given times) and the post-hoc gaze offset correction that's possible in Neon Player.
There are additional data, such as spherical gaze coordinates, currently found in the offline data folder - we will aim to update Neon Player next week such that the full range of data are available in the export directory!
hi! I wanted to ask whether there is a way to manually upload a recording to pupil cloud from a hard drive or backup location. I would prefer to avoid automatic upload from the companion device.
Hi @user-13d297! We only support recording uploads from the Companion Device to ensure data integrity. May I ask why you'd prefer not to use auto-uploads?
Hi! I use Neon in a driving simulator. I would like to send Pupil Core-style UDP messages to the Neon smartphone, e.g. "BlinkerRechtsAktiv", according to the actions of the driver. I can configure clients/servers with UDP/HTTP and send messages to a specific IP:port, but I cannot use the Python Real-Time API in this simulator. Is it possible to add events via basic UDP/HTTP messages, using only the IP:port address of the Neon?
Hi @user-3bdb49 , while Neon does not understand messages for Pupil Core, it does actually use UDP and HTTP, and no need to use the Python package if that is not feasible in your case. The Real-time API is actually programming language agnostic.
In your case, you will want to reference the Under the Hood documentation, in particular this section on sending Events to Neon.
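As a concrete illustration of what the Under the Hood docs describe: an event can be sent with a single HTTP POST, which most simulator environments can issue natively. A minimal sketch in Python, assuming the /api/event endpoint on port 8080 from those docs and a placeholder IP; the same POST can be sent from any HTTP-capable client:

```python
# Sketch: annotate a running Neon recording with an event via plain HTTP POST.
import json
import urllib.request

DEVICE_IP = "192.168.1.42"  # placeholder - the Companion phone's IP

payload = json.dumps({"name": "BlinkerRechtsAktiv"}).encode()
req = urllib.request.Request(
    f"http://{DEVICE_IP}:8080/api/event",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req, timeout=5) as resp:
    # Expect HTTP 200 on success; if no timestamp field is supplied, the
    # device should timestamp the event on arrival.
    print(resp.status)
```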
Context: I have a Neon (is this thing on) and I use the Neon Player (desktop) to process the recording.
Currently, to process the recording, on an i7 machine with about 64GB of RAM (there is a GPU but the software doesn't use it), it takes about 45 minutes to an hour per person. That's roughly about 25 hours of processing time, if one were to do it back to back.
My questions:
Hi @user-d6a352 , batch processing is a default feature of Pupil Cloud, which also offers helpful data logistics. This helps save group effort, for example.
When using Neon Player, are you mainly exporting the raw data to CSV files or are you also using a specific plugin or Event marking protocol?
Hi all, I'm sometimes running into this problem even when the computer is producing a hotspot and the phone is connected to it and the app is open. Is there a troubleshooting step for this?
Hi @user-24f54b , have you already tried the Troubleshooting steps here ?
You could also try clearing the cache of the app. This will not delete your recordings: App info → Storage & cache → Clear cache.
Could you also test with the hotspot of your personal cellphone or with a router? Just to be sure that the hotspot in the computer is not causing the connection issue.
If you again reach a white screen in Neon Monitor, then you can also try this:
Also, is it possible to bypass the phone altogether? Using only the API maybe?
No, it is not possible to bypass the phone altogether. NeonNet runs in the Neon Companion app and the API is also hosted by the app.
Hi, I wanted to confirm if the calculation shown below is correct for determining angular velocity, where Δθ represents the azimuth change and Δφ represents the elevation change. Thanks!
Hi @user-2b5d07! This looks correct. Check out this notebook for a Python implementation: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb
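For reference, the same small-angle approximation in code form; the azimuth/elevation column names are assumed from the Cloud export, and the simple Euclidean combination of Δθ and Δφ is an approximation that works well for small inter-sample angles:

```python
# Sketch: approximate angular gaze velocity (deg/s) from azimuth/elevation.
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")  # placeholder path

t = gaze["timestamp [ns]"].to_numpy() * 1e-9
d_theta = np.diff(gaze["azimuth [deg]"].to_numpy())    # azimuth change per sample
d_phi = np.diff(gaze["elevation [deg]"].to_numpy())    # elevation change per sample

omega = np.sqrt(d_theta**2 + d_phi**2) / np.diff(t)    # deg/s
print(f"peak angular velocity: {omega.max():.1f} deg/s")
```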
Is it possible to prepare a visualization for only some of the recordings in a particular project, rather than all of them? Otherwise, you have to download unnecessary files.
Hi @user-337880! Which visualisation are you referring to? Is it the video renderer?
Yes, video renderer.
Yes, you can define which recordings should generate the visualization through the enrichment sections.
Essentially, it uses events to determine which recordings to export. By default, it relies on recording.begin and recording.end, which are included in all recordings. However, you can adjust this in the temporal selection to use different events if needed.
Thank you for the clarification. So, in the individual recording, I should define a new event (start and finish) and then clarify which event from which recording I'm choosing to create a visualization?
Yes, kind of. Let's assume you want the entire recording length for 5 out of 15 recordings. For the recordings you're interested in, you can create a new start.event at the same point as recording.begin, or at any other point throughout the recording. In this specific case, there's no need to create a corresponding finish event; just set the start event to be start.event. This way, only those 5 recordings matching the start.event will be included.
Thank you very much. Worked perfectly.
Hi everyone,
I have a hardware-related question regarding the Neon glasses from Pupil Labs. Can participants wear their own glasses underneath the Neon glasses? We have a set of corrective lenses ranging from -6 to +6. Would not using these corrective lenses and allowing participants to use their own glasses affect the data?
Additionally, if I don't need glasses, is it necessary to use the 0-correction lenses, or can the Neon glasses be used without any lenses?
Thanks in advance!
Hi @user-6e3b6d. It's sometimes possible to wear the Neon frames first and then place third-party glasses over the top, or vice versa, while still achieving good-quality recordings. This largely depends on the form factor of the glasses and whether they block Neon's eye or scene cameras. However, we typically don't recommend this approach as it's not usually ideal. For those who require vision correction, the prescription lens kit is the recommended solution. Of note, Neon works perfectly fine with contact lenses, as they do not affect Neon's measurements! No, it's not necessary to use plano lenses (0-optical correction) or any lenses with Neon, so you can simply leave them out if you prefer.
Hello, I want to plot the gaze movement curves over time and separately for each eye to observe coordination. However, the gaze coordinates are in pixels, while the coordinates of the eyeball center are in millimeters. Could you explain how to convert millimeters to pixels? Thank you!
Hello, @user-2b5d07! Reflecting back on what you've described, it seems you're considering how to map from each eyeball centre (measured in millimetres: 3D scene camera space) to Neon's 2D gaze coordinates (measured in pixels: 2D camera space), with the goal of calculating each eye's movements separately. To achieve this, you're thinking about converting the eyeball centres from millimetres to pixels. Is that correct?
Yes, that's exactly what I want to do
Alright, thanks for confirming. Technically, this is possible, but I'm not sure that it makes conceptual sense. Just so I understand your requirements, can you elaborate on your overall goal - instead of describing what you want to compute, can you describe your overall research question?
I was able to get it to work once using a computer connected to the password-protected version of our university's wifi to produce a hotspot which I then connected the phone to. This took a long time and many attempts to plug the neon back into the phone. Were you saying that a dedicated router not connected to the internet would be more consistent than this?
Hi, @user-24f54b!
I'll step in for Rob here.
It's quite common for institutional wifi networks to have firewalls that block the connection. For that reason, we do recommend using a dedicated, standalone, router.
It's also possible to set up a local hotspot from a computer or even the Companion Device.
I think it would be worth trying with your Companion Device now:
1. Set up the hotspot and connect your computer to it
2. Connect Neon to the Companion Device and click the streaming icon in the top right of the app's home screen
3. Instead of typing neon.local:8080 in your computer browser, type the full IP address
Does that solve it?
Do you mean setting up the hotspot from the phone that has the companion app or another one?
Set up the hotspot from the Companion Device
Thank you. That works more often. It's still a bit inconsistent but if I tinker with disconnecting and reconnecting the neon and the wifi on the phone it works.
What should I tell the IT department at the hospital we are working with? To arrange a router but not connect it to the broader network?
If you think the hotspot is working sufficiently for your testing, then it would be fine to continue using it in my view. With that said, if your participants are going to be moving around in a large area, it might be preferable to use a more powerful standalone router to ensure a robust connection. We have had a lot of success with the Archer BE550 model!
They aren't going to move around very much but I'd still like to go with whatever is the most reliable option. If you think a powerful standalone router is the best option I will ask the hospital's IT department for that.
I'm working on eye movement detection, focusing on smooth pursuit movements. To analyze binocular coordination and detect subtle discrepancies, I need to plot gaze features and monocular coordinates together. Since these data are on different scales (mm vs. pixels), I need to convert millimeters to pixels for accurate visualization and comparison.
Hi @user-2b5d07! Quickly stepping in here for my colleague @nmt. I believe there might be a small misunderstanding about how gaze is estimated.
By default, the gaze point data is calculated using both eye images, meaning the data is derived from both eyes.
To analyze binocular coordination more accurately, we recommend using the 3D Eye State measurements. These measurements provide independent optical axes for the left and right eyes, originating at their respective eyeball centers and passing through the center of the pupils. The default outputs are vectors in Cartesian coordinates, which you can work with directly or convert into spherical coordinates if needed.
Regarding your described approach: transforming eyeball centers from millimeters to pixels and mapping vectors to the 2D gaze point in image space does not make much sense. One of the main reasons is that this method doesn't yield technically independent eye movements; since both vectors would be tied to the same 2D gaze point, they would essentially move in similar coordination.
Instead, the 3D Eye State measurements provide a more physiologically accurate representation of eye poses and better reflect the independent coordination of eye movements. Let me know if youβd like further clarification!
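If you want spherical angles per eye, a rough sketch of the Cartesian-to-spherical conversion is below. The column names (optical axis left x, etc.) follow the 3d_eye_states.csv export, and the axis/sign conventions used here are assumptions you should double-check against the coordinate-system documentation:

```python
# Sketch: convert per-eye optical-axis vectors (Cartesian) into azimuth and
# elevation angles so each eye's rotation can be plotted separately over time.
import numpy as np
import pandas as pd

eye_states = pd.read_csv("3d_eye_states.csv")  # placeholder path


def to_spherical(x, y, z):
    """Assumed convention: x right, y down, z forward; angles in degrees."""
    azimuth = np.degrees(np.arctan2(x, z))
    elevation = np.degrees(np.arctan2(-y, np.hypot(x, z)))
    return azimuth, elevation


az_left, el_left = to_spherical(
    eye_states["optical axis left x"],
    eye_states["optical axis left y"],
    eye_states["optical axis left z"],
)
az_right, el_right = to_spherical(
    eye_states["optical axis right x"],
    eye_states["optical axis right y"],
    eye_states["optical axis right z"],
)
```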
Thank you for your response! I would like to be able to back up the eye tracking files on a hard drive for later processing. I tried this but was unable to upload the data to pupil cloud
Yes that's expected. Recordings can be uploaded to Pupil Cloud from the Companion Device. You can also download recordings from Pupil Cloud to store in an offline context. But they can't then be re-uploaded. I hope this helps!
Hi all, what are some troubleshooting methods if the API is not discovering the Neon even though the monitoring app is working. E.g. the following set of commands in matlab's python integration aren't working:
```matlab
% Import the necessary Python modules
simple = py.importlib.import_module('pupil_labs.realtime_api.simple');
time = py.importlib.import_module('time');

% Start eye tracker recording
device = simple.discover_one_device();
recording_id = device.recording_start();
disp(['Started eye tracker recording with id ', char(recording_id)]);
```
Hi @user-24f54b, while putting together our pl-neon-matlab integration, it was found that the py.importlib method is not generally applicable to all packages.
I recommend checking out the existing integration, which has been tested on a variety of systems and MATLAB + Python versions. It should save you significant effort in getting set up. You might find the examples and this tutorial helpful.
If you continue to experience issues or have any questions about it, feel free to ask!
I see, thank you!
Hi, all! In some of my recordings, when I replay the video on Pupil Cloud, I see that gaze data isn't tracked from the beginning (gaze data only appears around the 15th second of my recording). Is it a setting that I have to fix, or could the conditions of my recording environment be responsible for gaze not being tracked right away?
Hi @user-42cb18! In this case, can you please first ensure your Companion app is up-to-date in Google Play Store. Secondly, can you try clearing the Companion app cache, as per this message: https://discord.com/channels/285728493612957698/1047111711230009405/1329938288286502973 Does that solve the situation for new recordings?
Thanks, Neil! It worked.
Dear support, is the neon hardware besides the smartphone explosion-proof? Thanks
Hi @user-486277! Could you clarify what you mean by "explosion-proof"? Are you looking for standards focused on equipment designed to operate safely in potentially explosive atmospheres, such as areas with flammable gases or dust?
While Neon does not have any specific certification for that, note that it adheres to a variety of standards to ensure safety compliance, see below:
Additionally, kindly note that the module and frame do not contain any battery.
I meant the certification for areas with potentially explosive atmospheres. I needed to know in which areas we can measure. Thanks for the fast answer.
Hi, we are having an issue with our recording. We are recording two 30-minute sessions. While everything works perfectly fine for some recordings, others (and we could not find an error on our end) do not have any events in the recordings. This is most often but not always the first out of the two 30-minute recordings. Specifically, we do not get the events 'recording.start' or 'recording.end'. Additionally, we are sending synchronization triggers via LSL and also do not get any of these.
Hi @user-80c70d, do you mean those events are not appearing in the Neon recording data or in the LSL output? For example, recording.start and recording.end are missing on Pupil Cloud?
Hi, yes exactly, these events are sometimes missing on Pupil Cloud. Other times they appear as expected
Could you open a ticket in the troubleshooting channel? We can continue the discussion there. Thanks!
Hi all, if I were to use a USB C cable extender to allow me to keep the companion phone further away from the bare metal neon during experiments, do you think that would cause slowing or degradation of the signal?
Hi @user-24f54b! While you could try a cable extender, it is prone to signal degradation as you mentioned, so we can't recommend it. Could you share more details about your study? We might be able to suggest alternative solutions, such as using the Real-time API to control the device.
Our lab is aiming to have multiple accounts for the Neon eye tracker, because some of the data will be collected just during public engagement events and thus is not worth keeping, while other data will come from our actual data collection. Is it okay to have multiple Gmail accounts associated with us? Also, how much storage does the cloud have?
Hi @user-13f7bc ! There is a 2-hour quota on Cloud unless you have the Unlimited Storage Add-on or are part of the Early Adopters Club. You can find more details in this announcement: https://discord.com/channels/285728493612957698/733230031228370956/1292784609938898954.
You definitely can have multiple accounts, but I'd recommend using an owner account to set up workspaces and add add-ons. Then it is up to you whether to use one or multiple accounts, although for this purpose, instead of using multiple accounts, you can create different workspaces for various projects/needs and later invite collaborators to them as needed. This minimizes the need to switch accounts on the phone/app, which can be more cumbersome.
@user-d407c1 Thank you, this is very helpful
how can we export the video directly from the app with the cross-hair as displayed in the app?
Scene video with gaze overlay is not directly exportable from the Companion App. However, you can add gaze overlay either in Pupil Cloud using the video renderer to generate the export or in Neon Player using the world video exporter for local export.
Let me know if you need further details!
Thank you
We are using the API to control the eye tracker but I'd like there to be a lot of slack in the cable because we're working with infants and the cable needs to go around their heads a bit. I'd like to have more slack in the line so that there's less of a chance of causing discomfort to the infant or pulling the phone off the table if they move in a way we didn't expect.
Did you consider placing the Companion Device (phone) in a pouch or backpack after it is locked? To me, that seems like the most comfortable way to carry it. Using a longer cable could not only degrade the signal but also increase the risk of tangling.
What is the link to download android file transfer? The link you shared doesn't include the "android file transfer", unless I am being very daft
Hi @user-13f7bc! That should work; alternatively, you can also try OpenMTP to transfer from your Android device to a Mac.
I found this one: https://android.p2hp.com/filetransfer/index.html Is this it?
Hello team! I have an experiment with some events defined, and I can see them correctly in the timeline of the recordings on the Cloud. However, after export, when I want to access them from the "events" csv file, it appears that all the events have the same timestamp of 1730000000000000000 (cf screenshot). So I lose all the time components, which makes the events impossible to use. Do you have any idea how to fix that? Thanks!
Hi! Do you have recommendations on how to cut gaze and world video data in batch processing? I am using the pl-rec-export tool to extract the data and I want to cut the gaze and video file while keeping them synced.
@user-edb34b That sounds like an Excel issue when importing .csv files.
Could you try opening the original files with a text editor (e.g., Notepad) to check if they look correct? If the data appears fine, you might want to review how to properly import CSV files into Excel to avoid formatting issues.
This guide might help: Prevent Scientific Notation on Import
Thanks for your reply!
Hi @user-d5a41b! Just to clarify: are you looking to generate a video with a gaze overlay for specific sections in batches? If so, using enrichment sections and the video renderer would be the easiest.
Let me know if I understood correctly!
I'm trying to generate a csv file with gaze coordinates and the corresponding world video file based on certain onset and offset times, to use for further analysis. Unfortunately our data is not on Pupil Cloud. I have used this logic: https://gist.github.com/mikelgg93/8ea0ef4bba62a2fa9c0a924cf019ec18 to assign gaze data to frame numbers and pts. I have attempted to cut the video and the gaze data using the onset and offset times, and I also tried using frame numbers (both with ffmpeg), but looking at the results I don't feel confident that this produces accurate results. I either end up with a mismatch of frame numbers in the gaze data and video data or a mismatch in length (based off pts and video length).
Dear tech, I'm using a hub to achieve PC-Neon communication over Ethernet. We would like to use the Real-time API for interacting with Neon at low latency in this mode. Now we are faced with the problem of which IP address and port of the Ethernet connection to specify in the Real-time API, as the default IP address and port shown on the Companion Device seem to be for a WiFi connection, not Ethernet
Would you be able to advise on the means to specify the IP and port under this setting?
Hi @user-a55486 , may I ask what OS you are on? The instructions for a direct connection vary with OS.
Also, have you considered connecting the computer and Neon via Ethernet with an intermediary router? Setup and configuration are significantly easier that way.
Hi Rob, we are using Windows 11
Ok, and just to confirm, you want a direct Ethernet connection? In other words, using an intermediary router is out of the question?
Correct, a router is not possible in the current setup
Ok, then please reference this document for Windows 11 and this Discord message (https://discord.com/channels/285728493612957698/1047111711230009405/1272483345137139732). Note that these are not official instructions, but have worked consistently in several tests.
@user-d5a41b Have you seen the pl-neon-rec library? I think it is better suited for this task.
It includes examples on how to use gaze and scene video from raw recordings, which should be helpful for your workflow.
Let me know if you need any further guidance!
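If you prefer to keep working with the exported files rather than the raw recording, one batch-friendly alternative is to cut both streams by timestamp rather than by frame index: slice the gaze CSV between your onset/offset timestamps and cut the video with ffmpeg using the corresponding offsets relative to the first video frame. A rough sketch, assuming a gaze CSV with a "timestamp [ns]" column and that you know the timestamp of the first scene-video frame (small frame-boundary differences at the cut points are expected, since video can only be cut on frames):

```python
# Sketch: cut gaze data and scene video over the same time window.
# Assumptions: the gaze CSV has a "timestamp [ns]" column and video_start_ns
# is the timestamp of the first scene-video frame (e.g. from the exported
# world timestamps).
import subprocess

import pandas as pd

onset_ns = 1_730_000_010_000_000_000        # placeholder section start
offset_ns = 1_730_000_040_000_000_000       # placeholder section end
video_start_ns = 1_730_000_000_000_000_000  # placeholder first-frame timestamp

# 1. Keep only the gaze samples inside the window.
gaze = pd.read_csv("gaze.csv")
mask = (gaze["timestamp [ns]"] >= onset_ns) & (gaze["timestamp [ns]"] <= offset_ns)
gaze[mask].to_csv("gaze_cut.csv", index=False)

# 2. Cut the video over the same window, expressed in seconds from video start.
#    Re-encoding (no "-c copy") keeps the cut frame-accurate.
start_s = (onset_ns - video_start_ns) / 1e9
end_s = (offset_ns - video_start_ns) / 1e9
subprocess.run(
    ["ffmpeg", "-y", "-i", "world.mp4",
     "-ss", f"{start_s:.3f}", "-to", f"{end_s:.3f}", "world_cut.mp4"],
    check=True,
)
```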
We have data from Pupil Invisible and Neon and I have to use the same pipeline for all the data. We used dyadic eye tracking and we are working on eye contact.
Thank you so much! We were able to follow everything until the network device sharing part. We have 2 network devices, one ethernet one WIFI, we followed the instructions to open the property of the WIFI device but found no Sharing tab:
This might require assistance from Microsoft or Intel then. The Sharing properties of network devices potentially vary between manufacturers and models.
Otherwise, connecting Neon and the PC over Ethernet via a small router is known to work and can make the process significantly easier. The router does not need WiFi capability, nor does it need to be connected to the Internet.
I have collected data using Pupil Neon - I want to analyse continuous pupil size throughout the duration of the recording - how can I do this? (and is it possible within Neon player?)
Hi @user-f389a1! As in, visualize pupil size in real time or post-hoc next to the video, or simply get the values?
We're trying to track the pupil response to light. It would be great to just be able to use the built in scene camera, or perhaps the eye cameras, to make reasonable assumptions on the luminance that the eyes are exposed to.
We have however observed that the scene camera seems to have a dynamic gain, which very clearly changes as the light conditions are changed. This of course makes sense when trying to record the best possible video, but is less than ideal when trying to measure light intensity.
Would it be possible to somehow access raw camera values, or something close to that? Or perhaps log the gain settings for each frame, so we may be able to infer the luminosity based on the gain and the measured pixel values?
Hi, @user-b43477 , interesting use!
For context, raw values aren't read out by the app - the images are saved as .mjpeg so aren't available in their raw format. But with that said it should be possible to turn off auto gain and exposure by using manual exposure in the Companion app settings. Do you think that would suffice? Note that auto white balance is in operation on the camera.
Thanks Rob for the quick reply! I see, in this case we will talk to the facilities team and see if we can fit a router in for this purpose. Would you mind sending relevant info for a router so that they may try out that solution too? Thanks
Ah, a recommendation for a router I do not have, but you should have noticeably less difficulty with the router approach. An inexpensive router should probably be fine, for example.
@user-a55486 to be clear, if that is a computer where you could swap in a different, compatible network card, then that is also an option.
@user-a55486 my colleague, @nmt , has pointed out that we have had success with the Archer BE550 at least.
Thanks for the recommendation!
I think it could be worth a shot! I've been wondering whether it could be possible to log the gain settings for each frame. If it were, we might be able to calculate backwards to figure out the light intensity. It doesn't have to be perfect, but the current method is basically unworkable, as the dynamic gain makes two frames impossible to compare.
Hi @user-b43477 , we received your email and will continue communication there!
Hi @user-d407c1 - I would just like the continuous values of pupil size throughout the recording!
Regardless of whether you want to access the data in real time (as the recording is happening) or post-hoc (after it has been recorded), you can do so.
To access pupil size values for each eye:
- Post-hoc: Use the 3d_eye_states.csv file.
- Real-time: Retrieve the data through the Real-time API.
Since you asked about Neon Player, you can also enable the Eye State Plugin to visualize pupil size throughout the recording.
Let me know if you need any further details!
Hi @user-d407c1 thank you for your help!! Where can I find the 3d_eye_states.csv file from my recordings?
If you're using Cloud, you should find it when you download the Timeseries format.
If, on the other hand, you exported the native format from your device to your laptop, you will find it after exporting the data from Neon Player.
Keep in mind that Neon Player does not reprocess recordings. So, if Eye States were not enabled in the Companion App settings at the time of recording, that data stream will not be available.
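Once you have that file, reading and plotting the continuous pupil size only takes a few lines. A minimal sketch; the column names (pupil diameter left [mm], pupil diameter right [mm]) are assumed from the Timeseries export, so adjust if your export differs:

```python
# Sketch: plot continuous pupil diameter over a recording from 3d_eye_states.csv.
import matplotlib.pyplot as plt
import pandas as pd

eye_states = pd.read_csv("3d_eye_states.csv")  # placeholder path
t = (eye_states["timestamp [ns]"] - eye_states["timestamp [ns]"].iloc[0]) / 1e9

plt.plot(t, eye_states["pupil diameter left [mm]"], label="left eye")
plt.plot(t, eye_states["pupil diameter right [mm]"], label="right eye")
plt.xlabel("time [s]")
plt.ylabel("pupil diameter [mm]")
plt.legend()
plt.show()
```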
Hi, we got a router and are testing it in the lab space now. We connected the Ethernet from the hub to the router (TP-Link TL-SG1005D) and have the other side connected to the PC. Now we still can't discover the device over the Real-time API. Any suggestions or extra operations needed on the PC to configure the connection? Thanks!
Hi @user-a55486 , do the Connection Troubleshooting steps here help to resolve the issue? Some routers might block mDNS traffic by default.
I see, but may I ask what would be the expected behavior? Should the companion device give another set of IP address and port? Because right now it still gives the same IP address as if we were connecting through WIFI, although we have turned off WIFI and data connection
Ah, then, you might need to restart the Neon Companion app. The app requests an IP address whenever it starts.
You can do so by "force stopping" the app and then starting it anew:
App info
Force stop
Just to clarify, the IP address can be found in the Stream section of the app (the button in the top right, under the Settings icon)
If this does not resolve it, then I recommend trying the steps in the Connection Troubleshooting section of the Documentation.
After restart it shows waiting for mDNS, no streaming info available
Then, this could indicate that the router is currently blocking mDNS traffic. Do the steps here help? Note also that port 5353 should be open, especially for UDP communication.
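If mDNS remains blocked, you can also skip discovery entirely and connect to the phone by its IP address. A short sketch with the Python simple API; the IP below is a placeholder for whatever the app's stream section shows:

```python
# Sketch: bypass mDNS discovery by connecting to the Companion Device directly.
from pupil_labs.realtime_api.simple import Device

device = Device(address="192.168.137.2", port="8080")  # placeholder IP
print(device.phone_name)  # quick check that the connection works
device.close()
```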
Hello, I am having issues transferring my recordings to PupilLabs Cloud from the Companion device. I have 8 recordings and only one of them is attempting to upload to the Cloud (it is at 0%). The others are still waiting to start transferring as well. Is there a reason this is taking so long? I have not had this issue previously, most of my recordings uploaded to the Cloud in a matter of minutes.
Hello, @user-e13d09! Can you please double-check the Companion device has internet connectivity, and then try logging out and back into the Companion app. That should trigger the upload. Let me know if it works!
Hi all, is this the router you recommend for using the monitoring app? https://www.amazon.com/TP-Link-Tri-Band-Archer-BE550-HomeShield/dp/B0CJSNSVMR?tag=googhydr-20&source=dsa&hvcampaign=electronics&gclid=Cj0KCQiAhvK8BhDfARIsABsPy4ikQPcTomv-QmFLZHyHKLNpcLNX1X_35jdwZdx4LRca4ukLGywGI2caAgvFEALw_wcB&th=1
This is quite a powerful router, we've had a lot of success using it. It provides decent range and performs well in wifi-busy environments in our experience.