πŸ‘“ neon


user-1423fd 01 April, 2024, 17:09:28

Hi, this morning (around 11am eastern standard time) I was testing out my Neons and was prompted to perform this firmware update. Well it has been about two hours since this update started and I have yet to see any progress made on the update. Any advice would be greatly appreciated! Thanks!

Chat image

user-480f4c 02 April, 2024, 06:29:22

Hi @user-1423fd - Could you open a ticket in the πŸ›Ÿ troubleshooting channel? Then, we can assist you with debugging steps in a private chat.

user-9bbaa9 02 April, 2024, 12:43:34

Hey there! We currently have an issue with the frame fit of the Neon. Participants reported that the frame pressure on their head is causing mild pain. At the same time, our rate of simulator sickness (driving simulator) rose rapidly. The frame is obviously 3D printed and there are several public 3D models: - https://github.com/pupil-labs/neon-geometry - https://www.shapeways.com/shops/pupil_store

I have access to a 3D printer and would like to replace the current frame with a minimalistic style of frame like the Is This Thing On?, and maybe try adjusting the pressure of the frame. But I can't find any suitable 3D models to start with. Is there a community or even official repo you can point me to? Thx!

user-d407c1 02 April, 2024, 13:01:55

Hi @user-9bbaa9 ! Thanks for the feedback, we would like to hear more, and see how we can improve our designs.

That said, I am curious to know how the frame is putting pressure on them; are they also wearing a helmet? Which frame are you currently using? Is it the Ready Set Go?

Additionally, how do you link the rise in simulator sickness to the use of eye tracking? The only thing I could link would be if you were using the I Can See Clearly Now frame and the prescription or the lenses' optical centres did not match their PD.

Only the Just Act Natural frame and the geometries of the Bare Metal and module are open source; you can find them in the links that you already shared.

PS. If you do not want to disclose details about the study, we will be happy to continue the conversation by email or DM.

user-9bbaa9 02 April, 2024, 13:08:24

Hey there!

user-0001be 02 April, 2024, 19:55:20

Hi @user-d407c1 & @user-f43a29 !! I downloaded plimu from github. I have Python 3.10 installed on my PC. It states it needs conda. What is that for and can you walk me through how to integrate it with Python 3.10 with this plimu? Thank you! πŸ™‚

user-d407c1 02 April, 2024, 20:25:04

Hi @user-0001be ! No need for conda; as far as I remember, it can be installed normally as a Python package.

user-f43a29 02 April, 2024, 20:26:55

Hi @user-0001be πŸ‘‹ ! Conda, aka Anaconda is a way to install Python and many other programs/packages. It is not a strict requirement of plimu, but it should make running the plimu utility a bit easier here, as you should not have to search for your system's console scripts path. You can install a Conda supplied version of Python in parallel with the Python that you already have installed, so that you do not have to take the time to remove the Python that you already have. After Conda is installed, you can open a terminal and follow these steps to create a new environment with Python 3.10. You should only need the steps in "Environment Basics" in this case. Then, you can continue with the rest of the steps in the plimu instructions.

user-0001be 02 April, 2024, 20:30:39

@user-f43a29 @user-d407c1 okay! Let me explore them! Thank you both!

To run it, I just type this in the Command Prompt? plimu_viz --address [IP OF COMPANION PHONE] --port [PORT TO USE] --show_stars [BOOLEAN]

user-f43a29 02 April, 2024, 20:35:24

When running command line programs, you want to replace the bracketed parts in an example with the required info. You can just provide the IP address. The other options are set to sane defaults. So, something like:

plimu_viz --address 192.168.1.36

will do it. Simply replace my address with the IP address that is shown in the Stream section of your Companion app.

user-0001be 02 April, 2024, 20:40:53

Okay. I'm getting this error. Is there a location where I am supposed to put the extracted plimu zip file? Like, to install the file. Trying the simple way without conda now.

C:\Users\Rachel>plimu_viz --address 172.26.218.132
'plimu_viz' is not recognized as an internal or external command, operable program or batch file.

user-f43a29 02 April, 2024, 20:42:20

Are you running that within the activated Conda environment? If not, then you first need to find your system's console scripts path with this command:

python -c 'import sysconfig; print(sysconfig.get_path("scripts"))'
user-f43a29 02 April, 2024, 20:42:27

In my case, that returns:

user-f43a29 02 April, 2024, 20:43:05
/Users/robertennis/.pyenv/versions/3.10.13/bin

So, the full command in my case is:

/Users/robertennis/.pyenv/versions/3.10.13/bin/plimu_viz --address 192.168.1.36
user-f43a29 02 April, 2024, 20:44:41

Conda should handle this step for you and should generally make things easier.

user-0001be 02 April, 2024, 20:47:07

Hmm I'm not getting the path for python, said syntax error

user-0001be 02 April, 2024, 20:48:45

C:\Users\Rachel\AppData\Local\Microsoft\WindowsApps\python.exe

C:\Users\Rachel>python --version
Python 3.10.11

C:\Users\Rachel>python -c 'import sysconfig; print(sysconfig.get_path("scripts"))'
  File "<string>", line 1
    'import
    ^
SyntaxError: unterminated string literal (detected at line 1)

user-0001be 02 April, 2024, 20:49:39

I'll get Anaconda then. Just to make sure, so it won't mess up / confuse what I currently have with the existing pip that I have?

user-f43a29 02 April, 2024, 20:50:43

Yes, there might be an issue with doing a direct copy-paste from Discord chat. Conda should do the trick. Conda will manage its own Python install and its own set of Python utilities completely independent of your current Python. They will not interfere with each other.
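For reference, the Command Prompt on Windows does not treat single quotes as string delimiters, which is what produces that SyntaxError. A double-quoted variant of the same command should work:

python -c "import sysconfig; print(sysconfig.get_path('scripts'))"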

user-0001be 02 April, 2024, 20:53:06

Okay thanks @user-f43a29 ! πŸ™‚ Let me download that now

user-f43a29 02 April, 2024, 20:53:48

Ok, no problem! I will be signing off soon, but one of us will be sure to get back to you if you have more questions. Wishing all the best

user-0001be 02 April, 2024, 21:30:58

I have installed Anaconda3 in C:\Anaconda3. Then I went to Environments and created a new Environment called PupilLabIMU (is this correct?) - red box. Does this mean the Conda environment is activated?

Then I went to Command Prompt to input pip install -e, it gives the error option requires 1 argument

Chat image

user-f43a29 02 April, 2024, 21:57:06

That means the environment has been created and activated, but you need to open a terminal that is aware of that. If you click that green arrow, you should see a menu with "Open Terminal" as an option. Once you have the terminal open, you need to run "pip install -e ." with the dot at the end, while you are in the plimu directory that you got from the GitHub repo. You will need to use the "cd" command to change to the directory with the plimu code. You also want to be sure that you chose Python 3.10 when you created the environment.

user-660f48 03 April, 2024, 01:55:01

Hi. I am having problems with one of our neon glasses. We cannot get it to work. The app asks for a fdga update, but once we do this update nothing happens. When we reopen the app, the app once again asks for the update. Can you please help?

nmt 03 April, 2024, 04:03:18

Hey @user-660f48! Please follow these steps: 1. Disconnect Neon from the phone 2. Long-press on the Neon Companion App icon, select 'App info' and then, 'Force stop' 3. Restart the Companion App 4. Reconnect Neon to the phone This should trigger the update to run properly. Let me know if that works!

user-359513 03 April, 2024, 05:22:54

Hi. I am having problems with creating enrichments. As I cannot create enrichments right now for whatever reason, is there a way to export just the data for the number of fixations and saccades? Look forward to hearing back from you.

user-480f4c 03 April, 2024, 05:35:19

Hi @user-359513 - Can you please clarify why you can't create enrichments? Did you get an error after creating/running one? In that case, could you open a ticket in the πŸ›Ÿ troubleshooting channel? Then, we can assist you with debugging steps in a private chat.

Fixations and saccades can be downloaded with the raw Timeseries Data + Scene Video. Simply right-click on the recording of interest and select this option for download.

user-359513 03 April, 2024, 07:28:39

After 20 or 30 minutes, an error message appears.

user-d407c1 03 April, 2024, 07:29:53

Hi @user-359513 πŸ‘‹ ! Thanks for following up. Can you please share what kind of enrichment are you trying to run and what error message is shown?

user-359513 03 April, 2024, 07:36:15

Sorry, I didn't understand what it meant.γ… γ… 

Chat image Chat image

user-f43a29 03 April, 2024, 08:39:13

Hi @user-359513 , if you hover your mouse over the "Error" or open that enrichment and look in the top left, by "Enrichment Type", do you see an error message? What is the text of the error message?

user-359513 03 April, 2024, 09:09:16

Chat image

user-359513 03 April, 2024, 09:11:13

Chat image

user-359513 03 April, 2024, 09:12:14

Chat image

user-d407c1 03 April, 2024, 09:22:57

Hi @user-359513 ! This means that the reference image was not found or could not be matched with the scanning recording. Do you show that picture on a screen? In that case, it is not surprising that it did not work.

If you want it to work, I would:

A) Use a reference image that captures everything in the environment around it (monitor, cables, etc.), like here, where the content shown is not used to locate the image but rather its surroundings.

B) For a potentially more robust solution, you can print or display Apriltags and use the Marker Mapper

user-359513 03 April, 2024, 10:15:21

Thank you. I will try it~

mpk 03 April, 2024, 14:08:51

Not yet, but this would be a great feature suggestion for our Cloud API. --- Actually, I stand corrected. We already support it! πŸ™‚

user-a7636a 04 April, 2024, 15:04:58

Hello! We are doing an experiment where we need to track the total time spent gazing at the screen and when not. We have set up markers around the screen and managed to create the enrichment in Pupil Cloud. All seems to work as it should. We were wondering what the best way is to collect the data on gaze time spent on the area of interest vs the overall gaze time. Can you do it in Pupil Cloud? If not, what other options do we have?

user-480f4c 04 April, 2024, 15:26:21

Hi @user-a7636a ! In your case, I'd recommend getting this information from the fixations.csv file that is included in your Marker Mapper export folder. This file includes a column fixation detected on surface (True or False) that allows you to isolate fixations that were on your AOI and calculate the average or total duration for these fixations based on the duration [ms] column. To get the overall fixation time in your recording, you could follow the same approach without filtering for fixations detected on surface.

Please also note that you can use our AOI Editor tool on Pupil Cloud, that allows you to manually draw AOIs on surfaces after completing your Marker Mapper enrichment. Our docs on that are currently in progress but you can find a summary of this tool here: https://discord.com/channels/285728493612957698/733230031228370956/1199762718949912758
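As a rough sketch of that calculation with pandas (assuming the column names read exactly as above, with True/False values parsed as booleans, and the file in your export is fixations.csv):

```python
import pandas as pd

fixations = pd.read_csv("fixations.csv")  # from the Marker Mapper export folder

# Fixation time on the surface/AOI vs. overall fixation time
on_aoi = fixations[fixations["fixation detected on surface"] == True]
time_on_aoi_ms = on_aoi["duration [ms]"].sum()
total_time_ms = fixations["duration [ms]"].sum()
print(f"{time_on_aoi_ms / total_time_ms:.1%} of fixation time was on the AOI")
```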

user-d407c1 05 April, 2024, 06:13:48

Hi @user-a7636a ! To ensure we're aligned, could you clarify whether you're interested in the time spent gazing at or fixating on a particular area of interest? While both metrics can be obtained for each AOI, they serve different purposes.

user-0001be 04 April, 2024, 20:49:21

To follow up on the Marker Mapper, I have 10 PowerPoint slides, each with a different shape in the middle, so I just put 4 markers at the corners of the slides. So it is the same 4 across 10 slides? Then for AOI, do I need to save a jpg of each PowerPoint slide and do the video for all 10 slides? Does the background environment matter then? I can bring my laptop to different rooms and have various participants look at the same set of PowerPoint slides but in a different environment. Would this be an efficient way to use the Marker Mapper?

user-d407c1 05 April, 2024, 06:28:40

Hi @user-0001be ! Your setup for using Marker Mapper sounds spot-on. The surrounding environment won’t impact the results, provided the markers are clearly visible with good contrast in the scene camera. Essentially, as long as they’re easy to see and cover a good portion of the field of view, you’re all set.

Your approach with one marker on each corner of the slide is excellent, just ensure the markers are not repeated.

To enhance your setup, consider using a unique set of markers for each slide. This way, you can assign a specific enrichment (Marker Mapper) to each surface/slide, keeping the data organised and separated per enrichment, and not having to rely on setting events accurately.

Regarding the AOI tool, Marker Mapper currently does not allow you to upload the image of the surface, but it rather takes a snapshot of the surface during the enriched time frame. If your slides have different markers, Marker Mapper will automatically capture and use an image from each one.

If you have any more questions or need further assistance, feel free to ask!

user-359513 05 April, 2024, 12:30:33

I'm very curious why this happens on my pupil cloud?

Chat image

user-f43a29 05 April, 2024, 12:32:41

Hi @user-359513 πŸ‘‹ ! Were you trying to apply a search filter to your list of Enrichments?

user-f43a29 05 April, 2024, 12:35:52

To be more precise, did you click on this button marked in red?

Chat image

user-359513 05 April, 2024, 12:44:44

no~

user-f43a29 05 April, 2024, 12:46:25

Actually, I think it is best to open a πŸ›Ÿ troubleshooting support ticket for this one and we can continue conversation there.

user-359513 05 April, 2024, 12:49:30

yes, I deleted an Enrichment and saw this scene. and I also have other Enrichments.

user-0001be 05 April, 2024, 18:20:46

In the Enrichments section, I created a MM (Marker Mapper) for each slide of my PowerPoint. Then, after it had all been processed, I created the Heatmap visualization. It seems that the heatmap is offset: I'm looking at a person's face, so we look at the eyes, nose and mouth, but the heatmaps seem to be shifted to the left. Is there some sort of shift when it is being mapped? How can I ensure the accuracy?

user-5c56d0 08 April, 2024, 05:56:49

Thank you for your management. Could you please answer the following questions?

Q1

I am using Neon. In the downloaded data, there is camera footage of the eyeball called "Neon Sensor Module v1 ps1". Is the starting frame of this camera footage (the timestamp of the first frame of the video) the same as the first column of the csv "gaze" or the csv "3d_eye_states"?

For example, the timestamp in the first column of the csv "gaze" was 1711970000000000000. Also, the timestamp in the first column of the csv "3d_eye_states" was 1711966049444420000. In this case, what is the starting frame of the camera footage "Neon Sensor Module v1 ps1" (the timestamp of the first frame of the video)? β€» The camera footage "Neon Sensor Module v1 ps1" has a total of 7373 frames. The csv "gaze" has a total of 7371 rows of data. The csv "3d_eye_states" has a total of 7373 rows of data.

I want all download data to correspond to the same column of timestamps.

Q2. Does your answer apply to both Neon and Pupil Invisible?

user-d407c1 08 April, 2024, 08:48:56

Hi @user-5c56d0 πŸ‘‹! Looks like you're diving into the raw dataβ€”great to see your enthusiasm! A quick note as you explore: these files might come in multipart formats (like ps1, ps2, etc.). If you encounter multiple parts, you'll need to merge them appropriately.

Besides multipart handling, you'll also find a .time file with the same name as your .mp4 files. These .time files are key for syncing, as they contain timestamps for each frame.

You can use these timestamps to generate your own eye_timestamps.csv file, akin to the world_timestamps.csv:

This is how you read them:

import numpy as np
from pathlib import Path
from typing import Union

def read_timestamps(path: Union[str, Path]) -> np.ndarray:
    """Read timestamps from a binary .time file (e.g. "gaze ps1.time")."""
    return np.fromfile(str(path), dtype="<u8")
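For example, a minimal sketch of generating such a file for the eye video (the output column name here is just an assumption):

```python
import pandas as pd

ts = read_timestamps("Neon Sensor Module v1 ps1.time")
pd.DataFrame({"timestamp [ns]": ts}).to_csv("eye_timestamps.csv", index=False)
```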

The same principle with .time files and multipart handling applies for Pupil Invisible.

Note that the CSV files from Cloud may differ, as the recordings are reprocessed to ensure you have 200Hz data regardless of the sampling rate on the phone.

user-d407c1 08 April, 2024, 08:52:42

Hi @user-1e7d05 ! Would you mind creating a ticket at πŸ›Ÿ troubleshooting? So that we can streamline the communication and ask private details such as the account, order ids or email addresses to follow-up, rather than on the public channel?

user-1e7d05 08 April, 2024, 08:54:09

Sorry, somehow overlooked it!

user-b55ba6 08 April, 2024, 09:53:07

Hi! I am looking for the coordinates of the pupil-center but in the eye-centered video (coordinates in pixels of the image of each eye). I am interested in getting an idea of the noise of the pupil position from the infrared camera perspective and in pixels. Is this possible in a straightforward way? Thanks!

user-d407c1 08 April, 2024, 10:08:50

Hi @user-b55ba6 ! NeonNet does not expose these parameters when estimating pupil size and eyeball centre.

However, you have the option of running Pupil Core's pupil detectors, which are open-source, over the eye camera videos, to extract these values.

Note, that the accuracy of these measurements will significantly depend on the quality of your eye images.

user-7413e1 08 April, 2024, 11:05:43

hello - I have an error message "low storage", how do I free up space without deleting files from the cloud too?

user-d407c1 08 April, 2024, 11:12:10

Hi @user-7413e1 ! Deleting/removing recordings from the Companion Device doesn't remove them from the Cloud. Please make sure they're fully uploaded to the Cloud before you delete anything from your device.

user-7413e1 08 April, 2024, 11:14:20

Ok thanks - but if I remove them from the companion app, it would delete them from the cloud, is that right?

user-d407c1 08 April, 2024, 11:17:25

Deleting/removing recordings from the Companion Device does not delete them from Cloud

user-7413e1 08 April, 2024, 11:24:01

I see: I thought companion app and companion device did not necessarily mean the same thing. Thanks for clarifying

user-d407c1 08 April, 2024, 11:25:05

No worries! Let us know if you have any other questions

user-ea768f 08 April, 2024, 11:30:04

Hello! I'm using Neon in a study to track eye and head movements during different tasks. I'd like to determine if there are differences in head movements across these tasks. I'm new to this area and data type so I'm looking for some help on understanding the IMU outputs. Can anyone offer advice on how to interpret these IMU readings and quantify head movement variability? Thanks!

user-f43a29 08 April, 2024, 12:20:56

Hi @user-ea768f πŸ‘‹ ! Have you already had your free 30-minute onboarding video call with us?

And what will your tasks be?

In the meantime, if you have not done so already, I recommend checking out our documentation on the data that the IMU provides and the coordinate system that it uses. The IMU provides relative head rotation (roll/pitch/yaw) also known as neck bending/flexion/rotation respectively, as well as rotation speed and rotation acceleration about these axes, and all of the data are timestamped. For example, you could see if a task causes observers to rotate their head left/right more slowly/quickly by analyzing the "gyro z [deg/s]" column of the "imu.csv" file and looking for differences.
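For instance, a quick sketch of quantifying left/right head-rotation variability from the IMU export (column name as in the docs above; treat this as a starting point rather than a full analysis):

```python
import pandas as pd

imu = pd.read_csv("imu.csv")

# Rotation speed around the vertical axis (left/right head turns)
yaw_speed = imu["gyro z [deg/s]"]
print("Mean |yaw speed|:", yaw_speed.abs().mean(), "deg/s")
print("Yaw speed SD:", yaw_speed.std(), "deg/s")
```

Comparing these values between tasks (e.g. per recording) would give a first measure of head-movement variability.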

user-15a4d1 08 April, 2024, 17:26:59

Am I correct that the microphone stream isn't exposed in the realtime APIs?

user-d407c1 08 April, 2024, 17:50:05

That's correct!

user-52e548 09 April, 2024, 01:26:31

Hi, Team! Can you tell me about saccades detection?

It was discussed a few days ago that a trial version of saccades.csv can be downloaded from time series data. In the data I measured, the minimum amplitude [deg] recorded was about 2 degrees. What is the minimum measurable amplitude [deg]?

What are the current specs and what are the target specs for future development (if you can provide disclosure)? My main intention of this question is if it can be used for the study of micro saccades with amplitudes of 1 degree or less. Thanks!

user-d407c1 09 April, 2024, 06:31:27

Hi @user-52e548 ! Thanks for your question. We'll be sure to share more details as soon as everything is ready πŸ˜‰. For now, what I can share is that saccades are essentially the secondary output of the fixation detector. This means anything not classified as a fixation by our criteria here will be categorised under saccades in that file. Stay tuned for more updates!

user-359513 09 April, 2024, 06:59:00

Hello.^^ I did some enrichments yesterday, and their status was completed at that time. But when I opened them this morning, the status was not started. I want to know how to solve this problem.

Chat image

user-d407c1 09 April, 2024, 06:59:59

Hi @user-359513 ! Could you try performing a hard refresh on your browser? Just in case, it had cached that page

user-359513 11 April, 2024, 02:11:51

I want to know how to solve this problem?

user-359513 09 April, 2024, 07:01:25

I did it, but it doesn't work. It's still not started.

user-f43a29 09 April, 2024, 11:27:09

@user-d569ee Those images are not undistorted, so if you used the scene camera preview, you will want to undistort the images before doing the FOV calculations. If you still find discrepancies after that, then feel free to reach out again.

user-b55ba6 09 April, 2024, 13:29:02

Hi,

I am interested in measuring eyeball pitch and yaw, in degrees. I am looking at the eye_state variable. For all subjects, all variables are between 0 and 1, but I do not see a clear pattern. Would the change in any of these variables reflect pitch and yaw, in terms of eye movements?

Thanks!

user-d407c1 09 April, 2024, 14:17:51

Hi @user-b55ba6 !

The optical axis is a 3D vector in a right-handed coordinate system whose orientation differs slightly from other conventional spherical coordinate systems.

Given a 3D vector where z is forward, y is down, and x is to the right, as we show in the documentation, you can find its azimuth (ΞΈ) and elevation (Ο†) angles using the following formulas:

  • Radious (r) : The most straight forward, but not useful, since it is normalised. r= sqrt(x^2 + y^2+ z^2)

  • Azimuth (ΞΈ): The angle between the z-axis and the projection of the vector onto the xz-plane. It's calculated as: ΞΈ = (Ο€/2) - atan2(z, x)

Akin to yaw, 0 is forward and positive is to the right (clockwise) on the xz plane.

  • Altitude/Elevation (Ο†): The angle between the vector and its projection onto the xy-plane. It's calculated as: Ο† = arcsin(-y/r)

Akin to pitch, and where the positive value would indicate pointing upwards. Hope this helps!
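As a small sketch of those formulas in code (the optical axis x/y/z values would come from the corresponding columns of 3d_eye_states.csv; exact column names aside):

```python
import numpy as np

def optical_axis_to_angles(x, y, z):
    """Convert an optical axis vector (x right, y down, z forward)
    to azimuth/elevation in degrees, following the formulas above."""
    r = np.sqrt(x**2 + y**2 + z**2)                       # ~1 for a normalised vector
    azimuth = np.degrees(np.pi / 2 - np.arctan2(z, x))    # yaw-like, positive to the right
    elevation = np.degrees(np.arcsin(-y / r))             # pitch-like, positive upwards
    return azimuth, elevation
```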

user-eebf39 09 April, 2024, 20:00:54

We're doing real-time robot control. The core allowed us to get gaze info with the Python API, the Invisible required a phone app to run. Does the Neon allow simple Python API interface from a laptop?

user-cdcab0 10 April, 2024, 03:47:20

That sounds like a cool project!!

The Neon does use a companion app on an Android device, but there are two important things you should know.

  1. We provide a real-time API so that you can stream data and messages between a computer and the companion app.
  2. You can use Neon with Pupil Capture (the software used with Pupil Core), but only if you run from source, and only on Mac and Linux
user-415050 09 April, 2024, 22:21:08

I'd like to buy eye tracking glasses hardware (just the glasses frame with three cameras) and I don't want any software. How can I get a quote?

user-480f4c 10 April, 2024, 06:04:15

Hi @user-415050 , could you please send us an email to sales@pupil-labs.com in that regard? Alternatively, you can select the product of interest on our website, add it to the cart, and fill out the quote request form there. Our Sales team will then receive your details and generate a quote for you.

user-359513 10 April, 2024, 06:25:33

Hello. I accidentally deleted an important recording, can I retrieve it?

user-d407c1 10 April, 2024, 06:31:29

Hi @user-359513 πŸ‘‹ ! Where did you delete it from? The Companion App or Cloud?

Companion App If it was in the Companion App, but you have the Cloud backup enabled and it was already uploaded, you can still retrieve it from Cloud. Deleting recordings in the app does not affect recordings in Cloud.

If you do not have the backup enabled or it wasn't yet uploaded, then there is nothing we can do.

Cloud If you deleted it in Cloud, you may still find it in the trash. Check out here how you can access the trashed recordings and restore them from Cloud. https://discord.com/channels/285728493612957698/1047111711230009405/1185182070947991583

user-359513 10 April, 2024, 06:34:03

Thank you. I deleted it in the cloud.

user-d407c1 10 April, 2024, 08:43:27

Then you should be able to restore it from the trash.

user-359513 10 April, 2024, 10:08:10

Thank you very much. I restored it from the trash ^^

user-359513 11 April, 2024, 01:54:01

Hello, can the heatmaps of all experimental subjects be merged?

user-f43a29 12 April, 2024, 10:14:36

Hi @user-359513 , when you run an Enrichment on a project, then it enriches all the recordings in that project by default. And the Visualizations, such as Heatmap, will produce an aggregate visualization of all those enriched recordings, also by default. So, if you did not limit the Enrichment or Visualization to a single recording, then the heatmap already merges multiple experiment subjects. You can double-check this by clicking on "Edit" under "Recording Data" in Heatmap visualization display, marked with the red square in the attached image.

Chat image

user-359513 11 April, 2024, 02:50:03

And I used 8 images in the experiment. Images No. 1, 3, 4, 5, and 7 can make the enrichments, but 2, 6, and 8 cannot. I set them up the same way. What caused this?

user-f43a29 12 April, 2024, 10:20:50

Do you mean that you used 8 separate reference images and ran 8 separate Reference Image Mapper (RIM) enrichments? When you say "cannot do it", do you mean that you get an error or that you do not see any gaze data mapped to the Reference Image? If a RIM Enrichment is having trouble, then this could mean that the elements in the Reference Image do not appear in the recording or that the scanning recording needs to be redone. I recommend double-checking our Best Practices for RIM.

user-359513 11 April, 2024, 02:55:09

Also, the enrichment is completed, but there is no heatmap data for this participant, while other participants' data is there. What can I do about this?

user-a5a6c3 11 April, 2024, 12:54:23

Hello, I'm trying to do real-time blink detection and I found the repository published on GitHub, "real-time-blink-detection". After cloning, I created a simple Python file with the code described in "Part 2: Blink detection in real time using Pupil Lab's Realtime Python API", but it gives me the error in the txt file I uploaded here. It seems the device doesn't have that attribute, but if I type "device." it shows me that attribute as a suggestion. How can I fix this? Update: Downgrading the real time API from 1.2.0a3 to 1.1.2 resolved the issue.

Chat image error.txt

user-d407c1 11 April, 2024, 13:56:15

Hi @user-a5a6c3 ! I'm not sure how you got that version of the realtime API, but the realtime blink detector repository has a requirements file with the dependencies you need. Simply use pip3 install -r requirements.txt, as outlined in the installation instructions.

user-359513 12 April, 2024, 10:35:14

Chat image

user-359513 12 April, 2024, 10:35:31

Like this

user-f43a29 12 April, 2024, 10:35:44

Double-click the Enrichment in the middle with the error, then put your mouse over the word "Error" in the new display that appears

user-359513 12 April, 2024, 10:40:59

Thank you very much ^^

user-f43a29 12 April, 2024, 10:50:22

@user-359513 , my colleague, @user-07e923 , pointed out to me that you are presenting images on a tablet?

As mentioned previously by @user-d407c1 , for tablets, the Marker Mapper with AprilTags will generally be easier and more robust, and if you still want to use RIM, then your reference image should not be the image that you show on the tablet (see https://discord.com/channels/285728493612957698/1047111711230009405/1225012659875745852 for Miguel's comments about that). The reference image should include the whole table. You can even take the scanning recording and the reference image with the tablet turned off (i.e., showing a blank, black screen).

user-359513 12 April, 2024, 10:52:51

Ok ~I understand now. Thanks a lot.

user-f43a29 12 April, 2024, 10:53:20

No problem!

user-e3da49 12 April, 2024, 13:36:39

thank you

user-d407c1 12 April, 2024, 14:02:43

where are you using that API URL?

user-0055a7 15 April, 2024, 11:00:13

Hi guys!

Experiencing an issue with enrichment data. I'm using a Marker Mapper on a video, but when the enrichment data is downloaded, the aoi_metrics file is empty of data. Any ideas? An uninformed guess is that all 4 markers aren't visible at all times, but I have no idea.

This is the enrichment ID if it helps: d162b105-d31d-44cd-80f1-7a9f4c4677ad

user-480f4c 15 April, 2024, 11:06:21

Hi @user-0055a7 - Can you please clarify whether you used the AOI Editor Tool to draw AOIs for this specific enrichment? The aoi_metrics.csv file has data only if AOIs have been defined using the AOI tool

user-0055a7 15 April, 2024, 11:26:59

okay, I think making an AOI and selecting the entire area within the markers fixed it for me! thanks for the help :)

user-613324 15 April, 2024, 21:48:52

Hi Neon team, in the files exported by Neon Player, there is a csv file named "world_timestamps.csv". It has two columns: '# timestamps [seconds]' and 'pts'. I have read the documentation and related threads on Discord, but I am still having trouble understanding their meanings. I understand that '# timestamps [seconds]' in this file denotes the timestamp in seconds relative to the first camera frame in the scene. But I have no idea about 'pts'. The documentation says it is "an abbreviation for presentation timestamps and refers to the media file's internal time representation. It can be used to seek or identify specific frames within the media file.", but I'm not sure what it means. I don't think it refers to the frame number, right? So can you explain this? Also, how do I associate these timestamps with the timestamps of the eye video (as in Neon Sensor Module v1 ps1.time of the Native Recording Data) as well as the timestamps in the Timeseries Data (e.g., gaze.csv) downloaded from Pupil Cloud? I need to sync the scene video from Pupil Player with the eye video and data downloaded from Pupil Cloud.

user-d407c1 16 April, 2024, 06:33:16

Hi @user-613324 ! In simple terms, the Presentation Timestamp (PTS) is a marker used to indicate the exact time when a particular frame or piece of media should be consumed/played. It ensures that video and audio are synchronised properly and play at the correct speed and timing. It is used internally to seek those frames. When working with Neon Player, the first thing you need to consider is that it internally reuses a vast number of components from Pupil Player, including a conversion to Pupil Time.

This time format is used internally in the app, but when you export the data, it matches the Cloud exports.

If you simply need to match the eye videos with the scene video, you can use the eye video overlay.
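If you do want to inspect the pts values yourself, here is a minimal sketch with PyAV (the scene video file name below is just an example):

```python
import av  # pip install av

container = av.open("Neon Scene Camera v1 ps1.mp4")  # example file name
stream = container.streams.video[0]
for frame in container.decode(stream):
    # frame.pts is the internal presentation timestamp; multiplying by the
    # stream's time base converts it to seconds within the media file
    print(frame.pts, float(frame.pts * stream.time_base))
```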

user-613324 16 April, 2024, 14:44:24

Thank you for the reply. Yes, I am aware of eye video overlay. But I want to know how this was done. Specifically, how to match the timestamp of the Pupil Player exported video with eye video downloaded from Pupil Cloud/Native Recording Data. And how to match these videos with the timeseries data

user-861e5b 16 April, 2024, 08:57:41

Hi Miguel, I moved here for my problem now.. sorry, I confused the channels. I have the Neon glasses. When I create the heatmap in visualizations, it does not show any data, or just 0 for all AOIs.

user-d407c1 16 April, 2024, 08:59:44

Thanks for following up here. May I ask which enrichment you have used? Was it a reference image mapper or marker mapper? And does it show as finished?

user-d407c1 16 April, 2024, 09:01:04

how many recordings do you have enriched?

user-861e5b 16 April, 2024, 09:01:17

I tested it on just one

user-d407c1 16 April, 2024, 09:03:31

That's it! For heatmaps to be generated, your project should include at least two recordings: a scanning recording (which should be less than 3 minutes) and a normal recording. The scanning recording does not count for heatmap generation.

user-861e5b 16 April, 2024, 09:03:16

Might it be a problem that the reference image is not being localized continuously but only in some parts of the recording?

user-861e5b 16 April, 2024, 09:04:21

oh and do all recordings have to include the same reference image then?

user-d407c1 16 April, 2024, 09:14:36

Please allow me to elaborate a bit further.

If you want to generate a heatmap over a reference image using multiple recordings from different subjects, you need to :

  • Ensure all relevant recordings, including a scanning recording of the reference image, are added to a single project.

  • Initiate the enrichment process. By default, the reference image mapper will include every recording in the project. However, you can narrow this scope by utilising events and the advanced temporal selection features to select specific data.

  • Once each recording is appropriately mapped and the enrichment process has been completed. Navigate to the visualizations section to create your heatmap. Note that the scanning recording itself will not contribute to the heatmap. You have the option to select which of the enriched recordings contribute to the final visualization on the left side.

If you have multiple reference images you want to map, you would need one enrichment per image and probably a scanning recording per image. Have a look at this example https://docs.pupil-labs.com/alpha-lab/multiple-rim/#map-and-visualize-gaze-on-multiple-reference-images-taken-from-the-same-environment

user-d407c1 16 April, 2024, 15:23:09

Yes, that is one way. For matching eye images and finding their timestamps you do not need Pupil Player. May I ask what the end goal is?

user-613324 16 April, 2024, 15:26:14

Thanks! The end goal is to scrutinize the data in every possible detail for potential artifacts or outliers, before diving into data cleaning and further data analysis.

user-82f555 16 April, 2024, 20:10:31

Hey all, I am looking to purchase the Neon glasses Is This Thing On bundle. I am wondering, if the frame ends up not working for us, will I be able to take it apart and have it in the same configuration as the bare metal? Can I separate the components and have them functional without breaking anything? Could I screw off the metal piece and retrieve the connector/cable?

user-480f4c 17 April, 2024, 07:39:11

Hi @user-82f555! Neon has a modular design. This means that you can have one module and easily swap it between different frames or use the bare metal bundle for custom prototyping.

Could you share more details about your research setup? That will help me provide better recommendations as to which frame will best meet your research requirements. If you prefer, we can also arrange a Demo and Q&A session to discuss the different options in more detail.

user-78a2e6 17 April, 2024, 07:51:09

Hey all, We have ordered one of your Neon eye trackers for our research institute. Unfortunately, the frame (we have "I can see clearly now") arrived damaged. The glue on the side of the frame seems to have come loose. What is the quickest way to get this replaced?

user-480f4c 17 April, 2024, 07:52:46

Hi @user-78a2e6! Sorry to hear that - Can you please send an email to [email removed] sharing the order ID? A member of the team will reply to you asap.

user-861e5b 17 April, 2024, 13:56:39

This is an example of the problem. The

user-487cf7 18 April, 2024, 04:35:35

Hi, I would like to know which frame for the Neon works best for use with a sports helmet. Thank you.

user-d407c1 18 April, 2024, 05:59:53

Hi @user-487cf7 ! That would be the newest Ready Set Go frame. It is similar to the Is this thing on frame but with thinner, flexible arms and a head-strap to secure it.

user-186965 18 April, 2024, 07:35:33

Hello guys! πŸ‘‹ I am facing issues while processing the marker mapper enrichments for one of my recording. These are the affected enrichments ids - 45ea9d72-b79d-4ca1-b0f1-4f309a207294 and 912d8c39-6175-45d3-a76c-7f8f1137a88f (I tried to run twice to confirm under the same project). The recording id is 1e298d24-198c-4dec-a906-98eb51bd788a if you need it. Could you please help here? Thanks!

user-480f4c 18 April, 2024, 07:39:02

Hi @user-186965 - Can you please open a ticket in our πŸ›Ÿ troubleshooting channel? Then, we can assist you with debugging steps in a private chat.

user-585ca3 18 April, 2024, 07:42:23

Is it possible to develop our own applications/use cases on top of Neon? I can't find any API information, for example for gaze/video data.

user-07e923 18 April, 2024, 07:45:42

Hi @user-585ca3, we offer a realtime api that allows you to retrieve video feed of the scene camera and the eye cameras live. This allows you to build some cool stuff like using gaze as a clicker

user-585ca3 18 April, 2024, 07:54:53

Cool, would the Neon Companion app still work on Android devices other than the ones listed in the docs (Motorola Edge 40 Pro)? Let's say a Samsung S24 Ultra.

user-07e923 18 April, 2024, 08:02:29

Neon is supported on OnePlus8, OnePlus8T, OnePlus 10Pro, and Moto Edge 40 Pro. We recommend using these devices because we have done rigorous testing on them. Edit: as a clarification, if you buy a Neon bundle, it includes a companion device. We currently ship the Moto Edge 40 Pro in the bundle.

user-480f4c 18 April, 2024, 08:04:35

@user-186965 can you please try again to open a support ticket?

user-186965 18 April, 2024, 08:06:18

yep works now

user-054372 18 April, 2024, 09:47:42

hi, i recorded some videos on the pupil neon for my dissertation project. what software can i use to put my videos in so they can give me figures on gaze and fixation patterns?

user-054372 18 April, 2024, 09:47:58

also, what can i use to code based off this?

user-054372 18 April, 2024, 09:52:17

this is my first time using the pupil neon and i have little to no knowledge about how to use this data to represent the gaze and fixation patterns

user-054372 18 April, 2024, 09:52:32

and i want to code based off the data analysis

user-054372 18 April, 2024, 09:57:15

@user-480f4c

user-480f4c 18 April, 2024, 10:03:59

Hi @user-054372 - Welcome to the community πŸ™‚

First, I'd recommend having a look at the Neon Ecosystem to learn about the tools that support you during data collection and data analysis.

For data analysis, we recommend that you upload your recordings to Pupil Cloud which offers tools for mapping gaze on areas of interest, among others.

Can you elaborate a bit on your research and planned analysis?

Please also note that we offer a 30-minute onboarding call for every purchase to help you setup and guide you through our analysis software. Let me know if you would be interested in scheduling one and I can send you a link.

user-054372 18 April, 2024, 10:05:22

my research is completing jigsaw puzzles whilst wearing the pupil neon, i have already recorded and made videos. im just not sure how to work out gaze and fixation patterns in visuals and code based off this information

user-054372 18 April, 2024, 10:08:18

how do i use the pupil cloud for data analysis?

user-054372 18 April, 2024, 10:08:51

also yes please if we can schedule a meeting?

user-480f4c 18 April, 2024, 10:15:56

@user-054372, I highly recommend going through our docs for a detailed Pupil Cloud walkthrough.

  • Here you can find a guide on how to make your first recording and upload it to Pupil Cloud.
  • Please also watch this video for an onboarding guide to Pupil Cloud.
  • Once on Pupil Cloud, you will see the list of your recordings, and you can simply download the raw data by right clicking on the recording and selecting Download > Timeseries Data & Scene Video.
  • To map gaze on areas of interest, I highly recommend going through the documentation of the Reference Image Mapper. This is a tool that allows you to map gaze and fixations from the eye tracking recording onto an image of your area of interest.

Let's discuss all this in more detail during an onboarding call. Feel free to schedule an Onboarding Meeting using this link

user-054372 18 April, 2024, 10:12:47

@user-480f4c

user-054372 18 April, 2024, 10:39:04

what software would you recommend using for coding the data i have collected using the map gaze etc?

user-054372 18 April, 2024, 10:39:09

@user-480f4c

user-480f4c 18 April, 2024, 10:43:09

I'm not sure I fully understand your question. You can get the raw data on Pupil Cloud and you can map the data using our tools (e.g., the Reference Image Mapper as mentioned in my previous message). Then you can further analyse it using any programming language (e.g., Python). For more analysis options, you can also look at our Alpha Lab tutorials.

user-054372 18 April, 2024, 10:43:30

okay thank you

user-dd2e1c 18 April, 2024, 12:14:10

Hello Neon team! This is a timeline in Neon Player. {time} - {frame of worldcam} is different for the same duration. Can I ask why it is different (9128 vs 18227 for the same length of time)? I used the same Neon and expected the same frame count for 10 minutes... I know the world camera runs at 30 Hz. 18227 frames is understandable for a 10-minute video, but why did only 9128 frames occur in 10 minutes?

Chat image

user-cdcab0 18 April, 2024, 16:06:25

Hi, @user-dd2e1c - between two videos the frame numbers may not line up exactly the same, but they should be close... that 9128 is way off though, like you said. Could you share your recording with us? If it's on Pupil Cloud, you could add me [email removed] to your workspace. Otherwise you could zip up the recording folder, upload it to some online storage of your choice (Google Drive, Dropbox, OneDrive, etc), and share a link with me here or to data@pupil-labs.com

user-21bcad 18 April, 2024, 12:22:33

Hi @user-480f4c - sorry to bother you, but I'm having problems with my Neon adult glasses (Just Act Natural) when recording data. I'm getting this error message intermittently. Sometimes it doesn't occur, sometimes it won't go away. We've installed updated software, checked for any visible hardware issues, ensured the phone is fully charged ... running out of things to try! Unplugging and replugging does not help. We've used this device fewer than 10 times in total 😦

Chat image

user-07e923 18 April, 2024, 13:25:07

Hi @user-21bcad, can you create a ticket in πŸ›Ÿ troubleshooting? We'll proceed with helping you debug the issue there.

user-054372 18 April, 2024, 15:02:13

Hi, all my videos are over 3 minutes, so I can't create enrichments?

user-480f4c 18 April, 2024, 15:11:31

Hi @user-054372 - there are no duration limits for your recordings. What you are probably referring to is the scanning recording that is required for the Reference Image Mapper enrichment.

For the Reference Image Mapper enrichment, you need 3 things:

a) The eye tracking recording, i.e. wearing Neon and looking at the area of interest. There is no limit for this recording's duration.

b) The image of your area of interest (we call this "the reference image")

c) A recording made with Neon while holding the glasses in your hand and recording the area of interest from diverse angles/perspectives (we call this the "scanning recording"). This recording only needs to be up to 3 minutes.

user-054372 18 April, 2024, 15:02:30

@user-480f4c

user-82f555 18 April, 2024, 15:04:28

Any way we can make that a 60 day return window?

user-d407c1 18 April, 2024, 15:12:55

Hello @user-054372 ! We appreciate your active participation in our community. To streamline our communication, we kindly ask that you limit tagging Pupil Labs team members to necessary instances. Rest assured, leaving a comment will get you a timely response from either me or one of my colleagues.

Regarding your query, is it about the reference image mapper enrichment?
To use this enrichment, you will need an additional scanning recording. If you foresee a need to use part of the recordings as scanning recordings in the future, we encourage you to support this feature request https://discord.com/channels/285728493612957698/1212053314527830026 by upvoting it.

user-82f555 18 April, 2024, 15:33:31

Is there any way someone can take a video and show the flexibility of the "is this thing on" frame? like spread it out as far as it will go, and see how much resistance there is

user-d407c1 18 April, 2024, 16:15:11

@user-82f555 We will publish soon (prob this week) a tweet showcasing the flexibility of the ready-set-go frame arms, which will be more in-line with what you are looking for.

user-82f555 18 April, 2024, 17:14:02

nice!

user-82f555 18 April, 2024, 17:14:52

very excited to see

user-82f555 18 April, 2024, 17:47:45

Is there any way we can get a video showing the flexibiility sooner?

user-82f555 18 April, 2024, 17:52:35

I'm looking on the youtube channel but don't see anything

user-51f934 19 April, 2024, 03:11:58

I have been trying to create an enrichment, but I keep getting errors. I followed all the instructions in creating different scanning videos, but it is still not working. I read the previous discussion on this, and I noticed people got this resolved by sending the IDs of their enrichment and pictures. Here is a screenshot of my enrichment and its ID: d6e3afa2-fb92-40ea-bcf1-701f440a796c

Chat image

nmt 19 April, 2024, 03:16:05

Hi @user-51f934! Thanks for sharing the screenshot. Is that your reference image on the right? May I ask why the glasses wearer is shown in the reference image? The reference image should show the main area or feature of interest that the wearer is looking at, and that needs to be mostly static.

user-420e66 19 April, 2024, 13:01:38

What is the difference between the red and blue circles within the data collected, and what do the numbers represent?

user-d407c1 19 April, 2024, 13:03:36

Hi @user-054372 ! I moved your message here, since you seem to refer to Cloud. In the visualisation you mentioned, the red circle represents the gaze point. The blue circles indicate fixations, where the number next to each blue circle is the fixation ID, and the size of the circle is relative to the duration of the fixation.

user-054372 19 April, 2024, 13:04:41

thank you

user-861e5b 19 April, 2024, 14:41:01

Hey everyone, I have another question: Is it possible to record blink and pupil dilation and show and use it in Pupil lab in the cloud?

user-480f4c 19 April, 2024, 14:49:09

Hi @user-861e5b - Blinks and pupillometry are already recorded in all Neon recordings. Blinks are available in the blinks.csv and pupillometry is provided in the 3d_eye_states.csv. You can find them in the raw data export for every Neon recording. Simply right-click on the recording of interest and select Download > Timeseries Data.

user-480f4c 19 April, 2024, 14:56:52

Can you create a ticket in our πŸ›Ÿ troubleshooting channel and share with us there the recording IDs of the recordings that don't have blink data? We can assist you there in a private chat.

user-861e5b 19 April, 2024, 15:22:29

Another question: Does the neon player also include functions like enrichments?

user-480f4c 19 April, 2024, 15:24:37

Enrichments live only on Pupil Cloud. However, Neon Player has a plugin similar to the Marker Mapper enrichment: the Surface Tracker. I recommend having a look at the Neon Player docs.

user-ea768f 19 April, 2024, 15:53:50

Hello, how can I add the answer of a multiple choice question to the recording name? The documentation indicates that this is possible, but I get a message that says 'Only "Short answer" type questions can be used in the recording name.' However, this does not seem to work either. Thanks!

user-07e923 19 April, 2024, 16:36:35

Hi @user-ea768f, you'll need to create your questions on Pupil Cloud. They are called templates. Go to your workspace and click the dropdown arrow > Workspace setting > Templates > New Templates.

Give your templates a name, so that it is easy to keep track. Then, add a new question card. In the question card, select Multiple Choices instead of short answer. You'll need to fill in the text for the question first, press enter, then start filling in your choices and press enter.

user-ea768f 19 April, 2024, 16:42:54

Hi Wee, thanks for your response. I get a warning saying that only Short Answer can be added to the recording name (see screenshot attached). When I create a Multiple Choice question, there is no option to add it.

Chat image

user-82f555 19 April, 2024, 17:54:28

Hi, do any of the technical specifications change between bundles? Or does the technical information remain the same because the module/metal piece is the same in all of the glasses, and the frame is the only thing that changes?

user-07e923 19 April, 2024, 19:42:44

Hi @user-82f555. Neon is essentially a module, which contains the scene camera, 2 IR eye cameras, IMU, and 2 microphones. We also sell additional frames and you can transfer the module to different frames. So, yes, the technical specification of the Neon module is essentially the same across different bundles. What differs are the frames that you are getting.

user-054372 20 April, 2024, 09:11:39

hi, is there a diagram available anywhere to show how the neon works? how information is recorded and what happens?

user-07e923 20 April, 2024, 09:29:41

Hi @user-054372 , do you have anything specific in mind that you want to know? We have documentation for Neon if you would like something to browse through now. Otherwise, I highly recommend scheduling a demo call with us, so that we can show you how the device works, and answer some of your questions. You can schedule a demo call with my colleague, @user-480f4c.

user-054372 21 April, 2024, 21:24:46

How do I use the data from gaze.csv and convert it into UTC timestamps?

user-054372 21 April, 2024, 21:25:04

is there code available for this conversion?

nmt 22 April, 2024, 05:11:37

Hi @user-054372! They are already UTC timestamps, in nanoseconds, as detailed in the recording format section of the docs. Check out this message for converting to other units: https://discord.com/channels/285728493612957698/1047111711230009405/1217853265123737783
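For example, with pandas (assuming the column is named "timestamp [ns]" in your export):

```python
import pandas as pd

gaze = pd.read_csv("gaze.csv")
# Values are UTC epoch time in nanoseconds
gaze["datetime [UTC]"] = pd.to_datetime(gaze["timestamp [ns]"], unit="ns", utc=True)
print(gaze["datetime [UTC]"].head())
```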

user-0001be 22 April, 2024, 16:08:33

Hi! In the create new workspace dialog, what is "upload world video"? Does the invited collaborator need to have Gmail as well in order to be an editor, admin or viewer? Can we send invitations to our work email (Microsoft) domains?

user-07e923 22 April, 2024, 17:26:14

Hi @user-0001be, Upload world video refers to uploading the scene camera video. When this is disabled, the scene camera video will not be uploaded onto Cloud for that particular workspace. You'll have to contact us if you wish to turn on scene video upload.

Your collaborators can use any Email address to access Cloud. Make sure that the collaborators sign up for an account on Pupil Cloud using the same email address as what you sent.

user-0001be 22 April, 2024, 17:30:57

@user-07e923 Thank you!

user-0001be 22 April, 2024, 17:35:58

I just created an account on Cloud. When I sent an invite out for a specific workspace, I got this error: User/162c5d71 Not A Member Of Workspace/Ebc21ffe. What does it mean?

Update: it's working now; I resent the invitation after a few hours. Thanks!

user-e77359 22 April, 2024, 17:52:44

Hi, I recorded a session that has been stuck in the "processing" status for the last few hours. I can download it without any problem to my computer and it plays OK. Any ideas on how to unblock it so I can work with it in Pupil Cloud?

Chat image Chat image

user-07e923 23 April, 2024, 05:55:13

Hi @user-e77359, are you still experiencing this issue with one of your uploads?

user-ed237c 22 April, 2024, 19:28:54

Hello everyone,

I'm currently working on syncing video frames with corresponding timestamps using the Neon glasses. I have downloaded the gaze.csv file and I need to ensure accurate matching of each frame from the video to its respective timestamp.

Could someone confirm if the first Unix timestamp in the gaze.csv file exactly corresponds to the very first frame of the video? If not, what's the lag between the timestamps and the video frames?

Any tips or guidance on precise matching from frames to timestamps would be greatly appreciated!

Thank you in advance for your help!

user-480f4c 23 April, 2024, 06:47:49

Hi @user-ed237c. The time each sensor starts varies. The difference between sensor start times ranges from 0.2 to 3s on average. Please have a look at this relevant message https://discord.com/channels/285728493612957698/1047111711230009405/1167160541484175512.

Regarding your question on matching from frames to timestamps, we have reference code that you can use to match and merge gaze, world timestamps and video data. Please see this relevant message: https://discord.com/channels/285728493612957698/1047111711230009405/1200036401316642826
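As a rough sketch of that matching step (assuming both files expose a "timestamp [ns]" column, as in the Cloud export):

```python
import pandas as pd

world = pd.read_csv("world_timestamps.csv")  # one row per scene-video frame
gaze = pd.read_csv("gaze.csv")

world["frame index"] = world.index

# Pair each scene frame with the gaze sample closest to it in time
matched = pd.merge_asof(
    world.sort_values("timestamp [ns]"),
    gaze.sort_values("timestamp [ns]"),
    on="timestamp [ns]",
    direction="nearest",
)
```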

user-293aa9 23 April, 2024, 13:56:25

Hi, I am writing a thesis in which I have used the Neon glasses for eye tracking, and in this regard I am looking into how the Neon glasses estimate the users gaze. Is it correctly understood that they use a similar approach as the Invisible glasses (a CNN to predict 3D gaze direction vector and then mapped to 2D scene pixel coordinate), since they have been deprecated in favour of the Neon? The only paper I found on how the Neon glasses work only mentions that it uses "NeonNet", without any further specification. Is there anyone in here who would care to expand or correct my understanding? πŸ™‚

user-480f4c 23 April, 2024, 13:58:18

Hi @user-293aa9 - have you checked Neon's white paper ? It should have all the information you need.

user-487cf7 23 April, 2024, 15:27:39

Thanks for clarifying! Yes, it would actually be for baseball and motorcycle racing. When I proceed with the purchase, is there a field where I can enter this information about making it detachable? Will I be able to attach/detach as needed? Thank you!

user-07e923 23 April, 2024, 15:33:56

Hi @user-487cf7 , this is Wee stepping in for Miguel. Here's the update:

We can remove the strap of RSG for you during the order process when you add a note under comments. Once we do this, the strap cannot be reattached. I hope this helps!

user-487cf7 23 April, 2024, 18:06:13

Ok great thank you! Once an order is placed how long does it take to arrive? I am in Cleveland Ohio USA.

user-d407c1 24 April, 2024, 06:10:03

Hi @user-487cf7 πŸ‘‹ ! The fulfilment time is approximately 4 weeks currently. Please add about 3-4 days for shipping to the US.

user-613324 24 April, 2024, 00:10:57

Hi Neon team, I'm using the realtime api to automatically stop the recording when the experiment is done with this command: device.recording_stop_and_save(). However, on one occasion, our study coordinator manually pressed the stop recording button on the Neon Companion app right before device.recording_stop_and_save() was called, and it raised the error "400, Cannot stop recording, not recording". It makes sense since the recording had already stopped and there was nothing to stop when this function was called. I'm wondering, is there any realtime api command that can return the recording status of the eye tracker? If so, then I can check the recording status first, and only call recording_stop_and_save() when the recording is still on. Thanks

user-d407c1 24 April, 2024, 06:45:56

Hi @user-613324 πŸ‘‹ ! Even when the pupil isn't visible on the eye camera, our neural network can still estimate gaze direction. This was often the case with Pupil Invisible, the predecessor of Neon. However, measuring pupil size might still be challenging if the pupil is not visible, as the network requires visible data to make accurate estimations.

For this specific participant, I recommend adding some extra padding, possibly foam, on the forehead. This adjustment can help capture the eyes from a better angle. That said from the images you share it seems like the module is tilted rather than straight, is it possible to make it more parallel to the forehead?

Regarding your real-time API question, you can check the device status with the following command: status = device.get_status(). Then status contains a field named recording, which you can use to determine whether a recording has been started or not.
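A minimal sketch of that check (assuming the recording field is empty/None when nothing is being recorded):

```python
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

status = device.get_status()
if status.recording:  # assumption: falsy when no recording is running
    device.recording_stop_and_save()
else:
    print("No recording in progress, nothing to stop.")
```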

user-613324 24 April, 2024, 00:51:34

Hi Neon team, we are using the Ready Set Go frame (the previous version, the one with the elastic strap) and attached are the recorded eyes from one of our subjects. As you can see, only half (or 2/3) of each eye was recorded, and when they look away, the pupil almost moves out of the video. This is due to the fact that this subject has a pretty flat nose. I tried the different pads (the ones that touch the forehead) that come with the frame, but even the thickest pad gives this result. So my question is, does this type of eye recording yield valid gaze estimation? Is there any concern with it? If so, what should we do to avoid the potential problems?

Chat image Chat image

user-688acf 24 April, 2024, 11:24:13

hello, after the latest update of the neon companion app when trying to open recordings with neon player i get the message "this version is too old, please upgrade"; when can the newest version of neon player be expected? at least on the homepage i can not find a newer one

user-480f4c 24 April, 2024, 11:38:51

Hi @user-688acf, thanks for bringing this to our attention. We're working now on a fix. Thanks for your understanding and patience πŸ™‚

FYI Neon Player can be downloaded here

user-688acf 24 April, 2024, 11:45:11

that's where i have downloaded it but it's 4.1.2 still ; how long will the fix take?

user-480f4c 24 April, 2024, 11:45:52

The new app version provides many stability fixes, including the issue you reported when switching wearers.

In response to your question as to whether the wearer profile is a needed feature, I'd say yes, and it is highly recommended to use different wearers for your participants. This allows you to keep your data tidy and organized and be able to identify which user was associated with which recording.

Wearer profiles also allow you to save physiological parameters of a subject to improve the quality of the eye tracking data. Offset correction for example can be applied to improve the accuracy of the gaze estimation in those wearers that have offsets.

user-480f4c 24 April, 2024, 11:46:49

Could you open a ticket in β πŸ›Ÿ troubleshooting? We'll continue the conversation there

user-cdcab0 24 April, 2024, 12:29:42

Hi, @user-688acf - Neon Player has been updated to v4.1.3 which should load your newest recordings

user-688acf 24 April, 2024, 12:33:03

up and running again πŸ™‚ thank you for the quick response and fast fix!

user-cdcab0 24 April, 2024, 12:39:18

You're welcome πŸ™‚

user-0001be 26 April, 2024, 19:10:49

Hello! I am getting this in the API; is this expected? I'm using the Just Act Natural glasses.

Serial number of connected glasses: -1

This is the line:

```
print(f"Serial number of connected glasses: {device.serial_number_glasses}")
```

Also, what does "unable to create socket pair" mean? Everything seems to be running fine.

user-f43a29 27 April, 2024, 10:00:40

Hi @user-0001be , the "unable to create socket pair" message is diagnostic info that you will see when running the Real-time API on Windows, typically while it is trying to find a device before it has established a connection. As you have noticed, if everything is running fine, then you are good to go.

If you want the serial number of the Neon module, then you actually want to print the "device.module_serial" attribute.

user-4b18ca 29 April, 2024, 08:47:41

Hi all! For our ethics committee I require a PDF of the user manual and CE certification (possibly EU KonformitΓ€tserklΓ€rung?). Possibly other documents about certifications - I'm just starting to deal with it. Where can I find such documents? Best, Johannes

user-07e923 29 April, 2024, 09:15:06

Hi @user-4b18ca, could you write to [email removed]? We'll follow up with the documents over email. Thanks!

user-a5a6c3 30 April, 2024, 10:02:19

Hello Neon team, I have a specific question regarding a function of the API called "receive_matched_scene_video_frame_and_gaze()". For context, I am trying to implement the steps in your "Pupil Labs fixation detector" white paper.

For now I used the data from this function (receive_matched_scene_video_frame_and_gaze()) as inputs to calculate the fixations. Because of this I have a limited sampling rate of about 30 Hz that, from what I understand, is dictated by the sampling rate of the front-facing scene camera.

In your white paper you used as inputs 2 data streams: the time series of gaze data at 200Hz and the video from the front-facing scene camera at 30Hz. You upsampled the optical flow calculated from the data stream of the front-facing scene camera to 200 Hz and then you used it to detect fixations.

The function "receive_matched_scene_video_frame_and_gaze()" gives me matched gaze and scene data, but what I need is the time series of gaze data at 200Hz with the gaze samples that share a timestamp with the scene_video_frame "highlighted".

Is it possible to call 2 separate functions to get the video from the front-facing scene camera at 30Hz and keep the time series of gaze data at 200Hz, and AFTERWARDS use the timestamps to match the data? Is there another way?

Thanks in advance!

user-d407c1 30 April, 2024, 11:54:48

Hi @user-a5a6c3 πŸ‘‹ ! Yes! What you ask should be possible. The receive_matched_scene_video_frame_and_gaze() function is meant to assist you in matching the gaze point closest in timestamp to the received scene camera frame, but you also have access to the camera streams and the gaze stream independently through the realtime API, and those sensors already come with their own timestamps.

I'd recommend using different threads to read both streams, and looking into our async API to ensure non-blocking operations.
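A rough threading sketch of that idea (assuming the simple API's per-stream helpers receive_gaze_datum() / receive_scene_video_frame(), and that both returned objects expose timestamp_unix_seconds):

```python
import threading
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
gaze_buffer = []  # fills at the full gaze rate, independent of the 30 Hz scene stream

def collect_gaze():
    while True:
        gaze_buffer.append(device.receive_gaze_datum())

threading.Thread(target=collect_gaze, daemon=True).start()

for _ in range(100):  # read some scene frames at ~30 Hz
    frame = device.receive_scene_video_frame()
    if gaze_buffer:
        # match afterwards: gaze sample closest in time to this scene frame
        nearest = min(
            gaze_buffer,
            key=lambda g: abs(g.timestamp_unix_seconds - frame.timestamp_unix_seconds),
        )
```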

user-0001be 30 April, 2024, 13:33:04

Hello! What’s the difference between the new Ready Set Go! Frame and Is This Thing On? Frame?

user-d407c1 30 April, 2024, 14:08:05

Hi @user-0001be πŸ‘‹ ! The Ready Set Go has flexible arms, a head-strap, and the cable is not routed inside the arm https://discord.com/channels/285728493612957698/733230031228370956/1231886621696331786 The Is This Thing On? has more rigid arms, can be worn without a head-strap, and the cable is routed through one of the arms. Also, the Ready Set Go is slightly lighter.

End of April archive