πŸ‘“ neon



user-d714ca 02 October, 2023, 05:24:18

Hello! I'm trying to connect Neon to my MacBook and use Pupil Capture to capture eye movements. However, it says the selected camera is in use or blocked, and Local USB: camera disconnected. How can I solve this?

user-480f4c 02 October, 2023, 06:04:22

Hi @user-d714ca πŸ‘‹πŸ½ ! To use Neon with Capture, you need to run it from source. Have you checked our guide? https://docs.pupil-labs.com/neon/how-tos/data-collection/using-capture/

user-d714ca 02 October, 2023, 06:08:12

Yes, I followed it, but it still doesn't work.

user-cdcab0 02 October, 2023, 08:03:07

Hey, @user-d714ca - looks like there's a bit of information that's out of date/missing on the readme. I'm working on updating it, but wanted to send this so you could get up and running sooner.

  β€’ ~~The readme says to use the develop branch, but it's missing some code which may be affecting you. master is up to date and should work, so to switch back run git switch master~~ Fixed, though you will want to update your local copy: git pull

  β€’ Due to an open issue with how Mac handles USB devices, you'll need to run Pupil Capture with admin rights. To do this, add sudo before the python command: sudo python main.py capture

user-030d1f 03 October, 2023, 00:34:14

Where do I buy the eye tracking add-on for the HTC Vive Pro?

user-480f4c 03 October, 2023, 06:24:56

[email removed] πŸ‘‹πŸ½ ! You can either contact us at sales@pupil-labs.com or feel free to fill out the quote request form at this cart link: https://pupil-labs.com/cart/?htcvive_e200b=1

user-cc3f72 03 October, 2023, 06:36:35

Hello, nice to meet you. I am a Japanese office worker. Is there any way I can try your company's NEON? Sorry for my poor English.

user-480f4c 03 October, 2023, 06:49:29

Hi @user-cc3f72 πŸ‘‹πŸ½ ! While we do not offer free loans, we do have a 30-day free return policy, so you can try Neon and return it if it doesn't meet your requirements. You pay shipping costs; we refund the price of the hardware. Also, feel free to contact us at [email removed] to book a demo and online Q&A session with one of our product specialists to discuss how Neon can meet your research requirements.

user-2eeddd 04 October, 2023, 03:26:07

To use this Python code (https://docs.pupil-labs.com/neon/real-time-api/track-your-experiment-progress-using-events/), do I need to connect the Neon eye tracker to a computer? I can't connect the eye tracker to a computer with the cable it originally comes with.

wrp 04 October, 2023, 03:30:54

Hi @user-2eeddd πŸ‘‹ - You do not need to connect Neon to a computer. The Neon Companion app streams data over your local WiFi.

If you're looking to add events with a convenient user interface, you can use the Neon Monitor app: https://docs.pupil-labs.com/neon/getting-started/understand-the-ecosystem/#neon-monitor

If you're looking to add events programmatically then you would use the real-time API.

The key here is that both your computer and Neon need to be on the same WiFi network. For better performance, we recommend using a dedicated router if possible. It does not need to be connected to the internet.
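For illustration, here is a minimal sketch of adding an event programmatically with the Python client of the real-time API (assuming the pupil-labs-realtime-api package is installed and that Neon and the computer share the same network; the event name is just an example):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()        # find the Neon Companion device on the local network
device.send_event("trial_1.begin")    # the event is saved in the recording and shows up in Pupil Cloud
device.close()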

user-648ceb 04 October, 2023, 07:08:30

We have multiple Pupil Labs products (Invisible and Neon). For simplicity, I added the same account to all phones to get all data stored in one account. I tried logging into Pupil Cloud using two different accounts in different Google Chrome windows, but I kept getting an error in both windows, so I decided to try connecting multiple Companion apps to the same account. But I get an error when trying to include the data from the Neon in a project.

Is this because I cannot combine data from Invisible and Neon in one user/cloud?

Chat image

user-c2d375 04 October, 2023, 07:20:47

Hi @user-648ceb πŸ‘‹ I am sorry to hear you're experiencing issues with Pupil Cloud. There's absolutely no issue with including recordings made with both Neon and Invisible in the same project. Could you please clarify whether the recording you're attempting to add to the project is still in the processing phase, or if it's displaying a ⚠️ error icon?

user-648ceb 04 October, 2023, 07:38:42

It should be done processing and I can view the recording just fine

Chat image

user-33607d 04 October, 2023, 17:18:15

Hi, Can someone give me an update on when the pupillometry features will be available on the Cloud?

user-480f4c 05 October, 2023, 06:39:33

Hi @user-33607d πŸ‘‹πŸ½ ! We are currently working on developing pupillometry as a feature in the Cloud. By the end of October, you will be able to obtain pupil size estimations post-hoc.

user-d714ca 05 October, 2023, 02:07:29

Hello! Can I do batch processing on the Cloud? Especially for the enrichment Reference Image Mapper?

user-480f4c 05 October, 2023, 09:28:51

Hi @user-d714ca πŸ‘‹πŸ½ ! Do you mind clarifying what do you mean by batch processing? See some points below that might help:

  • Batch exporting recordings is possible programatically with the Cloud API (https://api.cloud.pupil-labs.com/v2), but for example batch creating Reference Image Mapper enrichments is not available programatically.

  • However, note that you can apply the same Reference Image Mapper enrichment automatically on all (or some) of the recordings that you have added to your project. Assuming you have a project with 5 recordings of 5 different participants looking at an area of interest, you can start a Reference Image Mapper enrichment which will be applied to all of these recordings, as long as they all have the same events as the start and end events you'll use for defining the period of the recordings where the enrichment will be applied (by default, these are recording.begin and recording.end)

user-328c63 06 October, 2023, 00:19:58

Hello! Within the analysis pipeline, is there any way to create a consecutive gaze heat map for an entire video recording? Additionally, are Pupil Capture and Player compatible with the Pupil Neon?

user-d407c1 06 October, 2023, 06:48:05

Hi @user-328c63 ! Would you mind elaborating on the first request? Do you want a heatmap in scene camera coordinates, i.e. the gaze distribution over the recording, or would you like to obtain a heatmap over a surface or reference image in the environment?
- The first one is not available out of the box, but it is trivial to compute, as you get a gaze.csv with the coordinates and a row per gaze estimate (see the sketch below the links).
- The second one is available through the Marker Mapper or Reference Image Mapper on Cloud.

https://docs.pupil-labs.com/enrichments/reference-image-mapper/ https://docs.pupil-labs.com/enrichments/marker-mapper/
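For the first option, a minimal sketch (assuming a Cloud export whose gaze.csv contains "gaze x [px]" and "gaze y [px]" columns, and Neon's 1600x1200 scene camera):

import pandas as pd
import matplotlib.pyplot as plt

gaze = pd.read_csv("gaze.csv")
plt.hist2d(gaze["gaze x [px]"], gaze["gaze y [px]"],
           bins=(160, 120), range=[[0, 1600], [0, 1200]], cmap="inferno")
plt.gca().invert_yaxis()   # image coordinates: origin at the top-left
plt.xlabel("gaze x [px]")
plt.ylabel("gaze y [px]")
plt.title("Gaze distribution in scene camera coordinates")
plt.show()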

Neon recordings made with the Companion Device (phone) are not compatible with Pupil Player. You can use Neon with Pupil Capture, but there are some caveats; for example, you would not be using NeonNet (the deep neural network that estimates gaze) and therefore you will need to calibrate it. We strongly recommend using Neon with the Companion Device and Pupil Cloud, but if you want to know more about how you can use the Core pipeline with Neon, have a look here: https://docs.pupil-labs.com/neon/how-tos/data-collection/using-capture/

user-480f4c 06 October, 2023, 06:51:23

Hi again @user-b03a4c πŸ‘‹πŸ½ ! I'm replying to your message (https://discord.com/channels/285728493612957698/285728493612957698/1159717473021599744) here.

The Gaze Overlay enrichment has been changed to "Video Renderer". This was part of a series of updates we announced recently on Pupil Cloud. You can find the details here: https://pupil-labs.com/releases/cloud/v6.1/ We renamed the Gaze Overlay Enrichment to Video Renderer and moved it into the Analysis view (see attached screenshot).

Please note that the Video Renderer is only available on Pupil Cloud currently. However, we are currently working on an offline solution for Neon that will allow users to export eye-tracking footage with gaze overlay.

Chat image

user-b03a4c 06 October, 2023, 06:59:18

Thank you for the quick reply! I got it.

user-0262a7 06 October, 2023, 09:05:54

Hello, I wonder if I need any driver or pre-install for the Neon directly connected to my Windows PC, because I can't find Neon in the Video Source. It only shows a bunch of unknowns under Local USB, and in the Device Manager there's a "driver not installed" warning for Neon Sensor Module v1. Thanks

user-0262a7 06 October, 2023, 09:09:43

ps I'm trying to use the Pupil Capture app

user-480f4c 06 October, 2023, 09:10:42

Hi @user-0262a7 πŸ‘‹πŸ½ ! Please note that using Neon with the Pupil Core pipeline only works under MacOS and Linux and you'll need to run it from source. You can find the relevant documentaion here: https://docs.pupil-labs.com/neon/how-tos/data-collection/using-capture/

user-0262a7 06 October, 2023, 09:13:18

Got it, Thank you so much!

user-858e7e 09 October, 2023, 13:12:05

Hi, I am getting huge angles (some around 17Β°) as output in the gaze.csv of my recordings, whereas the real measured angles are no bigger than 4Β°. Is this due to distortion? I am using Pupil Cloud; does it not automatically correct distortion? Do I need to apply an enrichment? Thanks!

user-d407c1 09 October, 2023, 13:30:20

Hi @user-858e7e ! Would you mind providing some more information? First of all, are you using Neon or Invisible? Secondly, where do you get 17Β°, on azimuth or elevation? And how do you know the angles are no bigger than 4Β°?

user-5c56d0 10 October, 2023, 02:01:39

Dear sir Thank you for your help. Could you please answer the following? Best regards.

Q1 Is there an eye tracker model that can measure the distance between the user's eye and the object (the object being viewed by the user)?

Chat image

user-5c56d0 10 October, 2023, 02:07:49

Could you please answer the following as well?

Q2 Is there an eye tracker model that can measure the angle of eye orientation (the angle of how many degrees the eyeball is tilted)?

Chat image

nmt 10 October, 2023, 03:39:17

Hi @user-5c56d0 πŸ‘‹. At the end of this month, we aim to release our new eye state estimation pipeline for Neon.

It will output the orientation of each eye, as depicted in Q2.

In theory, eye state can be used to examine viewing distance using ocular vergence, as depicted in Q1. However, in practice, this can be a bit tricky depending on the circumstances. For example, if the object is very far away, the eyes' visual axes may never converge.

We'd be excited to see how you fare with such investigations!

Initially, eye state (and pupil size measurements) will be available post-hoc, once recordings have been uploaded to Pupil Cloud.

Keep an eye out for the update! We will announce it here and you'll also see it in Cloud.

user-b03a4c 10 October, 2023, 04:02:19

Hi, I'm Ken. I want to use NEON with Pupil Capture because I want to export the recorded video to Pupil Player. Although I followed the instructions on the homepage, Pupil Capture did not recognize NEON; the screenshot is attached. Could you tell me how to deal with it?

I use Apple M2 Mac. OS: Ventura

Chat image Chat image

user-d407c1 10 October, 2023, 05:52:01

Hi @user-b03a4c are you running it from source code and starting it with sudo privileges?

user-fb5e57 10 October, 2023, 07:01:26

Hello, I am a university student in Japan. The Neon website says that pupil diameter measurement will be available in the third quarter of 2023, but when exactly will it be available?

user-c2d375 10 October, 2023, 07:03:51

Hi @user-fb5e57 πŸ‘‹ We are currently finalizing our pupillometry pipeline, and it will be available as a feature in Pupil Cloud by the end of October

user-d714ca 11 October, 2023, 05:32:33

Hello, I would like to know if there are any papers that have used Neon for research.

user-c2d375 11 October, 2023, 06:20:23

Hi @user-d714ca πŸ‘‹ Neon is still pretty new in the world of eye-tracking tech. Even though many researchers are already putting it to work, you might not find published papers just yet. To explore studies utilizing Pupil Labs products, I recommend checking our publication list at https://pupil-labs.com/publications/

user-d714ca 11 October, 2023, 07:48:32

Thank you for your info

user-fd8e85 11 October, 2023, 14:00:19

Hi, I reported an issue with the IMU timestamp vector on GitHub (https://github.com/pupil-labs/pl-rec-export/issues/2), but I am not sure this is the right place. The timestamps seem to be inconsistent, and I wonder if you could help me debug this. When plotting the raw data (extimu ps1.time), I get this timestamp vector (timestamp vs. index):

Chat image

user-480f4c 11 October, 2023, 15:28:30

Hi @user-fd8e85 πŸ‘‹πŸ½ ! We are looking into this and we'll get back to you as soon as possible! Thank you for your understanding and patience! πŸ™‚

user-fd8e85 11 October, 2023, 14:00:50

When plotting the timestamp from the imu.csv file I get from Pupil Cloud, I get the following graph.

Chat image

user-fd8e85 11 October, 2023, 14:01:24

Both look wrong; is this a known bug? I believe that both the Companion app and the FPGA firmware are up to date.

user-3f477e 11 October, 2023, 15:15:58

Hello, I have two questions regarding the reference image mapper: 1) Does the reference image mapper work for mapping AOIs in a 360Β° indoor setting? Most AOIs will not move but the person wearing the tracking device will move around in the room. What happens if not all AOIs can be captured in a single photo as reference image? Is it possible to provide more than one reference image without slicing the recording into pieces for mapping? I found this guide in which a similar situation is described: https://docs.pupil-labs.com/alpha-lab/multiple-rim/ It however appears that for that example the recording was split manually (using events) to apply the mapping on the most appropriate reference image. Is that necessary? 2) Is the reference image mapping available if pupil cloud is not used?

user-480f4c 11 October, 2023, 16:33:42

Hi @user-3f477e πŸ‘‹πŸ½ ! Please see below my points:

1) It is highly recommended to slice the recording into pieces for the mapping as this will significantly reduce the processing and completion time for your enrichments. For the example you mentioned (someone walking around looking at different AOIs and mapping gaze onto these AOIs), I'd highly recommend following the Alpha Lab guide on applying multiple Reference Image Mapper enrichments. Regarding your point that the slicing was done manually (using events), that's indeed the case for this guide. Note that you can add events either post-hoc in Pupil Cloud as done in this guide or in real-time using the Neon Monitor App or the Real-time API.

2) Unfortunately, the Reference Image Mapper enrichment is only available on Pupil Cloud.

I hope this helps, but please let us know if you have any further questions on that!

user-3c037a 11 October, 2023, 15:50:24

Is it possible to synchronize and collect the neon signal with a motion capture system like Vicon?

user-480f4c 11 October, 2023, 16:01:03

Hi @user-3c037a πŸ‘‹πŸ½ ! There are different ways of synchronising Neon with other sensors. Specifically, for Neon-Vicon synchronization, you have the following options:

1) We maintain an LSL relay for publishing data from Neon in real-time using the LSL framework. This enables a unified and time-synchronised collection of measurements with other LSL supported devices. You can find the LSL relay on PyPi (https://pypi.org/project/pupil-labs-realtime-api/) and relevant instructions on readthedocs (https://pupil-labs-realtime-api.readthedocs.io/en/stable/). However, as far as I can see it's unclear whether the Vicon system is supported by LSL (see here: https://labstreaminglayer.readthedocs.io/info/supported_devices.html#supported-motion-capture-hardware). I'd recommend contacting them to confirm.

2) Alternatively, you can use Neon's Real-time API: https://docs.pupil-labs.com/neon/real-time-api/introduction/. Using the Real-time API you can access the raw data, control experiments remotely, send/receive trigger events and annotations, and also synchronise Neon with other devices.
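For example, a minimal sketch of synchronising via the real-time API by sending an event stamped with the host computer's clock (assuming the pupil-labs-realtime-api package is installed and that the same host clock is also logged alongside the Vicon data; the event name is arbitrary):

import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
# stamp the event with the host clock so Neon's timeline can later be aligned
# with any other device that records the same clock
device.send_event("vicon_sync", event_timestamp_unix_ns=time.time_ns())
device.close()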

I hope this helps!

user-3c037a 11 October, 2023, 16:06:59

Thanks @user-480f4c ! It was very helpful.

user-3f477e 11 October, 2023, 16:44:52

Thank you @user-480f4c

user-b8ccc0 12 October, 2023, 02:36:08

Hello, I have a question. I noticed a slight misalignment between the video obtained from GazeOverlayEnrichment and the video I created with a scene camera overlaying gaze data. I suspect that the cause might be a timing synchronization issue. When trying to synchronize the timing using the 'timestamp[ns]' in the 'world_timestamps.csv' and 'timestamp[ns]' in 'gaze.csv', what's the best way to achieve this synchronization? Do I need to perform any resampling or other steps?

nmt 12 October, 2023, 09:08:59

Hi @user-b8ccc0! There's nothing special here. Matching gaze samples to the nearest existing world frame should suffice. What are you using to render your video? If you could outline the steps you've taken I might be able to point something out.

I do have a follow-up question for my own interest - why do you need to create your own video when we already provide one?
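For reference, a minimal sketch of the nearest-frame matching described above, assuming Cloud exports where both files contain a "timestamp [ns]" column:

import pandas as pd

world = pd.read_csv("world_timestamps.csv")
gaze = pd.read_csv("gaze.csv")

world["frame_idx"] = world.index   # frame number in the scene video
matched = pd.merge_asof(
    gaze.sort_values("timestamp [ns]"),
    world[["timestamp [ns]", "frame_idx"]].sort_values("timestamp [ns]"),
    on="timestamp [ns]",
    direction="nearest",           # nearest world frame, before or after each gaze sample
)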

user-5eb072 12 October, 2023, 05:08:33

Do you have a distributor of the Neon in Australia, so we can avoid paying customs taxes?

nmt 12 October, 2023, 09:12:20

Hi @user-5eb072 πŸ‘‹. Thanks for your question. We do ship to Australia directly from Berlin. Prices on our website are exclusive of import duties/taxes. If you would like a quote or further information about the import process, reach out to sales@pupil-labs.com and someone will assist you from there!

user-d2d759 13 October, 2023, 02:24:41

Hi there, I'm wanting to know the amplitude and speed of reading of the subjects from the data/recordings on Pupil Cloud. I was wondering, is this possible and how do I get these two parameters from the data? Thanks πŸ˜€

user-cdcab0 13 October, 2023, 10:02:22

Could you elaborate a little on what you're trying to measure/study? Are you talking about visually reading printed text? If so, what do you mean by amplitude?

user-5216cd 13 October, 2023, 09:50:27

Hello, I have a question. Is there any way we can see how the calibration process works?

user-cdcab0 13 October, 2023, 10:07:44

There isn't a calibration process for the Neon device like there is for other eye trackers. Images of the eyes are mapped to gaze locations in the scene using a neural network.

nmt 14 October, 2023, 02:43:03

Hi @user-29f76a! Responding to https://discord.com/channels/285728493612957698/285728493612957698/1162362434485501982 here. It would first be worth double-checking the module is securely attached to the frame. E.g. if you've previously removed the module from the frame, check the screws are secure. If that's all in order, there might be another issue. If so please, send your original order id to info@pupil-labs.com and someone will assist you from there!

user-29f76a 18 October, 2023, 14:43:58

Hi, I didn't detach anything and the frame looks to be in order. I plugged it in properly, but the frame still did not connect to the app. I will send an email soon.

user-d714ca 14 October, 2023, 07:56:49

Hello, I have three questions. 1) Can we preprocess the data in Pupil Cloud? 2) I would like to know if there are any papers that describe the steps to process Invisible/Neon export data. 3) Are there any other free ways that can be used to process Neon data?

user-480f4c 16 October, 2023, 05:16:31

Hi @user-d714ca πŸ‘‹πŸ½ ! Regarding your first three questions (https://discord.com/channels/285728493612957698/1047111711230009405/1162660268376084570), could you please clarify what kind of preprocessing did you have in mind? Ultimately, data preprocessing depends on your specific setup and research questions. It would be helpful if you could provide more details on that. Please also find below some additional points that might be helpful:

As for the follow-up question in your last message (https://discord.com/channels/285728493612957698/1047111711230009405/1163107071651225691), fixation detection is automatically done once the data is uploaded on Pupil Cloud and the parameters cannot be adjusted. May I ask why you would like to adjust the fixation threshold?

user-d714ca 15 October, 2023, 13:32:15

Sorry to disturb you. I have another question. I want to know where I can adjust the fixation threshold and visual angle. Can I do it in Pupil Cloud?

user-d714ca 16 October, 2023, 16:43:32

@user-480f4c Thank you very much for your response. Regarding preprocessing: for example, we have found a systematic inaccuracy in the fixations of a sample. We want to fix it so that, when we run the Reference Image Mapper enrichment, the fixations are located correctly in the reference image. Another preprocessing example: we show a stimulus to the client, but he blocks his eyes with his hand; the fixations are still mapped onto the reference image, and we want to delete them.

user-5216cd 18 October, 2023, 04:20:00

Hi, when the scene camera is overlaid with the estimated eye position from gaze, we noticed a constant vertical error compared to the target. Is there any reason we get this, and is there any way we can adjust it?

Chat image Chat image Chat image

user-480f4c 18 October, 2023, 06:34:24

Hi @user-5216cd πŸ‘‹πŸ½ ! Have you tried the offset correction feature in the Neon Companion App already? You can find this feature by clicking on the name of the currently selected wearer on the home screen and then click "Adjust". The app provides instructions on how to set the offset. Once you set the offset, it will be saved in the wearer profile and automatically applied to future recordings of that wearer (unless you reset it). Please note that you'd need to set the offset correction prior to the recording.

user-d714ca 18 October, 2023, 10:48:10

Hello, do you have any publications related to Marker Mapper?

user-480f4c 19 October, 2023, 07:00:18

Hi @user-d714ca πŸ‘‹πŸ½ ! We do not have specific publications related to Marker Mapper, however, you can find a list of publications that have used our products in our publication list: https://pupil-labs.com/publications/ Please also have a look at this relevant post: https://discord.com/channels/285728493612957698/633564003846717444/1072072155547832350 - May I ask what you'd like to learn about the Marker Mapper?

user-45f648 18 October, 2023, 18:04:38

For IRB purposes what are the potential risks for using pupil neon glasses? Currently have the following: "Possible risks include the disturbance by IR light and/or radiation if the IR sensors that the device uses are on the same wavelength as that of the illuminators of the device. It should be noted, however, that both risks are very low-level."

nmt 19 October, 2023, 05:53:58

Hi @user-45f648 πŸ‘‹. I'm not sure I fully grasp what your statement is intending to convey, since it appears to mention two devices. Is your question about potential risk to the Neon wearer from IR illumination? If so, Neon has passed IR safety testing, so there's no risk during normal usage.

user-ac085e 18 October, 2023, 20:59:05

Hi Pupil Labs team. I wanted to ask if there have been any updates or changes to the Neon firmware or app that would result in code changes for interacting with the real-time API client?

user-ac085e 18 October, 2023, 21:00:30

Our simulator code was successfully interacting with the Neon as of last month (starting/stopping recordings, labeling events). As of today, however, our code connects to the Neon but no longer has any control over it.

nmt 19 October, 2023, 06:18:55

Hi @user-ac085e! Our real-time API should function as expected. I just verified that the original examples for our Python Client, which demonstrate how to connect to the device and start/stop recordings, are working perfectly. Which Companion app and real-time api versions are you running? And could you share the part of your code that's not working, along with the terminal output?

user-b03a4c 19 October, 2023, 10:31:51

Hi, I'm Ken. I want to create a heatmap based on my recording using the Reference Image Mapper enrichment. However, it takes a very long time and the heatmap never finishes. Could you tell me possible reasons?

user-480f4c 19 October, 2023, 15:13:48

Hi @user-b03a4c πŸ‘‹πŸ½ ! In general, it is highly recommended to slice the recording into pieces for the mapping as this will significantly reduce the processing and completion time for your enrichments. You can "slice" the recordings directly on Cloud by adding events. Please have a look at this guide that also used event annotations for the RIM enrichments: https://docs.pupil-labs.com/alpha-lab/multiple-rim/

Could you please clarify if your enrichment was successfully completed? Can you see the heatmap after running the enrichment?

user-d714ca 21 October, 2023, 05:59:29

I want to report a problem: the Neon Companion app keeps failing to open. How can I fix it?

nmt 21 October, 2023, 06:06:37

Hi @user-d714ca! Please long-press on the app icon, click on information, and then select "Force Stop". Then try re-opening the app and ensuring that it's up-to-date in Google Play Store

user-d714ca 21 October, 2023, 06:07:24

Thank you! Let me try it now.

user-d714ca 21 October, 2023, 06:08:48

It seems to work! Thank you!

user-cd0336 23 October, 2023, 10:13:49

Hi everyone, I'm Daniel, a researcher from the UK who is looking to use the Neon to hopefully track the gaze of young stroke survivors and identify obstacles to walking in real-world settings. I was just looking for advice from those who have used the Neon or another Pupil Labs eye tracker that may be suitable for a project such as this. Additionally, as this is a one-off project, I was wondering if there may be a company/institution based in the UK that would be willing to reach out to me with advice or potentially let us rent/borrow their gaze tracking glasses. Many thanks in advance for all your help

user-37a2bd 23 October, 2023, 11:52:52

Hello, can someone help with the gaze metrics that have to be run through Python? I am not able to get it done by myself, even though I have a little knowledge of Python. Is it possible for someone to get on a call with me? Or maybe someone could be my point of contact so that I can send any questions/doubts directly to them?

nmt 23 October, 2023, 19:51:09

Hi @user-37a2bd πŸ‘‹. Neon's gaze data doesn't need to be processed through Python scripts. We do, however, have several experimental post-processing tools in the Alpha Lab section of our docs. Are you referring to these? If so, Discord is the appropriate place for asking questions πŸ™‚. Could you elaborate a bit on which scripts you've attempted to run and the problems you've encountered?

user-594678 23 October, 2023, 20:09:10

Hi Pupil team! Does Neon use dark-pupil detection?

user-d407c1 23 October, 2023, 20:15:05

Hi @user-594678 ! Neon uses a different approach for gaze estimation. It does so by means of a deep neural network. This neural network has been trained on a large cohort of people and, simply by "seeing" the eye images, is able to estimate the gaze position.

user-594678 23 October, 2023, 20:17:36

Thanks @user-d407c1 for the quick response! I wonder when the white paper will be out? Also, should I think of it as, in general, Neon capturing the eye images through the IR cameras and processing them with a trained DNN to finally output the eye positions?

user-ccf2f6 23 October, 2023, 20:24:05

Hi Pupil Labs, we're working on an application that streams from multiple Neon devices on the same network. We currently have four and will be extending to more, but we're wondering if there's a way to mock the real-time APIs to stream data from simulated devices for testing purposes?

nmt 24 October, 2023, 17:04:30

Hi @user-ccf2f6 πŸ‘‹. While it may sound simple in theory, it is actually very challenging to implement in practice due to all of the intricacies of how the data is streamed, and knowledge about the app's internals that we cannot disclose. We recommend using an actual Neon + phone combination for development work.

user-2cfdc6 24 October, 2023, 09:08:51

Hiya, I am trying to use the Reference Image Mapper. I have uploaded the reference image and selected the scanning recording, but the processing % is always zero. Any suggestion on how to make it work?

user-2cfdc6 24 October, 2023, 09:13:00

Chat image

nmt 24 October, 2023, 16:47:59

Hi @user-2cfdc6. Depending on the duration and number of recordings you're running the reference image mapper on, it can take some time to compute. Can you try refreshing your browser window and seeing if the progress has changed?

user-c10c09 24 October, 2023, 09:16:43

Hi team. Any suggestions for how I can make this easier on myself (identifying fixation points outside in bright sunlight), or shall I recommend an option to change the font colour for future recordings (or is this an option I missed)? Thanks - some of the recordings are easier, but transitions from shade to sunlight cause this issue where it's impossible (rather than just quite difficult - even on light grey pavement).

Chat image

user-29f76a 24 October, 2023, 10:15:32

The same happened here. Please help us @nmt

nmt 24 October, 2023, 16:46:46

Hi @user-c10c09 πŸ‘‹. Thanks for sharing the feedback. You're right that the web player in Cloud only showed the fixation IDs in white, and there wasn't an option to change it. We've just pushed an update such that the fixation IDs are more salient πŸ™‚

user-37a2bd 24 October, 2023, 09:26:21

Posting the error message here -

Chat image

nmt 24 October, 2023, 16:42:10

Thanks for sharing the output of the terminal. That helps. Let's move this conversation to the software-dev channel - it's better suited to there!

user-29f76a 24 October, 2023, 13:44:23

Anyone have the same problem? @user-d407c1 @nmt @

Chat image Chat image

user-c2d375 24 October, 2023, 13:54:51

Reference Image Mapper fail

user-594678 24 October, 2023, 17:09:25

Hi @nmt ! Thank you for pointing a relevant material. I will take a look into it. Looking forward to reading the whitepaper for Neon as well!

user-01a3d9 24 October, 2023, 20:34:10

Hi team, I’ve been facing some issues with the companion app’s network interfaces not working correctly. Every so often (usually when multiple devices are on the network) the RTP server cannot be connected to. When this happens, the glasses’ LED indicator is constantly solid upon reconnecting them to the phone (first image). When they are unplugged from the phone the graphic in the app does not switch back to the picture of the phone as it should (second image). This is only resolved when I restart the phone. Replacing the glasses with a new pair does not resolve this.

Is this a bug in the companion app?

Also note that GET <ip>:8080/ resolves fine but GET <ip>:8080/api/status hangs indefinitely.

Chat image Chat image

user-37a2bd 25 October, 2023, 07:11:06

Hi Pupil Team, According to this link - https://docs.jupyter.org/en/latest/running.html there should be an "Areas of interest" tool under the tools section. I've attached an image of the same. Is this feature yet to be released?

Chat image

user-37a2bd 25 October, 2023, 08:48:47

Sorry I meant to include this link. https://pupil-labs.com/releases/cloud/v6.1/

user-480f4c 25 October, 2023, 12:01:20

Hi @user-37a2bd! The "Areas of interest" tool is not available yet, but it will be available in the next Pupil Cloud update. In the meantime, you can follow this tutorial to define areas of interest on a reference image (after running the Reference Image Mapper enrichment): https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/

user-d714ca 25 October, 2023, 07:29:10

Hello! I want to use the Neon Companion on another phone, but it said 'Your device is incompatible with Neon Companion'. What can we do?

user-c2d375 25 October, 2023, 07:31:38

Hi @user-d714ca πŸ‘‹ the Neon Companion app is only compatible with OnePlus 8, 8T and 10.

user-d714ca 25 October, 2023, 07:50:10

Thank you~

user-af038f 25 October, 2023, 08:19:00

What is the cost (US) for the glasses and software? I have not seen any prices.

user-c2d375 25 October, 2023, 08:30:02

Hi @user-af038f πŸ‘‹ The cost of the bundle can vary slightly depending on the specific frame you'd like to include. The bundle includes Neon as well as a complimentary membership to Pupil Cloud. You can find detailed pricing information for Neon here: https://pupil-labs.com/products/neon/shop

user-e141bd 25 October, 2023, 20:28:40

Hi! We were trying to download some recordings recorded yesterday and today, but there is an error (see attached). Can anyone help us with this? Thank you!

Chat image

nmt 25 October, 2023, 20:38:22

Hi @user-e141bd πŸ‘‹. Can you please DM me the ID of an affected recording and I will ask the Cloud team to investigate.

nmt 26 October, 2023, 12:29:01

Hi @user-e141bd. The Cloud team have reprocessed your recordings, so they should be working as expected now. Could you please upgrade the Neon Companion App in Google Play Store? This should prevent future instances of the error!

user-e141bd 25 October, 2023, 20:46:14

Thanks, @nmt ! DMed you with the IDs just now.

nmt 25 October, 2023, 20:50:11

Thanks. I'll report back asap

user-b55ba6 26 October, 2023, 10:35:44

Hi!

I am interested in working with the WORLD images from Neon. I would like to know what is the best way to get the best quality images from the scene camera. Raw files provide an mp4. Is there a recommended way to process videos? Is there a way to get the raw frames?

I use the OpenCV library to get frames. The first frames (from 0 to approximately 1500, which are also the first seconds the camera is open) seem to have poorer quality. As I am new to image processing, I am not sure if this has to do with the raw data, my OpenCV video-to-frame processing, or any other step in the image processing pipeline. Looking at the API scripts, I see you receive data using AV libraries.

Do you have any suggestions for image handling from Neon, especially for researchers that would consider images as data?

Thanks!!

user-480f4c 26 October, 2023, 18:14:19

Hi again @user-b55ba6! OpenCV is known to be unreliable, often leading to frame drops (see also this relevant message: https://discord.com/channels/285728493612957698/633564003846717444/1020311042099785748). We recommend pyav (https://pyav.org/docs/stable/) instead. You can have a look at this example that shows how to extract individual frame images from the world video using pyav. This tutorial was created for Pupil Core recordings, but a similar approach can be applied to Neon recordings: https://github.com/pupil-labs/pupil-tutorials/blob/master/09_frame_identification.ipynb. See also this relevant tutorial on frame extraction using ffmpeg: https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb
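For example, a minimal frame-extraction sketch with pyav (the video file name below is a placeholder; use the scene video from your own export):

import av

with av.open("scene_video.mp4") as container:
    for i, frame in enumerate(container.decode(video=0)):
        img = frame.to_ndarray(format="rgb24")   # H x W x 3 numpy array
        if i == 0:
            print("first frame shape:", img.shape)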

user-b55ba6 26 October, 2023, 11:24:36

Also, I have a timestamp question. I am looking at the timestamps in world_timestamps.csv, gaze.csv, and imu.csv. I see that the world camera is the first to provide data, then the IMU (almost 2 seconds after), and then gaze data almost 500 ms after the IMU. Does this make sense? Does this mean the eye camera is also delayed (and this explains why gaze data also comes some seconds later)?

user-480f4c 26 October, 2023, 17:59:17

Hey @user-b55ba6 πŸ‘‹πŸ½ ! It is known that the time each sensor starts varies. The difference between sensor start times ranges from 0.2 to 3s on average, so we would recommend starting the recording and waiting a few seconds to ensure all sensors have started properly before any important part of your experiment begins.

In case it's relevant, we have a guide on timestamp matching across sensors https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/syncing-sensors/

I hope this helps πŸ™‚

user-29f76a 26 October, 2023, 15:35:04

Hi everyone. Has anyone here read publications using a Neon setup, especially ones that say more about the methodology and analysis?

user-2eeddd 27 October, 2023, 05:03:32

Hi,

import nest_asyncio
nest_asyncio.apply()

from pupil_labs.realtime_api.simple import discover_one_device
device = discover_one_device()

from datetime import datetime
import matplotlib.pyplot as plt
import cv2

eye_sample = device.receive_scene_video_frame()

dt = datetime.fromtimestamp(eye_sample.timestamp_unix_seconds)
print(f"This scene camera image was recorded at {dt}")

# matplotlib expects images to be in RGB rather than BGR. Thus we use OpenCV to convert color spaces.
eye_image_rgb = cv2.cvtColor(eye_sample.bgr_pixels, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(10, 10))
plt.imshow(eye_image_rgb)
plt.axis("off")

gaze_sample = device.receive_gaze_datum()

dt = datetime.fromtimestamp(gaze_sample.timestamp_unix_seconds)
print(f"This gaze sample was recorded at {dt}")

plt.figure(figsize=(10, 10))
plt.scatter(gaze_sample.x, gaze_sample.y, s=2000, facecolors='none', edgecolors='r')
plt.xlim(0, 1600)
plt.ylim(1200, 0)

I ran the above code and an 'unable to create requested socket pair' error came up. What is this problem about, and how can I fix it so the code runs?

nmt 27 October, 2023, 21:03:43

Hi @user-2eeddd πŸ‘‹. There could be some network connection issue. Does Neon Monitor work for you? Read how to set it up here: https://docs.pupil-labs.com/neon/getting-started/understand-the-ecosystem/#neon-monitor

user-c16926 27 October, 2023, 09:01:49

Hi Pupil Labs team. I wonder if there is a way to get the gaze positions from Neon without using Pupil Cloud? We have some files that have already been deleted from the phone and cannot be uploaded to Pupil Cloud anymore.

user-c2d375 27 October, 2023, 09:08:10

Hi @user-c16926 πŸ‘‹ Do you mean that you exported those recordings from the phone on a computer, and then deleted them from the phone?

user-37a2bd 27 October, 2023, 13:11:39

Hey guys, I had a question with regard to the Neon glasses and the outputs. Say in a project we had 5 participants and 5 different products that we tested. When we run a Reference Image Mapper enrichment for any product, does it take the data of all 5 participants and give us an output, or do we have to run a separate Reference Image Mapper enrichment for each participant?

user-c2d375 27 October, 2023, 13:44:18

Hi @user-37a2bd πŸ‘‹ Indeed, it is correct that if you run a separate Reference Image Mapper enrichment for each product, the data export will include all participants who looked at that specific product. So, instead of creating separate RIMs for each participant, my recommendation is to create separate enrichments for each product. To make the most out of the Reference Image Mapper enrichment, I'd suggest you consider two different methods:

1) Create five distinct projects to distinguish between the different products that participants looked at, especially if your experiment involves looking at one product per recording.

2) Add all the recordings to the same project, include custom events on the recordings to mark when a participant begins and finishes looking at the product(s), and create separate enrichments for each product, making sure to select the custom events to run the enrichment only over the portion where each specific product was present in the scene. This is particularly useful if multiple products are exposed to the wearer during the recording. You can follow a similar approach as described in this Alpha Lab tutorial (https://docs.pupil-labs.com/alpha-lab/multiple-rim/#map-and-visualize-gaze-on-multiple-reference-images-taken-from-the-same-environment).

Either way, once the enrichment process is completed, the data export will include gaze and fixation data from all the participants. To distinguish among them, you can match the recording id column in these files with the recording id column in the sections.csv to identify the respective wearer or recording names. Please take a look here for more information about the RIM export (https://docs.pupil-labs.com/export-formats/enrichment-data/reference-image-mapper/#reference-image-mapper)

user-37a2bd 27 October, 2023, 14:37:34

Alright Eleonora, thanks.

I was looking at the Demo workspace in the cloud and noticed that all the videos are in the same project folder. A few more questions arise from this -

Question 1 - Does the Reference Image Mapper automatically exclude all of the scanning videos from the data points (since there are multiple scanning videos in the folder)? If it does exclude the scanning videos, what are the criteria that must be met so that the software understands it is a scanning video, e.g., does the wearer name have to be something specific like "scanning"/"scanner"?

Question 2 - Do we have to create a separate project for each product/section if a participant views multiple products/sections? Is that necessary to get the most accurate output?

user-c2d375 27 October, 2023, 15:13:23

Regarding the demo workspace, your observation is correct. All the recordings are indeed within the same project, but the Reference Image Mapper enrichments are executed on different custom events to ensure that each enrichment processes data only from the relevant section of the recording.

For instance, if you click on the first Reference Image Mapper named "Adel_Dauood_1_standing" and click the downward arrow next to "Advanced Settings," you'll notice that the custom events Adel_Dauood_1_start and Adel_Dauood_1_end have been chosen as the start and end points, respectively. This means that this specific enrichment processes data solely within the segment of the recording defined by these two events, and only for those recordings where these events are present.

In this regard, you'll observe "5/36 recordings matched," indicating that out of all the recordings in the project, only 5 contain these two custom events. Consequently, the Reference Image Mapper will only be applied to these segments, and the export will exclusively include data related to them.

user-c2d375 27 October, 2023, 15:13:40

To address your questions:

1) Yes, the scanning recording is automatically excluded from the RIM processing, as you need to select a specific scanning recording when configuring the enrichment. If you have multiple scanning recordings in the project, to prevent them from being processed by RIM, I recommend not adding any custom events to them. Instead, retain the default recording.begin and recording.end. By running your enrichments solely on custom events (and not on the two default events), they won't be matched and consequently will be excluded from the processing.

2) In this case, you do not need to create separate projects, but indeed separate Reference Image Mapper enrichments, each with distinct images of the various products/sections, to ensure that gaze data is mapped only to the specific object. The demo workspace, for instance, includes several recordings where the wearer looked at different paintings, and subsequently, multiple enrichments were executed to map gaze data to the different artworks (selecting the relevant custom events). This approach is quite similar to what is demonstrated in this Alpha Lab tutorial (https://docs.pupil-labs.com/alpha-lab/multiple-rim/#map-and-visualize-gaze-on-multiple-reference-images-taken-from-the-same-environment) with pictures of different areas within the same living room.

user-37a2bd 27 October, 2023, 17:48:15

Thanks @user-c2d375, understood everything you explained! I do however have another question: the heatmaps that are generated for the RIM will be a consolidated one of all the wearers, won't they? If that is the case, is it possible to generate a heatmap of a product/section for only one wearer? I see that as a possibility only if we create separate projects for each wearer, which would be a little tedious, or do you have a separate program for that, like the ones you have for the gaze metrics and still and dynamic scanpaths?

user-ccf2f6 27 October, 2023, 18:40:49

Hi team, I’ve been facing some issues

user-37a2bd 29 October, 2023, 11:32:34

Hi Team, I had a question on whether it is possible to get the following metrics from Pupil's equipment. I am attaching a photo of the output we get for individual participants from another set of eye-tracking equipment. Please let us know how much of it is possible to get from the Pupil Neon, which we are currently using, or any other equipment in your roster.

Chat image

nmt 29 October, 2023, 19:02:32

Hi @user-37a2bd. It is possible to compute those metrics from Neon's data streams. However, we do not explicitly report them. You would need to process the exports and compute the desired metrics offline. The approach you take would largely depend on the experiment or use case.
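For instance, a minimal sketch of computing basic fixation metrics per recording from a Cloud export, assuming a fixations.csv with "recording id" and "duration [ms]" columns (check your own export for the exact column names):

import pandas as pd

fixations = pd.read_csv("fixations.csv")
metrics = fixations.groupby("recording id")["duration [ms]"].agg(
    fixation_count="count",
    mean_fixation_duration_ms="mean",
    total_fixation_time_ms="sum",
)
print(metrics)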

user-28ebe6 30 October, 2023, 15:26:38

@&288503824266690561 what does a solid green LED indicator mean? The glasses won’t connect to the app but the green LED turns on when connected to the companion device.

mpk 31 October, 2023, 06:56:45

Hi! I think this might be a hardware issue. Please reach out to info@pupil-labs.com to get further diagnostics.

user-e141bd 30 October, 2023, 20:01:05

Hi @nmt, thanks for having the team reprocess the recordings for us! It seems like two of the recordings are still showing the same video transcoding error. I will send you the IDs of those two recordings. Do you mind taking a look at them again? Thanks so much! Or should we try to download the videos locally from the phone for now?

user-37a2bd 31 October, 2023, 08:14:42

Hi Team, we ran a project yesterday with the Neon; we had a total of 10 RIMs, of which I have attached 5 of the heatmaps. The heatmaps seem to be way off on two of them, and the rest seem not to capture the data accurately. We processed two sets of 10 RIMs in the same project. The first time we processed it the heatmaps seemed to be accurate, but the second time we ran it the heatmaps were off. Unfortunately we didn't save the heatmaps the first time around. Could you let me know what the problem is? Also, is there any time limit within which we have to work on the uploaded files? I tried working on a RIM from a video that was recorded 10 days ago and it gave me the following error: "Error: The enrichment could not be computed based on the selected scanning video and reference image. Please refer to the setup instructions and try again with a different video or image."

Chat image Chat image Chat image Chat image Chat image

user-d407c1 31 October, 2023, 08:31:47

Hi @user-37a2bd

The message that you saw, "Error: The enrichment could not be computed based on the selected scanning video and reference image. Please refer to the setup instructions and try again with a different video or image.", indicates that the reference image cannot be found in that video.

You can see here how to perform the scanning recordings https://docs.pupil-labs.com/enrichments/reference-image-mapper/

Without seeing the recordings it is hard to say whether they are appropriate, so feel free to DM me and invite me to your workspace if you want some more concrete feedback.

Regarding the heatmaps, kindly note that the scanning recording is by default also included in the generation of the heatmap, so that might be the reason why it looks off. In an upcoming update of Cloud (targeting next month) you will be able to select the recordings contributing to the heatmap in an easy way. For now, you will have to determine them using events, meaning you will have to create start/end events on the recordings that you are interested in, and then use them in the enrichment. Let me know if you need assistance with it.

user-37a2bd 31 October, 2023, 08:16:57

Let me know if you would like any other data from the above project.

user-37a2bd 31 October, 2023, 10:51:00

Thanks again!

user-ac085e 31 October, 2023, 16:32:08

Hey all, does anyone know if there are any limitations on the number of "wearers" you can create within a workspace?

nmt 01 November, 2023, 09:51:08

Hey @user-ac085e! There are no limitations on the number of wearers you can have in a workspace πŸ™‚. Although if it gets really cluttered, it might be preferable to set up a new workspace, but it depends on your project/preferences.

End of October archive