πŸ‘“ neon



user-5543ca 01 July, 2024, 12:07:55

Hi Rob, we simply need to extend it since there is not a good place to keep the phone nearby when our participant wears the Neon.

user-f43a29 01 July, 2024, 12:12:55

Just so I am certain that I understand, it is not possible for the participant to put the phone in their pocket or into a small pouch/purse or something like a tiny backpack?

Third-party cables can vary in quality. Let me check with the others about which cables could work for you.

user-5543ca 01 July, 2024, 12:08:30

We want to avoid any wire pulling sensation by extending the length of the wire.

user-5543ca 01 July, 2024, 12:09:43

I got a USB-C cable that can transfer power and data, but surprisingly it didn’t work.

user-5543ca 01 July, 2024, 13:59:23

Thank you, let me check and get back to you

user-5543ca 01 July, 2024, 18:50:08

I used a USB-C 3.2 - https://www.amazon.com/UGREEN-Extension-Extender-Nintendo-MacBook/dp/B08NTCD6VQ/ref=sr_1_8?crid=WWC8J90UU9YI&dib=eyJ2IjoiMSJ9.HQB8KNMxDBJJJmaAFrLoK1KUVDz2QGzB2sol-9k07nTcgj9too8FGVxgjZKLFaQRf0lgMLAGxwg8quYABvZf2X51Wn5gvx3CMPISlpFDKOX5lnnem29JNEYwRUurbmj0OPdKGJdHaOeIZ9R7OGmodrgNGwLci5Xuiu7hZLc3cLiWoTAHP4woIq-8xYuWllBnR7py6uOMHVuyjN4D2pSWhJDy6PZ87LXZMPWFm7E7zDk._fTkiZMyVLSHd0x9BK2Za9ZsSkCC2uO8dUuCb5Ixpgg&dib_tag=se&keywords=unitek%2Busbc%2Bextender&qid=1719859731&sprefix=unitek%2Busbc%2Bextende%2Caps%2C118&sr=8-8&th=1

user-f43a29 02 July, 2024, 13:25:35

Hi @user-5543ca , the cable on Neon is USB 2.0, but USB 3.0 cables should also work. We cannot say exactly why that cable did not work for you, as we have not tested it. It could be that the extension cable is too long, or it could be another factor.

Are you allowed to share more details or a photo of your setup? It would be great to find a way to avoid using an extension cable, in order to ensure the best data quality.

user-5543ca 01 July, 2024, 18:50:35

Is the cable on Neon USB-C 2.0 or 3.x?

user-2109ae 02 July, 2024, 11:27:44

Hi! I don't know if this is a stupid question, but I haven't been able to find anything online regarding this. Is it possible to connect the Neon glasses directly to a computer? I saw someone in this server used some kind of ethernet hub to connect; is that another solution that bypasses using a mobile phone? Thanks!

user-480f4c 02 July, 2024, 11:42:52

Hi @user-2109ae! Thanks for reaching out πŸ™‚ This is definitely not a stupid question! Sorry if this hasn't been clear through our docs. Let me try to clarify a few points:

  • Neon needs to be connected to the phone as the calibration-free gaze estimation pipeline relies on the Neon Companion App that is compatible with the phone that we ship with Neon.

  • If you want to control your recordings remotely, this is possible by streaming the data from the phone to a computer! Here is how: 1) using Neon's Real-time API, 2) using our Monitor App, or 3) using a USB hub with an ethernet socket.

  • If you really want to connect the Neon glasses to a computer, we have made Neon compatible with Pupil Capture (see this tutorial). ⚑ ⚠️ However, this is not a turnkey solution, and more importantly, gaze estimation in this case will rely on Pupil Core technology (calibration required etc), rather than a deep-learning approach.

Do you mind sharing more details about your setup? That will help me provide better feedback on how you can use Neon to meet your research needs.

user-2109ae 02 July, 2024, 12:32:29

I do have a question regarding the streaming API! I am able to identify the glasses on the network as well as receive the front video stream via RTSP (not WebRTC). However, this video stream is without the gaze overlay. Is it possible to add this to the stream itself, just like it is added to the stream on the connected phone? Or do I need to add this manually? Thanks!

user-d407c1 03 July, 2024, 06:16:10

Hi @user-2109ae ! You will need to draw the gaze circle manually. If you are using the Python client, here we provide an example of how to do so, in a simple and asynchronous way.
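
As a minimal sketch of that approach (assuming opencv-python and the pupil-labs realtime API Python client; the matched-frame helper and the bgr_pixels attribute come from the client's simple interface):

```python
# Minimal sketch: overlay the gaze circle on the streamed scene video.
import cv2
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
try:
    while True:
        # Receive a scene frame together with the gaze sample closest in time
        frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
        cv2.circle(
            frame.bgr_pixels,
            (int(gaze.x), int(gaze.y)),
            radius=40,
            color=(0, 0, 255),
            thickness=10,
        )
        cv2.imshow("Scene camera with gaze overlay", frame.bgr_pixels)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
finally:
    device.close()
    cv2.destroyAllWindows()
```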

user-5543ca 02 July, 2024, 15:01:05

Thanks Rob.

user-5543ca 02 July, 2024, 15:04:33

We will definitely need an extended cable for our setup, as we cannot make our participants put it in their pockets as you previously suggested. Let me try out the solutions first and get back to you with more information if necessary.

user-f43a29 03 July, 2024, 09:52:38

Hi @user-5543ca , would any of the potential solutions in the attached image help?

Chat image

user-5543ca 03 July, 2024, 12:58:15

Thank you for sharing it. We will consider these options; they might work.

user-3f7dc7 04 July, 2024, 13:33:56

We had the same concern, not wanting the participants to have the phone on them during an experiment. We didn’t want to go to them or ask them to toggle the recording on and off between seated trials. We ended up mounting the phone on a tripod nearby.

nmt 04 July, 2024, 15:04:50

Details of both approaches can be found in our online docs

user-d407c1 04 July, 2024, 15:23:44

Hi @user-3f7dc7 ! It should be possible, although they use different libraries for the network communication, so it won't be straightforward. Pupil Core uses ZMQ under the hood, and how you can start a recording is documented here.

Neon, on the other hand, can simply start a recording using an HTTP POST request to neon.local or your Companion Device's IP followed by :8080/api/recording:start. We also provide a Python client, with an example of how to start and stop a recording here.
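
For example, a minimal sketch of that HTTP request (assuming the requests package; replace neon.local with the Companion Device's IP if name resolution is unavailable on your network):

```python
# Minimal sketch: start a Neon recording via the REST endpoint mentioned above.
import requests

response = requests.post("http://neon.local:8080/api/recording:start")
print(response.status_code, response.text)  # the response indicates whether the recording started
```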

My recommendation would be to start them and then send an event to synchronise them, rather than relying on both starting to record at the same time. Moreover, since we recommend recording the calibration with Pupil Core, check out the Best Practices.

Alternatively, and especially if you have more sensors that you want to synchronise, I would recommend using Lab Streaming Layer.

user-3f7dc7 04 July, 2024, 15:34:48

Thanks! Yes, sending an event sounds like something we can do.

user-00729e 05 July, 2024, 15:17:36

Hi everyone. I use a local network to connect to the glasses with the realtime API. With my Pupil Invisible, the device could be found using the device discovery functions. With my Neon, this does not work, even if I just use the example code from the website. Instead, it only works if I connect via the IP address (device = Device(address=ip, port="8080")). Any idea why this could be?

user-d407c1 05 July, 2024, 15:30:22

Hi @user-00729e πŸ‘‹ ! It should work with both. Just to be sure, can you share which versions of the app and the realtime API you are using, and whether you have both the Pupil Invisible and Neon apps installed on the same device?

user-00729e 05 July, 2024, 15:34:13

Thanks for the reply. I updated both the realtime API and the companion app just today; it's still the same. I used separate devices. I was wondering if it might be a system or firewall setting on the phone that came with the Neon.

user-d407c1 05 July, 2024, 15:40:13

@user-00729e We ship the Companion Devices untouched, and they do not come with any blocking firewall. Have you seen this relevant section in our docs?

The local network must allow MDNS and UDP traffic for the app to work. In large public networks, this may be prohibited for security reasons. You can circumvent this by running a separate WiFi using the phone's hot spot functionality or an extra router.

https://docs.pupil-labs.com/neon/data-collection/monitor-app/#connection-problems

user-00729e 05 July, 2024, 15:44:23

I configured my own local WiFi. It has previously worked just fine with the Invisible's companion device

user-d407c1 05 July, 2024, 15:47:42

Does it resolve in the web browser if you enter neon.local:8080 ?

user-5a2e2e 05 July, 2024, 21:29:31

Hello, I have a question about using the realtime API with Psychtoolbox in MATLAB. When we send events using pupil_labs_realtime_api('Command', 'Event') like how it is done in the demo, we are noticing that it takes about .5-1s to run this line. Is there any way to speed this up? We are syncing to eeg data at a fast rate.

user-f43a29 06 July, 2024, 07:54:34

Hi @user-5a2e2e , that is the previous version of the Matlab integration. We are in the process of switching the documentation over to the current version, which can be found at the pl-neon-matlab repository. To reduce the chance of conflicts, first remove the pupil_labs_realtime_api function and then follow the installation instructions for the current version. Also, just a note in case it isn’t clear: since you are syncing with other devices, when using the real-time API over WiFi you want to use a dedicated, local WiFi router rather than work or university WiFi. Ideally, the router is not connected to the internet.

Since precise time sync is important, may I ask what you are doing when syncing? Does your experiment code need to actively react to Neon’s data stream in real-time, or you just want everything properly time stamped and aligned during post-hoc analysis?

user-ee7c3e 08 July, 2024, 07:41:46

From the announcement

We also added built-in support for Lab Streaming Layer (LSL). Is there documentation on that somewhere? I'm mostly interested in whether video is sent over LSL, which I know was worked on a few years back (with Core, I guess), but I haven't heard about it since.

wrp 08 July, 2024, 07:43:15

Thanks for following up on this. Currently it is just gaze + event data being streamed to LSL. I will clarify the announcement accordingly.

user-5ab4f5 09 July, 2024, 11:32:11

I did some recordings in the Neon Companion app but did not notice that audio recording was disabled when I did the recording. Is there any way to retrieve it after all, or is it lost?

user-480f4c 09 July, 2024, 11:55:16

Hi @user-5ab4f5! If the audio recording was disabled, then unfortunately the audio data was not recorded. Please make sure to enable it next time you want to record audio data as well, because by default this setting is disabled. FYI, in case you haven't done so already, make sure to update the Neon Companion App to the latest version (see our announcement for some new features: https://discord.com/channels/285728493612957698/733230031228370956/1259740457244823563)

user-07e525 09 July, 2024, 12:40:20

Hi, I have a problem with the Neon Companion app. It keeps hanging and the data displayed for the wearers is not correct. I created a new wearer the day before yesterday, and the display in the app says that the account was created 449 days ago and that I haven't made any recordings with the wearer yet, which isn't true either. Are there any ways of correcting this? Or what can I do? I already updated the app to the newest version.

user-07e525 09 July, 2024, 12:44:34

Small addition: not only does the app hang regularly, but so does the device. I keep getting error messages like "the process system is not responding", and I have had to completely shut down and restart the cell phone several times until it worked again. That shouldn't be normal, right?

user-480f4c 09 July, 2024, 12:45:17

@user-07e525 Thanks for reaching out. Can you please create a ticket in our πŸ›Ÿ troubleshooting channel? We can assist you there in a private chat.

user-07e525 09 July, 2024, 12:46:02

Yes, thank you πŸ™‚

user-cc6fe4 09 July, 2024, 15:01:51

Hi, I have some recordings that, when uploaded to the cloud, show errors. Is there a way to re-upload them, or to use the exported recording from the companion (phone) and then upload that to the cloud? Thanks in advance.

user-07e923 09 July, 2024, 15:05:27

Hi @user-cc6fe4, thanks for getting in touch πŸ™‚ Could you open a ticket in πŸ›Ÿ troubleshooting and provide the recording IDs? You can find the IDs on Pupil Cloud by right-clicking the recording > view recording info.

user-cc6fe4 09 July, 2024, 16:37:11

sure, thanks

user-f43a29 09 July, 2024, 23:54:38

Apologies, @user-5a2e2e . I had received the notification. No excuse for my delay.

So, you might want to give the Lab Streaming Layer (LSL) some consideration, as it simplifies the process of synchronizing and temporally aligning disparate data streams. Check our recent announcement for details on how to activate our LSL relay in the latest version of the Neon Companion App. Also, check LSL's Supported Devices page to see if your EEG device already has an integration.

When done this way, the LSL integration will automatically start the data streams when LabRecorder is started, so starting a recording in parallel in the App is up to you.

Then, you should send some events and check that LSL is picking them up correctly and synchronizing them as expected with your EEG data. The results will be saved in a unified file with Neon's data streams and the EEG data stream.

Also, if you still experience timing issues with the new version of the Matlab integration, please let us know.

user-3b5a61 10 July, 2024, 01:58:15

Hi folks. I've been using the Neon for a while now, and it works great and all. However, when I download gaze data from Pupil Cloud, the files seem to be corrupt about 80% of the time (the zip won't open). Is there an easy way to fix this, or is this a known issue?

user-07e923 10 July, 2024, 07:38:14

Hey @user-3b5a61, could you open a ticket in πŸ›Ÿ troubleshooting? Also, it'll be great if you can provide the IDs of the affected recordings in the ticket. To find the IDs, right click the recording on Pupil Cloud > view recording information.

user-15edb3 10 July, 2024, 05:12:24

Hi, I have been looking for the Pupil Cloud enrichments like heatmaps, markings, etc., but I am not able to find them in the interface that I have. Please find attached a screenshot of my view. Can you please help me with how I should access them?

Chat image

user-07e923 10 July, 2024, 07:36:34

Hi @user-15edb3, thanks for getting in touch πŸ™‚ To get the visualizations, you'll need to create a project first. Add the recordings you want to the project by right-clicking the recordings > create project from selection. Then navigate to the project. Click "visualizations" to add heatmaps, or click "enrichments" to run different enrichments.

user-20657f 10 July, 2024, 13:29:01

Hi there. I am curious if anyone is available for a specific inquiry I have regarding the heat maps?

user-07e923 10 July, 2024, 14:06:00

Hey @user-20657f, thanks for reaching out πŸ˜„ What's the question?

In the meantime, you can also check out the heatmap docs. Hopefully your question is answered there.

user-613324 10 July, 2024, 18:21:51

Hi Neon team, is there a way to edit the wearer profile on companion app through the realtime api before the start of the recording?

user-d407c1 11 July, 2024, 06:10:30

Hi @user-613324 ! That is currently not possible. If you would like to see this feature in our realtime API, feel free to suggest it in the πŸ’‘ features-requests channel.

user-d407c1 11 July, 2024, 06:09:25

@user-20657f To add to what my colleague @user-07e923 suggested, we uploaded a new build yesterday that we hope addresses the bug found on Windows machines. Please try the latest version.

The plugin he mentioned is open-source, and you can find the implementation here. You can combine it with this tutorial’s code here to achieve your goal.

If you struggle with coding, as my colleague outlined, you can request this feature to be added to Cloud or as part of an Alpha Lab section, in the πŸ’‘ features-requests channel. But if you are in a rush, note that we can allocate resources to quickly build this tool for you through a custom consultancy package. More details can be found here.

user-20657f 11 July, 2024, 13:51:37

Thank you so much! I put in the request! I appreciate all of the help as always!@user-07e923 @user-d407c1

user-613324 11 July, 2024, 09:15:35

Hi Neon team, I recently noticed a bug that seems to appear more and more often in the Neon Companion app: when using the realtime API to start the recording (device.recording_start()), the recording cannot be started and the program freezes due to the error "400, 'Cannot start recording, previous recording not completed'" (internally sent by the API). However, the recording button was not on at all, and I am pretty sure the network connection and the connection to the eye tracker are fine. The Neon Companion app didn't show any error message. The eye tracker also seems to be normal (I didn't detect any red light flashing). This error can only be solved by restarting the phone. Simply re-plugging the eye tracker into the phone won't help.

user-d407c1 11 July, 2024, 09:19:34

Hi @user-613324 ! Could you kindly create a ticket at πŸ›Ÿ troubleshooting so that we can follow up? When doing so, please provide more details, such as which versions of the OS, Python, the Python realtime API, and the companion app you are using.

user-613324 13 July, 2024, 14:33:55

thanks! I just created the ticket

user-2fb12e 12 July, 2024, 13:31:33

Hello Pupil team! πŸ˜„ I worked a lot with the Pupil Labs Core, and we were very happy with it, but for simplicity's sake we recently bought the Neon. We were very impressed by the gaze quality! Our use case requires getting the gaze position in real time, and we succeeded in doing that without issue 😁 but with the Core, we also had the possibility of getting the eye movement type (fixation and saccade) in real time. I can't find a way to do this with the Neon, and the documentation seems to say it's only possible in post-processing (downloading the gaze file from the cloud). Do you know how I can get the eye movement type in real time / streaming, as I did with the Pupil Core? To stream the gaze to my computer I use the pupil_labs.realtime_api.simple library and the receive_gaze_datum() function; is there another function available to get fixations and saccades in the same way?

user-d407c1 12 July, 2024, 13:48:34

Hi @user-2fb12e ! Currently, fixations and saccades are not generated directly on the Companion device, which means you cannot stream them. We’re working towards making more streams available directly from the device, but we also need to consider the phone’s resource limitations. Even though we ship with a powerful flagship device, many resources are already being utilised.

I'd suggest adding your request to the πŸ’‘ features-requests so that you can follow and track updates on this feature.

In the meantime, you can use your computer to handle the computational load. Similar to this tutorial, where we show how to get blinks almost in real time, you could modify our open-source fixation detector to allow real-time fixation detection.
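
Purely as an illustrative sketch (this is not Pupil Labs' fixation detector), one could stream gaze with the simple real-time client and apply a naive velocity threshold; the threshold below is a hypothetical value that would need tuning:

```python
# Illustrative only: a naive velocity threshold on streamed gaze, NOT the
# open-source fixation detector mentioned above.
from pupil_labs.realtime_api.simple import discover_one_device

VELOCITY_THRESHOLD_PX_S = 50.0  # hypothetical threshold, tune for your setup

device = discover_one_device()
prev = device.receive_gaze_datum()
try:
    while True:
        gaze = device.receive_gaze_datum()
        dt = gaze.timestamp_unix_seconds - prev.timestamp_unix_seconds
        if dt > 0:
            velocity = ((gaze.x - prev.x) ** 2 + (gaze.y - prev.y) ** 2) ** 0.5 / dt
            label = "fixation-like" if velocity < VELOCITY_THRESHOLD_PX_S else "moving"
            print(f"{label}: {velocity:.1f} px/s")
        prev = gaze
except KeyboardInterrupt:
    device.close()
```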

user-2fb12e 12 July, 2024, 13:56:56

@user-d407c1 Thank you very much for your quick response, I'll put it in the features-requests !

user-c541c9 15 July, 2024, 15:25:53

Hi, a quick question about cloud vs. on-device gaze stream: i.e. https://docs.pupil-labs.com/neon/data-collection/data-streams/#gaze from documentation. It says on cloud the gaze is "re-computed" at 200Hz -- Is that only to compensate for possible frame-drops or reduced frame-rate on device (say when it runs hot)? Does cloud run the same algorithms as the device -- e.g. will I miss full-quality gaze estimate if I only make use of the data the phone exports directly?

user-f43a29 16 July, 2024, 10:03:49

Hi @user-c541c9 , essentially, yes. Reduced frame rate on the device can occur, for example, because the user set the app to a lower rate in the app settings (e.g., 33Hz or even disabled altogether), or sometimes due to limited computational resources (e.g., other apps running in parallel). However, the latter usually only affects older phone models (OnePlus 8/10).

Importantly, because the eye images are still recorded at 200Hz (even if real-time gaze isn't), the gaze estimation pipeline can be re-run across all the eye images in Pupil Cloud.

The gaze estimation pipeline in Pupil Cloud is the same as that used in the latest Companion app version. You can find the version of the gaze pipeline used by your app by checking the pipeline_version field in the info.json file for a given recording.
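
For instance, a minimal sketch for checking that field (the recording folder path below is hypothetical):

```python
# Minimal sketch: read the gaze pipeline version from a recording's info.json.
import json

with open("my_recording/info.json") as f:
    info = json.load(f)

print(info.get("pipeline_version"))  # gaze pipeline version used by the app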

user-5be4bb 15 July, 2024, 19:40:05

Hi, I was exporting data using three methods: the plc export function, Neon Player, and Pupil Cloud. I noticed that the timestamp values in plc are different from the timestamps in Neon Player and Cloud (Neon Player and Cloud have the same values). Also, the blink and fixation data differ slightly between the three methods. My question is: is this an issue, or is there no problem if I get values only from Pupil Cloud? I also need pupil size data, which can only be exported from the cloud. Thank you in advance for your help.

user-f43a29 16 July, 2024, 10:17:34

Hi @user-5be4bb , could you describe the timestamp differences in a bit more detail?

You should not have problems if you only use the Pupil Cloud data. In fact, if your work requires as much detail from Neon as possible, then the Pupil Cloud Timeseries data will be the easiest route, since Pupil Cloud re-runs the gaze & eyestate pipeline at the full 200Hz sampling rate for all recordings (see message above https://discord.com/channels/285728493612957698/1047111711230009405/1262711282617024553).

Also, note that eyestate is estimated in real-time on the Companion Device since version 2.8.10-prod, so pupil size is also in the Native Recording Data. If you want to inspect it on your desktop computer, you can activate the eyestate timeline in the latest version of Neon Player.

As mentioned in the README of pl-rec-export, some results will be slightly different from Pupil Cloud. It is not an issue with the software; which output suits you best mainly depends on the goals of your data analysis.

Since eyestate and pupil size are important for you, you might also find it useful to set the inter-eye distance for each wearer to get more accurate eyestate estimates. Note that it will only have an effect on all recordings that you make after you change the setting.

user-5be4bb 16 July, 2024, 13:15:56

Here is a table showing a comparison between the values. Also, another question: when using Pupil Invisible, we can get a csv file called gaze position, so for each gaze datum at 200 Hz we can know the frame index related to the world_timestamp file (scene camera data). But for Neon, I could not export this file. How can I associate the gaze data at a rate of 200 Hz with the scene data at a rate of 30 Hz?

Comparison.xlsx

user-8825ab 16 July, 2024, 15:34:12

Hi, I have 2 recordings showing that no scene video is contained, but when we recorded, it was working brilliantly. Can I get support for that please? Thank you.

user-07e923 17 July, 2024, 05:45:50

Hi @user-8825ab, can you open a ticket in πŸ›Ÿ troubleshooting? If the recordings are already uploaded onto Pupil Cloud, please also provide the IDs. You can find the IDs by right-clicking the recording > view recording information. Thanks πŸ™‚

user-c541c9 17 July, 2024, 09:17:40

QQ: Does the pl-neon-recording handle multi-part data files? I figured from documentation that videos, gaze etc. may be distributed across multipart files (based on size? or duration?) suffixed with ps1, ps2 etc. Reconciling timestamps etc. via official tools would be reliable and convenient.

user-cdcab0 17 July, 2024, 09:28:10

Yes, it is designed to handle multi-part data files πŸ™‚

user-c541c9 17 July, 2024, 09:36:57

I have seen LSL mentioned in the documentation. I don't know much about it. Is it something to help interface an "external" sensor/device with Neon to collect data in tandem?

Does the external sensor need a SW counterpart that knows to speak LSL? -- e.g. if I want to collect 3D info with a depth-camera together with Neon, will I have to develop some SW tools myself to connect Neon and whatever else is reading depth camera stream on a PC via LSL?

user-cdcab0 17 July, 2024, 10:11:20

Is LSL something to help interface an "external" sensor/device with Neon to collect data in tandem?

Yes! Generally though, both systems interface to LSL, rather than LSL facilitating an interface to Neon

Does the external sensor need a SW counterpart that knows to speak LSL

Yes

if I want to collect 3D info with a depth-camera together with Neon

LSL may not be the best tool for this. Rather, you could use the real-time API to send synchronizing events. For example, you can start the Neon recording, then start the depth camera recording. When the depth camera recording starts, you can send an event (e.g., "depth-camera-started") to the Neon recording.
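
For example, a minimal sketch of sending such an event (assuming the pupil-labs realtime API Python client and that the Companion phone is discoverable on the local network):

```python
# Minimal sketch: mark the depth-camera start in the Neon recording with an event.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# ... start the Neon recording and the depth-camera recording here ...

device.send_event("depth-camera-started")  # timestamped within the Neon recording
device.close()
```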

user-750dbe 17 July, 2024, 09:59:02

Dear Pupil Labs support, I want to use Matlab to set events during a Neon recording. Checking out https://github.com/pupil-labs/pl-neon-matlab I read the following: "Please note: Due to changes in MATLAB R2024a, this package is not yet supported there. Please do not use a version newer than R2023b for now."

Is this still the case? Do we have to use R2023b? @user-52c68a, might this be related to your question from yesterday?

user-f43a29 17 July, 2024, 10:00:15

Hi @user-4b18ca , if it is alright, I have moved your message to the πŸ‘“ neon channel, as the πŸ‘ core channel is for questions about Pupil Core.

Briefly, yes, it is still the case that Matlab R2024a is not yet supported. Whether you need to use R2023b or an older version depends on the Python version that you are using. See Mathworks' MATLAB-Python compatibility table.

This pl-neon-matlab package is only for Neon, so if @user-52c68a is using Pupil Core, then it will not be relevant for their issue.

user-53a8e1 17 July, 2024, 10:01:07

Hi all. We just tried the new manual mapper and it is not working for us. When we start the enrichment, it says "Internal server error", and although it gives a count of fixations, it does not show them at all, and skipping to the next fixation gives no result.

user-07e923 17 July, 2024, 10:10:04

Hey @user-53a8e1, thanks for reaching out πŸ™‚ Could you create a ticket in πŸ›Ÿ troubleshooting? We'll continue the conversation there. Also, it'll be helpful to provide the enrichment ID. You can find this by clicking the dots > copy enrichment ID.

user-9c4952 21 July, 2024, 20:22:48

Hi all! I use the Neon eye tracker and need the saccades data; however, when I download it, it is a blank Excel sheet.

user-9c4952 21 July, 2024, 20:23:05

Could you please help me with saccades data?

user-d407c1 22 July, 2024, 06:38:45

Hi @user-9c4952 ! What web browser and OS are you using ? Could you kindly create a ticket at πŸ›Ÿ troubleshooting and share the ID of your affected recording/s?

user-c541c9 22 July, 2024, 14:34:38

Hi, quick question about IED parameter in Neon. I understand if I don't set the value in the app during recording, the app assumes a default population-average value of 63mm. Now since gaze is computed real-time, is there an opportunity to edit the IED value for various subjects post-hoc in the exported recording and recompute the gaze?

user-d407c1 22 July, 2024, 14:39:21

Hi @user-c541c9 ! That’s correct! By default, it uses the average population value of 63mm if not given. Please note that this parameter only affects the eye state data reported, not the gaze estimation.

Currently, there is no way to adjust this parameter post-hoc, but if you would like to see it implemented, please refer to πŸ’‘ features-requests

user-c541c9 22 July, 2024, 14:50:34

Assume two subjects have IEDs of 55mm and 65mm and they observe a point directly in front of their right eye at a distance d. When both eyes are observing this point, the left eye is going to have an angle (the right eye will have zero degrees of rotation, ignoring the offset correction for the sake of argument, so the optical/visual axes are identical), because it has to rotate a bit to the right to line up with the point.

I believe IED will cause the two subjects' left eye to turn different amounts -- by a larger angle for the 65mm IED person than the 55mm IED person. How will the gaze estimation not be affected then?

user-bed573 22 July, 2024, 21:16:07

Hello, I'm having trouble using my Neon device - when I connect it to the phone, the Neon app attempts to update the FPGA, but the progress bar does not move, and I have left it sitting for more than 20 minutes. My app version is 2.8.25-prod

I tried the following steps to force stop the Neon app, but still no change: https://discord.com/channels/285728493612957698/1047111711230009405/1224932216534990848

Chat image

user-07e923 23 July, 2024, 05:16:35

Hey @user-bed573, could you create a ticket in πŸ›Ÿ troubleshooting? We'll continue the conversation there. Thanks πŸ™‚

user-29f76a 23 July, 2024, 10:35:35

Hi everyone. Has anybody here encountered problems with unrecognized AprilTags when doing the Marker Mapper enrichment? Is there any specific way to make sure the marker is recognizable?

thanks

user-d407c1 23 July, 2024, 10:41:28

Hi @user-29f76a ! What family are you using? Kindly note that only the 36h11 family is recognised in Cloud.

Other than that, it is hard to know why it might have failed without seeing the video or a screenshot of the set-up.

Please ensure that your markers have a white margin of at least the width of the smallest white square in the marker.

Have a look at this previous message: https://discord.com/channels/285728493612957698/285728493612957698/1220676625604153416

user-29f76a 23 July, 2024, 10:47:29

Oh, I understand. I used 36h11. But still, does the tag size influence the recognizability? Maybe it needs to be bigger?

user-29f76a 23 July, 2024, 10:43:25

What do you mean by family? I just downloaded the apriltags from pupil labs website @user-d407c1

user-d407c1 23 July, 2024, 11:13:22

Then those would already be 36h11, and yes, the size, illumination, and margins can influence whether they are recognised. If you can share a picture of the set-up, we could provide more concrete feedback.

user-29f76a 23 July, 2024, 12:39:28

I see, thanks. How big is the minimum tag size that is commonly used?

user-d407c1 23 July, 2024, 12:51:59

That would depend on the distance at which it is displayed, among other factors. See https://discord.com/channels/285728493612957698/285728493612957698/1220676625604153416

user-80c70d 23 July, 2024, 14:02:46

We have used Pupil Labs eye tracking technology in our real-world spatial navigation experiment and additionally tracked our participant’s GPS position. Our goal is to combine the GPS position with the gaze ray in order to determine the real-world position of the perceived visual stimulus. This is not possible with the provided gaze data, where gaze is given as x and y coordinates of the mapped gaze point in world camera pixel coordinates. Would it be possible to transform the gaze representation from camera pixels to the angular difference between heading and gaze, or to a format that works similarly for our research interest?

user-f43a29 24 July, 2024, 07:39:51

Hi @user-80c70d πŸ‘‹ , yes, that is possible πŸ™‚

In the gaze.csv files provided by Pupil Cloud, you will see two columns: azimuth and elevation. These are the spherical coordinates of a 3D gaze ray originating at the center of the scene camera, expressed in the scene camera coordinate system. They are the undistorted and unprojected coordinates of the 2D camera pixel coordinates. Since Neon has an IMU, you also get information about head pose. More details about these data streams and their coordinate systems are in our docs.

Taken together, you can convert the gaze ray to what we call the "global pose" coordinate system, where Y points at magnetic north, Z points upwards (directly opposite gravity) and X is the cross product of these two (i.e., pointing rightwards). You get to global pose by way of the IMU data. You can also use the IMU data to construct a heading vector in global pose coordinates.

You can then get the angular difference between the gaze ray (in global pose) and the IMU heading vector with the standard vector dot product formula.
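
As a minimal sketch of that last step (assuming both vectors are already expressed in the same global pose coordinate system; numpy only):

```python
# Minimal sketch: angular difference between a gaze ray and a heading vector.
import numpy as np

def angle_between_deg(gaze_ray, heading):
    """Angle between two 3D direction vectors, in degrees."""
    gaze_ray = np.asarray(gaze_ray, dtype=float)
    heading = np.asarray(heading, dtype=float)
    cos_angle = np.dot(gaze_ray, heading) / (
        np.linalg.norm(gaze_ray) * np.linalg.norm(heading)
    )
    # Clip guards against floating-point values slightly outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Example: a gaze ray pointing north and slightly up vs. a level heading due north
print(angle_between_deg([0.0, 1.0, 0.1], [0.0, 1.0, 0.0]))
```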

Then, if necessary, you can find the remaining transformation to represent your GPS data in the same space (or transform the data in global pose to your GPS space, whichever is easier).

Here is a Python file with three functions that show how to do the mentioned transformations to global pose. Please let us know how it works out for you.

user-95d0dd 24 July, 2024, 12:11:56

Hi, I have a problem with the saccades data. I have 48 recordings and 13 of them have 0 saccades. But I can see the saccades in the videos from Pupil Cloud. However, when I download the TimeSeries data, the saccades.csv file is empty. This problem directly affects my analysis. Can you help with it?

user-480f4c 24 July, 2024, 12:23:53

Hi @user-95d0dd! We're aware of this issue and our Cloud team is working on a fix. The recordings will be reprocessed and you should be able to get the saccades data soon. Thanks for your understanding πŸ™πŸ½

user-480f4c 24 July, 2024, 12:27:25

@user-95d0dd actually, could you please specify which app version you used for the affected recordings?

user-0001be 24 July, 2024, 14:09:03

Hello! I seem to be having overheating issues with the Neon module after 20 mins of use. The wearer then has to remove it from their head as it gets uncomfortably hot. Also, the phone heats up quite a bit, to the point that we need to use a cloth or glove to hold it. Is there any way to mitigate this, especially for the module?

user-d407c1 24 July, 2024, 14:37:59

Hi @user-0001be πŸ‘‹ β˜€οΈ ! May I know what the ambient temperature is during use, and whether you are using the heatsink version of the frames? https://discord.com/channels/285728493612957698/733230031228370956/1200298162242465842

If it’s quite warm outside, which is common this season in the northern hemisphere, the module and phone can also get warmer, but there are a few things you can do to lower the temperature. 🫠

Here are some suggestions:

  • Heatsink Frames: If you are not already using the heatsink version of the frames, consider upgrading to our new improved frames featuring an aluminum heat sink. These frames dissipate heat more effectively, keeping the module around 10Β°C above ambient temperature, which should be more comfortable.

  • Storage Tips: While not in use, keep the glasses stored in a cooler environment and avoid direct sunlight.

As for the phone, keeping it out of direct sunlight, placing it in a pouch with ventilation, and turning off the display can help minimise the phone's temperature. Note that if the temperature reaches a certain level, it will automatically shut down to prevent damage to the CPU.

You can also reduce the workload on the phone by downsampling the gaze estimation and disabling eye state, which will result in lower temperatures. You don't have to worry about the data sampling rate, as it will be reprocessed in Cloud to give you that data at 200Hz (including eye state data).

user-baddae 24 July, 2024, 14:35:12

Hi. When I use the Anker USB hub, the companion device does not recognize the pupil module. The ethernet isn't recognized by the phone and the laptop at the same time. Any advice on troubleshooting?

user-d407c1 24 July, 2024, 14:45:46

Hi @user-baddae ! For both to be properly recognised, you may have to connect them in a specific order.

  1. Firstly, unplug all cables from the USB hub and unplug the hub from the phone.
  2. Close the Neon Companion app. Plug Neon into the port marked with 5Gbps on the hub.
  3. Start the Neon Companion App on the phone and wait for the "Plug in and go!" message
  4. Plug the USB cable of the Anker USB hub into the phone.
  5. Wait for Neon to be recognized.
  6. Now, you can connect the Ethernet cable to the USB hub and to a free Ethernet port on your computer.
  7. At this point, you should consider turning off the WiFi connection of the Neon Companion phone to be sure that it only uses the Ethernet connection to send data. It could also be worth it to turn off the phone's Hotspot and Ethernet Tethering, if those are activated. All of these options are found in Settings -> "Network & internet".
user-baddae 24 July, 2024, 15:09:46

Thank you, it works now - however only when both the computer and the phone are connected to the same wifi network. I believe it isn't using Wifi to connect and is using ethernet regardless though?

user-d407c1 25 July, 2024, 08:20:10

Hi @user-baddae ! You would need to disable the WiFi if you want it to use Ethernet. You may also need to configure the network on your computer to use this local Ethernet connection.

user-f43a29 25 July, 2024, 09:00:15

Hi @user-baddae , I'm briefly stepping in for @user-d407c1 on this one.

I just want to say that the instructions for establishing an Ethernet connection vary across Operating Systems and OS versions. Basically, you need the Ethernet connection of the USB hub to be recognized as a separate wired connection. This is typically handled in the Network settings panel of your OS. You will probably need to manually add a new connection.

Let us know how it works out for you!

user-f43a29 25 July, 2024, 11:53:37

Actually, @user-baddae , could you share what Operating System you are using?

user-baddae 25 July, 2024, 12:00:56

I’m using Windows… I think the method you specified works on Linux, but I’m not sure if I can do that on Windows.

user-f43a29 25 July, 2024, 12:02:35

Windows 10 or 11?

user-0262a7 26 July, 2024, 07:22:31

Hi. Are "Frame only" accessories already included "Nest PCB" & USB-C cable, or do we need to order and install/solder them ourselves?

user-07e923 26 July, 2024, 07:42:34

Hey @user-0262a7, thanks for reaching out πŸ™‚ Are you referring to the nest PCB from the accessory section of our shop? If so, this is what it looks like. It comes with the USB-C cable already soldered.

user-934d4a 26 July, 2024, 07:48:21

Hi Pupil Labs team! I am looking for information on how to establish spatial reference points. I have seen that AprilTags can be used to delimit a screen, for example, but I would like to know if I can use these AprilTags to understand how the user moves through space.

I don't think the Reference Image Mapper will work in the environment where we are going to use the glasses, since it is a very changeable environment.

user-d407c1 26 July, 2024, 08:00:54

Hi @user-934d4a ! Thanks for sharing your question here.

One way would be to follow this article. In it, you can see how a 3D model of the environment is generated with a tool like LumaAI, then the Reference Image Mapper is used to get the camera poses, and an AprilTag is used to align the poses and the model together. That would give the most realistic model.

Alternatively, since you mentioned that the Reference Image Mapper may not work due to the very changeable environment, you may want to have a look at the head pose tracker in Neon Player. This plugin uses several AprilTag markers of the same size to determine the head pose.

user-0262a7 26 July, 2024, 08:12:27

nice, thank you so much

user-baddae 26 July, 2024, 08:45:54

Hey, another question - I see that through Pupil Cloud recordings it's possible to get the timestamp from the module itself at the instant it records a data point. Is this possible in real time, or is the client device's (laptop with code) timestamp the only one you can get during real-time collection?

user-480f4c 29 July, 2024, 07:28:11

Hi @user-baddae. The timestamps are provided from Neon, so you can easily get them in real time without using Pupil Cloud if you want.

For example, you can use the receive_gaze_datum function to get the gaze data which includes the gaze coordinates and timestamp of this gaze data point in nanoseconds (see here)
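
A minimal sketch of that (assuming the pupil-labs realtime API Python client and that the phone is discoverable on the local network):

```python
# Minimal sketch: receive one gaze datum, which carries its own timestamp.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()      # find the Neon Companion on the network
gaze = device.receive_gaze_datum()  # blocks until the next gaze sample arrives
print(gaze)                         # includes the gaze coordinates and the timestamp
device.close()
```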

user-bda2e6 29 July, 2024, 19:39:21

Hello! I saw from the Neon specs on the website that pupillometry data is available in Pupil Cloud. Is it possible to get this data offline from the companion phone directly without using Pupil Cloud?

user-d407c1 29 July, 2024, 20:37:09

Hi @user-bda2e6 ! Since the 25th of April (version 2.8.10-prod), pupillometry and eyestate are estimated in real time on the Companion Device. https://discord.com/channels/285728493612957698/733230031228370956/1232987799003729981

That means pupil size is also in the Native Recording Data. If you want to inspect it on your desktop computer, you can activate the eyestate timeline in the latest version of Neon Player.

user-bda2e6 29 July, 2024, 20:48:27

Thank you!!

user-c541c9 29 July, 2024, 22:09:59

Hi, quick-question: For a wearer, can offset correction be applied to a saved recording? i.e. if there is no offset correction defined for the wearer during a recording, can defining an offset for the wearer afterwards help recalculate the gaze in earlier recordings?

user-07e923 30 July, 2024, 06:18:12

Hey @user-c541c9, for context, once you've set the offset correction in the wearer profile from the Companion app, subsequent recordings will automatically apply the offset correction if you select the same wearer profile. The correction can always be modified for that profile, but it is only "applied" after you've selected the profile and made a recording.

You can change the offset correction on Neon Player when you click on the gaze icon. This is also possible on Pupil Cloud. Please note that these corrections are unique to each recording. You'll have to set the correction per recording.

It's not possible to create wearer profiles via the real-time API. If you'd like to see this feature, please upvote the feature request: https://discord.com/channels/285728493612957698/1264959295687229551

user-c541c9 29 July, 2024, 22:12:49

Hi, I was also wondering if wearer profiles can be created on the fly using the real-time API, supplying IED and offset correction values, and then launching a recording for the newly defined wearer (such that the real-time gaze computation has the appropriate adjustments made using these two subject-specific parameters)?

user-2e0181 30 July, 2024, 23:10:18

Hello, I would like some help on a basic tutorial to extract data (such as pupil dilation) from a Neon... please point me in the right direction

user-480f4c 31 July, 2024, 05:56:54

Hi @user-2e0181 . Pupil dilation is provided as part of Neon's raw data folder.

  • If you're using Pupil Cloud, simply right-click on the recording of interest, select Download, and then the option Timeseries and Scene Video. This folder will contain several csv files. Pupil diameter data is available in the 3d_eye_states.csv file (a minimal loading sketch follows after this list).

  • If you don't upload your recordings to Pupil Cloud, then simply transfer the recording from the phone to your laptop/PC (please find the instructions here), download and open our offline desktop application, Neon Player, and drop the recording folder on Neon Player. Exporting the Neon data from there will provide you with a csv file that will contain the pupil diameter data.
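
As a minimal loading sketch for the Pupil Cloud route above (assuming pandas and an unzipped Timeseries download; the folder path is hypothetical):

```python
# Minimal sketch: load the eye state file, which contains the pupil diameter data.
import pandas as pd

eye_states = pd.read_csv("my_recording/3d_eye_states.csv")
print(eye_states.columns.tolist())  # inspect the available columns, incl. pupil diameter
print(eye_states.head())
```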

user-dcc042 30 July, 2024, 23:31:44

Hi there, I'm interested in measuring PERCLOS, and my understanding is that the only way to do this is to test and select appropriate blink detector thresholds, and then operationalize PERCLOS as the proportion of time wherein pupils are not detectable. Does that sound correct, given the Neon hardware and post hoc software? Alternatively, it also looks like video taken of the eyes can be exported -- if that's the case, then eye aspect ratio could potentially be coded either frame by frame, or using machine vision or something, to derive PERCLOS. Has anyone here attempted to do something like this before? Any advice appreciated.

user-d407c1 31 July, 2024, 08:04:23

Hi @user-dcc042 ! The PERCLOS metric is currently not available in our streams/data output. It cannot be derived from the blink detector as it only provides binary outputs (yes/no) and does not measure eyelid openness.

We might be able to include this metric by early next year, but to help us prioritise this feature and to keep track of progress, I'd suggest that you create a feature request on πŸ’‘ features-requests and upvote it.

In the meantime, the eye videos are recorded and available in the Native Recording Format, so you can load them into your own algorithm to compute this metric.
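
For instance, a minimal sketch for iterating over the eye video frames (assuming opencv-python; the file name below is hypothetical and should point at the eye video in your Native Recording Data folder):

```python
# Minimal sketch: loop over eye-video frames and plug in your own openness metric.
import cv2

cap = cv2.VideoCapture("recording_folder/eye_video.mp4")  # hypothetical file name
frame_count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_count += 1
    # ... run your own eyelid-openness / eye-aspect-ratio estimation here ...
cap.release()
print("processed", frame_count, "eye frames")
```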

user-debcb6 31 July, 2024, 04:38:31

Hi, I am facing some trouble creating enrichments through the Reference Image Mapper. Here's the ID of a past enrichment which was successful: 5cb0206e-44c9-4d92-b5fa-a4cf1767a198, and here are the recent enrichment IDs which are showing errors: 22c39d5d-325c-425a-b995-e8a2219e7fcf and bffe63e3-b7dc-438b-bf99-1fc513aa4d3d. I cannot seem to identify where things are going wrong; your assistance would be of great help! Thanks

user-debcb6 31 July, 2024, 05:00:04

I am attaching screenshots of the enrichment that was completed and of the one which is continuously showing an error. I have tested it with several photos taken from different angles and also took the recordings again. Thanks again for looking into this matter.

Chat image Chat image

user-480f4c 31 July, 2024, 05:49:14

Hi @user-debcb6. Sorry to hear you're having issues with your enrichments. For the enrichment that failed, it looks like the scanning recording was not optimal.

As you can see in these two images that you shared, the successful enrichment on the left shows these white dots which reflect the 3D representation of the environment which is required for the Reference Image Mapper to work. In contrast, this environment is not built on the right image for the enrichment that failed.

Here are some tips for a more optimal scanning recording:

  • Ambient Lighting: Ensure your scanning recording is made under similar ambient lighting conditions as in the reference image and your eye-tracking videos. This is crucial for successful mapping.

  • Capture Multiple Angles: Try to capture the scene from multiple angles and distances. The Reference Image Mapper works best for feature-rich environments and when many points are captured. Could you maybe try and use a reference image that captures a bigger part of the scene? For example, a more horizontal view including the monitor and features of the room, eg the desk and chairs in the background as they appear in your eye tracking recording. Adding more features improves mapping outcome.

Please review our scanning best practices: https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/#scanning-best-practices

user-debcb6 31 July, 2024, 21:11:01

Hi Nadia, Thanks a lot for your help, I captured multiple angles in new recording and it worked!!😍

user-934d4a 31 July, 2024, 09:24:08

I have a couple of questions: Do you know what the maximum battery life of the device is while using Neon? Is there any way to power the smartphone while using Neon?

user-d407c1 31 July, 2024, 09:47:57

The device’s battery lasts around four hours, though this may vary slightly depending on your Companion Device model, whether you are streaming or using the LSL relay, and the real-time sampling rate.

Yes, you can charge the device while recording. Depending on the model, you can:

  • Wirelessly charge the device (only Moto Edge 40 Pro at 15 watts).
  • Use a USB Type-C dongle with power delivery to charge the device while recording (all Companion Devices).
user-934d4a 31 July, 2024, 15:04:23

I hadn't seen that button, solved! thanks

user-c541c9 31 July, 2024, 15:23:45

Hi!

I looked at the eye videos in the companion app attached to the device today and found the left eye video much darker compared to the right eye. When I moved closer to a window (diffused sunlight coming in), the left eye video got brighter. Is there a defect in the IR illuminator for the left eye? What should I do?

user-d407c1 31 July, 2024, 20:08:25

Hi @user-c541c9 ! It’s possible for the image of one eye to appear slightly brighter on some modules, but this does not affect the gaze or eye state estimates. Are you running a custom analysis on the eye images, or is this affecting your workflow in any way?

user-839751 31 July, 2024, 19:24:22

Hi @user-f43a29 , I am using MacOS 14.5. I found the internet sharing settings in the system settings and turned it on, but when I run the code "device = discover_one_device()", no device is found (for programming language, Python is fine). I do need to achieve very precise synchronization and would need the time offset to be measured and reported. In addition, is there any way to measure the eye tracking accuracy for each participant? I assume each participant would differ slightly in accuracy. Thank you for your help! πŸ™‚

user-f43a29 01 August, 2024, 08:58:40

Hi @user-839751 , I will send the instructions for MacOS 14.5 in a second message, as otherwise, it hits Discord's character limit.

As mentioned, you can find tips and code to achieve precise time sync and measure roundtrip transmission time, as well as the clock offset, in our documentation.

If you want more info about the accuracy of Neon, you can check out the associated whitepaper. There, you will find statistics about Neon's accuracy in different settings. If you would like to evaluate per-participant accuracy, you could start by replicating one of the validations from that article.

If you want to squeeze every bit of accuracy out of Neon, then you could do gaze offset correction per observer, but as the accuracy report shows, this is really only necessary for a subset of observers. You can just ask a participant to look at the center of a simple target and see if there is a significant and consistent offset. You can do gaze offset correction in the wearer profile on the Neon Companion App or post-hoc in Pupil Cloud. Note that you should make a separate wearer profile for each participant, to reduce the chance that you accidentally apply the wrong offset correction to the wrong users.

End of July archive