πŸ‘“ neon


user-480f4c 02 January, 2024, 11:40:21

@user-15edb3 , I'm replying to your message (https://discord.com/channels/285728493612957698/285728493612957698/1191684449432244305) here. Can you please elaborate on your research question and planned analysis?

user-15edb3 02 January, 2024, 13:03:42

Thanks... hi. So I have these wearable glasses and I had been collecting data by uploading it to the cloud and downloading it. In those exports the gaze positions were not normalised and were in absolute values, so I get gaze values like (700,400), (600,500), etc. But recently I found that with offline data collection the frame number also gets recorded. However, the gaze data, i.e. norm_x and norm_y, are normalised values, and for my algorithm I need absolute values. Basically this new data is just scaled and the reference is changed, but where do I find that scaling factor?

Chat image

user-480f4c 02 January, 2024, 13:12:25

Thanks for clarifying @user-15edb3 - Neon Player exports the gaze data in normalised coordinates. However, you can easily convert to pixels by multiplying the values by the image frame resolution (see also here). We will be adding additional information to our Neon Player docs to cover all relevant details. Stay tuned for updates! πŸ™‚

user-15edb3 03 January, 2024, 07:28:17

That seems to work partially, but I still doubt whether it accurately matches the unscaled values it was giving previously, or am I doing something wrong? I multiplied norm_x by 1600, multiplied norm_y by 1200 and subtracted the result from 1200 (since the reference is top left for the unscaled values and bottom left for the scaled ones). I feel 1600 is not the exact scalar to multiply by. The discrepancy seemed negligible, but my AOI detection algorithm doesn't seem to work now. Kindly help.

user-480f4c 03 January, 2024, 09:16:54

Your approach to getting the pixel coordinates is correct. Could you elaborate a bit on your AOI detection algorithm?
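
For reference, a minimal sketch of that conversion in Python, assuming Neon Player's gaze_positions.csv export with norm_pos_x/norm_pos_y columns and a 1600x1200 scene camera (adjust the file name and columns to your export):

import pandas as pd

WIDTH, HEIGHT = 1600, 1200  # scene camera resolution

gaze = pd.read_csv("gaze_positions.csv")  # Neon Player export (path assumed)
gaze["gaze_x_px"] = gaze["norm_pos_x"] * WIDTH
# the normalised origin is bottom-left, the pixel origin is top-left, so flip y
gaze["gaze_y_px"] = (1.0 - gaze["norm_pos_y"]) * HEIGHT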

user-15edb3 04 January, 2024, 07:08:26

That worked. I had some other issue in my code. Thanks for your response.

user-f77049 04 January, 2024, 18:47:37

Hello, I am working on pupillometry with the Neon device, and in the raw data I do not see any distinction made between the right and left eye. Similarly, in the online documentation there is no clarification. Could you tell me if this is a randomly chosen eye or an average of the diameters of both pupils? Thank you!

nmt 05 January, 2024, 02:18:59

Hi @user-f77049 πŸ‘‹. You're correct that there is no distinction made between the left and right eye. Please see this message for reference: https://discord.com/channels/285728493612957698/1047111711230009405/1180106433086365786

user-f77049 05 January, 2024, 10:24:42

Thank you for the response. However, the diameters of the right and left pupils can differ by up to 1 mm, which is huge when the variations related to cognition do not exceed 0.5 or 0.6 mm... The relative variations in pupil diameter are the same, though, it's true. It would be interesting in the future to actually have the data for each eye separately. Have a good day

user-e74b04 08 January, 2024, 12:43:14

Hello,

I am trying to connect our Neon to LSL. I have installed the LSL relay, but it cannot find any devices. I was able to get the stream in my browser (even though the URL did not work, I used the IP), so I am on the same network. But I am also not the biggest Python pro, so I might have made a mistake there. I installed the lsl-relay package in PyCharm and then used pupil_invisible_lsl_relay.exe.

user-e74b04 08 January, 2024, 16:14:23

Alright, I kind of fixed it by connecting directly using the IP.

user-cdcab0 08 January, 2024, 18:19:37

Yeah, that's typically the solution. On some network configurations the mDNS broadcast/discovery won't work, and specifying the IP address manually is the only way around it
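
If discovery fails, connecting by IP can also be verified with the real-time API's simple client. A minimal sketch (the address below is a placeholder for your Companion phone's IP):

from pupil_labs.realtime_api.simple import Device

# connect directly by IP when mDNS discovery fails
device = Device(address="192.168.1.42", port="8080")
print(device.phone_name)
device.close()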

user-e74b04 08 January, 2024, 18:49:40

@user-cdcab0 Thank you. Now, in a test run the gaze tracking worked. Is there a way to get the pupil size data directly or do I have to first upload the video to the cloud and then download the pupil size?

user-cdcab0 08 January, 2024, 18:52:22

That data is currently only available via cloud

user-e74b04 08 January, 2024, 18:54:17

Are there plans to get it in real-time or at least locally on the phone?

user-cdcab0 08 January, 2024, 19:05:25

That's the plan, but there is no definitive timeline for that availability just yet

user-b6f43d 09 January, 2024, 09:23:35

Hello, when I am trying to switch workspace or add a wearer, it shows an error like this. What should I do?

user-d407c1 09 January, 2024, 09:41:08

Hi @user-b6f43d ! Could you please try logging out of the app and back in?

user-b6f43d 09 January, 2024, 09:52:33

Thank you, it works now

user-480f4c 09 January, 2024, 13:34:19

Hi again @user-b327ef πŸ‘‹πŸ½ ! Regarding your question here: https://discord.com/channels/285728493612957698/285728493612957698/1194271538153783337, Neon's pupil-size measurements are provided as a single value, reported in millimetres, quantifying the diameter of the 3D opening in the centre of the iris. Currently, our approach assumes that the left and right pupil diameters are the same. According to physiological data, this is a good approximation for about 90% of wearers.

Note, however, that we do not assume that pupil images necessarily appear the same in the left and right eye images, as apparent 2D pupil size strongly depends on gaze angle. We are working on an approach that will robustly provide independent pupil sizes in the future. Unfortunately, we cannot offer a concrete timeline for completion.

user-23177e 09 January, 2024, 14:40:29

Hi there Pupil Labs, we received our 2 Neon Ready Set Go trackers just this week and have been testing them. However, 1 of the 2 trackers is not working properly. It will not always connect to the Companion properly; it stays stuck with the white LED on. At other times it will connect (and the app keeps asking for permissions each time), but then recordings will just stop randomly. I'll send a photo of the state when it is not connecting to the phone.

user-480f4c 09 January, 2024, 14:51:14

Hi @user-23177e πŸ‘‹πŸ½ ! I'm sorry to hear that you are experiencing issues with one of your glasses. Could you please send us an email to info@pupil-labs.com or send me a DM with your email? This way, I can provide you with some debugging steps to identify the root cause of this issue. Thanks!

user-23177e 09 January, 2024, 14:41:55

We've tested both companions with both neon devices, and found that the problem is not related to the companion phones. These both work fine with the other neon tracker.

user-23177e 09 January, 2024, 14:42:40

What could the issue with the Neon tracker be? Should we send it off for a replacement perhaps? Thanks for your support!

user-23177e 09 January, 2024, 14:47:07

My colleague also just informed me that someone from our institute (Max Planck Institute for Psycholinguistics) will be travelling to Berlin tomorrow, so perhaps the tracker can be dropped off at Pupil Labs for investigation/repair?

Chat image

user-15edb3 10 January, 2024, 07:30:52

Hello, so I have been using Neon Player to export the data offline. It takes a long time for my large dataset. However, I am only interested in some specific CSV files instead of the entire export. Is there any way to customise and speed up my process? Precisely, I need the gaze positions CSV file, which I get after drag-and-drop, then clicking the download symbol, which creates a folder like neonplayer/export/000/gaze_positions.

user-d407c1 10 January, 2024, 08:20:58

Hi @user-15edb3 ! If you only need the CSV files, you may want to check this CLI utility, as it lets you batch-export all CSV files.

user-c1bd23 11 January, 2024, 13:36:43

Hi all. Trying to Define AOIs and calculate gaze metrics using the guide on Alpha Lab

user-d407c1 11 January, 2024, 14:15:27

Hi @user-c1bd23! Was it because you tried to run this cell? That cell defines the function that is executed when called like plot_color_patches(reference_image, aois, aoi_ids, plt.gca()). Are you running the code as a notebook? If you are not familiar with notebooks, you can try this vanilla Python script: simply save it as defineAOIs.py and run it like python3 defineAOIs.py [args].

user-c1bd23 11 January, 2024, 13:38:13

When running #normalize patch values, I get the error "NameError: name 'aoi_values' is not defined"

user-c1bd23 11 January, 2024, 13:38:29

This is the guide I'm using: https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/#define-aois-and-calculate-gaze-metrics

user-a0cf2a 11 January, 2024, 13:48:09

Hi, I noticed that the video from the world camera has a lot of motion blur (in a well-illuminated interior). How can I reduce such blur? Is it possible to set the exposure manually?

user-d407c1 11 January, 2024, 14:17:30

Hi @user-a0cf2a ! In the settings you can enable manual exposure, although we do not recommend fiddling with it, as you may experience a variable frame rate (VFR)

user-c1bd23 11 January, 2024, 14:27:12

I suspect that's where I'm going wrong (I'm brand new to Python!)

user-c1bd23 11 January, 2024, 14:27:19

Learning as I go along

user-9cb1df 11 January, 2024, 14:58:14

Hello there! Does someone have a PDF or any other document describing how the detection and computation of blinks and pupil size work with the Pupil Neon? I looked on the homepage and couldn't find any. Wish you guys a great weekend πŸ™‚

user-c1bd23 11 January, 2024, 14:59:16

Have you looked at the Alpha pages? https://docs.pupil-labs.com/alpha-lab/blink-detection/

user-d407c1 11 January, 2024, 15:24:57

Hi @user-9cb1df! Both blinks and pupil size are detected through neural networks. How blinks are computed is defined here, which includes a link to our whitepaper, and you can even run the blink detector yourself, following the tutorial Syed pointed you to.

For pupil diameter we use a different network, for which we have not yet disclosed much information. What we can say is that it uses both eye images and physiologically plausible limits, and that it takes the cornea and gaze angle into account to provide you with physiological values. Here you can find some more info.

user-d407c1 11 January, 2024, 15:18:26

@user-c1bd23 We should probably have added typing annotations πŸ˜… .

If you are new to Python, my recommendation is that you use the vanilla script I mentioned before. Simply save the script, and if you have the dependencies installed in your environment, it should be pretty straightforward to run it. https://discord.com/channels/285728493612957698/1047111711230009405/1195008075036381276

It is also somewhat commented.

That said, in the tutorial (which is due to be updated soon πŸ˜‰ ), the aoi_ids are used as aoi_values to show the selected AOIs first.

# enumerate the drawn AOIs and overlay coloured patches on the reference image
aoi_ids = pd.Series(np.arange(len(aois)))
plot_color_patches(reference_image, aois, aoi_ids, plt.gca())
user-c1bd23 11 January, 2024, 15:34:52

I've run the vanilla script and obtained a file called aoi_ids

user-c1bd23 11 January, 2024, 15:35:40

Where would this be fed into in order to generate an overlay of AOIs on the reference image?

user-d407c1 11 January, 2024, 15:40:14

When running the snippet you should also have been prompted with the plots. Was that not the case?

user-c1bd23 11 January, 2024, 15:40:44

No. I only got a prompt to select the AOIs on the reference image

user-c1bd23 11 January, 2024, 15:41:01

Once complete, it exited python and generated the aoi_ids file

user-44d9d4 12 January, 2024, 13:02:08

Hi Pupil Labs team, Despite meticulous calibration efforts, we've noticed a systematic shift that requires us to individually adjust the areas of interest for each subject (The image shows the plotted gaze data using an R script). We're reaching out to see if there are any algorithms or tips available to retrospectively correct these shifts, eliminating the need for manual adjustments based on visual judgment. Any guidance you can provide on this matter would be immensely appreciated. Thank you in advance!

Chat image

user-c1e127 12 January, 2024, 14:21:47

Hey, the Neon was never sampled at 250Hz, right? Always 200Hz after uploading it to the cloud?

user-d407c1 12 January, 2024, 14:24:13

Hi @user-c1e127 ! Neon can also sample at 200Hz in real time, and yes, it does not sample at 250Hz.

user-c1e127 12 January, 2024, 14:24:41

Thanks much

nmt 13 January, 2024, 02:31:48

Offset correction
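
If the shift is constant over a recording, it can also be corrected retrospectively on the exported data. A minimal pandas sketch, assuming the Cloud gaze.csv column names and placeholder offset values estimated from a validation target:

import pandas as pd

gaze = pd.read_csv("gaze.csv")  # Pupil Cloud timeseries export
dx, dy = 12.0, -8.0  # hypothetical offsets in scene-camera pixels
gaze["gaze x [px]"] += dx
gaze["gaze y [px]"] += dy
gaze.to_csv("gaze_corrected.csv", index=False)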

user-df468f 15 January, 2024, 18:58:37

Hello Pupil Labs team, I recently made three recordings that resulted in three different errors:

Recording 1: December 20th, 2023. The recording was supposed to last about 40 minutes. Afterwards it turned out that the recording stopped after about 15 minutes. The following error message was displayed: "Calibration not synced! Tap for more instructions." However, I couldn't press it because an FPGA update was supposed to be performed. The update hung because the connection to the eye tracker was interrupted. After several attempts, I was finally able to carry out the update. The aforementioned error message has not occurred since then, but the recording is no longer usable for me.

Recording 2: January 10th, 2024 The same companion device with different neon glasses was used for this recording. This time, no error message was displayed, and the recording duration is also stated as 43 minutes on the companion device. However, when I start the recording, the video is only 8.5 minutes long. In the Pupil Cloud, on the other hand, the recording is 43 minutes long, but from minute 8.5 only a gray screen can be seen (without scene camera, without sound, without fixations).

user-df468f 15 January, 2024, 18:58:40

Recording 3: Here the original neon glasses were used again with the same companion device. The recording lasted approx. 1 hour and 29 minutes. The duration is also indicated as such on the companion device. However, when I start the recording, I can see that the video only lasts 4 minutes and a few seconds. In addition, the frame rate is greatly reduced. The video can be found in the Pupil Cloud with a duration of 1 hour and 29 minutes. However, the frame rate is also greatly reduced here and only the first 4 minutes have sound. This is followed by a gray screen for about 14 seconds, after which the recording of the scene camera can be seen again, but the sound is missing here. Again, there was no error message after the recording.

I now have a total of three recordings that can no longer be used. This is very annoying because these recordings should be used for a thesis. Is there a way to avoid these mistakes in the future or even to restore the recordings?

Thank you in advance Jonas

nmt 16 January, 2024, 01:48:37

Hi @user-5f1b97. Sorry to hear you've experienced an issue with these recordings!

For the first recording, it seems that the system required an update, which has now been implemented to prevent future issues. To give some context, when our app is updated, the system may sometimes prompt the user to install firmware (FW and FPGA) upgrades on the connected module. These upgrades often contain stability improvements.

That being said, it appears you've already uploaded all three recordings to Pupil Cloud, which will allow us to investigate using the metadata to identify the problem, with the aim of potentially recovering them.

Could you please contact us at info@pupil-labs.com and send the IDs of the three recordings? You can find the IDs by right-clicking on the recordings in Cloud and selecting 'View recording information'.

user-58da9f 16 January, 2024, 21:03:09

Hello pupil team. I am working on ROS2 integration using the Pupil Neon, but I'm experiencing significant artefacts when connecting to it. The image looks very broken after moving and takes 1-2 seconds to go back to normal. Is this purely caused by network quality? Will a wired connection solve this issue?

user-58da9f 16 January, 2024, 21:04:58

Will switching to the async API improve performance?

user-58da9f 16 January, 2024, 22:25:23

After some experimentation I found it was probably my desktop machine. Switching to my laptop with a stronger Wi-Fi card solved the issue.

user-97997c 17 January, 2024, 14:19:28

Hi all, was the working range of the Neon tested? If yes, what is it? Is it equal vertically and horizontally? Thank you!

user-d407c1 17 January, 2024, 14:40:20

Hi @user-97997c ! Do you mean whether accuracy has been measured across the field of view? If so, have you seen our whitepaper measuring Neon's accuracy?

user-97997c 18 January, 2024, 08:53:34

Hi Miguel, thanks for the tip, the white paper already helps a lot. I was asking what the maximum range is that the Neon can measure. In the white paper you show data for +/-30 deg, both horizontally and vertically. Is that the maximum you feel confident to provide? Thank you!

user-88aaf8 17 January, 2024, 17:03:49

Hi - can someone help me rescue a corrupt file?

user-480f4c 17 January, 2024, 17:06:30

Hi @user-88aaf8 - can you contact info@pupil-labs.com in this regard? Please share all relevant details, e.g., recording ID or the recording exported from the Companion Device (see our guide on how to transfer recordings from the phone to your computer).

user-ddf0f7 17 January, 2024, 20:27:51

Hi, we're trying to record 30-minute sessions of data across 4 devices and are finding that the Companion app/device will crash about 16% of the time. Can you provide any troubleshooting steps? We're using the LSL Relay. Could that be a factor? Relatedly, is there a desktop app, like Pupil Capture, that is compatible with Neon?

user-cdcab0 17 January, 2024, 21:14:43

Hi, @user-ddf0f7 - sorry to hear about the trouble. Can you tell us more about the crashes? Are all of your companion device apps up to date? Are you recording data (on the device) while simultaneously streaming to LSL? That could be somewhat "overloading" the companion device, especially if it overheats.

Regarding a desktop app, the source version of Pupil Capture does have some support for Neon on Linux and Mac. This bypasses the companion app completely and works much like Pupil Core, but has not been extensively tested or used.

user-4bc389 18 January, 2024, 07:21:36

Hello, I would like to know whether the data exported using Neon Player matches the data downloaded from Cloud. Thanks!

user-d407c1 18 January, 2024, 07:57:23

Hi @user-4bc389 ! There is a documentation and app update coming soon to better reflect this. As of today, the timestamps are converted, and the surface mapper output does not match the marker mapper output. In the upcoming release the format will match completely, and the only differences you will see are:

  1. Recordings extracted from the phone do not contain eye state or pupillometry yet, and their sampling rate matches the one chosen at the time of recording.
  2. Recordings downloaded from Pupil Cloud are at 200Hz, regardless of the recording sampling rate.
user-4bc389 18 January, 2024, 07:58:09

Thanks

user-b32205 18 January, 2024, 10:27:25

Hi all

user-b32205 18 January, 2024, 10:29:39

I'm new to Neon. I have recorded a video and I want to view it on the desktop. I installed Neon Player v4.0.0 but I received this message: "Session settings are from a different version of this app. Will not use those."

nmt 18 January, 2024, 13:24:17

Hi @user-b32205! Could you perhaps expand on that a little such that we can get a better idea of the situation. Are you able to load and play the recording?

user-b32205 18 January, 2024, 10:29:53

can you help me?

user-7413e1 18 January, 2024, 11:53:03

Hi - I have a logistics question / request for advice: when I record an experiment with different participants, would you have one wearer per participant, each with either one or multiple files (depending on whether the same participant is doing >1 trials)? Is there another/better way to organise my recordings, or is that what you would do/did?

nmt 18 January, 2024, 13:26:24

Hi @user-7413e1 πŸ‘‹. One wearer per participant is the recommended way to structure your recordings. You can create an arbitrary number of wearers and use whatever naming scheme is appropriate for your study!

user-7413e1 18 January, 2024, 12:03:10

Hi - a question on fixations: I have tried to compare two different trials using events. In one, I moved my eyes restlessly, very quickly all around the room, without stopping on anything in particular. In the other trial, I fixated on one point for a few seconds. I then plotted the fixation durations, and it shows a much higher value for the former trial than for the latter. I was expecting the exact opposite. I used this tutorial: https://docs.pupil-labs.com/neon/real-time-api/track-your-experiment-progress-using-events/

Can you explain why? Also, in the code in the tutorial there seems to be some confusion between the number of fixations (which I would expect to be higher in the first trial) and the mean fixation duration (which I would expect to be higher in the second trial). Thanks for the help!

nmt 18 January, 2024, 13:29:45

Are you sure you didn't plot the number of fixations as opposed to the duration of fixations?

user-159183 19 January, 2024, 00:15:25

Hi, one subject's eye video is frozen after 15 seconds, but the gaze point and scene video are good. Do you have any idea how to fix the eye video?

user-480f4c 19 January, 2024, 07:32:20

Hi @user-159183 πŸ‘‹πŸ½ ! Sorry you're experiencing issues with your recording. Is this the recording you shared with us yesterday or a different one? If it's a different one, could you please share with us the recording ID? You can either contact us at info@pupil-labs.com or DM the ID of the affected recording.

user-480f4c 19 January, 2024, 10:48:39

Hi @user-b6f43d πŸ‘‹πŸ½ ! The processing time for your enrichment can vary depending on several factors. The duration of your recordings and the current workload on our servers are two major variables.

We highly recommend slicing the recording into pieces and applying the enrichments only to the periods of interest, using Events.

See also this relevant message for more details: https://discord.com/channels/285728493612957698/1047111711230009405/1186329479753256991

user-b6f43d 19 January, 2024, 10:49:26

After the analysis I am not getting anything in the heatmap. What might be the problem?

Chat image

user-480f4c 19 January, 2024, 10:54:35

@user-b6f43d can you please clarify if the enrichment processing has completed? Is there any error? Can you visualize the enrichment like in this tutorial?

user-b6f43d 19 January, 2024, 11:09:18

It's not showing anything

Chat image

user-b55ba6 19 January, 2024, 16:34:10

Hi

I have a question regarding the gaze visualization when downloaded from Cloud. When I look at gaze data on the Neon device, gaze seems much noisier than when I look at the gaze visualization video from the cloud. Actually, when I manually plot gaze data over the Pupil Labs video, the raw data seems noisier than the red circle in the gaze visualization video.

Hope I make sense. Do you apply any filter to the gaze data in the video from Pupil Labs? If this is the case, it would be great to know which filter you are applying.

Thanks!

user-480f4c 22 January, 2024, 12:39:44

Hi @user-b55ba6 ! Do you mind sharing more details on how you render the video? The raw data that you get in your gaze.csv is the same as that used for visualizing the red circle overlaid on the scene video.

user-7d60a5 19 January, 2024, 17:11:59

Hello, I have been trying to use the newly developed Neon Player software instead of Pupil Cloud because of online data processing restrictions at my University. However, the software doesn't seem to have any enrichments/plugins for detecting faces in the visual environment (i.e., an equivalent of the Face Mapper in Pupil Cloud). Is this in fact the case? And, if so, are there any future plans to add a face mapper enrichment to Neon Player as well? Thank you!

user-cdcab0 19 January, 2024, 22:15:14

Hey, @user-7d60a5 - Neon Player isn't designed to be a complete 1:1 offline solution to Cloud, and there aren't any specific plans to incorporate a Face Mapper plugin into Neon Player at this time.

Neon Player is extensible though, so it is feasible that one could write their own plugin for this
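
For anyone attempting that, a minimal plugin skeleton, assuming Neon Player keeps Pupil Player's plugin API (the face detector itself is left out; the file would go in the user plugins directory):

from plugin import Plugin

class FaceMapperStub(Plugin):
    uniqueness = "by_class"

    def __init__(self, g_pool):
        super().__init__(g_pool)

    def recent_events(self, events):
        frame = events.get("frame")
        if frame is None:
            return
        # frame.img is the current scene image (a BGR array); run a face
        # detector of your choice here and store or visualise the results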

nmt 20 January, 2024, 03:13:07

@user-068eb9, responding to your message (https://discord.com/channels/285728493612957698/1195352014486523925/1197911358050664570) here. Are you asking about correcting the gaze signal or correcting mapping after running an enrichment, like Marker Mapper or Reference Image Mapper?

user-068eb9 22 January, 2024, 15:21:48

Correcting the gaze signal. Thanks.

user-15edb3 22 January, 2024, 06:20:00

Hello, I have suddenly been facing login issues with Pupil Cloud. Is it only me, or is there any maintenance going on? It doesn't say incorrect credentials, but it doesn't let me in either.

user-480f4c 22 January, 2024, 12:42:16

Hi @user-15edb3 πŸ‘‹πŸ½ ! Could you please clarify whether you get an error or any message while trying to log in?

user-7413e1 22 January, 2024, 17:01:29

Hello - I am trying to use the Marker Mapper. This is what I see on my screen: it seems it doesn't allow adding markers to the enrichment before running it. Am I doing something wrong? [Also, it would be super useful to have more details in the tutorials... many things like these are kind of taken for granted πŸ™‚]

Chat image

user-d407c1 23 January, 2024, 09:03:35

Hi @user-7413e1 ! Could you try navigating a bit ahead in the recording and see if you can click on "define surface"?

user-b10192 22 January, 2024, 23:46:51

Hi, what are the width and height of gaze data on the Pupil Neon? Are they 1600 width and 1200 height? On Pupil Invisible it is 1088 width and 1080 height.

user-d407c1 23 January, 2024, 09:04:15

Hi @user-b10192 ! That is the resolution of the scene camera, yes!

user-e3da49 23 January, 2024, 17:37:22

Hello, what do the thick red circles and the blue ones in the results of the scene camera mean? What is the difference?

user-e3da49 23 January, 2024, 18:02:12

thanks for helping out in advance

user-ccf2f6 23 January, 2024, 22:39:27

Hi Pupil Labs, I'm looking for a way to import events into my recordings on Pupil Cloud using an external dataframe (CSV/DOCX or any other format). Is this possible right now? It would be useful, for example, when we have multiple recordings which share the same events and the event timestamps were collected on a separate device as UNIX timestamps (on the same network clock).

nmt 24 January, 2024, 04:50:46

In addition to adding them using the Cloud interface, it's also possible to create events via the Cloud API. You'll want to look at the events POST request under 'recordings'

user-f43a29 05 February, 2024, 15:39:06

Hi @user-ccf2f6 πŸ‘‹ ! Apologies for the delay! Importing events into Pupil Cloud is possible. It requires working with our Cloud API, but note that it is not well documented at this time. You want the "CreateRecordingEvent" function. Here you will find an example of working with the API via Python. You will need to add a function for CreateRecordingEvent. Please let us know if you need assistance with that.

For future reference, if you need more flexibility, then take a look at our Real-Time API. Adding events during an experiment via the Real-Time API will automatically add them to the timeline and they will be uploaded to the Cloud for you, ready to go.
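
For the real-time route, a minimal sketch with the simple API (the event name is a placeholder):

import time

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
# the event is saved with the ongoing recording and appears on the Cloud timeline
device.send_event("stimulus_onset", event_timestamp_unix_ns=time.time_ns())
device.close()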

user-e3da49 24 January, 2024, 08:34:37

Hi, also is it possible to import completely deleted but exported projects back again into the workspace?

user-e3da49 24 January, 2024, 08:38:24

I have the problem that the face detector didn't recognize every occurrence of a face, since the stimuli (in a movie) were too bright or blinded the glasses. How can I add this info afterwards?

user-e3da49 24 January, 2024, 08:40:13

thanks a lot in advance

user-a2d00f 24 January, 2024, 11:47:48

Hello, I want to integrate Neon with my own glasses, but I do not know where I can find the connector board PCB file?

Chat image

user-d407c1 24 January, 2024, 12:04:41

Hi @user-a2d00f ! That would be the Bare Metal, if you already have Neon and only need the PCB, you can get it here.

user-a2d00f 25 January, 2024, 02:17:39

Thanks, I have a Neon, but I want to design the connector board PCB myself to integrate it into our own glasses system. So I want to know whether the electrical interface of the PCB is open source or not. If not, is buying the Bare Metal the only way to integrate it into our system?

user-f43a29 24 January, 2024, 14:30:30

Hi @user-e3da49 πŸ‘‹ ! The red circles are gaze, and the blue circles are fixations. You can read more about them in the respective links!

When you say that you exported the projects, do you mean that you downloaded the data from Cloud? To be clear, data cannot be re-imported to Cloud.

Since faces were not always mapped in your recording, do you plan to label faces in your recordings by hand? You can at least add events to the timeline that indicate when those unlabelled faces are present in the recording. Or have you found a different face mapper tool that you would like to try? If so, note that other face mapping tools can only be run offline, not in Cloud.

user-e3da49 24 January, 2024, 17:03:16

Thank you, can I run a face mapper in Neon Player and export a CSV with a column "fixation on face" (true/false)?

user-e3da49 24 January, 2024, 16:36:09

Thanks a lot!

user-e3da49 24 January, 2024, 16:39:54

Furthermore, I have the urgent problem that a started recording stopped unexpectedly and the following message appeared: "We have detected an error during recording!" Now I am missing a recording of 2.5 hours and don't know how to avoid such an incident in the future. Is it possible for you guys to fish the recording out of somewhere somehow? Please help..

user-328c63 24 January, 2024, 17:09:00

Hello! Is anyone else having trouble downloading the 'Timeseries CSV and Scene Video' recording data? Every time I try to download and open the zip, I get this error.

Chat image

user-b6f43d 24 January, 2024, 18:47:07

Can I get a tutorial video from which I can learn about the Face Mapper enrichment?

user-f43a29 25 January, 2024, 08:34:21

Hi @user-b6f43d πŸ‘‹! We do not have a video tutorial, but we do have documentation for the Face Mapper. What part of the Face Mapper Enrichment is causing you trouble?

nmt 25 January, 2024, 02:49:03

You can also submit feature requests here: https://feedback.pupil-labs.com/

user-a2d00f 25 January, 2024, 02:49:35

But I do not understand the meaning of "we will open source ... and electrical interface. You can make your own interface PCB" on your website. If we do not have the electrical schematic, how can we develop our own interface PCB?

Chat image

nmt 25 January, 2024, 11:40:46

Hey @user-a2d00f. I have good news - we'll be able to share the schematics that document the interface. It might take a bit of time to prepare the final version for sharing, however. I'll let you know here when it's ready πŸ™‚

user-b44746 25 January, 2024, 09:43:56

Hi Pupil Labs. I tried your updated Areas of Interest (AOIs) and it's pretty cool!! I have generated the image in the video (https://www.youtube.com/watch?v=RUmcU8zxMF0 ), but the v7.0 description says that it generates CSV files: 'AOI Metrics: After you have drawn AOIs you will automatically get CSV files of standard metrics of fixations on AOIs: total fixation duration, average fixation duration, time to first fixation, and reach.' However, I didn't find the standard metrics of fixations in the download. Comparing with the old version, I only see a new variable "fixation detected in reference image".

Am I overlooking something here? Thank you!

user-d407c1 26 January, 2024, 08:02:25

Hi @user-b44746 ! Apologies for that; this has been fixed. The metrics should now be downloaded with the enrichment. πŸ˜…

user-602698 25 January, 2024, 11:10:47

Hi Pupil Labs Team, I am currently starting to set up the environment for my experiment (website user experience). I tried placing the monitor behind, in front of, and beside the window, but I found a bit of a problem with the scene camera: it showed only a white screen and could not capture the website's components. I attached some examples under this message; can you give me some information about my issue? Is the problem caused by brightness, color contrast or something else? And do you have any tips? The last capture, on my laptop screen, did work though.

Chat image Chat image Chat image Chat image

user-480f4c 25 January, 2024, 11:28:34

Hi @user-602698 πŸ‘‹πŸ½ ! Please find below some points that might be helpful in addressing your questions:

  • Backlight compensation: If your screen appears too bright in the scene camera preview, then you can adjust the Backlight Compensation setting in the Neon Companion App settings. Note that the backlight compensation adjustment only takes effect in auto exposure mode. It adjusts the autoexposure target value: 0 is darker, 1 is balanced, and 2 is brighter. I'd recommend playing with it a bit to see which value works best for your environment.
  • Change to Manual Exposure: Alternatively, you can enable the manual exposure in the settings of the Neon Companion App. Note that high exposure values can reduce the frame rate of the scene camera. In auto exposure the frame rate is locked at 30 FPS but this is not the case for manual exposure.

Please note that adjusting the brightness of your screen might also help!

user-231fb9 25 January, 2024, 18:35:02

Hello,

For my thesis, I am conducting an experiment/research using Pupil Labs Neon. It involves an attention study comparing the wearing of face masks. I will measure the difference in gaze behavior during real-life interaction with a person and interaction with a person on a projector. The research is framed as a study related to eating (to encourage participants to gaze more freely at others). Various factors will differ, such as the type of face mask, the gender of the research fellow, etc.

I need advice on data processing. The data crucial for me is "time to first fixation" and "fixation duration". Is there a way to automatically obtain this data with Neon without manual counting? I mean, through code or another method?

Thank you in advance! Hope to hear from you soon. If there are any questions or uncertainties, feel free to ask them.

user-d407c1 26 January, 2024, 07:56:13

Hi @user-231fb9 ! Average fixation duration and time to first fixation can be computed on AOIs in Cloud thanks to this week's update. See more here.

To do so, you will need to map your gaze into an image using the reference image mapper or the marker mapper.

I believe this is not exactly what you are looking for; rather, you want to detect the first fixation and fixation duration for a specific period of time, regardless of gaze location on a surface. Is that right?

Well, in that case you will need to compute it yourself (in any software or programming language of your choice). Simply add events relevant to you, then look up their timestamps in events.csv, and use those timestamps together with the recording ID to filter fixations.csv. As a result, you will have only the rows with fixations corresponding to that period.

To compute the average fixation duration, take the mean of the duration column. To compute the time to first fixation, take the first row as the first fixation and subtract the recording start timestamp (this info is in the info.json of that recording, or you can take the first timestamp in the gaze.csv for that recording).

If you need to do this aggregated across several wearers, my recommendation is to use Python, MATLAB, R or similar; see the sketch below.
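
A minimal pandas sketch of that workflow, assuming the Cloud export column names and hypothetical trial_start/trial_end event names:

import pandas as pd

events = pd.read_csv("events.csv")
fixations = pd.read_csv("fixations.csv")

# timestamps of the hypothetical events bounding the period of interest
t0 = events.loc[events["name"] == "trial_start", "timestamp [ns]"].iloc[0]
t1 = events.loc[events["name"] == "trial_end", "timestamp [ns]"].iloc[0]

# keep only fixations that started inside the period
trial = fixations[fixations["start timestamp [ns]"].between(t0, t1)]

mean_duration_ms = trial["duration [ms]"].mean()
time_to_first_fixation_s = (trial["start timestamp [ns]"].iloc[0] - t0) / 1e9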

user-44c93c 25 January, 2024, 18:48:35

For the marker mapper enrichment in Pupil Cloud, is there a reason that recording id is not included as a column in the surface_positions csv file? I see that it is included in the gaze csv file. Given that an enrichment automatically runs on all videos in that project, it makes it hard to parse out which segments of data in the surface_positions csv file comes from which video.

user-44c93c 25 January, 2024, 23:12:00

Also, I am noticing when using AprilTags that when the user moves their head (e.g., tilts it to the left or right), all or some of the AprilTags are reported as not visible for about 5-15 frames, even though they are still in view. When the user is constantly moving their head, these gaps in the data become very frequent and the data quickly becomes noisy. Is this expected behavior, or is there something specific about our setup that might be causing this? When the user's head is stationary, there is almost no data loss on detected marker IDs.

user-d407c1 26 January, 2024, 10:23:38

Hi @user-44c93c ! This has been fixed; the surface positions export now includes the recording ID. Thanks for reporting it.

Regarding the detected markers, it's hard to tell without seeing the video, but could it be due to motion blur?

user-44c93c 25 January, 2024, 23:13:54

Chat image

user-613324 27 January, 2024, 02:00:11

Hi Neon team, is it possible to set the parameters (e.g., wearer_id, wearer_ied, etc.) of the wearer profile using the Real-Time API instead of the Companion app before the recording?

nmt 29 January, 2024, 13:56:27

Hi @user-613324 πŸ‘‹. That's not currently possible. You can see an overview of the current functionality at this page

user-9f3e92 29 January, 2024, 13:40:36

Hi Neon team,

user-9f3e92 29 January, 2024, 13:41:12

having issues streaming the data to a Windows 10 laptop. Any tips?

nmt 29 January, 2024, 13:56:43

Are you using the Monitor app or real-time API?

user-e2db0a 29 January, 2024, 21:37:58

Hello team, I'm Rohan. I was recently assigned a task to configure and set up the Pupil Labs plugin with the new Neon configuration. I was closely following this Pupil Labs Neon installation page ( https://www.psychopy.org/api//iohub/device/eyetracker_interface/PupilLabs_Neon_Implementation_Notes.html#pupil-labs-neon ) and tried to install the Pupil Labs plugin through the plugin manager. But it fails repeatedly and shows the same error (refer to the image). I tried re-installing PsychoPy entirely and it still gives the same error. I'm using a MacBook Pro laptop with the M1 Pro chip. Any suggestions would be helpful. Thank you

Chat image

nmt 30 January, 2024, 02:21:00

Hi @user-e2db0a πŸ‘‹. We've replicated the error you're getting. We will investigate and let you know when we have a fix!

user-cdcab0 30 January, 2024, 10:37:23

Hey, @user-e2db0a, what version of PsychoPy are you running?

user-b6f43d 30 January, 2024, 01:29:12

Can I map two surfaces simultaneously in a single frame?

Chat image

nmt 30 January, 2024, 02:35:37

Hey @user-b6f43d! Those two surfaces represent different planar surfaces, so you'd want to create two enrichments, one for each!

user-1423fd 30 January, 2024, 13:14:50

Hi! I am trying to use the iMotions Exporter within Neon Player 4.0.0; however, I consistently get this error: "Currently, the iMotions Exporter only supports 3d gaze data"

user-d407c1 30 January, 2024, 13:37:38

Hi @user-1423fd ! The iMotions Exporter is not compatible with Neon Player. iMotions software consumes Neon's native format directly; there is no need to export it using this legacy plugin from Pupil Player.

Also, I recommend updating Neon Player to 4.1, as it includes many fixes, and the export format matches the Cloud export format more closely, among other improvements.

user-1423fd 30 January, 2024, 13:38:38

gotcha, did not realize it was an outdated plugin. Will also update to 4.1 now! Thanks Miguel!

user-d407c1 30 January, 2024, 13:41:20

No worries! Kindly note that we have recently renamed "Raw sensor data" to "Native Recording Format", and I am not sure if iMotions already reflects this change.

user-44c93c 30 January, 2024, 17:03:52

Regarding the improved thermal performance of Neon mentioned in the 'announce' channel; is that change due to hardware changes? firmware changes? Just the changes to the frames? Mostly curious to know if it was a firmware change that would improve the thermal performance of the neon modules we already have. Thanks!

user-d407c1 30 January, 2024, 19:44:33

Hi @user-44c93c ! This improvement is achieved through new frames that distribute heat better thanks to a heat sink. Basically, we heard the feedback from all of you and worked towards a cooler solution ❄️ To benefit from this you will only need to swap the frame.

user-e3da49 30 January, 2024, 21:48:20

Hi guys, there has been an update that I missed, and now I am wondering where I can find the "fixations_on_face" information, since apparently only "gaze_on_face" is left. Thanks for your help!

user-e3da49 30 January, 2024, 21:49:10

after I do the face mapping enrichment, of course πŸ™‚

user-e3da49 30 January, 2024, 22:09:30

And a last one for now πŸ™‚ What's the advantage of Neon Player in comparison to Pupil Cloud? Also, what is the "recording directory" that I have to drag into the player?

user-d407c1 31 January, 2024, 07:34:02

Hi @user-e3da49 ! The fixations-on-face CSV should still be there and contain a boolean indicating whether each fixation landed on a face or not. Could you confirm? That said, these values can also be extracted easily from the fixations.csv and face_positions.csv files. Please let us know if you need any help obtaining those; there is a sketch below.

Regarding Neon Player: it is an offline solution for cases where you cannot use Cloud. There are not many advantages over using Cloud; for example, the Face Mapper, the Reference Image Mapper and pupillometry are only available in Cloud.

On the other hand, Neon Player offers you more visualisation possibilities, including viewing the eye videos. To try it out, simply export a recording from the phone or download the Native Recording Format data from Cloud.

Once downloaded, unzip it and then drag and drop the folder onto the Neon Player window.
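
A minimal pandas sketch for deriving "fixation on face" yourself, assuming the Cloud export column names for fixations.csv and face_positions.csv (a fixation counts as on a face when its point falls inside a detected bounding box while the fixation is ongoing):

import pandas as pd

fixations = pd.read_csv("fixations.csv").sort_values("start timestamp [ns]")
faces = pd.read_csv("face_positions.csv").sort_values("timestamp [ns]")

# pair each face detection with the latest fixation started before it
merged = pd.merge_asof(
    faces, fixations,
    left_on="timestamp [ns]", right_on="start timestamp [ns]",
)

ongoing = merged["timestamp [ns]"] <= merged["end timestamp [ns]"]
inside_x = merged["fixation x [px]"].between(merged["p1 x [px]"], merged["p2 x [px]"])
inside_y = merged["fixation y [px]"].between(merged["p1 y [px]"], merged["p2 y [px]"])
merged["fixation on face"] = ongoing & inside_x & inside_y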

user-29f76a 31 January, 2024, 13:10:19

Hi, has anyone here purchased the Lens Kit? Is this for Neon?

Chat image

nmt 01 February, 2024, 01:28:57

Hi @user-29f76a! This is an extended lens kit for the 'I can see clearly now' Neon frame. Note that the standard frame already ships with -3.0 to +3.0 diopter lenses in 0.5 steps. You can read more about that here: https://pupil-labs.com/products/neon/shop#i-can-see-clearly-now

user-e2db0a 31 January, 2024, 19:01:54

Hey Dom, sorry for the delayed response. It's 2023.2.3.

user-cdcab0 31 January, 2024, 22:44:51

Thanks! What was your install method for PsychoPy (installer from the psychopy website, pip, homebrew, from source, etc)?

user-ccf2f6 31 January, 2024, 20:05:31

Hi Pupil Labs, is there a way to check/set the parameters of audio recording, gaze sample rate, etc. when remotely starting a recording with pupil-labs-realtime-api? I know we can set them on the Companion device, but it would be much more dependable if we could verify them before every recording without having to physically check the app settings.

nmt 01 February, 2024, 01:31:51

Hi @user-ccf2f6! Whilst you can see quite a bit of useful information about the Companion device using the real-time API, such as battery level, free storage, etc., it's not possible to set/check the sampling rate or audio capture. Check out our code examples for available functions.
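
For a quick look at what is available, a minimal sketch with the simple client (property names assumed from the real-time API docs):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
print(device.battery_level_percent)   # Companion phone battery
print(device.memory_num_free_bytes)   # free storage on the phone
device.close()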

user-e3da49 31 January, 2024, 21:01:58

Also, unfortunately, a recording of over 2 hours disappeared after a push notification about the phone's low battery status appeared on the screen. This time I could find the scene videos and the eye videos in the phone's storage, but I can't trace them back to their recording. Please help me find it somewhere.. the wearer didn't click on anything (neither save nor discard options were shown)

user-e2db0a 31 January, 2024, 22:47:54

It's with the installer from the website.

user-cdcab0 01 February, 2024, 05:24:03

Ok, that's good. I haven't actually been able to replicate the same error that you shared, but we did find a different error that only occurs with the standalone installer and only on Mac, which seems to be related to the way PsychoPy is bundled. We're working with the PsychoPy team to resolve it, and I'm expecting it will actually require an update to PsychoPy itself. Unfortunately, I don't know how long that will take to resolve.

Having said that, the issue we do see does not occur when using the pip-installed version of PsychoPy on Mac, so it's possible that it may fix your issue as well if you're willing to try it out.

One final note - the documentation link you shared earlier is malformed (there's an extra slash), and it breaks the formatting on the page. Somehow google indexed the bad URL. The correct URL serves a page which is much nicer to read.

End of January archive