👓 neon



user-37a2bd 01 December, 2023, 12:37:05

Hi team. Can you guys tell me how to download the raw Android data from the cloud? I am trying to test the neon player.

user-480f4c 01 December, 2023, 12:39:25

Hi @user-37a2bd! To access the Raw Android Data from the Cloud, please right-click on the recording > Download > Raw Android Data.

Please note that if the Raw Android Data option is not shown, you will have to enable it in your workspace: go to the Workspace Settings section and click the toggle next to "Show Raw Sensor Data" to enable this option in the download menu. Please keep in mind that you need to enable this feature for each workspace from which you'd like to download the raw Android data.

I hope this helps!

user-37a2bd 01 December, 2023, 13:13:53

Thanks Nadia! It helped. I tried Neon Player. Does it have a time limit for staying open? It seems to close by itself after a certain time. Any idea what the problem could be?

user-480f4c 01 December, 2023, 14:57:47

Hi @user-37a2bd! It might take a while to load depending on your recording's size. If you did not manage to load it successfully, could you please share the Raw Android Data folder of your recording with [email removed] This might help us understand what went wrong.

user-37a2bd 02 December, 2023, 10:53:10

It didn't have a problem opening the file. But after a few minutes the program would close by itself. I would be toggling some of the options - when I selected, say, fixations and turned them on, the program would close by itself before it could even start processing.

user-b5a8d2 01 December, 2023, 17:41:06

Hi, could I ask what the 'timestep' column represents in the 'gaze.csv' file exported by Neon? The difference between consecutive values seems to vary. Thank you in advance for your help!

Chat image

user-4c21e5 04 December, 2023, 04:28:41

Hi @user-b5a8d2, these are timestamps. You can read about them in the docs: https://docs.pupil-labs.com/neon/data-collection/data-format/#gaze-csv

user-b5a8d2 01 December, 2023, 23:56:53

Sorry for disturbing you - I believe the Cloud is temporarily inaccessible at the moment. Could you please look into this? Thank you so much!!

user-4c21e5 02 December, 2023, 02:17:56

Hey @user-b5a8d2 👋. Pupil Cloud was undergoing scheduled maintenance: https://discord.com/channels/285728493612957698/733230031228370956/1180096742805491802. But it should be up and running again by now.

user-b5a8d2 02 December, 2023, 02:38:09

Thank you so much !!!

user-d648ea 02 December, 2023, 07:16:34

Is it possible to know the frame index of the world video in the gaze file? I mean, how can I know which gaze samples fall within a particular frame? Currently, this information is not in the gaze.csv file. Also, the start timestamps of gaze.csv and world_timestamps.csv are different. Any suggestions?

user-4c21e5 04 December, 2023, 05:03:41

Hi @user-d648ea 👋. The scene camera and eye cameras have different sampling rates - the scene camera runs at 30 Hz, while the eye cameras run at 200 Hz. Although they aren't perfectly in sync, they do share a common clock. Therefore, one way to find which gaze data correspond to scene frames is to find the closest matching timestamps. If you're familiar with Python, this can be done relatively easily using Pandas' pd.merge_asof method.
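A minimal sketch of that nearest-timestamp matching, using invented timestamps in place of a real export (the column names follow the Neon CSV format):

```python
import pandas as pd

# Synthetic stand-ins for gaze.csv (200 Hz) and world_timestamps.csv (30 Hz).
# Column names follow the Neon export format; the values are made up.
gaze = pd.DataFrame({
    "timestamp [ns]": [0, 5_000_000, 10_000_000, 15_000_000, 20_000_000, 35_000_000],
    "gaze x [px]": [800, 801, 799, 805, 810, 812],
})
world = pd.DataFrame({
    "timestamp [ns]": [0, 33_333_333, 66_666_666],
})
world["frame_index"] = world.index

# Both frames must be sorted by the merge key. direction="nearest" pairs
# each gaze sample with the scene frame whose timestamp is closest.
matched = pd.merge_asof(
    gaze.sort_values("timestamp [ns]"),
    world.sort_values("timestamp [ns]"),
    on="timestamp [ns]",
    direction="nearest",
)
print(matched[["timestamp [ns]", "frame_index"]])
```

With real exports you would replace the synthetic frames with `pd.read_csv("gaze.csv")` and `pd.read_csv("world_timestamps.csv")`; the merge key is the shared `timestamp [ns]` column.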

user-e141bd 03 December, 2023, 00:11:33

Hi, we had several recordings that couldn't be played either locally on the phone or on Pupil Cloud. All recordings were recorded today. The error message said "gaze pipeline was broken..." Can we have some help on this please? Thanks so much!

Chat image

user-831bb5 03 December, 2023, 05:56:13

Hello, good evening. Are there any issues with Pupil Cloud at the moment after the scheduled maintenance? I wasn't able to generate an image mapper out of 3 videos (the error says "The enrichment could not be computed based on the selected scanning video"), and when trying to download a simple video with the gaze overlay it says "Error: Please contact info+cloud-support@pupil-labs.com for support". Thanks in advance!

user-4c21e5 04 December, 2023, 05:35:29

Hi @user-831bb5! The first message you shared indicates that the scanning recording was not sufficient to complete the enrichment. This could be due to several reasons, such as insufficiently diverse camera perspectives, low light/contrast, etc. It's difficult to say without seeing it. The easiest way for me to check would be to invite me to the workspace - are you able to do so? The second message will need further investigation. Can you share the enrichment ID with the email in the error message [email removed]

user-4c21e5 04 December, 2023, 05:05:29

Neon Player debugging

user-4c21e5 04 December, 2023, 05:30:18

Hi @user-e141bd! Can you please send the recording IDs to [email removed] We will coordinate from there to try and get this Cloud error resolved ASAP!

user-e141bd 05 December, 2023, 17:57:05

Thanks, Neil! I sent the email just now

user-fb5b59 04 December, 2023, 13:18:45

Hey, is it possible to remove the lenses from the "Just act natural" frame? I know that this was possible with Pupil Invisible.

user-44c93c 04 December, 2023, 17:20:25

Thanks. @user-480f4c . How does Neon ultimately determine the pupil size value it records? Does it always pick one of the two eyes? The larger diameter of the two eyes? Something else? Thanks!

user-4c21e5 05 December, 2023, 02:56:38

Hi @user-44c93c. Our approach outputs a single value for both eyes. It does not assume the two pupil sizes to be different and does not favour one of the two eyes. We are planning to release a whitepaper to elaborate on our approach - that will be announced here so keep an eye out for it!

user-4c21e5 05 December, 2023, 02:35:59

Hi @user-fb5b59 👋. The Just Act Natural lenses are not designed to be removed, unlike with Invisible. Why would you like to do so?

user-fb5b59 05 December, 2023, 10:21:13

If we are using other sensors, e.g. an external monitoring camera with active illumination, you will get reflections on the lenses. And if the lenses don't really contribute anything (i.e., they are not corrective lenses), it would be nice to do all recordings without such reflections.

user-ccf2f6 05 December, 2023, 04:34:11

Hi Pupil Labs, we're working with real-time streams from multiple Neon devices. It seems that the decoding of scene frames from the RTSP stream limits the number of devices we can stream at a time using the Python Realtime API. Is it possible to reduce the resolution of scene frames from 1600x1200 so the decoding is faster?

user-831bb5 05 December, 2023, 06:16:52

Thank you very much! I made a scanning video that lasted a bit longer and it fixed the issue. And the other issue fixed itself - haha, maybe it was the Cloud not being updated for me at the time. Anyways, thanks a lot!

user-4c21e5 05 December, 2023, 06:42:35

Glad to hear you got your enrichment sorted with an improved scanning recording. Cloud was indeed being updated over the weekend, and I believe the team has resolved everything related to that now, so that's probably why your other issue is also sorted!

user-7413e1 05 December, 2023, 16:25:19

Hi - I am trying to set up PsychoPy. I am at the stage of entering details for 'remote address' and 'remote port' (step 4 in this guide: https://www.psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Neon_Implementation_Notes.html#setting-up-the-eye-tracker). I am confused about how it works with more than one eye tracker. I have two (and therefore two IP addresses) and would like to set up an experiment where the same trigger is sent to both devices connected to the same network. Is that possible?

user-cdcab0 05 December, 2023, 16:59:01

PsychoPy doesn't (directly) support multiple eyetrackers simultaneously.

This can be accomplished with our eyetrackers using the Python API (https://docs.pupil-labs.com/neon/real-time-api/tutorials/), but it will require manual coding and you will not be able to use PsychoPy's ioHub/eyetracker API/interface

user-7413e1 05 December, 2023, 17:12:15

thanks. when I try to install pupil-labs-realtime-api (from the tutorial), I receive this error:

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for multidict
Failed to build frozenlist multidict
ERROR: Could not build wheels for frozenlist, multidict, which is required to install pyproject.toml-based projects

user-042d90 05 December, 2023, 17:02:34

Hi, Pupil Labs support team! I used the Marker Mapper enrichment to analyse data through Pupil Cloud via the application of AprilTags. I would like to ask how the pixel data in surface positions can be transformed into cm or mm, or in which way I should interpret them.

user-cdcab0 05 December, 2023, 17:08:29

Hi, @user-042d90 - surface gaze positions are in normalized coordinate space. The top left corner of the surface is (0, 0) and the bottom right is (1, 1). See: https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/marker-mapper/#surface-coordinates
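Since the coordinates are normalized, converting to physical units is just a scaling by the surface's measured real-world size. A minimal sketch, where the A4-like 21.0 x 29.7 cm dimensions are a made-up example you would replace with your own measurement:

```python
# Convert a normalized surface gaze position to centimeters by scaling
# with the real-world size of the tracked surface, measured by hand.
SURFACE_WIDTH_CM = 21.0   # assumption: an A4 sheet in portrait orientation
SURFACE_HEIGHT_CM = 29.7

def surface_norm_to_cm(x_norm: float, y_norm: float) -> tuple[float, float]:
    """Map (0, 0) = top-left .. (1, 1) = bottom-right onto centimeters."""
    return x_norm * SURFACE_WIDTH_CM, y_norm * SURFACE_HEIGHT_CM

x_cm, y_cm = surface_norm_to_cm(0.5, 0.5)  # center of the surface
print(x_cm, y_cm)
```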

user-cdcab0 05 December, 2023, 17:15:52

If you're using PsychoPy's UI and installed PsychoPy using their installer, you'll need to install packages into the Python distribution that comes as part of PsychoPy.

If instead you're running PsychoPy from source or from a pip install, then you may need a newer version of Python

user-7413e1 05 December, 2023, 17:19:58

I'm using PyCharm - would that be a version issue?

user-cdcab0 05 December, 2023, 18:03:19

PyCharm is an IDE - a tool to edit and debug code. Python is installed separately.

Projects in PyCharm can be configured to use different Python installations or environments. It looks like you can follow the steps here to determine what version of Python is being used in your project: https://scripteverything.com/how-to-check-python-version-in-pycharm/
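A quick way to double-check which interpreter a PyCharm run configuration actually uses is to print it from a script inside the project itself:

```python
import sys

# Print the version and location of the interpreter running this script,
# i.e. the one selected in the PyCharm project / run configuration.
print(sys.version)
print(sys.executable)
```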

user-7413e1 08 December, 2023, 11:48:40

Maybe there is a list of packages that need to be pre-installed before installing with "pip install pupil-labs-realtime-api"? I am using Python 3.12 (checked in PyCharm, and the interpreter is also set to 3.12). Still unable to install pupil-labs-realtime-api.

user-01a3d9 05 December, 2023, 22:48:11

I'm trying to stream the glasses' camera directly into VLC. I was using rtsp://[phone_ip]:8086/axis-media/media.amp?camera=world&timestamp=0&videocodec=h264 - it was able to connect, but I am not seeing any output. I took that path from the RTSP OPTIONS message sent in the PI Monitor websocket connection.

user-01a3d9 05 December, 2023, 22:48:34

If anyone from Pupil Labs also has some insight on this, that would be awesome! 🙂

user-01a3d9 05 December, 2023, 23:18:04

Using this axis-media URL works in an OBS media source. I'm thinking VLC can't handle parameters in the URL?

user-01a3d9 05 December, 2023, 23:18:54

The end goal is to use FFMPEG to decode many streams on an NVIDIA GPU

user-cdcab0 05 December, 2023, 23:21:20

rtsp://[phone_ip]:8086/?camera=world works for me

user-01a3d9 05 December, 2023, 23:29:03

Just so I'm sure I'm not doing something wrong, this is the URL I used to connect. It still shows no output, but does not error when connecting.

Chat image

user-01a3d9 05 December, 2023, 23:30:52

Using ?camera=sdfas produces an immediate error, so ?camera=world seems to connect fine - just, again, no output.

user-01a3d9 05 December, 2023, 23:31:26

I am on VLC 3.0.11

user-cdcab0 05 December, 2023, 23:31:27

How long did you wait? For me, it takes a couple of seconds for VLC's buffer to fill before the stream is displayed

user-01a3d9 05 December, 2023, 23:32:00

30+ seconds, after around 5 seconds it switches to this

Chat image

user-01a3d9 05 December, 2023, 23:32:34

from this, where it is attempting to connect, I assume

Chat image

user-cdcab0 05 December, 2023, 23:34:41

Do you have FFMPEG installed? If so, you could try this: ffplay -fflags nobuffer -flags low_delay -framedrop "rtsp://[IP]:8086?camera=world"

If not, you may get a little more info out of VLC by launching it on the command line

user-01a3d9 05 December, 2023, 23:36:03

Yup, it works through FFmpeg - strange that VLC is the only one with an issue.

user-cdcab0 05 December, 2023, 23:39:08

That is a bit of a headscratcher. Can you try launching VLC from the commandline?

vlc "rtsp://[IP]:8086?camera=world"

user-cdcab0 05 December, 2023, 23:39:18

It looks like you're on Windows, so you may need to specify the full path to VLC

user-01a3d9 05 December, 2023, 23:40:26

same result

user-cdcab0 05 December, 2023, 23:40:40

But any additional messages on the command line output?

user-01a3d9 05 December, 2023, 23:41:19

nope, command line output is empty

user-cdcab0 05 December, 2023, 23:41:44

Hm. Well, did you specifically need support in VLC?

user-01a3d9 05 December, 2023, 23:42:18

no, would have just been nice, the end goal was to run it through ffmpeg so thank you for helping with that!

user-e94e07 06 December, 2023, 01:26:36

Hey! We are having trouble with our Companion app connecting to the servers. The recordings are not being uploaded. We have updated the app and the firmware, and tried reconnecting to the network, but nothing seems to fix the issue. Please advise on how to fix it!

user-4c21e5 06 December, 2023, 02:46:10

Hi @user-e94e07! Can you please try logging out and back into Neon Companion app?

user-292135 06 December, 2023, 07:27:03

Hi, has anyone tried to connect a bundled OnePlus 8T (Android 11) to an Apple Silicon Mac? I am trying with Android File Transfer / OpenMTP, but neither works.

user-4c21e5 06 December, 2023, 07:52:34

Check out this link: https://docs.pupil-labs.com/neon/data-collection/transfer-recordings-via-usb/#export-from-neon-companion-app (instructions for mac at the bottom)

user-51951e 06 December, 2023, 09:54:07

Hello, we are using the web interface to stream video from the Neon glasses using the phone IP indicated in the Neon Companion app, but regularly either the video or the gaze freezes. Is that normal? We have to close and reopen the browser, sometimes two or three times, to fix it. Have you observed this as well? And if so, do you have any ideas to avoid this problem? Thanks!

user-4c21e5 07 December, 2023, 02:46:41

Hi @user-51951e, may I ask which web browser you're using?

user-2d7cba 06 December, 2023, 16:26:23

Hi, Is there any plan to be able to obtain real-time measurements of pupilometry data? My understanding is that this is only available after uploading to Pupil Cloud.

user-9cb1df 06 December, 2023, 17:13:30

Hello everybody 🙂 I'm from the University of Ulm and we just bought a couple of Pupil Neon eye trackers, and they are working splendidly! So we are really happy with them 🙂 I have a question about the parameters: is there any way to find out how long the gaze has to rest in order for it to count as a fixation, saccade, etc.? And is there any way to change these parameters? I looked through your homepage but couldn't find anything, so if somebody could help me, that'd be great!

user-cdcab0 06 December, 2023, 19:07:57

Glad to hear you're enjoying your Neons! Have you seen this whitepaper which describes our fixation detection algorithm? https://docs.google.com/document/d/1dTL1VS83F-W1AZfbG-EogYwq2PFk463HqwGgshK3yJE/export?format=pdf

user-9cb1df 08 December, 2023, 17:44:32

thank you!

user-19bba3 06 December, 2023, 21:06:57

Hi @user-480f4c Where is this how-to document now? The link is broken.

user-480f4c 07 December, 2023, 07:28:20

Hi @user-19bba3 ๐Ÿ‘‹๐Ÿฝ ! Apologies - there have been updates of the documentation recently. You can find this how-to document here: https://docs.pupil-labs.com/alpha-lab/neon-with-capture/

user-51951e 07 December, 2023, 07:26:53

We used Chrome and Microsoft Edge

user-4c21e5 07 December, 2023, 10:01:45

Thanks for the information. Would you be able to share some further information about the kind of network you're on when using the monitor app?

user-480f4c 07 December, 2023, 07:32:15

Hi @user-2d7cba! Indeed, pupillometry is currently available only post-hoc after uploading the recordings to Pupil Cloud. Our next step will be to implement pupillometry on the Companion Device, allowing real-time measurement. We expect this feature to become available early next year.

user-2d7cba 09 December, 2023, 16:17:21

Very excited about this future release!

user-51951e 07 December, 2023, 08:48:58

Also, I should tell you that audio recording was activated in the Companion app, but we figured out after 6 videos that it didn't record any audio. We had to toggle the audio recording off and on again to make it work. Very unfortunate - please look into this issue.

user-4c21e5 07 December, 2023, 10:02:52

Audio missing

user-ca96f7 07 December, 2023, 10:23:46

Hello, do you know what infrared wavelengths are used by the eye tracking glasses? Is interference possible with other devices at 780 and 850 nanometers?

user-a64a96 07 December, 2023, 11:53:31

Hey Pupil Labs team,

We need to know which gaze and fixation data points belong to which video frame. In Core, you provide the 'start_frame_index' for every fixation and the 'frame_index' for every gaze point (https://docs.pupil-labs.com/core/software/pupil-player/#fixation-export). We would need something similar. Because of this missing information, we tried to synchronize gaze.csv and world_timestamps.csv by their closest "timestamp [ns]". We take the "timestamp [ns]" from the gaze and find the corresponding frame. However, the coordinates do not align with the red circle (image).

Therefore, the questions are:

  • How do you calculate the red circle?
  • How can we find out what the person was actually looking at on a certain frame?
  • Could you please share the code or the documentation on how gaze overlay is done in Neon Player?

(image) [The frame is from a gaze overlay enrichment of the video; the blue dots are the gaze coordinates we exported from gaze.csv]

Chat image

user-d407c1 07 December, 2023, 12:09:42

Hi @user-a64a96! You can use this repo as a reference: https://github.com/pupil-labs/densepose-module/blob/main/src/pupil_labs/dense_pose/main.py - it reads the video, reads the gaze and world timestamps CSV files, and uses that information to match them and render the overlay. Simply ignore the DensePose stuff; you can change the pose.get_densepose() function to whatever you want, as at that point you already have the matching scene frame and coordinates.

user-d407c1 07 December, 2023, 12:12:42

Hi @user-ca96f7! The IR LEDs illuminating the eyes run at a peak wavelength (λ) of ~860 nm (centroid 850 nm ± 42 nm Δλ). So while our IR should interfere very little at 780 nm, it will at 850 nm. That said, note that we only use the IR for illumination purposes.

Chat image

user-ca96f7 07 December, 2023, 12:14:34

Okay, thank you for your answer!

user-a64a96 07 December, 2023, 13:03:15

@user-d407c1 Thank you for your answer. I think you forgot to add the link for the repo 🙂

user-d407c1 07 December, 2023, 13:04:32

Sorry! 🤦 Added it.

user-f8b80c 07 December, 2023, 13:36:09

Hello Pupil Team! Could this problem be solved? I'm encountering the same issue at the moment. Even though three of my 4 markers are detected, the surface definition is all over the place...

user-cdcab0 07 December, 2023, 16:57:10

Hi, @user-f8b80c - could you share the enrichment ID so that we can take a closer look? To copy the ID, open the enrichment in Pupil Cloud, then click on the three dots ⋮ at the top of the enrichment info and select "Copy Enrichment ID"

user-292135 08 December, 2023, 00:56:33

Hi Neil, I knew about those instructions. I found that I can access the OnePlus from a 2019 Intel MacBook Pro (Ventura) through Android File Transfer without problems, but I still cannot access it from a 2021 M1 Max MacBook Pro (Ventura). A similar issue is reported here: https://discussions.apple.com/thread/252616053?sortBy=best. OpenMTP (https://github.com/ganeshrvel/openmtp) is software seemingly developed to solve a common bug in data communication between Android and macOS, but it failed in this case; it may be that the Apple Silicon + OpenMTP + OnePlus 8T combination is not a good fit. I hope readers of this post can share some information.

user-4c21e5 09 December, 2023, 03:37:13

I'm not familiar with that repository, but we've not encountered issues transferring files between OnePlus 8 phones and MacBook M1. My question now is: what USB cable are you using? Not all USB cables support data transfer.

user-f8b80c 08 December, 2023, 09:36:37

Hi @user-cdcab0 I tried deleting the old enrichment and adding a new marker mapper, this way it worked better for some videos, but not for all. The new enrichment ID is the following: 2b4fbe8f-7c6d-4eba-8639-e58cad32a2bb I applied the enrichment to all videos in my project (8 in total), for some it works perfectly fine, for others the above problem occurs.

user-cdcab0 08 December, 2023, 10:04:12

Thanks! I've passed the recording ID along so we can investigate

user-7413e1 08 December, 2023, 11:10:55

Hi - I think this is not the issue, as my python version is the latest (3.12). Can you help me understand what's the issue?

user-cdcab0 08 December, 2023, 17:27:23

That very well could be the issue! 3.12 is so new that some libraries are still catching up and don't support it yet. Can you try 3.11?

user-7413e1 08 December, 2023, 12:05:20

Also, as a full reference, this is the error I get. I have already installed the C++ build tools suggested in the error and restarted my laptop.

Error.txt

user-42321a 08 December, 2023, 23:12:20

Hi, @user-cdcab0 - I am having an issue with my Pupil Labs eye tracker: it's not connecting to the phone.

user-4c21e5 09 December, 2023, 03:26:51

@user-42321a, could you please reach out to info@pupil-labs.com and someone from there will help to get you up and running again!

user-10b2f3 08 December, 2023, 23:51:10

Hi @user-cdcab0, could you please help? My Neon stopped connecting to the phone (OnePlus 10 Pro); it makes continuous vibrations coupled with an error display. I have tried updating the firmware, but the issue still persists.

user-4c21e5 09 December, 2023, 03:27:26

@user-10b2f3 Could you please reach out to info@pupil-labs.com and someone from there will help to get you up and running again!

user-292135 09 December, 2023, 06:06:55

I used the same cable for validation, so it was not the cause of the problem. I also found that M1 Max + Monterey + Android File Transfer worked. Finally, I updated the minor version of Ventura (on the M1 Max, the original problematic machine) and it worked. Thanks for your advice!

user-4c21e5 09 December, 2023, 08:02:37

So the macOS update solved the recording transfer issue. Awesome!

user-c87d2d 09 December, 2023, 17:52:34

Hello, I am interested in purchasing a Pupil Neon, can someone provide me with a sample data file output so I can review it? Does the Neon require paid software?

user-4c21e5 11 December, 2023, 02:34:40

Hey @user-c87d2d 👋. Please reach out to info@pupil-labs.com and we will send you an example recording. If you haven't already, you can find an overview of the recording format on our online docs: https://docs.pupil-labs.com/neon/data-collection/data-format/#recording-format

user-ddf0f7 11 December, 2023, 16:46:24

Hi, I have a question about Pupil Cloud. I am trying to obtain ethical approval to use the Neon in a study. I want to use your Pupil Cloud services, but my Research Ethics Board wants to know whether the transfer of this data to your servers is secure. Is the data encrypted before transfer?

EDIT: It would also be great to know whether the data is stored on your servers in an encrypted state as well.

user-480f4c 11 December, 2023, 18:07:43

Hi @user-ddf0f7! Pupil Cloud is fully GDPR compliant. You can find all the details in our privacy policy https://pupil-labs.com/legal/privacy and in this document as well: https://docs.google.com/document/d/18yaGOFfIbCeIj-3_GSin3GoXhYwwgORu9_7Z-grZ-2U/export?format=pdf

user-275c4d 12 December, 2023, 15:04:52

Hello! I would like to use the data output from the Neon device to predict 6D camera pose (relative 3D rotation + 3D translation) using visual-inertial odometry methods. However, it seems that these methods require magnetometer readings from the IMU. After looking through the IMU manufacturer's datasheet (https://invensense.tdk.com/download-pdf/icm-20948-datasheet/) it seems that it has two modes: 1) the first mode uses the integrated digital motion processor (DMP), which outputs a quaternion but no magnetometer readings, and 2) the second mode does not use the DMP and so does not output a quaternion, but does return magnetometer readings. So basically, using the DMP throws away the magnetometer readings but returns a quaternion. Does anyone here know how to set the IMU to the second mode, which returns the magnetometer readings? Thanks!

user-cdcab0 13 December, 2023, 19:43:44

Hi, @user-275c4d - welcome to the community here! Neon uses an onboard algorithm to fuse data from the accelerometer, gyroscope, and magnetometer readings to produce an absolute orientation estimate. Direct-access to magnetometer sensor values is not available

user-275c4d 12 December, 2023, 15:33:49

Also, if anyone has experience with this kind of thing, I would greatly appreciate any tips/suggestions/experiences! Thanks

user-e1140a 12 December, 2023, 16:43:34

Hi! Is there any way to get the scene_camera.json file? Here https://docs.pupil-labs.com/neon/data-collection/data-format/, it says it's included in the recordings folder, but I can't seem to find it

user-480f4c 13 December, 2023, 14:57:12

Hi @user-e1140a ๐Ÿ‘‹๐Ÿฝ ! You can get the scene_camera.json file by downloading the Timeseries Data of the recording(s) of interest from Pupil Cloud.

user-d1f142 13 December, 2023, 15:27:57

Hi, I have a question regarding the relation between the timestamps and the video in data exported from a Neon recording in Pupil Cloud. The enrichment_info.txt points to https://docs.pupil-labs.com/cloud/enrichments/#raw-data-exporter, but unfortunately, that page does not seem to exist. I am currently working under the assumption that the "start_time" in info.json corresponds to the start time of the video, i.e. I can get the corresponding video time by subtracting that start time from the "timestamp [ns]" (in both gaze.csv and world_timestamps.csv). Is this assumption correct?

user-cdcab0 13 December, 2023, 20:14:46

Hi, @user-d1f142 - yes, you have the right idea! Thanks for pointing out that bad link by the way - we recently overhauled our documentation website and it seems there are a couple of loose ends to tie up.
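For concreteness, a minimal sketch of that subtraction, with invented nanosecond values standing in for a real info.json and gaze.csv:

```python
# Map absolute "timestamp [ns]" values onto the video's own timeline by
# subtracting the recording's start_time (all values here are invented).
start_time_ns = 1_701_960_000_000_000_000   # "start_time" from info.json
gaze_timestamps_ns = [
    1_701_960_000_000_000_000,
    1_701_960_000_005_000_000,
    1_701_960_001_000_000_000,
]

# Offset from the start of the video, converted from nanoseconds to seconds.
video_times_s = [(t - start_time_ns) / 1e9 for t in gaze_timestamps_ns]
print(video_times_s)
```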

user-328c63 14 December, 2023, 15:57:21

Hello! I saw on the website that pupillometry data and eye state are now supposedly available on Pupil Cloud. Unless I'm looking in the wrong place, where do I go to access this data? There don't seem to be any new enrichments, and there are no pupillometry-related csv files in the data exports. Thanks!

Chat image

user-d407c1 14 December, 2023, 16:01:18

Hi @user-328c63 ! Recordings made with the Companion App 2.7.4 or above should already have a new CSV file named 3d_eye_states.csv https://docs.pupil-labs.com/neon/data-collection/data-format/#_3d-eye-states-csv that contains pupillometry and eye state. If that's not the case or you need an older recording to be reprocessed, please let us know.
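Once the file is present, a quick sanity check is to load it with pandas. The two-row CSV below and the exact column names are assumptions standing in for a real export:

```python
import io

import pandas as pd

# A made-up two-row stand-in for 3d_eye_states.csv; the real export has
# more columns, and the column names here are assumptions.
csv_text = """timestamp [ns],pupil diameter left [mm],pupil diameter right [mm]
1701960000000000000,3.0,3.5
1701960000005000000,2.5,3.0
"""
eye_states = pd.read_csv(io.StringIO(csv_text))

# Average the two per-eye diameters for each sample.
mean_diameter = eye_states[
    ["pupil diameter left [mm]", "pupil diameter right [mm]"]
].mean(axis=1)
print(mean_diameter.tolist())
```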

user-328c63 14 December, 2023, 16:04:44

Hi Miguel, thanks for your reply! Gotcha. Could you say more about reprocessing older recordings? I have multiple subjects I have already run, and was hoping to extract pupillometry from that data.

user-d407c1 14 December, 2023, 16:07:25

You can reach out to us by email [email removed] with a list of recording ids or workspace ids that you would like us to reprocess for pupil size.

user-f9bb4c 14 December, 2023, 18:45:01

It should be noted that the quaternion does include the magnetometer data - the pose is fully defined. So the magnetometer data is not thrown away. @user-275c4d

user-275c4d 26 January, 2024, 07:29:54

I just saw this message. Thanks for your reply! Yes, I completely agree - the magnetometer data is not simply thrown away but rather integrated with the gyroscope data to get a quaternion. Would you happen to know of a way to recover the raw magnetometer readings from the quaternion outputs? (in microtesla)

user-857e95 14 December, 2023, 22:31:30

Hi, I am going through data I collected using Pupil Capture. However, nearly every file has been misnamed. This cannot be a result of me mislabeling the files beforehand, as I backed up the sessions by screen recording. In the screen recordings I can see that I entered the correct names next to where it says "recording session name", but when I view the files in Pupil Player most have the wrong name. Is there a way to fix this, and how can I prevent this from happening again in the future? Thanks!

user-1391e7 15 December, 2023, 09:03:25

Hello! We've received our Neon glasses this week (yay)! I've been able to test both pairs for a short time, and with one of the two, the Neon Companion app has trouble displaying the gaze for some reason. When I uploaded the recording, after processing, the gaze data was present - I just couldn't display it in the stream or the preview of the app (or when trying to set an offset).

user-d407c1 15 December, 2023, 09:16:58

Hi @user-1391e7 👋! I am sorry to hear that you are experiencing issues with one of your Neons. Would you mind writing an email to info@pupil-labs.com referring to this message? We would follow up with more debugging steps to try to identify what happened, but we might need to ask you for some IDs, purchase order, etc. to better assist you, and I feel it might be better to do so by email.

user-1391e7 15 December, 2023, 09:04:27

I think, since the gaze stream was there in the recording, maybe this is a software issue in the companion app? I need to test further, but any help in getting to the bottom of it would be greatly appreciated

user-1391e7 15 December, 2023, 09:06:41

the visualization shows up for the first few frames and then nothing

user-1391e7 15 December, 2023, 09:06:51

I'll try reinstalling the companion app, maybe something is up there

user-1391e7 15 December, 2023, 09:08:39

hm no, this is persisting :/

user-d407c1 15 December, 2023, 09:12:23

Hi @user-857e95 👋! Are you using Pupil Core or Neon? Would you also mind elaborating on what you mean by the mislabeling? Perhaps state which filenames you find and which ones you were expecting.

user-857e95 15 December, 2023, 15:13:21

Hi Miguel, I'm using Pupil Core - apologies for posting in the wrong room. By mislabeling I mean that the folders are numbered incorrectly. I'm doing driving research and labeled scenarios 1-8. The folders seem to be labeled correctly for the participant ID, but the scenario numbers are all incorrect. The names of each file are created in the following format: participantID_Pre(or)Post_Sc01-08.

It seems as though the scenario number is what got screwed up for all participants. Also some files are duplicated and stored under a different scenario number.

user-1391e7 15 December, 2023, 10:35:55

thank you, I shall do that 🙂

user-1391e7 15 December, 2023, 10:56:26

I found a difference

user-1391e7 15 December, 2023, 10:57:26

the eye preview screen looks persistently fine on the pair that is working

user-1391e7 15 December, 2023, 10:59:11

but on the other pair, it looks like the whole frame is shifted up/down, and maybe that isn't just a display error - maybe it impacts the live preview

user-1391e7 15 December, 2023, 11:25:53

when I upload a recording to the Cloud and something doesn't work out, can I somehow re-upload the recording?

user-1391e7 15 December, 2023, 11:26:13

the problem was, something didn't work during upload, so the recordings were stuck processing

user-1391e7 15 December, 2023, 11:27:46

I "trashed" the two broken recordings on the cloud web interface, but locally, the companion apps tell me the recordings are in the cloud, so I can't try and upload them again

user-d407c1 15 December, 2023, 11:28:51

Hi @user-1391e7 ! I have replied to you by email. The trash only removes the recordings from Cloud, not from the phone.

user-d407c1 15 December, 2023, 11:30:25

You can access the trash and restore them by clicking on the three dots of the search bar, and selecting "Show trashed"

user-1391e7 15 December, 2023, 12:15:10

A general question with regards to the companion app:

I saw that I can enable/disable audio recording in the companion app, which is great. For studies, in which the recording of video is an issue (meaning all we're allowed to record is general eye movement behaviours, but not the world around participants), can I disable video recording as well?

user-1391e7 15 December, 2023, 12:17:45

with the pupil invisible glasses, the workaround was to just disconnect the world-camera 🙂

user-d407c1 15 December, 2023, 12:20:34

You cannot disable the scene camera yet, but you can occlude it. If you would like us to prioritise this feature, please suggest it here.

Additionally, note that you can disable scene video upload to the Cloud for any workspace you choose; when creating it, you will find a toggle for this.

user-1391e7 15 December, 2023, 12:22:20

by occlude, you mean I could just cover up the lens? .. I didn't think of that, that's a very easy solution ^^

user-d407c1 15 December, 2023, 12:25:59

Yes, that's exactly what I meant! I recommend being cautious when placing a sticker over the camera to ensure that no adhesive residue remains on the lens. However, should any residue be left behind, you can normally clean it off.

user-1391e7 15 December, 2023, 12:24:37

the workspace options are awesome! very cool, ty

user-1391e7 15 December, 2023, 12:25:27

we always have to take great care with regards to rights to privacy, so any extra control there helps a lot

user-1391e7 15 December, 2023, 12:26:57

I was thinking I'd use a small piece of thick paper and tape over that, so then there's no contact between lens and anything sticky

user-1391e7 15 December, 2023, 12:28:07

also, the corrective lens swapping is great, really love the solution there.

user-1391e7 15 December, 2023, 12:30:59

the little screwdriver that came with the glasses fits into the center piece, does that mean I could in theory swap the frame for a different one?

user-d407c1 15 December, 2023, 12:44:24

Yes! Neon is modular, meaning you can swap the frame. All frames are 3D printed in high quality and contain a chip that interfaces with the module.

You can see the different frames we have available here, and their specifications here.

Or you can prototype your own frame!

user-1391e7 15 December, 2023, 12:32:30

although, the frame must have something else neat inside, right? since you're connecting that to the usb cable

user-1391e7 15 December, 2023, 12:33:44

maybe the one thing I'm worried about with the glasses for now. accidentally bending the cable over the course of the next year or so

user-1391e7 15 December, 2023, 12:51:47

There is one version under Specifications, called "Ready, Set, Go", which I can't find in the shop? Is that a preview of a version yet to come or was that a prototype you decided against selling later?

user-d407c1 15 December, 2023, 13:07:05

We've temporarily removed this product from our store as we're redesigning it based on the feedback that we received from our customers. We will bring it back soon with some improvements! Thanks for your understanding

user-3306fd 18 December, 2023, 10:21:48

Hi Pupil Labs team 🙂 You mention a performance evaluation in a white paper coming this year (https://pupil-labs.com/products/neon/specs). Is there any news on this?

user-d407c1 18 December, 2023, 10:38:02

Just putting the final touches on this; it will be available pretty soon ™️ (hopefully this week)

user-ff2367 18 December, 2023, 11:13:20

Hey everyone!! I'm trying to use the recently released blink detection algorithm Jupyter notebook, but even after installing the requirements I'm getting a ModuleNotFoundError regarding the pikit module. I tried installing the module separately, but it didn't solve the problem. I also changed my python version, but the problem persists. Has anyone been through this too and could help me?

user-228c95 18 December, 2023, 11:26:02

Hello everyone! How can I export the data in CSV format?

user-228c95 18 December, 2023, 12:03:15

https://docs.pupil-labs.com/neon/data-collection/data-format/ Can I download it directly in CSV format from the Neon Companion app or Pupil Cloud? In the download options, it downloads as code in JSON format. Can I convert this in Python and get CSV format that way? Even though I examined this page, unfortunately I cannot fully understand it. What is the simplest way to download my recording in CSV format?

user-480f4c 18 December, 2023, 12:17:08

Hi @user-228c95 👋🏽 ! To get the data in csv files, you have the following options:

  • You can upload the recording to Pupil Cloud, and then right-click on the recording of interest, select Download > Timeseries Data + Scene Video.

  • If you prefer to work with your data offline, you can use Neon Player. In this case you'd have to export the Android data directly from the phone, or from Pupil Cloud: right-click on the recording and select Download > Raw Android Data. Which data stream are you interested in? Note that data exported from Neon Player does not currently include pupillometry.
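
Once you have the Timeseries download, the CSV files are easy to inspect with pandas. Here is a minimal sketch that loads a gaze.csv and converts the nanosecond timestamps to seconds since recording start; the column name `timestamp [ns]` follows the documented export format, and the tiny inline sample stands in for a real file:

```python
import io
import pandas as pd

def load_gaze(csv_source):
    # Read the export and add a relative-time column in seconds.
    gaze = pd.read_csv(csv_source)
    t0 = gaze["timestamp [ns]"].iloc[0]
    gaze["time [s]"] = (gaze["timestamp [ns]"] - t0) / 1e9
    return gaze

# Tiny inline example standing in for a real gaze.csv
sample = io.StringIO(
    "timestamp [ns],gaze x [px],gaze y [px]\n"
    "1000000000,800,600\n"
    "1005000000,810,605\n"
)
gaze = load_gaze(sample)
print(gaze["time [s]"].tolist())
```

In practice you would pass the path to the downloaded gaze.csv instead of the `StringIO` sample.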

user-e3da49 18 December, 2023, 12:27:04

Hi everyone! Could someone please tell me where I have to upload the reference image and the reference surrounding video? Also, in my case http://neon.local:8080/ doesn't work. Last thing: where can I find the steps for the calibration process for Neon? Even though it doesn't necessarily need one, I got the information that it's recommended.

user-480f4c 18 December, 2023, 12:32:46

As for your last two questions:

  • Are the two devices connected to the same wifi/hotspot? To stream the data in real-time using the Neon Monitor App you need to connect your computer to the same WiFi network as the Companion device.

  • Regarding the offset correction feature, please have a look at this video for detailed instructions.

user-480f4c 18 December, 2023, 12:28:22

Hi @user-e3da49 👋🏽. Regarding your first question: If you're planning to use the Reference Image Mapper enrichment on Cloud, you'd have to follow these steps:

1) Once you have made the eye-tracking recording and the scanning recording, please upload them to Pupil Cloud. This should happen automatically if you have Cloud upload enabled in the Settings of the Neon Companion App on the phone. See our scanning best practices. The reference image is just a picture of your area of interest. This can be taken with any phone.

2) Transfer the image you have taken from your phone to your computer.

3) When the eye-tracking and scanning recordings are uploaded to your workspace on Pupil Cloud, please right-click on them and place them in a new project or in an existing project (New project with 2 selections or Add 2 selections to project).

4) Then you can go to your project. You should see both the eye-tracking recording and the scanning recording.

5) Now you can create the enrichment. Go to the top right Enrichments > Add > Reference Image Mapper. Once you have clicked on the Reference Image Mapper, you should see a panel that requires you to introduce the information about the reference image and the scanning recording. You'd just need to upload the reference image that you should now have locally to your computer, and select the recording that is going to be used as scanning recording.

Have you checked our Pupil Cloud documentation? We have an onboarding video on how to use Pupil Cloud that might be helpful.

user-e3da49 18 December, 2023, 12:40:23

Thank you! Where can I download the Neon Monitor App for Mac?

user-480f4c 18 December, 2023, 12:48:57

To access the Monitor app make sure the Neon Companion app is running and visit the page neon.local:8080 on your computer.

user-e3da49 18 December, 2023, 12:41:14

And yes, all of the devices are connected to the same WiFi and the URL doesn't lead anywhere...

user-e3da49 18 December, 2023, 12:50:46

Still nothing: "This site can't be reached. The DNS address of neon.local could not be found. Diagnosing the problem. DNS_PROBE_STARTED"

user-e3da49 18 December, 2023, 12:50:55

page not found

user-480f4c 18 December, 2023, 12:52:36

are you by any chance using Eduroam or some kind of institutional network?

user-480f4c 18 December, 2023, 12:57:47

Institutional networks like eduroam can be quite restrictive. Can you try typing the IP in the browser rather than neon.local:8080 and see if it works? Alternatively, I would recommend setting up a hotspot using a third-party device.

user-e3da49 18 December, 2023, 12:52:45

yes

user-e3da49 18 December, 2023, 12:53:27

but it also didn't work with other wifi networks...

user-e3da49 18 December, 2023, 13:01:16

That worked, thank you so much!

user-e3da49 18 December, 2023, 13:08:48

Last question regarding the face mapper: I can't figure out how to implement the face mapper into the Cloud. Do I need to program myself and download some training data?

user-480f4c 18 December, 2023, 13:10:57

Using the Face Mapper on Cloud is very simple - you don't need to program anything yourself. You'd have to select the recordings of interest and create a new project (see point 3 here: https://discord.com/channels/285728493612957698/1047111711230009405/1186283819364536390) and then select Enrichments > Face Mapper and click run. Once the enrichment is completed, you'll be able to download the data by going to the Downloads tab. The Downloads tab is in the project's page on the left side of the Cloud interface.

user-d407c1 18 December, 2023, 13:11:37

Hi @user-ff2367 ! Apologies for that; the code has been updated. Could you kindly do a fetch and pull, or re-download the code?

user-ff2367 18 December, 2023, 13:45:30

Hello @user-d407c1. The changes worked. Thank you very much!

user-e3da49 18 December, 2023, 13:39:00

thank you so much!

user-2f2524 18 December, 2023, 13:53:10

Hello guys, can you help me with the "waiting for DNS service" message? RTSP connections are irregular across browsers and networks. Not sure what is missing.

user-480f4c 18 December, 2023, 16:49:47

Hi @user-2f2524 ! Just to clarify, are you trying to use the Neon Monitor App and having issues with streaming? If so, have you checked our relevant troubleshooting section?

user-e3da49 18 December, 2023, 14:37:07

Hello again. Is it usual that it takes more than 10 minutes (still ongoing) for the reference and the face mapper to run? (each)

user-480f4c 18 December, 2023, 15:29:48

Hey @user-e3da49 🙂 It can take more than 10 minutes, but this really depends on several factors (e.g., the duration of your recordings). See this relevant message for more information on that: https://discord.com/channels/285728493612957698/633564003846717444/1154018234299850813

In general, it is highly recommended to slice the recording into pieces and apply the enrichments only in the periods of interest. This can significantly reduce the processing and completion time for your enrichments. You can "slice" the recordings directly on Cloud by adding Events. Please have a look at this guide that also used event annotations to apply multiple RIM enrichments: https://docs.pupil-labs.com/alpha-lab/multiple-rim/

user-442653 18 December, 2023, 16:15:47

Hi, I am having an issue with the Neon Companion live monitor application. It seems that I have a pair of glasses that just doesn't produce a video when connecting to the live monitor: the gaze circle is correct, but the video itself is permanently loading on a grey screen. I have another pair of glasses that I tested with a different phone and it works just fine. Both phones are running the same version of the Neon Companion app.

user-480f4c 18 December, 2023, 16:46:26

Hi @user-442653! By live monitor, do you mean the preview button on the app or the Neon Monitor App ?

user-442653 18 December, 2023, 16:47:11

sorry I meant the neon monitor app my bad for not clarifying

user-480f4c 18 December, 2023, 16:53:00

No worries 🙂 Thanks for clarifying! Can you please check whether the preview of the scene camera in the app works for the phone+glasses combination that shows the grey screen in the Neon Monitor App? You can find the scene camera preview button on the bottom right part of the Neon Companion App.

user-442653 19 December, 2023, 14:19:11

Yes, I have tried the preview with that combination and it works just fine every time

user-90c44a 18 December, 2023, 19:06:05

Hi, I have been having a problem with Pupil Cloud where I cannot download the Time Series data. The download starts, but fails somewhere between 23 and 32 MB with a "Check internet connection" error message.

I have verified that my internet connection is not the problem. I have a consistent connection and am able to download similarly large files from other websites (e.g., Google Drive). Most of the recordings are 1 to 1.5 hours long, so they are quite large, but I haven't had this problem in the past. It started last Friday or so. Is there anything I can do to resolve the issue?

user-e1140a 18 December, 2023, 21:44:52

Hi! Is there any way to get the magnetometer data? Or is there any output data on the magnetic north?

user-4c21e5 19 December, 2023, 11:09:17

Hey @user-e1140a! The raw magnetometer readings are not exposed. However, magnetic north is baked into the quaternion and Euler angles, since the orientation of the IMU is computed relative to it. If you haven't already, be sure to read the IMU docs
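
Since heading information is carried in the quaternion, a heading angle can be recovered from a quaternion sample. Below is a rough sketch using the standard ZYX yaw formula; it is not an official Pupil Labs utility, and you should check the IMU docs for Neon's exact axis conventions before relying on it:

```python
import math

def yaw_degrees(w, x, y, z):
    # Yaw (rotation about the z axis) of a unit quaternion,
    # using the conventional ZYX Euler decomposition.
    return math.degrees(math.atan2(2 * (w * z + x * y),
                                   1 - 2 * (y * y + z * z)))

# Identity quaternion -> ~0 deg; 90-degree rotation about z -> ~90 deg
print(yaw_degrees(1, 0, 0, 0))
print(yaw_degrees(math.cos(math.pi / 4), 0, 0, math.sin(math.pi / 4)))
```

The `w, x, y, z` inputs would come from the quaternion columns of the IMU export.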

user-4c21e5 19 December, 2023, 11:04:57

Hi @user-90c44a 👋. Please try running this speed test from your browser and let me know the outcome: https://speedtest.cloud.pupil-labs.com/

user-90c44a 19 December, 2023, 16:20:03

Hi Neil. Here are the results from the speed test

Chat image

user-e91538 19 December, 2023, 12:03:33

Hi! Is it possible to use the Companion app for the Neon without an account and without internet?

user-480f4c 19 December, 2023, 12:08:07

Hi @user-e91538. To use the Neon Companion App, you would need to sign up using your Google account, or create an account with an email address and password. See our instructions on how to make your first recording

Regarding your second question, internet connection is not required to record data using the Neon Companion App. This data will be first saved on the phone and you can upload it to Pupil Cloud at a later point when you have internet access again. I hope this helps!

user-e91538 19 December, 2023, 12:08:32

Yes, thank you very much!

user-2de7eb 19 December, 2023, 14:31:57

Hi guys, I am running into an "Internal Server Error" when trying to delete recordings from the Pupil Cloud. Would you know how to address this?

user-480f4c 19 December, 2023, 14:34:20

Thanks for clarifying. Have you checked our troubleshooting section? Can you also try using the IP address instead of neon.local:8080? You should be able to find the IP address by clicking on the Stream icon on the top left panel of the Companion App's main screen.

user-442653 19 December, 2023, 14:35:55

I also tried using the IP address and it was still causing problems. I could not find any relevant troubleshooting information in the document you provided when I looked it over.

user-480f4c 19 December, 2023, 15:05:10

Can you please try to re-connect Neon and let us know if the streaming starts? We'd also recommend opting for a low-traffic network (e.g., using a dedicated router or a phone hotspot).

user-442653 19 December, 2023, 15:07:48

hmmm, I just tested it out right now at home and it seems to work. Yesterday I was in the office and tested both my company's router and a dedicated router with zero traffic, and it wasn't working.

user-480f4c 19 December, 2023, 15:09:05

Hey @user-2de7eb! I replied to your email. Please share with us the information I requested in the email and we'll try to resolve this issue as soon as possible.

user-2de7eb 19 December, 2023, 15:09:29

Great, thank you!

user-a64a96 19 December, 2023, 15:40:53

Hey Pupil Labs,

In our data, we noticed a mismatch between the gaze data provided in your gaze overlay enrichment and the gaze data plotted from the CSV using your code from the repository (https://github.com/pupil-labs/densepose-module/blob/main/src/pupil_labs/dense_pose/main.py). We do not understand why there is an offset, specifically a very visible delay, between the gaze data from the CSV files (gaze.csv and world_timestamps.csv) and the overlay circle. This leads us to the following questions:

  1. Why is the data from the CSV and the data in the video desynchronized (the gaze circle in the video lagging behind the CSV data) even if we use the code from your repository?

  2. How can we get the correct gaze coordinates (the exact position where the person is looking) for each given frame, since there is obviously a time shift when using the version from your GitHub?

Here is a visualization of the observed problem. The following video, downloaded from Pupil Cloud, features the normal gaze overlay enrichment (indicated by a large red circle). Our team reproduces the gaze coordinates using the code from the repository mentioned above (marked by a small red circle). As you can see, the large circle lags behind the gaze data from the CSV files. The script provided in the attachment, named sample.py, is a modified version derived from your repository. Its purpose is to synchronize the world_timestamps.csv and gaze.csv files.

sample.py
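
For reference, the nearest-timestamp matching that a script like sample.py performs can be sketched with pandas' `merge_asof`. The column names below assume the documented Timeseries export format, and the timestamps are made-up integers standing in for real nanosecond values:

```python
import pandas as pd

# One row per scene-video frame (from world_timestamps.csv)
frames = pd.DataFrame({"timestamp [ns]": [100, 200, 300]})
# Gaze samples (from gaze.csv); both tables must be sorted by timestamp
gaze = pd.DataFrame({
    "timestamp [ns]": [95, 190, 310],
    "gaze x [px]":    [640, 650, 660],
})
# Attach the temporally nearest gaze sample to each frame
matched = pd.merge_asof(frames, gaze,
                        on="timestamp [ns]", direction="nearest")
print(matched["gaze x [px]"].tolist())
```

With real data you would load both CSVs with `pd.read_csv` first; `merge_asof` then gives exactly one gaze row per frame.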

user-d407c1 19 December, 2023, 15:52:19

Hi @user-a64a96 ! Could you share that raw recording with [email removed] so I can check this issue in more detail? If you simply want a gaze overlay, have you checked Neon Player?

user-a64a96 19 December, 2023, 15:53:02

Sure! I'll send them right now! EDIT: Regarding Neon Player, I have checked it, but we wanted to recreate the gaze overlay of the recording on our local machine using the CSV files. We also wanted to know all the gaze samples for each specific frame.

user-d407c1 19 December, 2023, 16:21:30

I will have a look at this tomorrow. Thanks for reporting it!

user-d407c1 20 December, 2023, 11:29:43

Hi @user-a64a96 ! I noticed that the scene video is missing from the files you shared, which is necessary for me to understand what is going on. However, I have an idea about what might be happening. Did you try to plot the CSV data on the gaze overlay video instead of the original video?

Normally, the eye cameras start slightly earlier than the scene camera after you press the record button. We don't throw away data while waiting for the sensor, which means you have some gaze points over grey frames.

The video renderer / gaze overlay will not include this section of grey frames that appears at the beginning of the video in Cloud, and this will result in different video lengths, which could have caused this issue. What happens if you use the video downloaded from "Timeseries CSV + Video"?
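
The length mismatch described above can be illustrated in a few lines: gaze samples recorded before the first scene frame stay in gaze.csv, while a renderer that drops the grey frames starts later. The timestamps here are made-up nanosecond values:

```python
# Made-up example: two gaze samples arrive before the scene camera starts.
first_frame_ts = 1_000_000_000
gaze_ts = [999_000_000, 999_500_000, 1_000_000_000, 1_005_000_000]

# Dropping samples earlier than the first frame timestamp re-aligns the
# CSV data with a video that excludes the initial grey frames.
aligned = [t for t in gaze_ts if t >= first_frame_ts]
print(len(gaze_ts) - len(aligned))  # samples that precede the video
```

This is only a sketch of the alignment idea, not a drop-in fix for a specific recording.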

user-a64a96 21 December, 2023, 11:42:37

Hi @user-d407c1 Thank you for your email and this response. Perhaps I am missing something, but both videos (the original video and the video from the gaze enrichment) are the same (the number of frames in both is identical). I didn't quite understand why plotting the merged CSV file (world_timestamps.csv and gaze.csv, matched on nearest timestamps) on the original video versus the gaze enrichment video should be different, since the frame numbers in both videos and the merged CSV file are the same.

I also tried to plot the merged CSV file with the original video file, but sadly the gaze point is not quite the same as in the gaze enrichment. I encountered another issue which I discussed in the email yesterday.

user-d407c1 21 December, 2023, 16:12:46

Hi @user-a64a96 ! I had a deeper look at the code that you shared; there are parts that are not necessary, like multiplying by 10e9.

The override parameter in the densepose repo is there to make it compatible with recordings from Pupil Player.

Here you have a barebones script for a simple gaze overlay that maintains the audio. Note that you will need to change the input path.

gover.py
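
The drawing step at the heart of any such overlay script can be sketched without video I/O: paint a filled circle at the gaze position onto one frame, represented here as a plain numpy array. A real script like gover.py would instead decode frames (e.g. with pyav or OpenCV) and typically use `cv2.circle`; this stand-in only shows the idea:

```python
import numpy as np

def draw_gaze(frame, cx, cy, radius, color=(255, 0, 0)):
    # Set every pixel within `radius` of (cx, cy) to `color`.
    h, w = frame.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    frame[mask] = color
    return frame

# A blank 100x100 RGB "frame" with the gaze circle drawn at its center
frame = np.zeros((100, 100, 3), dtype=np.uint8)
draw_gaze(frame, 50, 50, 10)
print(frame[50, 50])
```

Looping this over frames, with one matched gaze sample per frame, reproduces a basic overlay.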

user-a64a96 27 December, 2023, 15:16:48

Hey @user-d407c1 Thank you for sharing your code.

user-d407c1 21 December, 2023, 16:51:53

@user-a64a96 And so you can also see that the temporal offset occurred because you used the gaze overlay video and not the original. Here you have the circle plotted using the code above with the original video, stacked next to the gaze overlay export (left); you can see the temporal offset.

End of December archive