👓 neon


Year

user-f43a29 01 August, 2024, 09:03:54

@user-839751 Here are the instructions for more recent versions of macOS, such as 14.5. Note that there is an order to connecting the cables:

  • If everything is already connected, then first unplug all cables from the USB hub. Also, unplug the hub from the phone
  • Plug Neon into the port marked with 5Gbps on the hub
  • Start the Neon Companion App on the phone and wait for the "Plug in and go!" message
  • Now, plug the USB cable of the Anker USB hub into the phone
  • Wait for Neon to be recognized in the Companion App
  • Now, you can connect the Ethernet cable to the USB hub and to a free Ethernet port on your computer.
  • Wait for the device to be recognized. It might be assigned an IP and show "connected" with a green light in Network settings, but it is not necessary to wait for this.
  • Then, go to General -> Sharing. Do not yet enable Internet Sharing.
  • Click on the Info button, next to the Internet Sharing toggle.
  • Select the name of a device that is not Neon's connection under "Share your connection from". In my case, I chose the WiFi connection (it does not need internet access).
  • Then, toggle your Neon's Ethernet device (in my case, AX88179A) and all other Ethernet adapters under "To devices using".
  • Now, turn on Internet Sharing. On my system, when I did this, macOS automatically decided to toggle off en4. This is okay.
  • Now, open Terminal.app and input ifconfig. This will probe the Ethernet adapter in a way that establishes a bridge to share the connection and assigns it an IP.
  • After entering the ifconfig command and waiting about a minute, the QR code and IP address should be displayed in the stream section of the Neon Companion app. You might have to wait longer than you think.
  • Now, try the discover_one_device function of our real-time API; a minimal sketch is shown below.
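
For reference, a minimal connectivity check with the simple real-time API could look roughly like this (assuming the pupil-labs-realtime-api package is installed; the phone_name/phone_ip attributes come from the simple API):

from pupil_labs.realtime_api.simple import discover_one_device

# Search the local network for a Neon Companion device
device = discover_one_device(max_search_duration_seconds=10)

if device is None:
    print("No device found - check the Ethernet/Internet Sharing setup above.")
else:
    print(f"Found {device.phone_name} at {device.phone_ip}")
    device.close()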

Let us know how it goes.

user-151827 01 August, 2024, 13:05:00

Hi there, I am trying to stream the scene video over RTSP to another app on the companion device in order to do some image processing locally. This is possible, but seems to only work when connected to a WiFi-network. However, I assume that the network is not used to actually transmit the video (since I am using eduroam, which shouldn't allow it). Is there any way to stream the video without any WiFi-network? Cellular network is available.

user-f43a29 01 August, 2024, 14:30:15

Hi @user-151827 , the streaming only happens over WiFi (including hotspots) or via an Ethernet connection. It happens over standard ports that are usually allowed, so I do not think that eduroam blocks such data in general, but it could also be that your institution has configured its own eduroam settings to allow it. However, do note that you will not have the best data transmission over eduroam. The quality will be quite variable.

We recommend a local, dedicated WiFi router, but you can also use Ethernet over the tested USB hub if that's necessary for your use case.

Streaming via the real-time API over a cellular network is not possible, but you can live stream the display by joining a Google Meet session with the phone and sharing the screen.

user-f43a29 01 August, 2024, 14:42:49

@user-151827 , my apologies, my colleague just gave me a tip and, upon re-reading your message, I see that you are running an additional app on the Companion device.

user-151827 01 August, 2024, 14:42:43

Hi @user-f43a29 , thanks for your response! My issue is that I would like to access the video stream in an Android app on the Companion Device itself. This app would take the video stream as an input, process the images (with some OpenCV operations), and output the results. A very minimal proof of concept of this works, as I'm able to watch the RTSP video stream in the other app, but only when connected to a WiFi-network.

I am wondering if there is any way to work around this? The reason I assume that the video isn't actually streamed over eduroam is that I can access the stream in the other app when connected to eduroam. However, when trying to stream video to a laptop over eduroam, that doesn't work (I checked this with @user-d407c1 before and he confirmed that eduroam sometimes doesn't allow the streaming).

user-f43a29 01 August, 2024, 15:08:32

@user-151827 I'll coordinate with my colleagues and will likely have a response by next week.

user-f43a29 01 August, 2024, 14:43:21

Yes, indeed, @user-d407c1 just pointed out your use case to me 🙂

user-151827 01 August, 2024, 15:11:59

Thank you @user-f43a29 , I appreciate your help!

user-1e53d0 01 August, 2024, 20:59:59

Hey there! I'm trying to get the heat map from my recordings but it's not working 😕 any help?

user-480f4c 02 August, 2024, 05:52:23

Hi @user-1e53d0 👋🏽. How are you trying to obtain the heatmap? Heatmaps are generated based on enrichments. Have you checked our documentation? For a general introduction to Pupil Cloud, I'd recommend having a look at our onboarding video.

user-151827 02 August, 2024, 09:11:35

@user-d407c1 Some weeks ago, you suggested using ExoPlayer to handle the incoming RTSP stream in our custom Android App. However, I noticed large delays (>1 second) when doing so and found similar issues online. Is this something you are aware of and have a solution/workaround for, or are there better alternatives? I have found a user-made RTSP client that does not buffer and that has minimal latency (<10 ms), but maybe there is a way to do it with an officially supported package.

user-d407c1 02 August, 2024, 09:16:41

Hi @user-151827 ! Unfortunately I have no experience with those packages, so I cannot make any recommendation. I suggested ExoPlayer only because it is the one in the official Android documentation, nothing else, but that package looks quite promising, so if you implement it, let us know how it goes.

user-baddae 02 August, 2024, 12:48:34

Hi, what is the best way to measure inter-eye distance? Is there a specific procedure to ensure accuracy?

user-d407c1 02 August, 2024, 13:02:12

Hi @user-baddae ! In clinical and optometric environments, it's common to use a pupilometer, but a ruler works just as well. Here's how you can measure:

  1. Ask the wearer to look at a distant object (to avoid eye convergence).
  2. Position the ruler in front of their eyes and measure from the center of one pupil to the center of the other.

You can use a small light to help locate the center of each pupil, but ensure they are looking at a distant object.

user-4475fd 02 August, 2024, 15:52:41

Neon hardware robustness

user-a2d00f 03 August, 2024, 09:57:35

Hi, I have an issue: when I use the real-time API to get IMU info, I want to get the Euler angles (yaw/pitch/roll), but I can only get the quaternion. How can I transform the quaternion to (or directly get) the Euler angles in real time?

Chat image

user-f43a29 03 August, 2024, 17:13:19

Hi @user-a2d00f , are you using the real-time API via Python?

user-f43a29 05 August, 2024, 10:24:31

@user-a2d00f Ok, then you can use the SciPy package to transform the quaternion values to Euler angles with the following code:

from scipy.spatial.transform import Rotation as R

euler_angles = R.from_quat([quat_x, quat_y, quat_z, quat_w]).as_euler(seq="XZY", degrees=True)

It takes into account the appropriate axis labels for the IMU. The angles (in degrees) will be returned in an array in the order [pitch, yaw, roll]. Note that the real-time API does not provide the Euler angles in real-time, only the quaternion. The Euler angles are computed for you on Pupil Cloud and via Neon Player.
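
As a minimal sketch of doing this live (assuming a recent pupil-labs-realtime-api version that exposes receive_imu_datum() and an IMU sample with a quaternion field holding x/y/z/w components - both of these are assumptions, so check against your installed version):

from pupil_labs.realtime_api.simple import discover_one_device
from scipy.spatial.transform import Rotation as R

device = discover_one_device(max_search_duration_seconds=10)

imu_sample = device.receive_imu_datum()  # assumed method name
q = imu_sample.quaternion                # assumed field layout (x, y, z, w)

# Same conversion as above, applied to one streamed sample
pitch, yaw, roll = R.from_quat([q.x, q.y, q.z, q.w]).as_euler(seq="XZY", degrees=True)
print(f"pitch={pitch:.1f}, yaw={yaw:.1f}, roll={roll:.1f}")

device.close()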

user-baddae 05 August, 2024, 11:29:54

Hey! Is there a way to know which frame of the scene camera video correlates to data points, to match up the frames of the video to the collected data? I have everything downloaded from pupil cloud but I only have the timestamps of the datapoints, not their relation to the frames of the video

user-07e923 05 August, 2024, 11:42:35

Hey @user-baddae, may I ask what's your goal of correlating data points to the scene camera? For context, gaze is sampled at 200 Hz, and the scene camera is sampled only at 30 Hz. This means you're going to get a lot more gaze than scene camera data at any given frame.

user-baddae 05 August, 2024, 11:44:22

We're trying to do some image processing with the data and the video feed

user-07e923 05 August, 2024, 11:47:00

Okay. You can get the scene camera's timestamps from Pupil Cloud by downloading the timeseries data. The timestamps are found in the world_timestamps.csv. Each row is the timestamp per frame in ns.

Using this .csv, you can then correlate the gaze data from gaze.csv. You might need to first include/exclude data using timestamps as the criteria, to find gaze within a specific frame.
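
For example, a small sketch with pandas (assuming the standard Timeseries column name "timestamp [ns]" in both files) that assigns each gaze sample to the scene frame shown at that moment:

import pandas as pd

world = pd.read_csv("world_timestamps.csv")  # one row per scene video frame
gaze = pd.read_csv("gaze.csv")               # ~200 Hz gaze samples

world = world.sort_values("timestamp [ns]").reset_index(drop=True)
world["frame_index"] = world.index

# Match each gaze sample to the last frame whose timestamp precedes it
gaze_with_frames = pd.merge_asof(
    gaze.sort_values("timestamp [ns]"),
    world[["timestamp [ns]", "frame_index"]],
    on="timestamp [ns]",
    direction="backward",
)
print(gaze_with_frames[["timestamp [ns]", "frame_index"]].head())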

user-baddae 05 August, 2024, 13:06:25

Ah ok thank you!

user-dcc847 06 August, 2024, 16:41:05

Dear Neon Support,

I am experiencing an issue while using the real-time-screen-gaze feature and would like to ask for your assistance.

As per the instructions on GitHub, I have obtained gaze data from the Realtime API and passed it to the instance for GazeMapper processing. However, I am not receiving gaze positions mapped to the screen coordinates.

I have tried using 4 or 8 markers and increased their size, but the issue persists.

Could you please provide a solution?

Here are the versions of the packages I am using:

  • pupil_labs_realtime_api 1.3.1
  • real_time_screen_gaze 1.0.2

user-cdcab0 06 August, 2024, 17:06:23

This is usually the result of poor marker visibility/detectability. In most environments, you will need to adjust the exposure time in the Companion device. If you open the scene camera preview in the companion app, you'll be able to see what the camera sees, and this is a good place to start. You'll also want to be sure that your markers are large enough and have large-enough, white margins around them. If you're unsure, share an image from your scene camera with your markers in view, and we can troubleshoot from there

user-f6ec36 07 August, 2024, 01:45:04

Hi. I have a few queries about Neon output.

user-07e923 07 August, 2024, 06:18:25

Hi @user-f6ec36, thanks for reaching out 🙂 Feel free to type your questions here.

user-f6ec36 07 August, 2024, 06:21:49

We would like to have the following metrics for a particular study. Can we get them from the Cloud software?

user-f6ec36 07 August, 2024, 06:23:31

Chat image

user-f6ec36 07 August, 2024, 06:25:04

The client would like it in Excel format.

user-07e923 07 August, 2024, 06:27:24

These are the fixation metrics we currently provide on Pupil Cloud. They can be obtained by drawing areas of interest, see here. All the data that we provide can be downloaded as .csv files.

user-07e923 07 August, 2024, 06:27:28

Chat image

user-f6ec36 07 August, 2024, 07:14:22

We would need some help with this. Can we write to support or sales requesting a session?

user-480f4c 07 August, 2024, 07:15:32

Hi @user-f6ec36 - we have received your email, and we will reply today to schedule the Onboarding Workshop you requested. We can then schedule a call and address any questions you might have regarding metrics and how to obtain them from Pupil Cloud.

user-f6ec36 07 August, 2024, 07:15:54

Thanks, Nadia. Appreciate the same.

user-5c56d0 07 August, 2024, 07:17:56

Dear sir, excuse me. Could you please answer the following?

Q1 https://docs.pupil-labs.com/neon/neon-player/blink-detector/

Regarding blinks.csv, which of the following does "Duration [ms]: Duration of the blink in milliseconds" match? If neither, which one is it closer to?

(1) The time range between when the eyelid starts to close and when it finishes opening. This is the duration of the blink as described in general papers such as https://ieeexplore.ieee.org/document/8125844 or https://www.sciencedirect.com/science/article/abs/pii/S1746809421004808.

(2) The time interval between the pupil beginning to be hidden by the closing eyelid and the pupil becoming fully visible again as the eyelid opens. This duration is calculated according to whether the pupil is hidden or not.

Q2: Is it safe to treat this Duration value as the correct blink duration for a paper? (Or is Neon's blink duration not suitable for use in a paper, because it does not necessarily indicate the true duration of the blink?)

Q3: Is the above the same for both Neon and Invisible devices?

user-07e923 07 August, 2024, 07:29:58

Hi @user-5c56d0, thanks for contacting us 🙂. Blinks are detected via an eyelid opening and an eyelid closing event. This detection is the same between Neon and Invisible (Q3).

For Q1, I recommend reviewing the white paper from the link. The description from the white paper about how blinks are detected is similar to (1). There are also details on when false detections are rejected, as well as the minimum duration needed for a blink.

As for Q2, I'm not sure I understood what you mean by "safe to treat this duration value as the correct data". If you're unsure about the blink duration, you can manually process the raw binary data offline by downloading the "Native Recording data" folder. You can then edit the blink detector script to suit your research needs, or employ an entirely different blink detector.

user-480f4c 07 August, 2024, 07:43:28

@user-5c56d0 to add to the information already provided by @user-07e923: You can run the blink detector by yourself and evaluate it for your use case, following this tutorial

user-5c56d0 07 August, 2024, 09:09:07

Sorry for the confusing question. Thank you for your response.

I apologize for asking similar questions repeatedly. ※ I plan to use the Duration [ms] values obtained from Pupil Cloud as they are.

① Are the Duration [ms] values obtained from Neon in blinks.csv a reliable means of measuring the actual duration of blinks?

② Do the Duration [ms] values obtained from Neon closely match the actual duration of blinks?

user-50c22d 07 August, 2024, 11:28:03

Hi, would it be possible to extract the pupil ellipse (major and minor axes) from the Neon 3D eye-states measurements? Or can we only access the average pupil diameter? Thanks!

user-07e923 07 August, 2024, 11:29:46

Hey @user-edb34b, thanks for reaching out 🙂 I've moved your message to this channel since it's concerning Neon, and not Core XR.

The pupil diameters provided by Neon aren't an average, but the physiological pupil size in mm for each eye. You can learn more about the 3D eye states file here.

If you'd like to access the values in real-time, such as during recording, you can do that using the real-time API.
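
As a rough sketch (assuming a recent Companion app and pupil-labs-realtime-api version in which gaze samples carry eye-state fields such as pupil_diameter_left/right when eye-state streaming is enabled):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device(max_search_duration_seconds=10)
gaze_sample = device.receive_gaze_datum()

# These attributes are only present when the app computes and streams eye state
print(f"Left pupil: {gaze_sample.pupil_diameter_left:.2f} mm, "
      f"right pupil: {gaze_sample.pupil_diameter_right:.2f} mm")

device.close()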

user-bf6395 07 August, 2024, 17:58:37

Hi all, I am starting a gaze tracking project with neon. Does anyone know where I can find existing open-source datasets or ML models? It would save me a lot of time. Thanks!

user-435838 07 August, 2024, 22:03:32

Hello, is there a way to extract raw gaze direction values from recordings made with the Neon Companion app on the device?

user-f43a29 08 August, 2024, 06:50:33

Hi @user-435838 so that I am certain I fully understand the question, could you clarify what you mean by "raw gaze direction"? Do you want gaze in a 3D world reference frame or do you want it in the scene camera reference frame (similar to 3D eyestate)? If it helps, you could draw what you mean on a piece of paper and send a photo.

user-f43a29 08 August, 2024, 08:00:07

Hi @user-151827 , my apologies for not getting back to you.

The team is aware of it. A fix is being worked on, and what form it will take is still being determined.

If you want, you can open a ticket in 🛟 troubleshooting to make it easier for us to stay in contact.

In the meantime, you can connect the Companion device to the hotspot of a second phone to initiate a network connection and start the stream.

user-151827 08 August, 2024, 08:21:07

@user-f43a29 Thanks for looking into it. I'll open a ticket. The interim solution should be sufficient for the near future, but a fix would be excellent.

user-20657f 08 August, 2024, 15:23:04

Hi there! What does the percentage bar indicate in the fixations heatmap figures?

user-f43a29 08 August, 2024, 15:48:55

Hi @user-20657f , have you seen the heatmaps documentation, including the implementation details at the bottom? If that does not answer your question, then just let us know.

Regarding the second question, you can use events to mark the beginning and end of each stimulus, such as "start-picture-1" and "end-picture-1". The events can have whatever name you want, so long as the event names are consistently used across all recordings that you are studying. You can easily add events during a recording using the real-time API or you can manually add them post-hoc in Pupil Cloud. There is also a way to programmatically add them post-hoc via the Pupil Cloud API, but this will require that you have a record of when each stimulus was presented to each participant, especially since presentation order was randomized. Will you be needing the Pupil Cloud API?

Once you have the events in place, then you can use them to set an analysis window when running an Enrichment, also known as Enrichment Sections. You will find this under "Advanced Settings" when creating an Enrichment. Be sure to set the section before running the Enrichment.
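
As an illustration, adding such events during a recording with the simple real-time API could look roughly like this (the event names are just examples):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device(max_search_duration_seconds=10)

device.send_event("start-picture-1")
# ... present the stimulus ...
device.send_event("end-picture-1")

device.close()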

user-20657f 08 August, 2024, 15:23:16

Is it the % of recordings in which there was a fixation on that area?

user-20657f 08 August, 2024, 15:24:49

Also, I was curious if someone could help me with the "time to first fixation" metric if I am randomizing the order of an image being shown to several individuals. I essentially want to know the time from when the picture was shown to when the first fixation was on certain AOIs. Right now, since my images are being presented in a randomized order, it is giving me the time to first fixation since the start of the images, which does not represent that metric. Thank you so much!

user-20657f 08 August, 2024, 20:40:02

Would there be a way for me to set "Image 1 start" and apply that to all of the recordings for that image?

user-20657f 08 August, 2024, 20:40:48

I'm just curious why that wouldn't work, given that Image 1, for example, despite being presented in a randomized order, still has the same barcodes/symbols in its corners across different recordings.

user-f43a29 09 August, 2024, 09:36:42

Hi @user-20657f , in Pupil Cloud, events are not tied to the output of the Enrichments, so creating an event does not link it to the detection of a specific set of AprilTags. This means an event cannot be propagated across recordings in that way.

Since you have collected your data, here are some ways to do what you are asking:

  1. You can skip events. Rather, draw your AOIs for a given surface and then run the Marker Mapper Enrichment on your selected recordings. Once it is finished, download the enriched data and in there, you will find timestamps for fixations and the surfaces. You can also get timestamps for the AOIs. You can correlate this data in an analysis environment, like Python, to compute time to first fixation for a given AOI, calculated from the start of stimulus presentation.
  2. If you want the metrics as computed by Pupil Cloud and are okay with some programming, then you can follow a similar process as item 1, but rather than calculate time to first fixation yourself, you can use the Pupil Cloud API to send events (https://discord.com/channels/285728493612957698/1047111711230009405/1204088822892073033) based on the surface and AOI detection timestamps. Then, you can use these events in the "Advanced Settings" when creating an Enrichment.
  3. Alternatively, if you have a record of the stimulus presentation timestamps and you have a way to synchronize them with Neon’s clock, then you can submit them directly via the Pupil Cloud API (see the link in item 2 above).
  4. If you would prefer to do everything in the Pupil Cloud UI, then you can add the events manually.

For future reference, it is easiest to add events during the experiment using the real-time API.
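
To make item 1 more concrete, here is a rough sketch (the column names are assumptions based on the Marker Mapper export format, so adjust them to the headers in your actual download; the AOI box and onset timestamps are placeholders you would supply yourself):

import pandas as pd

fixations = pd.read_csv("fixations.csv")  # Marker Mapper enrichment export

# Hypothetical AOI as a bounding box in normalized surface coordinates
aoi = {"x_min": 0.1, "x_max": 0.4, "y_min": 0.2, "y_max": 0.6}

on_surface = fixations[fixations["fixation detected on surface"] == True]
in_aoi = on_surface[
    on_surface["fixation x [normalized]"].between(aoi["x_min"], aoi["x_max"])
    & on_surface["fixation y [normalized]"].between(aoi["y_min"], aoi["y_max"])
]

# Stimulus onset per recording, e.g. from your presentation log (timestamps in ns)
stimulus_onset_ns = {}  # {recording id: onset timestamp [ns]}

for rec_id, group in in_aoi.groupby("recording id"):
    onset = stimulus_onset_ns.get(rec_id)
    if onset is not None:
        first = group["start timestamp [ns]"].min()
        print(rec_id, "time to first fixation:", (first - onset) / 1e6, "ms")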

user-bba4b8 09 August, 2024, 00:31:29

Hi Pupil Labs Team, I have the rare opportunity to refit a room specifically for eye-tracking using the Neon glasses. Estate management has asked about the ideal ceiling lighting conditions. Could you provide advice on the optimal levels of lighting warmth, luminance, or any other suggestions for room lighting?

user-480f4c 09 August, 2024, 06:55:21

Hi @user-bba4b8. Thanks to its deep learning approach, Neon provides robust gaze data in any environment (even in direct sunlight or complete darkness). Therefore, the room lighting will not affect your eye tracking data.

Room lighting can, however, impact the clarity of your environment as captured by Neon's scene camera. For instance, overly bright areas in your scene may appear overexposed in the camera feed. To ensure you have an optimal and clear view of your scene in the Neon scene camera video, we recommend reviewing our docs on the different exposure modes offered in the Neon Companion App settings. The scene camera's exposure can be adjusted to improve image quality in different lighting conditions.

user-3ecd80 09 August, 2024, 06:26:34

Hello, we are in the market for eye-tracking goggles. My biggest concern is not the hardware but actually the software. I am worried that defining the AOIs will be extremely cumbersome. We want to study the usability of our devices using eye tracking. Our devices are combinations of hardware and a screen with a GUI. I remember when I was at university, we went through the recordings frame by frame to record the fixations. Extremely time-consuming. I know it is now possible to draw in AOIs, but if the head or participant moves, do I need to manually move the AOIs to realign everything? Is there some sort of AI that can be used to recognize different parts of the GUI so that AOIs do not have to be moved manually? How time-consuming is it to get useful information out of an eye-tracking video?

user-480f4c 09 August, 2024, 07:01:59

Hi @user-3ecd80! If your use case requires mapping gaze/fixation data onto AOIs, then I'd recommend considering the tools we offer on Pupil Cloud. For example, our Reference Image Mapper enrichment allows you to map gaze/fixations from your eye tracking recording onto a 2D image of your AOI. This is a powerful tool - it works with real-world AOIs (e.g., a table, a chair, even an entire building). You can see a demo in this tutorial where we used this tool to map gaze onto areas of interest throughout an entire room as the wearer was moving.

Following completion of this enrichment, you can define further AOIs on the reference image using our AOI Editor Tool.

If you'd like to learn more, I'd recommend scheduling a demo call with us to discuss about the software and how you can analyze your data using our tools. Feel free to reach out at info@pupil-labs.com and we can schedule a call.

user-292135 09 August, 2024, 06:30:47

Hello team, is there any easy way to download specific time series data (e.g., saccades) from Pupil Cloud? I have 10+ long recordings and want to download only certain types of data, but there is no option to select them. As a result, I have to download the entire time series data, which is painfully slow and could be costly for you.

user-480f4c 09 August, 2024, 07:30:32

Hi @user-292135 - Selecting only one file for download is not currently available on Cloud. Feel free to add this feature to the 💡 features-requests 🙂

However, getting the folders via the Pupil Cloud API would be less time-consuming. You can programmatically download a selected recording (or a set of recordings) via the API and work only with the saccades.csv file. You can find an example attached.

example_download_cloud.py

user-a09f5d 09 August, 2024, 16:21:59

Hello, I am using the values of eyeball center and optical axis provided within the 3d_eye_states.csv file to cast an optical axis ray for each eye and visualise this using pyvista (I ultimately plan to correct this to the visual axis). I need to visualise this within a coordinate space of my choice. To do this I have used the tag aligner program provided by your Alpha Lab, which has given me an aligned_poses.csv file. I would like to use this file to get my optical axis rays into this calibrated space by using the values of translation_x,y,z (which will be converted to mm so they are in the same units as eyeball center) as the origin of the scene camera. However, I do not know how to match the timestamps within aligned_poses.csv with those from 3d_eye_states.csv (or any other file for that matter). Each row in 3d_eye_states.csv gives timestamp [ns], which is a very large number (e.g. 1.72E+18). This timestamp seems to match up with those provided in other files such as gaze.csv and imu.csv. However, it does not match the values given for start_timestamp and end_timestamp in the aligned_poses.csv file, which are much smaller (e.g. 1.5).

1) Could you please tell me what the units are for start_timestamp and end_timestamp in aligned_poses.csv? 2) Crucially, how can I match the timestamps in aligned_poses.csv with the timestamps in 3d_eye_states.csv, so that I know which rows in each file correspond with each other?

user-cdcab0 09 August, 2024, 17:23:31

Hi, @user-a09f5d - I believe the timestamps of the poses are in seconds, measured from the recording start time, whereas other timestamps are measured in nanoseconds from the Unix epoch. To convert the pose timestamps, first multiply by 1e9, then add the result to the recording start time, which should be in the info.json file.

The aligned poses are calculated from scene camera images, which, as you probably know, are sampled at ~30 Hz, whereas eye states come from the eye camera images at ~200 Hz. Since the data come from different sensors, the timestamps will never match up perfectly - you will need to match them by finding the nearest values
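
A minimal sketch of that conversion and nearest-neighbour matching with pandas (assuming aligned_poses.csv from the Tag Aligner tutorial, the Timeseries 3d_eye_states.csv, and an info.json whose recording start time is stored under "start_time" in nanoseconds - that key name is an assumption, so check your file):

import json
import pandas as pd

poses = pd.read_csv("aligned_poses.csv")
eye_states = pd.read_csv("3d_eye_states.csv")

with open("info.json") as f:
    recording_start_ns = json.load(f)["start_time"]  # assumed key name

# Seconds since recording start -> nanoseconds since the Unix epoch
poses["start_timestamp [ns]"] = (poses["start_timestamp"] * 1e9).astype("int64") + recording_start_ns

# For each ~200 Hz eye-state sample, take the nearest ~30 Hz pose
matched = pd.merge_asof(
    eye_states.sort_values("timestamp [ns]"),
    poses.sort_values("start_timestamp [ns]"),
    left_on="timestamp [ns]",
    right_on="start_timestamp [ns]",
    direction="nearest",
)
print(matched.head())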

user-b6f43d 12 August, 2024, 09:10:55

Hey Pupil Labs, while collecting data during the daytime I am facing this issue of too much brightness. I am not able to see what the wearer is exactly looking at in the scene video; it is exposing for what is inside the car rather than what is outside. How do I deal with this?

Chat image

user-07e923 12 August, 2024, 09:23:17

Hey @user-b6f43d, thanks for reaching out 🙂 The "brightness focus issue" you're describing is related to camera exposure. You can change the exposure modes before recording (or even during). However, it's not possible to modify a recording's exposure.

user-dcc847 12 August, 2024, 15:51:36

Hello Pupil Labs.

I followed the Real-Time API tutorial to obtain scene video, gaze data, pupil diameter, and eye state data, but I am unable to output the pupil diameter information.

The XY coordinates of the gaze, the positions of the centers of both eyeballs, and the optical axis information of both eyes were output without any issues.

The error output is as follows:

'GazeData' object has no attribute 'pupil_diameter_left'
  File "C:\Users\key06\Documents\Python_UDP\neon_test1.py", line 12, in <module>
    print(f"Pupil diameter in millimeters for the left eye: {gaze_sample.pupil_diameter_left} and the right eye: {gaze_sample.pupil_diameter_right}\n")
AttributeError: 'GazeData' object has no attribute 'pupil_diameter_left'

The versions of Python and the Real-Time API are as follows:

  • Python 3.10.9
  • pupil_labs_realtime_api 1.3.1

Could you please provide a solution to this issue?

user-f43a29 13 August, 2024, 08:26:32

Hi @user-dcc847 , may I ask what version of the Neon Companion App you are running on the Companion device?

user-85dce8 13 August, 2024, 12:12:18

Hi, I have a question regarding the calculation of the pupil diameter. I have read that it is only available in real-time and on Pupil Cloud. I export the data from the app to my computer. When importing the data into Neon Player, I can apply the plugin "Eye State Timeline", which should calculate pupil diameters, eyeball centers, and optical axes. However, when opening the csv file 3d_eye_states, there is no data (all the other data was available). So my question is: is there a way to calculate the diameter without using Pupil Cloud?

user-07e923 13 August, 2024, 12:18:52

Hi @user-85dce8, thanks for reaching out 🙂 You can now get pupil diameter offline by exporting data from the Companion app onto your computer, or downloading the "Native Recording data" from Pupil Cloud, and then dropping the folder into Neon Player.

Which operating system and Neon Player version are you currently using?

user-151827 13 August, 2024, 12:52:59

Hi there, Is it at all possible to access the rtsp stream of the camera remotely (i.e. not on the local network)? Will port forwarding work or is there an easier solution?

user-4bc389 14 August, 2024, 05:31:25

How to solve the problem shown in the picture when using Neon?

Chat image

user-480f4c 14 August, 2024, 06:13:14

Hi @user-4bc389 - could you please open a ticket in the 🛟 troubleshooting channel? We will assist you there in a private chat

user-de8319 14 August, 2024, 06:32:01

Hi, is it possible to order a set of prescription lenses from -3.0 dpt to +3.0 dpt, in 0.5 diopter steps? Thanks.

user-07e923 14 August, 2024, 06:41:17

Hey @user-de8319, thanks for reaching out 🙂 Sure! Please contact our sales team --> sales@pupil-labs.com with your contact, telephone, shipping address, and your request.

Btw: we also sell a lens extension kit that ranges from -6 to +6 diopters. If you don't have the "I can see clearly now" frames, you can simply buy the frame + lens kit. Otherwise, you can buy the lens extension kit if you already have the frame. Check out the accessory section of our online store.

user-0001be 14 August, 2024, 15:37:59

Hello! What happens if we accidentally update the Android software? It keeps popping up and I've been snoozing it every time since we got the Motorola phone. Today I noticed the update has been downloaded, but it needs to be restarted to be updated officially. In the event that the phone restarts and updates to the new software, from my understanding the app won't work, as it works with this specific version of the Android OS. How should I go about this?

user-07e923 14 August, 2024, 15:52:47

Hi @user-0001be, thanks for reaching out 🙂 Firstly, don't panic. I understand that you might have unknowingly upgraded Android. Moto's system update is very aggressive. The good news is that we usually provide a way to roll back the wrong Android version from our documentation page.

If you look at the compatible devices page, we do support Android 14 for the Moto Edge 40 Pro.

user-20657f 14 August, 2024, 17:19:32

Why are some of my "time to first fixation" values equal to "0" when I have fixation metrics for this participant that weren't 0?

user-20657f 14 August, 2024, 17:20:18

For instance, one observer looked at a part of an image for an average of 435 ms, 10 times, but then time to first fixation and total fixation duration = 0.

user-07e923 15 August, 2024, 06:55:03

Fixation-based metrics on Pupil Cloud

user-bef103 15 August, 2024, 11:34:22

Hey Pupil support team, I work at iMotions, and we have recently experienced issues with live recordings done with our software when using the Neon glasses. While the recording is being made, the gaze is fixated on the left side of the screen for the entire recording, and this results in bad eye tracking data. If anyone can DM me privately, I can further explain the issue.

user-d407c1 15 August, 2024, 11:43:10

Hi @user-bef103 ! Could you open a ticket on 🛟 troubleshooting and describe the issue there? Btw, have you tried clearing the app's cache?

user-bef103 15 August, 2024, 11:43:19

Yeah, I have.

user-b6f43d 15 August, 2024, 13:13:38

Is there any way in which I can download the scene video with the gaze, fixations, and saccades overlaid on it, just like how it shows in Pupil Cloud? If I download the scene video, it just comes as a raw video.

user-d407c1 15 August, 2024, 13:31:21

Hi @user-b6f43d ! The video renderer is designed for this exact purpose. Just place all the recordings you want to export in a project, then create a new visualisation on the left panel and use the Video Renderer option.

You can define the start and end points of the segments you'd like to include using events (to trim the recordings, if necessary). Additionally, you can configure how the gaze circle is displayed, show or hide fixations, and undistort the video. Once you're ready, click "Run," and after processing, you'll be able to download the final video.

user-20657f 15 August, 2024, 19:37:38

Also, I have a second inquiry: how can I change the final enrichment image that gets carried through to the visualizations? Some images have been coming out a bit faded or with strange lighting. We have been keeping recording settings and lighting consistent, so I am wondering if the final enrichment image can be enhanced or changed to a better-quality timestamp. Thank you!

user-f43a29 15 August, 2024, 21:47:25

Hi @user-20657f , let me see if I can clear it up.

The primary purpose of markers is to define the boundaries of surfaces. In some cases, they can mark the start and end of a stimulus presentation, but markers can also be used to define the boundary of a kitchen table, for example. Sometimes, users print the markers and attach them to the corners of their monitor, and in that case, there would be the same markers for all stimuli, including the inter-stimulus intervals. In such a case, marker detection cannot be used to determine beginning and end of a stimulus.

So, rather than impose restrictions on what the markers mean, we have events to denote analysis windows and "moments of interest". If you use the default recording.begin and recording.end events, then time to first fixation is measured from the start of the recording, not the start of marker detection. If you use events that are positioned at the start and end of the marker presentation, then that is one way to get what you want.

I understand the difficulty of doing it manually in your case, so one programmatic solution is method 1 from the previous message, by which you sidestep events and obtain the start and end timestamps of marker detection, which would correspond to the start and end of your stimuli. Then, you can compute the values you need. There is also method 2, which uses the Cloud API. If you click the link in method 2 (https://discord.com/channels/285728493612957698/1047111711230009405/1204088822892073033), then you will be taken to a message with example code and details of the Cloud API. Method 2 would require that you have saved when each stimulus was presented.

If you are looking for dedicated programming support, then we do offer support packages.

user-20657f 15 August, 2024, 23:08:47

OK thank you! I will look into that. Also I am running some enrichments on a lot of recordings right now. They have been running for 2 hours and 0% progress. Is the site down or something?

user-20657f 15 August, 2024, 23:10:46

All recordings have been stuck on 0% processing

user-20657f 15 August, 2024, 23:17:36

without doing all 20 images with 160 recordings each at once

user-20657f 15 August, 2024, 23:20:52

And once I run the enrichment, can I close my laptop - i.e., will it run in the background within the cloud?

user-f43a29 15 August, 2024, 23:22:06

Yes. None of the processing runs locally. The Enrichments will continue to process on the Pupil Cloud servers.

user-20657f 16 August, 2024, 01:48:37

I'll wait until morning; however, it's been about 6 hours and every single recording is still on 0%. It doesn't seem like it's moving.

nmt 16 August, 2024, 02:15:47

Hi @user-20657f! Can you try refreshing your browser window? Sometimes browsers can cache the page, making it appear like the processing is still ongoing.

user-20657f 16 August, 2024, 02:31:44

5+ hours

Chat image

user-20657f 16 August, 2024, 02:34:31

Chat image

user-20657f 16 August, 2024, 02:46:41

Any thoughts? Thanks so much guys!

user-07e923 16 August, 2024, 07:42:47

Hey @user-20657f, thanks for providing the screenshots. So it seems like you're trying to run the enrichment on 160 recordings x 6 times. While I don't know how long the average recordings are, or which segments of the recordings you're processing (in terms of recording length), I'd ask you to please remain patient 🙂. The enrichment won't be completed within only a few hours.

user-b6f43d 18 August, 2024, 14:47:23

If I try to extract using WinRAR, it's showing like this:

Chat image

user-f43a29 18 August, 2024, 15:49:33

Hi @user-b6f43d , could you give 7-zip a try?

user-f43a29 18 August, 2024, 18:19:57

Which video type are you looking at?

  • Video Renderer visualization
  • Timeseries Data + Scene Video
  • Native Recording Data video

When you say it "gets stuck", do you mean that the time in the video player keeps increasing (i.e., the play head keeps moving), but the video does not change? Or are the videos shorter in duration than expected?

What video player are you using? It should not be necessary, but just to be sure, does it work with VLC player?

user-b6f43d 18 August, 2024, 18:54:47

I am looking at the Video Renderer visualization; the video gets stuck but the time keeps increasing, and I am using VLC media player only.

user-bdc05d 19 August, 2024, 08:32:02

I see that the page/code for scanpath generation on Alpha Lab has changed (is it a big change or not? I'm not sure).

user-480f4c 19 August, 2024, 08:35:05

Hi @user-bdc05d, indeed we have updated the scanpath generation on our Alpha Lab page. The output remains the same, that is, you can generate both static and dynamic scanpaths. With this updated version, we provide an easier and more user-friendly tool (leveraging Google Colab) for faster scanpath generation. Additionally, with the updated code, you can generate scanpaths using the output of both the Reference Image Mapper and the Manual Mapper. You simply need to open the Google Colab link and follow the instructions in the tutorial. Let me know if you have any further questions 🙂

user-bdc05d 19 August, 2024, 08:36:53

Is the use of Google Drive obligatory? Even when running locally, or not?

user-480f4c 19 August, 2024, 08:38:58

In case you don't want to use Google Colab, you can find the same code available for download here

If you still want to have access to the old scanpath generation code, you can still access it here

user-bdc05d 19 August, 2024, 08:40:26

Okay, it is just that I have some trouble with Drive (even if I connect while running the code, it does not work). Must we download the Reference Image Mapper folder and then put it on Drive in order to use the new version of the code?

user-480f4c 19 August, 2024, 08:47:59

The current version offers two options:

  1. You can download the enrichment folder manually from Pupil Cloud and then upload the unzipped folder to your Google Drive. Then, on Google Colab you can type the folder's name (in Google Drive Path field in the attached image), and then run the cells to generate the scanpaths.
  2. Instead of manually downloading your enrichment folder from Cloud and manually uploading to your drive, you can have this process done automatically. Here's how: Select Retrieve the enrichment automatically (see image). You can use a Pupil Cloud API token to have the enrichment loaded into Google Drive automatically. You need to obtain a developer token from Pupil Cloud (click here to obtain yours). Then you simply need to copy paste the enrichment URL and everything is done automatically.

Chat image

user-bdc05d 19 August, 2024, 08:50:10

Okay, thanks, I will try this! And thanks also for the old scanpath code 🙂 The change is just in the process but not in the result, so if I don't manage to do it I can use the old one 🙂

user-480f4c 19 August, 2024, 08:51:32

Correct, the changes concern only the process. The results are the same - you'll get static and dynamic scanpaths in either case. If you run into any issues while trying it out, let me know 🙂

user-bdc05d 19 August, 2024, 08:59:29

I don't have an issue, but the previous code was much better for my local use than always copy-pasting the URL etc. for each of my subjects. One question, though: was the code for generating the dynamic scanpath (video) updated? Because it was faster than what I currently have (seconds vs. several dozen minutes)?

user-480f4c 19 August, 2024, 09:02:21

yes, the current version offers a faster generation of scanpaths.

Please note that you don't need to run the code for each subject separately. If you have an enrichment that includes the data from 5 participants, the current version allows you to select which participants will be included in the scanpath visualization (see the section Select The Wearers To Be Included in the Google Colab notebook).

user-bdc05d 19 August, 2024, 09:05:32

No, I have a separate project for each subject; that's why I said that (for my case I needed to separate them).

user-480f4c 19 August, 2024, 09:08:30

right! thanks for clarifying. In that case, yes, you'd have to run the notebook separately for each enrichment folder. However, that was the case for the old scanpath code - you had to select one enrichment folder every time you'd run the code.

The decision is ultimately yours, feel free to use the option that best meets your needs. In terms of results, though, these are the same in either case 😉

user-bdc05d 19 August, 2024, 09:11:25

Yes, but I had changed that to just run on all my folders, so that was quick. I'm not sure I can do that easily with this new one, but the processing speed is so much better! 🥲

user-934d4a 19 August, 2024, 10:12:33

I have a question about the data that comes out when performing the head pose tracker.

On the one hand we have the rotation in X/Y/Z and on the other hand the Pitch, Yaw, Roll.

What rotations do each one describe?

https://docs.pupil-labs.com/neon/neon-player/head-pose-tracker/#head-pose-tracker

user-f43a29 19 August, 2024, 10:21:44

Hi @user-934d4a , they are two different ways of specifying the exact same rotation.

They both specify the rotation of the head.

user-b3b1d3 19 August, 2024, 11:20:10

Hello, I have a problem when preprocessing data with Neon Player. If I use the fixation plugin to extract the fixations and try to export the data, it crashes with the following traceback. Thanks in advance:

user-07e923 19 August, 2024, 13:20:39

Hey @user-b3b1d3, thanks for reaching out 🙂 May I ask which Neon Player version you are using, and what's your operating system?

Also, did the fixation detector finish running (i.e., finish detecting all fixations) before you tried exporting the data?

user-b3b1d3 19 August, 2024, 11:20:13

Chat image

user-d2d759 19 August, 2024, 23:27:19

Hi there, my Neon eye tracker is having sensor failures repeatedly, even after unplugging and plugging it back in. What shall I do to resolve this? Thanks

Chat image

nmt 20 August, 2024, 02:54:06

Hi @user-d2d759! Please open a ticket in https://discord.com/channels/285728493612957698/1203979608563650650 and we can do some debugging 🙂

user-baddae 21 August, 2024, 15:26:49

Is there any way to get the real-time audio signal from the Neon, to see when an auditory signal was produced, with an associated timestamp?

user-d407c1 21 August, 2024, 15:40:17

Hi @user-baddae ! The audio stream is currently not exposed in the real-time API. If you want to see this implemented, feel free to upvote the feature request here: https://discord.com/channels/285728493612957698/1226973526947266622

user-baddae 21 August, 2024, 15:41:57

If I record the real time data and save the real-time video feed, can I analyze the audio stream from there offline?

user-d407c1 21 August, 2024, 16:59:40

Just to clarify, the audio stream would be encapsulated in the video stream as a channel, but this is currently not implemented as I mentioned. So, you won’t be able to obtain it from the streamed video.

With that said, if you have audio enabled on the phone and you start the recording by pressing the button or programmatically, the video stored on the device will include the audio signal.

From there, you can export the stored video or, if you have Cloud uploads enabled, you can download it from Cloud for further analysis.

If you need more guidance on how to access the audio channel, let us know

user-baddae 21 August, 2024, 15:59:06

Or is it possible to collect data in real time but also save it to Pupil Cloud to look at the events and analyze further?

user-baddae 22 August, 2024, 07:50:47

And I can have the recording on the phone while still doing real-time collection, correct?

user-d407c1 22 August, 2024, 08:02:18

Yes, you can start/stop the recording by including the following in your code.

import time

from pupil_labs.realtime_api.simple import discover_one_device

# Look for devices. Returns as soon as it has found the first device.
print("Looking for the next best device...")
device = discover_one_device(max_search_duration_seconds=10)
if device is None:
    print("No device found.")
    raise SystemExit(-1)

print(f"Starting recording")
recording_id = device.recording_start()
print(f"Started recording with id {recording_id}")

time.sleep(5)

device.recording_stop_and_save()

print("Recording stopped and saved")
# device.recording_cancel()  # uncomment to cancel recording

device.close()

Check more here

In fact, I strongly recommend that you rely on the recorded data rather than only on the streamed data. This is mainly because your network can lose packets.

user-baddae 22 August, 2024, 09:15:46

Thank you! Could you give me some insight on how to access the audio channels afterwards from this recording?

user-d407c1 22 August, 2024, 09:43:30

To access the audio channel, you would typically use ffmpeg or pyav (a python binding for it). A straightforward way to work with these libraries is through our pl-neon-recording library, which simplifies their usage. You can find an example of how to access the audio channel and detect the loudest sound in this example script.
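
As a rough sketch of the generic approach with PyAV (the video path below is a placeholder - point it at the scene video of an audio-enabled recording exported from the phone or downloaded from Cloud):

import av
import numpy as np

container = av.open("path/to/scene_video.mp4")
audio_stream = container.streams.audio[0]

# Decode the audio channel into one numpy array
samples = [frame.to_ndarray() for frame in container.decode(audio_stream)]
audio = np.concatenate(samples, axis=-1)

print(f"Decoded audio array with shape {audio.shape} at {audio_stream.rate} Hz")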

user-37a2bd 22 August, 2024, 11:52:51

Hi team, I am about to downgrade my OnePlus 10 pro companion device from Android 14 to Android 13. Could you guys please tell me how to backup the data on the companion app? Is there a particular way I should do this or is it just a simple copy and paste? How do export all the recordings properly? I would like to keep an offline backup since some of the projects are still in progress. I know that the data is already on the cloud but just as a fail safe.

user-480f4c 22 August, 2024, 12:00:24

Hi @user-37a2bd - thanks for reaching out. If you want to save the recordings locally, you need to simply transfer the recordings from the phone to your computer. Please refer to these instructions.

user-480f4c 22 August, 2024, 13:18:11

Sure, if you have any issues feel free to reach out here or via email at [email removed]

user-37a2bd 23 August, 2024, 07:03:35

Hi Nadia, I have a lot of recordings on the phone and when I tried to export all of them the phone gave me a message saying it has insufficient space to perform the export. What should I do?

user-dcc042 22 August, 2024, 16:48:43

Hello! I'm considering purchasing a new laptop for research. Are there any specific computing specs and requirements that I might want to look for if I'm conducting research with the Neon eye tracker?

user-480f4c 22 August, 2024, 16:55:39

Hi @user-dcc042! Neon connects to a phone (included in the full bundle we ship, regardless of the frame you choose), so a laptop is not needed for recording Neon data. In terms of analysis, Neon recordings can be analyzed on Pupil Cloud, our online platform, which is accessible from any browser and its usage is not dependent on laptop specs.

If you want to learn more about Neon's software ecosystem, feel free to explore our docs, or send us an email to info@pupil-labs.com and we can schedule a demo and Q&A call.

user-79486f 23 August, 2024, 01:44:01

Hey Pupil Labs! πŸ‘‹

I'm a university researcher, and we're looking to buy 4 or 5 pairs of Neon eye-tracking glasses for a project. Our budget is pretty tight, so I was wondering if there are any discounts available if we go for 5 pairs? Any help or advice would be greatly appreciated! 😊

Thanks!

nmt 23 August, 2024, 02:25:11

Hi @user-79486f 👋. Great to hear you're interested in Neon! For your query, please reach out to [email removed] and someone will get back to you. Have you already seen a demonstration of Neon? If not, we can also arrange a video call once we've got your email 🙂

user-b6f43d 23 August, 2024, 06:37:25

Hey Pupil Labs, what is the duration threshold for identifying gaze as a fixation?

user-07e923 23 August, 2024, 06:50:03

Hey @user-b6f43d, our fixation detector is velocity based. In the first step, the detector computes how fast gaze changes (in pixels / sec) between gaze samples.

Once the change falls below a certain velocity threshold, the detector then determines how many samples stay below the velocity threshold. In this second step, the detector takes into consideration the time across samples, which is about 70 ms for Neon.

If you'd like to modify the velocity parameters, you'll have to do it offline. See https://discord.com/channels/285728493612957698/1047111711230009405/1253738317757812866

user-b6f43d 23 August, 2024, 07:09:39

And there is also a time threshold? Why so?

user-b6f43d 23 August, 2024, 07:43:48

Can you explain why this is happening?

user-07e923 23 August, 2024, 08:10:29

As described in the whitepaper, the fixation detector is a velocity-based algorithm with some minimum threshold to be considered.

How are those thresholds applied? Well, if you exceed the velocity threshold for the minimum saccade duration, then you are (still) in a saccade, which lasts for as long as you are above the velocity threshold.

In contrast, for a movement to be classified as a fixation, the gaze velocity must be below a different threshold, indicating that the eyes are relatively stationary.
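
As a small illustrative sketch (not the official detector), you could compute sample-to-sample gaze speed in px/s from the Timeseries gaze.csv and see how a velocity threshold separates fixation-like from saccade-like samples (the threshold value below is a placeholder, not the official one):

import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")

dt_s = np.diff(gaze["timestamp [ns]"].to_numpy()) / 1e9
dx = np.diff(gaze["gaze x [px]"].to_numpy())
dy = np.diff(gaze["gaze y [px]"].to_numpy())

speed_px_s = np.hypot(dx, dy) / dt_s
velocity_threshold = 900.0  # placeholder value

print("Fraction of samples below the threshold:",
      np.mean(speed_px_s < velocity_threshold))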

user-328c63 23 August, 2024, 14:32:52

I have recently rerun computations on some of my earlier subjects to download saccade data, but now I realize that by doing so, their 3d_eye_states.csv is now empty. Would there be a way to recover this data?

user-07e923 23 August, 2024, 14:39:14

Hi @user-328c63, what do you mean by "rerun computations"? Do you mean that you've applied the gaze offset correction in Pupil Cloud?

user-934d4a 26 August, 2024, 13:42:05

Hi Pupil Labs team,

I have been working on implementing the Map Gaze Into a User-Supplied 3D Model functionality. However, I noticed that the gaze collision visualization in the 3D environment does not seem to be enabled. Is there any way to integrate this functionality?

Thank you in advance for your help.

Chat image

user-f43a29 26 August, 2024, 13:45:00

Hi @user-934d4a , great to see you are using this!

May I ask what you mean by "not enabled"? Can you show the result for your scene? If it needs to remain private, you can send it to me in a DM.

user-934d4a 26 August, 2024, 13:52:33

Hi Rob,

I mean that after reviewing the Python scripts, I couldn't find any reference to this functionality. Additionally, in the assets, the 3D object that should represent the intersection with the scene is missing; only the glasses and the ray are present.

Chat image

user-cdcab0 27 August, 2024, 08:24:00

Hi, @user-934d4a - I think what you're looking for is in the Blender addon

user-876d7f 27 August, 2024, 08:18:05

Hi Pupil Labs team, I have a question regarding the Marker Mapper. In my project I will acquire experiment data from many participants performing tasks on a screen. I expect to end up with about 150 or more videos, each with a duration in the range of 45 minutes up to one hour. All these videos use the Marker Mapper, and my understanding was that I put them in the same project in Pupil Cloud, where I create the enrichment with the Marker Mapper. However, testing this procedure I learned that the results of the Marker Mapper, e.g. gaze relative to surface, are extracted into only one file, including all the project data. Intuitively I expected to get one file for each video. Is there a possibility to change this, so I get one enrichment file for each video? I'm afraid that otherwise the data table would become very big and hard to handle. Thanks for your support

user-480f4c 27 August, 2024, 08:58:14

Hi @user-876d7f! Indeed, if you include e.g. 10 recordings to a project and run a Marker Mapper enrichment on these recordings, then your data export format will have the mapped gaze and fixations of all these recordings in one file. However, it is easy to handle the data since the csv file has a column with the recording id associated with each data point (please refer to the Marker Mapper Export Format).

Although it is possible to run an enrichment for each video, this wouldn't be an optimal way for analyzing your data. I recommend running the enrichment on all your recordings, download the data, and then you can handle the data using Python's pandas library to create individual csv files for every recording. Here's a snippet:

import pandas as pd

# Load the CSV file
df = pd.read_csv('your_marker_mapper_gaze_data.csv')

# Get unique recording ids
recording_ids = df['recording id'].unique()

# Loop through each unique recording id and save the corresponding rows to a new CSV
for recording_id in recording_ids:
    df_filtered = df[df['recording id'] == recording_id]
    df_filtered.to_csv(f'recording_{recording_id}.csv', index=False)

user-b6f43d 28 August, 2024, 09:37:13

What does saccade amplitude in degrees mean? Degrees with respect to what?

Chat image

user-480f4c 28 August, 2024, 09:43:05

Hi @user-b6f43d - amplitude [deg] is a float value representing the amplitude of the saccade in degrees of visual angle. You can find a detailed description of all the data provided for Neon recordings in our docs

user-a55486 28 August, 2024, 12:44:51

Hi everyone! We are working with Neon output data but found the samples are not equally spaced in time. We took the timestamps and computed the diff between subsequent timestamps, and it seems that usually the diff is ~5 ms (which makes sense given the 200 Hz sampling rate), but it could also be 10 ms (one missed sample?). Extreme values include 175 ms. If we want to convert it to a continuous signal (e.g. to be incorporated as an EEG channel), what would you recommend we do?

user-cdcab0 28 August, 2024, 14:09:25

Hi, @user-a55486 - are you using data pulled from the phone in its native recording format? It is true that a frame of data may occasionally be skipped during real-time processing on the device. Recordings that are uploaded to Pupil Cloud are re-processed there at the full 200 Hz, and those extreme values should not be present in that data.

Another option is to interpolate the data at whatever frequency/timestamps you want using pl-neon-recording:

import numpy as np
import pupil_labs.neon_recording as nr
from pupil_labs.neon_recording.stream import InterpolationMethod

recording = nr.load('path/to/recording/folder/')
sample_rate = 200

interpolated_times = np.arange(recording.gaze.ts[0], recording.gaze.ts[-1], 1 / sample_rate)
interpolated_eyestates = recording.gaze.sample(interpolated_times, InterpolationMethod.LINEAR)

timestamps = interpolated_eyestates.ts
diff = np.diff(timestamps) / 1e6
print(np.unique(diff))

Output:

[5.00011444e-09]
user-a55486 28 August, 2024, 12:47:32

timestamps = eye_states['timestamp [ns]']
diff = timestamps.diff() / 1e6

user-a55486 28 August, 2024, 12:48:19

print(diff.unique())

[ nan 5. 5.007 4.993 4.999 5.004 4.996 175.245 5.125 5.012 4.987 5.005 4.995 5.001 4.994 5.003 4.997 5.132 5.006 10. 4.883 5.117 5.009 4.991 5.002 5.119 4.998 5.122 5.011 4.988 5.129 4.992 5.013 4.986 5.01 4.99 5.124 5.008 4.989 5.015 5.11 5.116 9.999 4.985 4.891 5.109 5.121 5.016 4.984 5.111 5.12 10.125 5.123 10.011 5.112 9.997 4.888 5.113 5.128 10.007 5.13 5.126 10.005 5.118 5.114 5.014 5.139 5.017 4.887 5.115 10.002 4.882 4.983 4.878 5.134 5.136 4.885 5.127 4.89 4.881 4.884 10.008 5.135 5.137 4.886 9.993 5.142 9.998 10.004 5.133 5.131 10.01 5.141 9.995]

user-a55486 28 August, 2024, 14:21:12

Hi @user-cdcab0 Thanks for the reply! FYI, we are using the processed recordings from Pupil Cloud but we still see such great gaps. I wasn't aware of pl-neon-recording and it does seem like a sensible solution too!

user-cdcab0 28 August, 2024, 15:05:25

You're welcome 🙂

Do you notice those large gaps in timestamps in multiple recordings or just one of them?

user-a55486 28 August, 2024, 15:08:11

in multiple ones

user-a55486 28 August, 2024, 15:14:17

Actually I downloaded the 'Timeseries CSV and Scene Video' option instead of 'Native Recording Data'. Maybe that's the reason why I saw gaps?

user-cdcab0 28 August, 2024, 17:11:17

The timeseries CSV is cloud-processed, so it shouldn't have those gaps. Do you mind opening a ticket in 🛟 troubleshooting?

user-cdcab0 30 August, 2024, 16:51:59

For anyone who may encounter this in the future, it turns out that the large time deltas occur at the beginning of the recordings - which is expected to have some lag because sensors are starting up, files are being opened for writing, etc.

The reason these large gaps are not present in native recording data (and hence, not seen with pl-neon-recording) is because gaze computed on the companion device doesn't start until after this lag has already cleared out. Many eye camera frames will be recorded without having gaze computed on the device. Whereas cloud computes gaze on every single frame of eye video from the very beginning.

user-23177e 29 August, 2024, 09:16:52

Hi, I'm relaying this inquiry from one of our researchers. They noted that the Just act natural frame (the newer 2024 version) still gets hot after being worn for a prolonged time (30+ mins). We recently also received a ready-set-go frame that has replaceable foam that fits over the module. Could this also be applied to the Just act natural frame?

user-480f4c 29 August, 2024, 10:13:31

Hi @user-23177e ! May I ask what is the ambient temperature during use and if you are using the heatsink version of the Just Act Natural frame? https://discord.com/channels/285728493612957698/733230031228370956/1200298162242465842

user-23177e 29 August, 2024, 09:18:50

And, if so, would Pupil be able to provide these foam pads?

user-23177e 29 August, 2024, 10:30:08

I will have to check if they made any temp measurements, if not, we will do so. The frames were ordered on 22-02-2024. I am not sure if they are the new version

user-480f4c 29 August, 2024, 10:41:22

It looks like you have the frames with the heatsink.

Please note that the module might get a bit warm with this frame - however this should not affect the Neon App functionality.

user-23177e 29 August, 2024, 10:33:53

Chat image Chat image

user-23177e 29 August, 2024, 10:59:38

Thanks for checking. The main issue is the heat that does make it uncomfortable for participants. Would a foam piece work for this frame? The one that also is included with the ready-set-go frame?

nmt 29 August, 2024, 11:11:40

Hey @user-23177e - There will be no issue adding a small piece of foam to the frame, if you want to see whether it improves comfort for your participants. Just make sure it doesn't obscure the eye cameras, but I doubt that would really be an issue.

user-23177e 29 August, 2024, 12:12:12

Great thanks for the response!

user-688047 30 August, 2024, 04:25:30

Hey there! A quick question regarding Pupil cloud, is it possible to generate AOI heat maps for 'events' across a bunch of recordings rather than the whole recordings themselves? Thanks in advance

nmt 30 August, 2024, 04:41:29

Hey @user-688047 👋. Welcome to our Discord server!

Yes, you can choose to run enrichments and generate AOI heatmaps between specific 'events' across multiple recordings. We call these periods between events 'sections', and they're incredibly useful when you want to focus your analysis on certain parts of the recordings.

Read more about that here: https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/#enrichment-sections

user-688047 30 August, 2024, 06:05:30

Great thanks!

user-46e202 30 August, 2024, 15:15:21

Hello. I am collecting data for an experiment using Neon eye trackers and wanted to run enrichments on Pupil Cloud. However, I faced this error when I checked the recordings on Pupil Cloud.

Chat image

user-46e202 30 August, 2024, 15:16:52

Do I have to worry about my recordings not coming back?

nmt 30 August, 2024, 15:27:50

Hi @user-46e202 👋. Please open a ticket in 🛟 troubleshooting and we can take a look for you!

user-20657f 30 August, 2024, 16:28:50

Could someone please help me define what a fixation is for Neon? I know it is a fixed x, y coordinate at a given location, but for how long (200 ms?)

user-d407c1 30 August, 2024, 18:21:02

Hi @user-20657f πŸ‘‹ ! You can find the fixations definition in our documentation under Data Collection > Data Streams > Fixations & Saccades. There you will also find a link to our whitepaper and a publication describing the methodology more in detail. The algorithm is velocity based but the minimum fixation duration threshold is set to 70 ms, you can find the thresholds here https://discord.com/channels/285728493612957698/1047111711230009405/1276446120192507904

user-20657f 30 August, 2024, 22:29:47

or the other pairs

End of August archive