👓 neon



user-bbd165 01 September, 2024, 15:34:36

Hi, just a quick and hopefully easy question: what is the academic reference I should use for your equipment? Thanks

user-07e923 01 September, 2024, 16:01:31

Hey @user-bbd165, that's a great question! It seems different researchers cite the eye trackers differently. Please check out the publications page for some inspiration 🙂

nmt 02 September, 2024, 01:18:01

Hi @user-bbd165! I'll assume you're referring to Neon, since we're in the 👓 neon channel 😄. Relevant details for citations can be found on this page of our docs. (That page has information for our other eyetrackers as well)

user-bbd165 01 September, 2024, 16:24:30

Thanks!

user-bbd165 02 September, 2024, 12:20:37

Perfect, thanks!

user-f28210 03 September, 2024, 11:06:12

The Neon Companion app crashes on the phone

user-f43a29 03 September, 2024, 11:08:48

Hi @user-f28210 , could you open a support ticket in the 🛟 troubleshooting channel?

user-f28210 03 September, 2024, 11:12:34

The Neon Companion app crashes on the phone #🛟┃troubleshooting

user-08df76 03 September, 2024, 12:25:31

Hello, I'm currently using the Pupil Labs device paired with the Neon Companion app on a Motorola phone. Unfortunately, only the video of the outside camera is uploaded to the cloud, but I need the pupil diameter data. Do you have any idea why this happens?

user-d407c1 03 September, 2024, 12:35:36

Hi @user-08df76 👋 ! If the eye videos 👀 are not being recorded or uploaded, it might be due to a lack of permissions to access the eye cameras.

When you first connect Neon to your phone/companion device, a permission dialog should appear for each camera. Make sure to select “always” when granting these permissions. If this step was missed, the app won’t have access to the cameras, and you won’t receive the eye video feed needed for gaze and pupillometry data.

To resolve this, please update the app to the latest version from the Play Store. Then, go to the app’s settings by pressing and holding the app icon on your home screen, and check if all permissions are granted. If they are not, grant them, and try clearing the app’s cache.

If these steps don’t solve the issue, please open a ticket on 🛟 troubleshooting and we can continue with the troubleshooting.

user-d407c1 03 September, 2024, 12:55:10

@user-08df76 just to clarify, do you see any error in Cloud? Or is it simply that you were expecting the eye videos to be visible on Cloud?

Note that while the eye videos can't be played back in Cloud, they are available in the Native Recording Format. Furthermore, pupil size is available directly in the Timeseries Data, in the 3d_eye_states.csv file.

user-08df76 03 September, 2024, 21:48:39

Hello Miguel, yes I was expecting the eye video to be visible in the cloud. No error is shown. Can you further explain how I get access to my pupil size data, please?

user-934d4a 03 September, 2024, 13:04:53

Hi Pupil team! I wanted to ask if external IR light sources can interfere with the Neon and vice versa.

We would also like to know how well the user view camera works at night.

Thanks!

user-d407c1 03 September, 2024, 13:23:58

Hi @user-934d4a 👋 ! Neon has two IR LEDs on the module, with a wavelength of ~860nm (centroid 850nm ± 42nm Δλ) in a constant pattern. You can see more about their distribution on this message: https://discord.com/channels/285728493612957698/1047111711230009405/1182293610813730846

The IR illumination is designed to capture eye images even in complete darkness, with the eye cameras equipped with a filter that allows only certain wavelengths to pass through. While other IR sources may affect the brightness of the eye cameras, they generally should not interfere with NeonNet’s ability to provide accurate gaze data or detect eye states.

Is there something specific you’re concerned about?

About the scene camera during the night, you can tweak the exposure to improve your settings but the image quality and whether you would be able to distinguish objects will largely depend on the illumination of the environment.

If your plan is to record in almost complete darkness, my recommendation would be to sync Neon with an IR camera.

user-934d4a 03 September, 2024, 14:39:23

Thanks Miguel! That's the information I needed.

user-934d4a 03 September, 2024, 14:43:34

It's because we are using another tracking system with a similar wavelength, and we were wondering if it could interfere with the corneal reflection.

nmt 04 September, 2024, 00:24:20

Hi @user-934d4a! Slight clarification: Neon doesn't employ corneal reflection. Instead, we use a deep-learning approach to generate gaze data. Check out the whitepaper for more information. The IR illuminators do ensure we capture images of the eyes even in very low illumination, and generally, external sources of light don't affect the gaze pipeline—it even works in direct sunlight! But if you're worried about a specific IR source, the best approach would be to run a pilot in the testing environment.

nmt 04 September, 2024, 00:15:18

Hi @user-08df76! Pupil size data is contained within the 'Time Series Data' download. To get it, just right-click on a recording, click on 'Download' -> 'Time Series Data', and it will be downloaded to your computer. Pupil size, specifically, is in the 3d_eye_states.csv file.

user-08df76 04 September, 2024, 11:53:13

Hey Neil, thank you for your help, it worked now! Is there any possibility to see the pupil size per minute?

nmt 04 September, 2024, 00:15:55

You can read more about the contents of the file in this section of the docs: https://docs.pupil-labs.com/neon/data-collection/data-format/#_3d-eye-states-csv
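For reference, a minimal sketch of averaging pupil diameter per minute from that file with pandas. The column names are assumed and may differ slightly between export versions (e.g. older exports may have a single "pupil diameter [mm]" column):

```python
import pandas as pd

eye_states = pd.read_csv("3d_eye_states.csv")

# Convert the UTC nanosecond timestamps into a datetime index
eye_states["time"] = pd.to_datetime(eye_states["timestamp [ns]"], unit="ns", utc=True)
eye_states = eye_states.set_index("time")

# Average pupil diameter per minute (column names assumed; check your export)
per_minute = (
    eye_states[["pupil diameter left [mm]", "pupil diameter right [mm]"]]
    .resample("1min")
    .mean()
)
print(per_minute)
```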

user-7df65a 04 September, 2024, 06:23:26

Hi Pupil labs ! Can I connect Neon directly to a PC (without using the Neon Companion App) and display the scene video with the overlaid gaze circle on the PC? I would like to display the video and gaze on the PC without going through the Neon Companion App via local network communication, in order to minimize delays and video distortion.

user-f43a29 04 September, 2024, 11:50:03

Hi @user-7df65a , have you considered connecting via Ethernet cable with the tested USB hub? You can get transmission latencies of roughly 0.4-0.5 ms then.

Otherwise, directly connecting the headset to the PC will not get you gaze as you would receive it from the App, since the eye image processing is part of the Neon Companion App, running on the phone.

You could try using Neon as if it were Pupil Core, but then you must calibrate, including when the headset slips, and you no longer use NeonNet, but rather Pupil Core's pipelines.

user-7df65a 04 September, 2024, 12:09:43

Actually, the app I want to develop is one that transmits video between users walking outdoors, similar to this chat. https://discord.com/channels/285728493612957698/1187405314908233768/1280689744342155368 So, connecting via Ethernet is not an option. Therefore, I plan to develop it based on the Real-Time API! Thank you again for answering my question! @user-f43a29

user-f43a29 04 September, 2024, 14:10:07

Hi @user-7df65a , if I may ask a follow-up question: were you planning to connect Neon to a laptop or something like a Raspberry Pi and put that in a backpack?

user-c541c9 04 September, 2024, 13:22:57

Is there any difference between correcting a user's gaze offset "before" recording via the app (i.e. this) vs. correcting it post-hoc, e.g. in Neon Player (this)? It was mentioned somewhere that this correction is applied on top of the computed gaze output, so it should be identical. Should I leave it blank in the app and do it later in Neon Player (or Cloud)?

user-d407c1 04 September, 2024, 13:45:05

That depends on your needs. While you can do it Post-Hoc, doing it on the Companion Device has the benefit of being able to use that offset in real-time, and it would be carried over in the Native Recording format data and the Cloud data.

On the other hand, you might be more accurate at defining the offset with a mouse than with your finger on the Companion's screen. If you do it in the Cloud, the data will be reprocessed automatically, and the TimeSeries will use the new offset.

Note that the Native Recording Format won't be altered. Also, if you ran any enrichment, you would need to recompute it.

If you do it with Neon Player, only the exported data from Neon Player will carry this new offset correction.

user-0001be 04 September, 2024, 13:57:40

Hi guys, we’re writing a paper and would like to know the exact definition of a fixation used by Neon. Is it a threshold of within 1.5 degrees of visual angle and more than 70 ms?

user-d407c1 04 September, 2024, 14:09:26

Hi @user-0001be ! Neon's fixation detector uses an adaptive threshold; you can find all the information you need in the whitepaper and in the accompanying peer-reviewed paper, which you can cite.

But to answer your question: the 70 ms is correct, but as you can see in this table, that minimum saccade amplitude is from 🕶 invisible, not 👓 neon.

Chat image

user-0001be 04 September, 2024, 14:14:11

Thanks @user-d407c1 ! I got that mixed up. So for fixations with Neon, in general, would it be fair to put it down as less than 1 degree of visual angle and longer than 70 ms? Or should I just say it's adaptive?

user-d407c1 05 September, 2024, 09:14:14

@user-0001be I don't know to what extent you would like to describe the detector in your paper or whether you want to simply cite the already published paper.

As you mentioned, fixations are typically defined as under 1.0° of visual angle, lasting longer than 70 ms, and under 750 px/s velocity (roughly 48°/s), but given the detector's adaptive nature, this does not fully portray it. The algorithm approaches a velocity-based algorithm with said parameters only in the limit of a static head pose and no body movement.

I would probably define it as:

Neon employs a velocity-based algorithm that adaptively identifies fixations, compensating for head and body movements by incorporating optic-flow-based velocity correction (derived from the IMU sensor), an adaptive velocity threshold, and event-based post-processing filters, as defined by [Drews & Dierkes 2024, Fixation detector 2024].

user-c541c9 04 September, 2024, 14:22:17

Thanks for covering the pros/cons of each.

I wonder if the correction remains useful only in a narrow zone about the optical axis of the scene camera. In the documentation (https://docs.pupil-labs.com/neon/data-collection/offset-correction/), the person is moving their head while they look at the blue stickers, so the gaze points (corrected and uncorrected) are all distributed in a small field about the centre of the scene.

This would also seem natural, but suppose we try doing all the seeing with the head stationary. Then the eyes will be forced to veer outside the central zone of the scene. I wondered how the correction would apply away from the centre.

I did an experiment where I fixated on a distribution of points on a plane at about arm's length. I tried to fixate without moving the head, and when I observed the gaze in the recording via Neon Player, I saw deviations in the gaze, and with manual correction applied, the reduction in error is not similar across all gaze points.

I believe this is to be expected, given the Neon accuracy report (Fig. 6). From the report, with or without offset correction, there is still a spatial distribution of errors increasing outwards in the field of view. Thoughts?

user-d407c1 05 September, 2024, 06:36:33

Hi @user-c541c9 ! The gaze offset correction is just a linear adjustment applied uniformly across the field of view to the gaze estimation. It doesn't vary at different eccentricities; thus, it will correct for general offsets across the whole field of view.

As you pointed out, and as noted in the accuracy paper, gaze estimation accuracy tends to decrease towards more eccentric areas. This typically happens with every eye tracker and can be observed in those with a wide field of view. Unfortunately, these peripheral inaccuracies cannot be corrected by just adjusting the gaze offset parameter.

As a side note, if you are mostly interested in visualising the peripheral visual content, you may want to undistort the scene camera image and gaze.
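For reference, one way to undistort a scene frame and a gaze point is with OpenCV, using the intrinsics shipped in scene_camera.json from the Time Series download. This is a minimal sketch; the JSON key names and file names are assumptions, so check them against your own export:

```python
import json

import cv2
import numpy as np

# Load scene camera intrinsics (key names assumed; verify in your export)
with open("scene_camera.json") as f:
    intrinsics = json.load(f)
K = np.array(intrinsics["camera_matrix"])
D = np.array(intrinsics["distortion_coefficients"])

# Undistort one exported scene frame
frame = cv2.imread("scene_frame.png")
undistorted_frame = cv2.undistort(frame, K, D)

# Undistort an example gaze point, keeping pixel coordinates (P=K)
gaze_px = np.array([[[950.0, 420.0]]], dtype=np.float32)
gaze_undistorted = cv2.undistortPoints(gaze_px, K, D, P=K)
print(gaze_undistorted.reshape(-1, 2))
```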

user-688acf 04 September, 2024, 15:42:47

Hello, once again Neon Player (version 4.2.2) crashes when exporting data from an older recording created with Neon Companion. Export on another machine with Neon Player version 4.1.4 works fine. Is there a way to download this version of Neon Player without building it from scratch? Also, I would - again - ask for a little bit of error handling... the crash practically destroys all progress and time taken to run fixation and surface detection. Thanks in advance

user-d407c1 05 September, 2024, 06:39:26

Hi @user-688acf ! You can download previous releases here but kindly note that the latest version 4.3.2 contains several fixes to prevent crashes when exporting fixations on surfaces.

PS. Did you know that you can reuse surface definitions?

user-7df65a 04 September, 2024, 23:56:59

@user-f43a29 Yes, my ideal setup was to use a Raspberry Pi in a bag and connect Neon to it. If the processing power of the Raspberry Pi is not enough, I would use a powerful laptop.

user-f43a29 05 September, 2024, 15:16:19

Hi @user-7df65a , since a Raspberry Pi has an Ethernet port, then you could use the Ethernet cable solution to stream data from Neon with low transmission latency, but it might be more effort than necessary here.

May I ask why the standard approach of using Neon with the Companion phone is insufficient in your case? What part of your setup requires the Raspberry Pi or laptop?

user-688acf 05 September, 2024, 06:52:26

Hey, great, thanks! Both appreciated 🙂 No, I did not know that. What about fixations that were already detected?

user-c541c9 05 September, 2024, 10:03:45

gaze-offset correction discussion

user-dd2e1c 05 September, 2024, 14:57:05

Hello Neon team, I got the gaze_positions.csv file from Neon Player. However, in this gaze data there are x values over 1600 px. I expected all gaze points to be inside 1600x1200 px, which is the front camera spec.

gaze_positions.csv Chat image

user-f43a29 05 September, 2024, 19:24:03

Hi @user-dd2e1c , gaze coordinates can go outside the boundaries of the scene camera image. We leave the decision to users about what data to retain & discard, since gaze data outside the image boundaries can still be valid and usable depending on research/use context.

However, 2000 px is quite large. Can you try clearing the Companion App’s cache? You can do this by long pressing the App icon, selecting “App Info”, and going into “Data and Storage”. Then, start the App again and make a fresh recording. Let us know if that resolves it.
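If it helps, a minimal sketch for flagging out-of-bounds samples in an export. The column names are assumptions: the Cloud gaze.csv uses "gaze x [px]" / "gaze y [px]", while the Neon Player export may use normalized coordinates instead, in which case the bounds are 0-1:

```python
import pandas as pd

# Column names assumed; adapt to your export (Neon Player vs. Cloud)
gaze = pd.read_csv("gaze.csv")
WIDTH, HEIGHT = 1600, 1200  # Neon scene camera resolution in pixels

in_bounds = (
    gaze["gaze x [px]"].between(0, WIDTH)
    & gaze["gaze y [px]"].between(0, HEIGHT)
)
print(f"{(~in_bounds).sum()} of {len(gaze)} samples fall outside the image")
gaze_inside = gaze[in_bounds]
```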

user-52e548 06 September, 2024, 01:01:33

Hello Team! I'm sorry if this is a known report, but I could not find it by searching, so I am reporting a phenomenon in which the Neon Companion app cannot be launched under certain conditions.

If I delete the Workspace selected in the Neon Companion app on Pupil Cloud, the app will not be able to be launched afterward. I was able to resolve this issue by uninstalling and then reinstalling the app, but please let me know if there is another solution to this issue so that I can refer to it in case I encounter a similar situation in the future.

user-d407c1 06 September, 2024, 08:37:46

Hi @user-52e548 ! That's not the expected behaviour. If you delete the workspace, or your user is removed from it, the next time you connect in your Companion App you should see the following message (see screenshot).

I've tried to replicate this on my end but could not reproduce that error. Could you share some additional details so that we can investigate further? Was there a recording uploading halfway? What version of the Companion App were you using?

Chat image

user-b6f43d 08 September, 2024, 18:33:10

What does the world timestamp in ns mean?

Chat image

user-d407c1 09 September, 2024, 06:33:32

Hi @user-b6f43d 👋 ! Each sensor is timestamped at the hardware level and synchronised to a master clock on the Companion Device. Thus, the timestamps in nanoseconds (UTC) will refer to different sensors depending on which file you're looking at. For example:

  • The timestamps in world_timestamps.csv indicate the timing of the scene camera frames.
  • The timestamps in gaze.csv show the timing of the eye images used to generate gaze point samples.
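For reference, since both files share the same UTC nanosecond clock, the streams can be paired by nearest timestamp. A minimal sketch with pandas (column names assumed; check your export):

```python
import pandas as pd

# Both files carry UTC timestamps in nanoseconds on the same clock
world = pd.read_csv("world_timestamps.csv").sort_values("timestamp [ns]")
gaze = pd.read_csv("gaze.csv").sort_values("timestamp [ns]")

# Attach the temporally closest gaze sample to each scene camera frame
matched = pd.merge_asof(
    world,
    gaze,
    on="timestamp [ns]",
    direction="nearest",
)
print(matched.head())
```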

user-b6f43d 08 September, 2024, 18:33:32

If I want to look at the timestamps of the recording, where should I look?

user-52e548 09 September, 2024, 05:30:06

Hi @user-d407c1 ! The message in the screenshot you attached did not appear; the app never even started in the first place. I agree that we need to review the details of the circumstances under which the error occurred. However, unfortunately, the device is currently out of my possession, so please understand that it will take about a week for my reply. In addition, I have no clear memory of whether or not there were recordings in the process of being uploaded to the cloud. Please understand that I have no way to confirm this now that I have deleted the Workspace and uninstalled (and re-installed) the Companion App. Of course, we can try to reproduce the situation after we get the device back.

user-d407c1 09 September, 2024, 06:37:06

Hi @user-52e548 ! Thank you 🙏 I hope you do not see this issue again, but if you do, we'd really appreciate it if you could provide those details. I tried to replicate the issue on my end but wasn't able to.

user-057596 09 September, 2024, 11:02:16

Can you purchase the Neon glasses with the Neon module included but without the Companion device? We have a project where we would like to use the Neon glasses with the Neon module but with the Core pipeline for analysis with our software, and we would need around 10 pairs of glasses.

user-d407c1 09 September, 2024, 11:10:22

That would be possible, @user-057596 ! Please reach out to sales@pupil-labs.com with that query.

user-057596 09 September, 2024, 11:10:50

Thanks Miguel

user-cc6fe4 09 September, 2024, 21:02:40

Hi! I am wondering how the world camera pixel coordinates work. What do the values mean? How should I read them? Thank you

user-f43a29 09 September, 2024, 21:10:39

Hi @user-cc6fe4 , when you say "how to read them", do you mean, how do you open the video in a programming language?

With respect to your other question, are you potentially trying to relate scene camera coordinates to positions or objects in the world? At least for the image itself, the coordinates specify the locations of its pixels. For Neon, each scene camera frame is 1600 pixels wide by 1200 pixels high, where pixel (0, 0) is in the top left hand corner, and the coordinates increase to the right and down. Pixel (800, 600) is at the center of the scene camera image.

The diagram in the Gaze section of the Data Streams docs depicts the coordinate system.

Please let me know if that helps.

user-876d7f 10 September, 2024, 08:11:52

Hi Pupil Labs team, I have a question regarding Pupil Cloud. Is there any limitation on storage in the cloud, either in storage size or number of videos, e.g.? I am asking to know if there's a cap that we should have in mind when processing our data. Thank you

user-b6f43d 10 September, 2024, 20:10:34

Hey Pupil Labs, studies say that raw pupil size data contains missing values due to eye blinks (which last roughly 70-300 ms) and some abrupt pupil size values due to hardware glitches. Some eye trackers return a zero value during the blink interval. Is this the case in Neon as well? Or have you applied any sort of interpolation or filtering before providing the csv file, or is it just the raw data?

user-f43a29 10 September, 2024, 21:03:01

Hi @user-b6f43d , Neon will still try to estimate pupil size when the eyes are closed or when you blink. You can filter the data during blinks, for example. We do not automatically filter out data for you (nor do we interpolate) during blinks, since different paradigms and different research groups have different needs. In some cases, you filter just the data during the blink, whereas in others, you might also filter the data 150 ms pre- and post-blink.

The 3D Eye State stream contains the data as recorded by Neon.
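For reference, a minimal sketch of the kind of blink filtering described above, using blinks.csv from the same Time Series download and a 150 ms margin before and after each blink (column names assumed; check your export):

```python
import pandas as pd

eye_states = pd.read_csv("3d_eye_states.csv")
blinks = pd.read_csv("blinks.csv")

MARGIN_NS = 150_000_000  # 150 ms margin before and after each blink
keep = pd.Series(True, index=eye_states.index)
timestamps = eye_states["timestamp [ns]"]

# Drop samples falling inside any blink interval (plus the margin)
for _, blink in blinks.iterrows():
    start = blink["start timestamp [ns]"] - MARGIN_NS
    end = blink["end timestamp [ns]"] + MARGIN_NS
    keep &= ~timestamps.between(start, end)

eye_states_filtered = eye_states[keep]
```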

user-cc6fe4 10 September, 2024, 21:09:03

Thank you Rob, the part about the coordinates specifying the locations of the pixels is already helpful. Let me be more specific: I wonder what happens to the coordinates considering head movement? If one rotates the head, for example to the right, there will be a new center of the image, etc. Do you see head movement in the coordinates? Or do they stay the same, and one needs to look at the data from the head position? I have an area of the video to be detected; however, the markers are not being detected by the Marker Mapper in the cloud, so I need to figure out a different way to detect it. By the way, do you have any suggestions? Thanks again

user-f43a29 10 September, 2024, 21:24:50

Hi @user-cc6fe4 , first, let me see if I can clarify the scene camera coordinates better. There are two things to consider:

  • How the pixels are arranged in the image.
  • What the pixels "see", so to say.

As I move my head around, the pixels in the image always remain in the same location in scene camera coordinates. They are arranged in the same way in the image, but they now "see" different parts of the world, which is reflected in the color at each pixel.

In other words, the center of the scene camera image will always be at pixel coordinate [800, 600], but this pixel is "aimed" at different points in the world as you turn your head around.

Regarding Marker Mapper, as a general tip, if you cannot clearly see the markers in the scene video feed, then it is less likely that they will be detected well. Illumination conditions can also play a role. Can you share a photo of what the markers look like when viewed on Pupil Cloud? You can share it via DM if you prefer. Then, I can provide some specific suggestions.

user-b6f43d 11 September, 2024, 03:07:01

How is neon able to calculate the pupil size even when the eyes are closed or during blinks ?

user-f43a29 11 September, 2024, 07:47:01

Hi @user-b6f43d , it tries to calculate the pupil size during a blink (i.e., it "estimates" pupil size), but of course, it will not be reliable in that moment.

This is due to Neon's deep-learning nature. It is a property of deep-learning systems and models in general - they will make an estimate from the provided data. In the case of NeonNet, the data are the images from the eye cameras.

When the eyes are visible, as is the case when you wear Neon normally, then the estimates are reliable and accurate, as assessed and validated in the accuracy test report. Otherwise, if you blink, then the data that Neon needs to make an accurate estimate is reduced, so you can filter out the estimates during this time, using the timestamps from the Blinks datastream.

Methods that will assist you in knowing when the eyes are closed for longer than a blink are underway.

user-787f23 11 September, 2024, 07:09:38

Hi pupil labs team, I have a question regarding AOIs. Is it possible to get the coordinates of defined AOIs? When downloading the timeseries csvs there are the x and y coordinates for fixations. I would like to manually compare whether a fixation falls within an AOI and for that I would need the coordinates of the AOIs I defined beforehand. However, I couldn't find them in any output. Thanks for your help!

user-f43a29 11 September, 2024, 07:55:16

Hi @user-787f23 , this is possible, but you currently need to use the Pupil Cloud API. This example from my colleague, @user-d407c1 , shows how to obtain this information and automatically add it to the gaze and fixations dataframes.

user-787f23 11 September, 2024, 11:25:02

great, thank you - we will have a look at that!

user-787f23 04 October, 2024, 05:59:32

Hi Rob, I tried the solution you suggested, but I don't think it's exactly what I need. The result of the script was that I had a new column in the fixation and gaze csv where the AOI name, where the fixation/gaze is located, was stated. So for fixation ID 5, for example, there was ‘AOI 3’.

But what I would like is to simply define the areas of interest in one enrichment and then get the x & y coordinates for these AOIs. I would then like to compare these coordinates manually with the fixation coordinates in the downloaded recording data (in the fixations.csv).

Is that possible?

Thank you for your help!

user-787f23 11 September, 2024, 11:27:37

Another question: I have seen that it is possible to use Segment Anything for an automated AOI detection. Are you also working on integrating Segment Anything 2 (https://ai.meta.com/sam2/) for videos and not only images in Pupil Cloud? If yes, do you already have a timeframe when that feature could be available to use?

nmt 11 September, 2024, 11:48:39

Hey @user-787f23! This isn't on the roadmap. But, can you add this to💡 features-requests? We may follow-up with some questions there 😄

user-97c46e 11 September, 2024, 17:37:11

Hi pupil labs team,

We are in the process of integrating a recently purchased neon device (paired with a Motorola Edge 40 Pro XT2301-5 companion device running Android 13) into our experiments.

Sending events to the device with the lowest possible latency is vital for data quality in our setup.

In our conversations prior to purchase, we'd been advised to use a USB dongle/hub with an ethernet port to achieve wired connectivity and avoid wireless network jitter when sending events. Specifically, https://www.amazon.de/PowerExpand-Ethernet-Adapter-Compatible-MacBook/dp/B08CKXNJZS?ref_=ast_sto_dp was shared as a device that had been tested by pupil labs and found to be appropriate.

I am a little unclear about configuring the companion device to use the Ethernet port on the dongle to receive messages. Specifically, the only place I see Ethernet in the companion phone's settings is the hotspot and tethering section, and Ethernet tethering appears to be greyed out (disabled). I was not able to locate any instructions in the documentation (apologies if it's there and I missed it).

Could you provide some guidance/steps that describe how one can configure the companion device for ethernet connectivity over the dongle that's been recommended?

user-f43a29 11 September, 2024, 18:13:28

Hi @user-97c46e , which Operating System will be running on the recording computer? Then, I can send the appropriate instructions.

user-97c46e 11 September, 2024, 18:22:37

Hi Rob, If possible, could we have instructions for both Linux (Ubuntu 22.04) and Windows 11? We are currently using Windows 11, but hoping to use Linux once we troubleshoot some external issues (related to interop between a DAQ card and linux ).

user-f43a29 11 September, 2024, 21:07:55

Hi @user-97c46e , sure.

First, note that you do not need a hotspot enabled nor do you need ethernet tethering. I would disable those options to reduce the chance of conflicts. You can also disable WiFi to reduce the chances that it conflicts with the Ethernet connection.

The instructions for Windows 11 are here: https://discord.com/channels/285728493612957698/1047111711230009405/1272483345137139732

Instructions for Ubuntu 22.04 below:

  • Open Settings and go to Network
  • Go to Wired connections, click the + sign to add a new Ethernet connection. Give it a name like “neon”
  • Under the IPv4 tab, choose “Shared to other computers”, click Apply/Done (see attached image)
  • Make sure your Neon and USB hub are properly connected. The order in which you plug the cables is important:
    1. Unplug all cables from the USB hub and unplug the hub from the phone. Close the Neon Companion app.
    2. Plug Neon into the port marked with 5Gbps on the hub.
    3. Start the Neon Companion App on the phone and wait for the "Plug in and go!" message
    4. Plug the USB cable of the Anker USB hub into the phone.
    5. Wait for Neon to be recognized.
    6. Now, connect the Ethernet cable to the USB hub and to an Ethernet port on your computer.
  • Next, you will need to activate the appropriate Ethernet connection/interface for your Neon to be accessible via the Real-Time API.
    1. In Settings → Network, click on the “neon” connection that was made in the previous steps. It will show a checkmark when it has been successfully activated.

Wait about 30 seconds to a minute. Open the "Stream" section of the Companion App to see if a connection has been established. If so, test if you can connect to the device with the discover_one_device function from the simple version of the Real-Time API. You should get a successful connection and can proceed with data collection.

Chat image
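Once the link is up, a minimal connectivity check from the recording computer could look like this, assuming the pupil-labs-realtime-api package is installed:

```python
from pupil_labs.realtime_api.simple import discover_one_device

# Search the local network (including the shared Ethernet link) for Neon
device = discover_one_device(max_search_duration_seconds=10)

if device is None:
    print("No device found - check cabling, app state, and the shared connection")
else:
    print(f"Connected to {device.phone_name} at {device.phone_ip}")
    device.close()
```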

user-f616a4 12 September, 2024, 06:25:09

Hi Pupil Labs! I am an undergraduate researcher for sip lab at Georgia Tech. My team has a Pupil Labs Neon. We were only able to stream the Gaze Data stream onto Matlab with LSL. Is there a way to stream all the different types of Data Streams that the Neon provides using Matlab LSL?

user-f43a29 12 September, 2024, 07:46:15

Hi @user-f616a4 , currently the following are streamed via our Lab Streaming Layer (LSL) outlet:

  • Gaze
  • 3D Eyestate
  • Pupil diameter
  • Events

Just to mention it for completeness, we recommend using the new integration in the Neon Companion App, not the previous lsl-relay. If you have the previous lsl-relay installed, I recommend removing it to reduce the chances of conflicts. See our Lab Streaming Layer docs for more details.

May I ask if you are also looking for the IMU data stream? If you would like to see an additional data stream added to our LSL integration, then feel free to open a 💡 features-requests .
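As a side note, if you want to double-check which outlets the Companion App is actually advertising on the network (independently of Matlab), a small Python check with pylsl could look like this; stream names vary, so nothing is hard-coded here:

```python
from pylsl import resolve_streams

# List every LSL outlet currently visible on the local network
for info in resolve_streams(wait_time=2.0):
    print(info.name(), info.type(), info.channel_count(), "channels")
```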

user-97c46e 12 September, 2024, 14:05:55

@user-f43a29 I've followed all the steps up until checking the streaming section of the Neon Companion app (mobile phone icon + sideways wifi icon on the top right, below the settings gear). When I check for connection information there, I see a "waiting for dns service" message. I'm doing this on Windows 10 Pro, build 19045.4780. From what I was able to gather from your message to another asker about Windows, the Neon Companion app is looking for an mDNS server to give it an IP address that it can be contacted at. Windows 10 (build 1803) and beyond have mDNS enabled by default. I surmised that your instructions about configuring the Ethernet connection on the computer have to do with allowing inbound traffic on UDP/5353 (mDNS queries) so that the Neon Companion app can ask the laptop to give it an IP address. I did confirm that mDNS is active using some of the queries mentioned here - https://rickardnobel.se/disable-the-non-dns-name-resolution-methods-in-windows/. My local LAN connection is currently using a self-assigned IP. Should I give it a static IP or some such?

user-f43a29 12 September, 2024, 15:30:33

Hi @user-97c46e , unfortunately, although newer builds of Windows 10 are advertised as having mDNS support, it is reportedly not full mDNS support, so I cannot say for certain that it is compatible. Full mDNS support is officially included in Windows 11.

If you try again and the "waiting for dns service" message reappears, then my colleague, @user-cdcab0 , has suggested force stopping the Companion App and restarting it.

If that does not work, you can also try the following:

  • Install Bonjour Print Services from Apple and restart. This may sound odd, but Bonjour is Apple’s implementation of mDNS, which should be more complete.
  • Install iTunes, which should have a newer mDNS client than the Print Services. Again, it might sound odd, but there is a chance it works.
  • Try “MyPublicWiFi”. The name may sound odd, but it does have an Internet Sharing feature that uses its own mDNS implementation. I’ve at least successfully tried it with Neon and the tested USB hub on Windows 11.

I cannot make any guarantees on these, as they are third party software. Since Windows 10 is no longer supported by Microsoft, a newer mDNS implementation will not be supplied by them, so if the built-in implementation does not work, then these are the only potential solutions that I know of.

If you go this route, let us know how it works out.

user-97c46e 12 September, 2024, 15:34:23

Thank you! I think I'll aim for Windows 11. I ended up tinkering with Windows 10 because I had a machine at hand.

user-f43a29 12 September, 2024, 16:32:37

Hi @user-97c46e , if you want to continue testing with Window 10 and the "waiting for dns service" message reappears, then my colleague, @user-cdcab0 , has suggested force stopping the Companion App and starting it again.

user-97c46e 12 September, 2024, 15:35:46

I have one question about the 'simple' version of the realtime_api. Parsing the documents suggests this is a synchronous API and the function calls to send events are blocking calls. Am I following that correctly?

user-f43a29 12 September, 2024, 22:05:26

The Simple API is internally using the Async API; it is just an easier interface.

You can use the Simple API on a separate process if blocking is an issue for your code. You could then synchronize with your main thread via a Queue.

If necessary, you can also consider setting the timeout_seconds argument to 0, but this is not necessarily needed, and the separate process would be the first thing to try.
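A minimal sketch of the separate-process approach mentioned above (the worker structure and event names are only illustrative):

```python
import multiprocessing as mp

from pupil_labs.realtime_api.simple import discover_one_device


def event_worker(queue):
    # Blocking calls live in this process, so the main loop never waits
    device = discover_one_device()
    while True:
        name = queue.get()
        if name is None:  # sentinel to shut down the worker
            break
        device.send_event(name)
    device.close()


if __name__ == "__main__":
    queue = mp.Queue()
    worker = mp.Process(target=event_worker, args=(queue,))
    worker.start()

    queue.put("trial_start")  # returns immediately in the main process
    # ... run the experiment ...
    queue.put(None)
    worker.join()
```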

user-97c46e 12 September, 2024, 15:38:04

And one question about the MATLAB support for the API. Is it reasonable to expect timing (local processing delays, not network transmission) using the MATLAB implementation of the API to be comparable to the Python implementation?

user-613324 12 September, 2024, 15:52:24

I have a follow-up question on kappa angle vs. gaze offset. If the gaze offset is not the kappa angle, does that mean the kappa angle is already considered and corrected during gaze estimation by your machine learning model? If that's the case, then what is the nature of the offset that this gaze offset process tries to correct for?

nmt 12 September, 2024, 17:56:04

Hi @user-613324! I'll step in briefly here for @user-d407c1.

Neon optimises for the population average, meaning that kappa angles are implicitly accounted for during gaze measurements.

That said, some wearers will deviate from the population average in terms of both kappa angle and other physiological factors.

For these wearers, the offset can be used to improve the accuracy of Neon's gaze measurements by reducing systematic errors that are consistent in both magnitude and direction across the field of view.

Important to note: For many use cases and wearers, the offset correction feature isn't needed. However, it can easily be trialled with a simple validation routine, e.g., asking the wearer to look at a known point in your experiment to determine if the offset is required. The offset can also be set post-hoc in Cloud or Neon Player.

I hope this helps!

user-97c46e 12 September, 2024, 19:16:09

@user-f43a29 , Thank you. I did try a few types of restarts including the companion app as suggested by @user-cdcab0 . I also tried restarts of the phone, and the laptop itself (from scratch - after disconnecting all the wires and reconnecting in the order prescribed) as well as just leaving the cables in place. The outcome did not seem to change.

user-97c46e 12 September, 2024, 19:20:18

@user-f43a29 I have also now repeated the process on a new (2 months old) Windows 11 machine. The message is the same: "waiting for dns service". One deviation in the messages I receive from the OS while following the instructions is that I don't get anything like the third message/popup/image you shared (attached below). It led me to wonder if I need to configure static IPs on the Ethernet connection on the computer (it's currently using a self-assigned IP).

Chat image

user-f43a29 12 September, 2024, 22:01:58

Hi @user-97c46e , static IPs should not be necessary.

Do I understand correctly that you are using the same USB<->Ethernet dongle with all computers?

user-97c46e 12 September, 2024, 19:23:00

One other thing to point out is that these laptops don't have an Ethernet port, so I'm using a USB<->Ethernet dongle to connect an Ethernet cable to them.

user-97c46e 12 September, 2024, 19:38:51

@user-f43a29 , lastly, my results with a Linux computer (Ubuntu 22.04) are the same. After configuring the computer for connection sharing per the instructions, I find the same behavior in the companion app ("waiting for dns service").

user-97c46e 12 September, 2024, 22:03:00

I tried two different USB<->Ethernet dongles, since I had them available and they seemed like a potential point of failure. The results were unchanged.

user-97c46e 12 September, 2024, 22:04:21

Thought I'd mention it in case there is some prescriptive guidance around these.

user-f43a29 12 September, 2024, 22:08:05

There is not. I have at least successfully used a third-party USB<->Ethernet dongle with a Macbook.

Do you have a system with a dedicated Ethernet port?

user-97c46e 12 September, 2024, 22:07:16

Actually, my main concern was that I don't want to move on to the next instruction (emitting a DAQ pulse) until I'm sure the event I sent has landed on the companion device.

user-97c46e 12 September, 2024, 22:08:32

So a blocking call would be useful, as long as I can calculate the offset between devices (as in, I know how long the call took, and how long the acknowledgement from Companion to computer took, before I send my DAQ pulse, so the signals recorded by the system receiving the DAQ pulse can be aligned with the Neon traces).

user-97c46e 12 September, 2024, 22:10:25

No, the equipment we have at hand is recent, so peripheral connectivity (to external displays, DAQs, response button boxes) has been handled through USB-C mini-dock type devices.

user-3ee17f 13 September, 2024, 06:46:18

Hi, I have a question about marker enrichment. I have a recording with several events, i.e. each event corresponds to an object observed by the participant. I can trim the time of that event in several recordings, but the image selected does not coincide with the one observed in the video. With a single recording there is no problem, but when adding more participants' recordings, although it selects the same event, it does not correspond to the image that is automatically selected. Could I choose the frame to be used for the visualization? Thank you!

user-edb34b 13 September, 2024, 09:18:21

Hello team, is there a way to control the gaze Offset Correction more accurately than manually dragging the gaze circle onto the center of the target location?

user-d407c1 13 September, 2024, 09:20:38

Hi @user-edb34b ! you can now set it post-hoc in Cloud with your mouse. You can check how to do so here: https://docs.pupil-labs.com/neon/pupil-cloud/offset-correction/

user-edb34b 13 September, 2024, 09:26:32

Thanks for your answer. Would there be a way to automatically set the gaze circle around the target (for instance if the target were a small QR code)? Even if the post-hoc offset correction is more precise than on the Companion App, it is still a manual drag & drop with human error 🙂

user-480f4c 13 September, 2024, 11:46:30

Hi @user-3ee17f! We are working on an update that will allow you to manually upload any image showing the surface of your choice, which will be used instead of the auto-generated crop. This could be a screenshot of the preferred frame in the scene video, or a high-quality image taken with e.g. the phone's camera. See also this relevant message: https://discord.com/channels/285728493612957698/633564003846717444/1266354360888197121 We expect this to be ready soon, but unfortunately, we don't have an exact release date yet.

user-3ee17f 13 September, 2024, 11:58:24

Ok, that would be fine! Thanks for the explanation 🙂

user-f43a29 13 September, 2024, 11:54:08

Then, yes, the default behaviour of the Simple API is to block when sending an event.

May I ask though why this approach is necessary? You can set your own timestamp for the event, which may help with integrating Neon into your setup.

When calculating offsets, you may find our Time Sync docs helpful.

user-97c46e 13 September, 2024, 16:25:11

The main objective for us is to synchronize Neon streams with other signals from a different system, ideally not more than 5 ms apart from each other in alignment (ideally we would want sub-ms, but I don't think that is possible). The computer sending events to Neon is also sending (nearly instantaneous) DAQ pulses to the other system. If I send the DAQ pulse only after the Neon event has finished AND I know the clock offset between the computer sending events and the Neon device, THEN the only error I incur is transmission delays (which I hope is <5 ms if minimized with a direct Ethernet connection). I've read the docs you shared and they are definitely helpful in thinking this through. I haven't fully worked out whether sending a computer timestamp to Neon changes anything in the equation substantially, but thank you for pointing that out, I'll think it through. I'd also appreciate any other comments you might have for us upon reading all this detail. One thing that is very nice about the Pupil framework, but that we're not likely to use, is real-time access and analysis of data as it is being recorded. All our analysis takes place offline.

user-f43a29 13 September, 2024, 11:57:24

Ok.

Since you are having a similar issue happening on separate computers and Operating Systems, it could be something about the additional peripherals, as the instructions that were linked have worked on different systems until now.

I do know that some Ethernet switches/hubs can cause a "hiccup", in that they need to be properly initialized or configured before Neon is discoverable on the network. Are you potentially using one of those?

user-97c46e 13 September, 2024, 16:35:52

Apart from a USB-C<->Ethernet dongle used at the computer and the hub into which the Ethernet cable, the Neon Companion Device, and the Neon device are plugged, there is no other networking hardware (Ethernet hub, router, port, or switch) of any sort. There are also no other peripherals. For the USB-C<->Ethernet dongle used at the computer, I have tried 3 different devices because of the same concern you are pointing out. I can dig up their manufacturer/model numbers if that is useful for our conversation. Alternatively, if you have any specific recommendations for a device on the computer side (should I just use another Anker?), I'd appreciate those too.

user-f43a29 13 September, 2024, 12:01:00

Regarding the MATLAB integration, the timing has been essentially the same as the base Python implementation on tested systems.

Mathworks will have to clarify exactly how they have achieved their efficient Python interoperability, but if you look at the bottom of the README for the pl-neon-matlab repo, you will see a basic timing result for the systems that were tested. You should of course test and confirm it for your setup, since the possible Matlab-Python-OS combinations are numerous.

user-97c46e 13 September, 2024, 16:41:58

Thank you! This is very helpful! My DAQ is old and it's not working through the Python libraries on our Linux laptop (Measurement Computing technical support hasn't clarified so far), but a Windows/Python combination and a Linux/MATLAB combination are both able to use it. Although we use MATLAB with Psychtoolbox, I've always had a skeptical view of MATLAB's own systems-level performance, basically built on Java, so I always look for timing data (and test the timing of my own code carefully) if I'm doing something of this nature in MATLAB. Ideally we'll be able to use MATLAB/Linux 🙂

user-97c46e 13 September, 2024, 16:47:49

Lastly, as I'm looking at this, I'm wondering if the engineering team has considered direct device<->device communication over USB (like over the Anker USB hub). With newer laptops not having Ethernet ports anymore, the USB<->Ethernet dongle on the laptop side only serves to introduce one more failure point (the laptop dongle) and delays (most likely insignificant) from the additional circuitry to translate USB to Ethernet (and back again). I'd be happy to log a feature request if the idea is feasible.

user-f43a29 16 September, 2024, 12:26:27

Hi @user-97c46e , I responded to most of your questions in the thread, but I wanted to clarify that the instructions that were provided above for establishing an Ethernet connection are for directly connecting Neon to an Ethernet port on the computer.

You could also connect Neon via Ethernet to a router and connect your computer to the same router via Ethernet and then device discovery should work without needing to change any settings in your OS. You should also get essentially the same transmission latency this way. The router does not need to be connected to the internet.

Regarding device<->device connection over USB, such as over the Anker USB hub, please open a 💡 features-requests .

user-f43a29 13 September, 2024, 20:07:02

Timing, Ethernet, Matlab

user-37a2bd 17 September, 2024, 11:39:25

Hi Team. I had a question in general about the eye tracking glasses. Are there any limitations on how long the eye tracking recordings can be and what are the reasons for those limitations? Also what is the quality of audio recording on the Neon glasses?

user-480f4c 17 September, 2024, 11:54:36

Hi @user-37a2bd 👋🏽 ! Battery depends on the phone which serves as both a power source and a recording unit.

Neon can record continuously for up to four hours on a single battery charge. Note that this may vary slightly depending on other factors, e.g., the Companion Device model, real-time streaming, and real-time sampling rate.

May I ask how long you expect your recordings to last?

As for the audio, Neon's audio channel is currently set to record in mono format at 44100 Hz.

user-3b63ba 18 September, 2024, 09:48:28

Hi. How can I get gaze data (coordinates)? When I press Download->Native Recording data in Pupil Cloud then I only get one file (Neon Scene Camera v1 ps1.time)

user-c2d375 18 September, 2024, 09:57:39

Hi @user-3b63ba 👋 In the download tab, please select "Timeseries Data" to download gaze data and other exports from Pupil Cloud.

user-688acf 18 September, 2024, 12:34:01

Hello, I have an issue with Neon Player 4.3.2 and gaze data: the Neon Companion app (latest version from the Play Store) shows the gaze point in both the live view and the recording. When I export it to my PC and load it with Neon Player, the gaze data is not visualized. The "Vis Circle" plugin is active. The fixation detector works just fine. When I do the export, the surface data also contains no gaze points, and gaze_positions.csv doesn't contain any data either.

When I use an older recording that (I believe) was done with an older version of Companion, it works fine, so I guess it is more of an issue with Companion rather than Neon Player.

Any ideas? Thanks in advance

user-cdcab0 18 September, 2024, 13:28:31

Hi, @user-688acf - that sounds quite odd. Would you be able to share your recording with us?

user-057596 19 September, 2024, 07:47:49

Hi, just want to say how great your team is in responding to technical difficulties. It is most appreciated, especially when in the midst of researching and surrounded by chaos. Keep up the good work. 💪🏻

user-a55486 20 September, 2024, 10:52:06

Hi support team! We were just looking at our IMU data and it seems to be sampled in a quite irregular way. Some samples are highly clustered, while there are periods with only a few samples. Is this a known issue, or is it something with our device or settings?

Chat image

user-a55486 20 September, 2024, 10:53:55

(BTW this is from the middle of a recording so it should not be because of the data loss during device startup)

user-d407c1 20 September, 2024, 10:57:34

Hi @user-a55486 ! The IMU samples at 110 Hz rather than 200 Hz, but the sampling should be consistent. Where is that data coming from? Is it through the real-time API or directly from the recording? Could you kindly create a ticket on 🛟 troubleshooting and share the files for further evaluation?

user-a55486 20 September, 2024, 10:58:23

Thanks for the quick reply! This is downloaded from Pupil Cloud. I will submit a ticket with the files

user-057596 20 September, 2024, 12:02:11

Hi, I know the Neon glasses can be used with the Core pipeline on macOS and Linux, but you had mentioned that you were considering making it possible on Windows. Are you still considering this, and if so, what is the timeline?

nmt 23 September, 2024, 02:38:50

Hi @user-057596! This isn't on the roadmap right now. I noticed you previously enquired about using several Neon systems with the Core pipeline. Are the two enquiries related?

user-b6f43d 22 September, 2024, 05:56:04

Can we reduce the size of the red (gaze) circle after recording?

Chat image

user-d407c1 23 September, 2024, 07:04:46

Hi @user-b6f43d 👋 ! Absolutely. You can download the video with a custom circle size using the Video Renderer; you can also set the colour and whether you want the fixation scanpath. Let us know if this is what you meant.

user-0d28eb 23 September, 2024, 06:56:11

Good morning Neon team,

We were wondering if there are known cases where Neon was successfully used in low-light environments. Many of the use cases proposed to us focus on such settings, but the scene camera of the Neon (and glasses from other companies) doesn't really work in these conditions.

Has there ever been a consideration to add a secondary scene camera without an IR filter?

user-f43a29 24 September, 2024, 11:43:32

Hi @user-0d28eb , may I ask if you have a rough estimate of the typical light levels in these dark environments/conditions?

user-cc6fe4 23 September, 2024, 12:49:54

Surface detection in dark environment

user-057596 23 September, 2024, 13:29:53

Hi @nmt, yes, I was checking to see if you had decided to go with this, as we want to use the Neon glasses with our analysis software, which is on Windows. They are easier to use than the Core glasses, and it would have avoided the need to use two connected devices, but we can go down this route as we know it works.

user-c828f5 23 September, 2024, 20:30:34

Hello, is it possible to get access to the "Is this thing on?" frame's 3D model (.step) file? I couldn't find it here: https://github.com/pupil-labs/neon-geometry

nmt 24 September, 2024, 02:10:32

Neon + Core pipeline

user-fa0c14 24 September, 2024, 08:25:42

Hi! We are a team of students working in collaboration with a researcher. We'd like to improve our existing project. We'd like to know whether it would be possible to connect to the neon glasses with several different PCs simultaneously. We'd like to know if there are any avenues of development or if this isn't possible at the moment. Have a great day ! Emilie 🙂

user-d407c1 24 September, 2024, 08:27:32

Hi @user-3d8899 ! Since this is more of a general question than a feature request, I moved your message to the 👓 neon channel. Yes, you should be able to subscribe to the Companion App feed from multiple computers/clients.

user-f07f09 24 September, 2024, 11:21:02

Hey! I'm having a problem in Neon Player, where the exported world_timestamps.csv doesn't contain a pts column (which I need for getting specific frames). Do you guys have any idea why that could be the case and what I could do to fix this? Player version is 4.3.2.0 and recording software 2.7.11-prod

user-cdcab0 25 September, 2024, 09:14:36

Hi, @user-f07f09 - the world_timestamps.csv file isn't meant to contain a pts column (it seems our documentation is out of date). Can you elaborate on why you need that particular piece of data? I suspect that you may not actually need it.

If you really do, however, it's simple enough to extract yourself. Here's a snippet that does so using pl-neon-recording

world_timestamps_and_pts.py
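The attached script isn't reproduced here, but if you really do need pts values per frame, one independent way to extract them is to read the scene video directly with PyAV. The filename below is only an example; point it at the scene video in your recording folder:

```python
import av  # pip install av

# Read pts straight from the scene video (filename is an example)
with av.open("Neon Scene Camera v1 ps1.mp4") as container:
    stream = container.streams.video[0]
    for index, frame in enumerate(container.decode(stream)):
        print(index, frame.pts)
```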

user-a09f5d 24 September, 2024, 20:09:40

Hi

I have two questions about matching timestamps when using the Neon:

1) I was hoping to get some clarification regarding the saving of timestamps when saving events using the Realtime API. In order to make sure that the timing of the timestamps in gaze.csv and 3d_eye_states.csv (and other files) is accurately matched up with the timestamps in events.csv, is it best to use device.send_event('event note') or device.send_event('event note', event_timestamp_unix_ns=time.time_ns()), or are there additional steps I should take? I need to know the gaze position and 3D eye state at the exact moment the event occurred, hence the need to ensure accurate timestamps.

2) For the study, I will be using the Neon to record eye movements while the subject is viewing a physical moving target. Without going into too much detail, the 3D position of the moving target is recorded by an Arduino that is connected to the computer. This position data is saved as a csv file along with a timestamp (for simplicity I think it will be fine to use the Python timestamp taken at the moment the computer receives the packet from the Arduino). What is the best way of ensuring that this timestamp is matched with the timestamp(s) within the eye tracking data? I need to be able to match gaze position and target position for any given time point.

Thank you very much in advance.

user-cdcab0 25 September, 2024, 07:40:56

Hey, @user-a09f5d! I hope your project is going well 🙂

1) For the most accurate event timestamps, you should send your event with a timestamp that corresponds to the Companion Device's clock. Your PC's clock will likely be close thanks to NTP, but to be as accurate as you probably want/need, you should measure the offset and adjust the timestamps accordingly. You can see a simple example of this in the Realtime Python API documentation.

2) Is the Arduino recording the data itself (like to an SD card) or are you sending the data from the Arduino to a PC that records it?

If a PC receives the Arduino data and records it, then you'll probably just want to use the real-time API to send a couple of synchronization events that you can line-up later.

If the arduino records the data itself and there otherwise isn't a possible network connection between the Arduino data stream and your Neon, then you'll need to create some type of synchronization "event" in both data streams. For example, a flash of light could be detected by a sensor on the Arduino and picked up by the camera of the eyetracker. Or - here's a fun one - since your Arduino is already recording the 3D position of an object, perhaps you could put the eyetracker on the object after starting a recording and move them around together. You could then align the data using the position data from the Arduino and the IMU data from the Neon. One more: smash your tracked object into a gong. You'll see the moment of impact in the position data from the Arduino and the audio data from the Neon. There are lots of options 🙂
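Regarding the timestamp adjustment in point 1, a minimal sketch could look like the one below. It assumes the simple API exposes estimate_time_offset(), as in the real-time API's time-synchronization example, and the sign convention should be verified against the Time Sync docs; the event name is only an example.

```python
import time

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# Estimate the offset between this PC's clock and the Companion Device's clock
estimate = device.estimate_time_offset()
clock_offset_ns = round(estimate.time_offset_ms.mean * 1_000_000)

# Take a timestamp on the PC and convert it into the Companion Device's clock
event_time_ns = time.time_ns() - clock_offset_ns
device.send_event("target_appeared", event_timestamp_unix_ns=event_time_ns)

device.close()
```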

user-4b3b4f 24 September, 2024, 20:10:44

Hi all. Quick question: what do you recommend doing when you have two gaze points with the same fixation id and fixation duration?

user-d407c1 25 September, 2024, 08:39:23

Hi @user-4b3b4f ! That is expected, as a fixation is made up of multiple gaze points.

user-0d28eb 25 September, 2024, 06:48:14

[email removed] Boode | BUAS , may I ask if

user-dd2e1c 25 September, 2024, 08:28:01

Hello Neon team, I recorded a Neon video this March (check info.json). I tried to get the 3D eye state from this recording. However, I got only an empty 3D eye state file. I found that recent recordings return the 3D eye state fine. How can I get the 3D eye state from this earlier recording? (Details: I saved it to my desktop and deleted the recording file on my phone. I want to move it back to my phone and export it again. Do you think this is a good way to do it?)

info.json

user-d407c1 25 September, 2024, 08:50:52

Hi @user-dd2e1c 👋 ! Are you referring to the native 3D eye state file or the timeseries? The real-time estimation of eye state was introduced on April 22nd with Companion App version 2.8.10-prod. Recordings made with earlier versions won’t include this data in the native format.

Kindly note that moving it back to the phone and exporting it won't generate this data. If you had Cloud backup enabled, note that recordings in Cloud are reprocessed, and there you do get eye state.

user-97c46e 25 September, 2024, 23:01:58

Additionally, I'd like to ask for a pointer to details of the fusion engine for the IMU mentioned in the docs, if it is available somewhere - I looked around the code/docs but wasn't able to pinpoint it. I would also be curious if the IMU chip number can be shared. Specifically, I'm curious about the procedure used to eliminate the sensor noise that is typical in IMUs (I'm assuming this is a MEMS IMU). It's been a few years since I worked with these (and I'm not sure if things have changed in the last few years), but if memory serves, the accelerometer sensor noise tends to be high-frequency noise, and the gyroscope's angular velocity readings suffer from low-frequency drift that accumulates over time.

nmt 26 September, 2024, 01:41:27

Hi @user-97c46e! Check out this message for reference: https://discord.com/channels/285728493612957698/1047111711230009405/1105775561357393951

user-ea768f 27 September, 2024, 11:58:10

Hello Neon team, We are interested in determining the specific regions of the lens that subjects use, similar to the approach described in the following study, though they use a different eye tracker: https://www.mdpi.com/2075-1729/14/9/1178. We have data captured using the Neon eye tracker and would like to know if it’s possible to perform a similar analysis with these data. Any advice or guidance on how we could approach this would be greatly appreciated. Thanks!

user-d407c1 27 September, 2024, 13:04:21

Hi @user-ea768f ! 👋 This should be relatively straightforward to achieve, as we provide eye state metrics:

For each eye, the center of rotation of the eyeball is given in millimeters from the scene camera origin, allowing you to account for slippage without assuming invariance of the back vertex distance.

We also provide a vector originating at the center of the eyeball and passing through the pupil’s center (optical axis).

You can use it as a proxy and project it onto the lens surface (similar to what the authors of that paper seem to have done). Alternatively, if you’ve measured the kappa angle https://discord.com/channels/285728493612957698/1047111711230009405/1250840771045752966 from the subject, you can use that to obtain the visual axis and project it for a more “accurate” approach.

Anyway, for any of those, you need to know that the scene camera is tilted 12 degrees downwards, and the lenses on the Just Act Natural frame are also about 10.5 degrees downwards from the horizontal plane of the IMU.

You can check our latest Alpha Lab article https://docs.pupil-labs.com/alpha-lab/imu-transformations/, which describes how to account for the scene camera tilt and bring it together with the IMU.
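For reference, a rough sketch of the "proxy" approach (optical axis projected onto a plane standing in for the lens). The lens-plane parameters are placeholders you would measure for your own frame, and the column names follow the 3d_eye_states.csv export but should be verified:

```python
import numpy as np
import pandas as pd

eye_states = pd.read_csv("3d_eye_states.csv")

# Placeholder lens plane in scene-camera coordinates (mm):
# a normal vector and a point on the plane. Replace with measured values.
plane_normal = np.array([0.0, 0.0, 1.0])
plane_point = np.array([0.0, 0.0, 20.0])

# Per-sample ray: eyeball center (origin) and optical axis (direction), left eye
origin = eye_states[["eyeball center left x [mm]",
                     "eyeball center left y [mm]",
                     "eyeball center left z [mm]"]].to_numpy()
direction = eye_states[["optical axis left x",
                        "optical axis left y",
                        "optical axis left z"]].to_numpy()

# Ray-plane intersection: origin + t * direction lies on the plane
t = ((plane_point - origin) @ plane_normal) / (direction @ plane_normal)
lens_hits = origin + t[:, None] * direction  # 3D points on the lens plane (mm)
print(lens_hits[:5])
```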

user-d4e38a 28 September, 2024, 02:29:02

Hi, I just received my new frame without the camera, and I'd like to know how I can transplant the camera from my existing frame to the new one. Do I need any tools to do that?

nmt 28 September, 2024, 03:14:34

Hi @user-d4e38a! Check out this page for instructions on how to swap frames: https://docs.pupil-labs.com/neon/hardware/swapping-frames/#swapping-frames

user-b6f43d 28 September, 2024, 16:45:06

Do we need to do any sort of noise filtering on the data we are getting from the eye tracker? If so, can you share some research papers to learn about it?

user-b6f43d 29 September, 2024, 11:38:40

Heyy ?? Pupil labs ?

user-d407c1 30 September, 2024, 07:02:05

Hey @user-b6f43d 👋 !

While members of the Pupil Labs team are active here, and we do our best to assist with your endeavours, response times are not guaranteed. We try to respond as soon as possible, and we usually get promptly back to you, but if you post general questions over the weekend, we’ll likely get back to you on Monday, as the team usually takes downtime.

Regarding your question about noise filtering, it depends on the specific data stream you’re working with. Could you clarify which data you’re working with? For instance, gaze data already has a built-in filter (similar to a 1-euro filter), but additional filtering (like removing blinks or adding smoothing) might help improve your results depending on your context.

Several studies have, in the past, used the Savitzky-Golay filter to smooth gaze data while preserving rapid eye movements like saccades, making it ideal for maintaining signal features. In contrast, the Butterworth filter is a low-pass filter that smooths data by reducing high-frequency noise but can blur sharp transitions, such as saccades, so there is no generalised filtering approach.

Looking forward to your clarification!
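For reference, a minimal sketch of the Savitzky-Golay smoothing mentioned above, applied to gaze coordinates with SciPy. Column names are assumed from the Cloud gaze.csv, and the window length and polynomial order are placeholders to tune for your sampling rate:

```python
import pandas as pd
from scipy.signal import savgol_filter

gaze = pd.read_csv("gaze.csv")

# Smooth gaze coordinates; tune window_length/polyorder for your sampling rate
for col in ["gaze x [px]", "gaze y [px]"]:
    gaze[f"{col} smoothed"] = savgol_filter(gaze[col], window_length=21, polyorder=3)

print(gaze.head())
```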

user-b6f43d 01 October, 2024, 14:05:35

Hey, sorry Miguel, I didn't intend to disturb you guys.

I am going to work with gaze, fixations, saccades, pupil size, and blinks. I am going to look at all of these visual data streams.

So, just like what you said about gaze data, is there anything I should do to the other data streams I mentioned?

End of September archive