invisible



user-8a4f8b 01 August, 2022, 09:29:40

Thanks, but the issue is that it is not sold officially in Japan. There are many vendors importing it to sell in Japan.

user-8a4f8b 01 August, 2022, 09:33:07

As far as I understand, there is no penalty for selling it, but there is one for using it. I really hope you can solve this so Invisible can be used properly in Japan…

user-8a4f8b 01 August, 2022, 09:40:35

In addition, there is a way to obtain permission to use it for 180 days if you apply for temporary use: https://www.cps.bureauveritas.com/needs/japan-market-access-compliance-wireless-type-approvals

user-efdf9e 01 August, 2022, 18:05:17

Hi all 🙂 I have set the glasses up to record normal video, but is there any in-depth documentation on setting up heat mapping? We are testing this with a user sitting in front of 6 monitors (static images, mainly Excel spreadsheets) and want to track which screens/sheets they look at the most during a 30-minute window. TIA

marc 02 August, 2022, 08:09:01

Hi @user-efdf9e! If I understand you correctly, you would like to visualize heatmaps on images/screenshots of your 6 monitors. To do this, the first step is mapping the gaze data onto the monitors. We offer two enrichments to help you with that:

1. Reference Image Mapper: For this enrichment you need to define a reference image of your object of interest (and a scanning recording; see the docs for details), onto which gaze will be mapped. So here you would input an image of your 6 monitors and gaze will be mapped onto this image.

Screen-based environments can be difficult for this algorithm though. Please see the setup requirements and limitations in the docs and ideally trial this algorithm in your setup before committing to it! https://docs.pupil-labs.com/invisible/explainers/enrichments/#reference-image-mapper

2. Marker Mapper: For this enrichment you need to equip the surfaces you want to track, i.e. the monitors, with physical markers. Those markers will be detected by the algorithm and used to track the monitors in the scene camera to map gaze onto them.

While the addition of markers can be an issue in some setups, this approach allows you to track surfaces very robustly. https://docs.pupil-labs.com/invisible/explainers/enrichments/#marker-mapper

Heatmap: Both enrichments allow you to draw a heatmap. Simply right-click on the calculated enrichment and select "Heatmap".

Let me know if you have further questions!

user-2bd5c9 02 August, 2022, 07:20:08

Hi all, I have a problem with getting my last measurements with Pupil Invisible to play in Pupil Player. This is the notification I've got. Do you know how to solve this?

papr 02 August, 2022, 07:24:40

Hi! In the past, we had multiple issues that were caused by the application or recording folder being located in the OneDrive folder. Please move the recording outside of the OneDrive folder and try opening it again. If this does not resolve the issue, please delete the recording's offline_data folder and try again.

user-2bd5c9 02 August, 2022, 07:20:27

Chat image

user-2bd5c9 02 August, 2022, 07:25:39

Ah yes I see, this resolved the problem. Thank you!

papr 02 August, 2022, 07:26:08

Just for the record, could you please clarify which of the two steps solved the issue?

user-2bd5c9 02 August, 2022, 07:26:42

Moving it out of the OneDrive folder; the offline_data folder is still in it

papr 02 August, 2022, 07:26:52

Thank you!

user-430fc1 02 August, 2022, 16:18:06

Thanks, this would be good to know. The use case I have in mind is three people viewing a large display, each wearing Pupil Invisible. The aim is to plot gaze position on the display for each person in real time. Would this require the wifi approach?

marc 02 August, 2022, 16:35:35

Using wifi you can expect something like 10-100 ms of transmission delay depending on your setup. I.e. if a subject changes their gaze point on the screen, it would take 10-100 ms for the gaze point visualization on the screen to update. For most applications, including things like gaze interaction, this should be sufficient. Whether it suffices for your use case depends on what exactly you are trying to achieve.

user-430fc1 02 August, 2022, 17:19:23

Thanks Marc, what I had in mind was three observers of a single large display. The display starts out blank, but each observer's gaze position reveals the underlying image in a spatial Gaussian pattern with a temporal decay. Does that sound feasible?

marc 02 August, 2022, 17:20:46

I see! I'd recommend piloting it just to be safe, but yes, that sounds feasible to me!

user-430fc1 02 August, 2022, 17:24:07

👍

user-619a15 03 August, 2022, 06:46:38

Hey, how can I buy this device in the UAE?

user-755e9e 03 August, 2022, 07:16:00

👋 Hi @user-619a15! Please send an email ✉️ to [email removed] including your billing/shipping address, telephone number, desired product/s and quantity.

user-fb64e4 03 August, 2022, 11:36:26

Hello, I am using Ubuntu 18.04 and wanted to use pupil-labs-realtime-api in my code, but I'm getting this error when I try to install it. What should I do?

Chat image

papr 03 August, 2022, 11:46:20

Please make sure to use Python 3.7 or higher
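For anyone starting out, a minimal sketch of device discovery with the library's simple interface once a suitable Python version is installed (names follow the pupil-labs-realtime-api docs; double-check them against the version you install):

```python
# Sketch: discover a Pupil Invisible Companion device and grab one gaze sample.
# Assumes the Companion phone and this computer are on the same network.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()      # blocks until a device is found via mDNS
print(f"Found {device.phone_name} at {device.phone_ip}")

gaze = device.receive_gaze_datum()  # most recent real-time gaze sample
print(gaze.x, gaze.y, gaze.timestamp_unix_seconds)

device.close()
```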

user-ccf2f6 03 August, 2022, 12:24:43

Hi, I'm not able to find any devices using pupil-labs-realtime-api. The Companion device and laptop are on the same network. I also checked that the RTSP streaming is working, but when I search for devices using the Python lib, no devices are found.

papr 03 August, 2022, 12:38:15

Hey 👋

> I also checked that the RTSP streaming is working

How did you check that?

user-ccf2f6 03 August, 2022, 12:25:10

Can anyone help with this, please?

user-619a15 03 August, 2022, 12:29:01

I have a question: what is the best compatible EEG to integrate with Invisible for neuromarketing?

user-619a15 03 August, 2022, 12:55:17

Yes, I need a device for recording EEG signals. Please recommend a device that can be integrated with yours. Are the Epoc Flex or Enobio 8 devices suitable?

nmt 03 August, 2022, 14:28:37

Hi @user-619a15 👋. It looks like both the Epoc Flex (Emotiv software) and Enobio 8 (NIC2 controller) have support for Lab Streaming Layer (LSL). We maintain an LSL relay for publishing data from Pupil Invisible in real time using the LSL framework: https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/. This should enable unified and time-synchronised collection of measurements. For turnkey solutions for EEG and eye tracking fusion, we can also recommend checking out our official partners/integrations page: https://pupil-labs.com/partners-resellers/

user-ccf2f6 03 August, 2022, 13:16:13

The IP address

papr 03 August, 2022, 13:16:27

What happens if you enter the DNS name instead?

user-ccf2f6 03 August, 2022, 13:35:39

Yes, I'm using Anaconda PowerShell on the same computer. No VM etc.

user-183822 06 August, 2022, 18:29:07

Hello everyone. For the analysis of pupil data I tried running the code from https://pyplr.github.io/cvd_pupillometry/04d_analysis.html but I am not able to run it; I am getting an error in the PLR processing. I want to do preprocessing of the pupil diameter but I am not able to. Does anyone have a solution for this?

marc 08 August, 2022, 06:37:08

Hi @user-183822! Unfortunately, Pupil Invisible does not provide pupillometry data. The eye cameras are located so far off to the side that a lot of the time the pupil is not visible at all in the image, so the available data streams are limited to gaze, fixations and blinks. To measure the pupil diameter you'd have to use Pupil Core.

user-d119ac 08 August, 2022, 12:49:26

Hi, I used a Pupil Invisible during a car drive to check whether the person observed the traffic signs while driving. I got the video and the person's gaze data, which I want to map to each other, but found out that the frequencies of the two data streams are different. The video frequency is 30 frames per second and the gaze frequency is 50 samples per second. I would like to know how the Pupil software deals with this problem and maps the two together, so that I could do the same.

marc 08 August, 2022, 13:44:07

Hi @user-d119ac! Both the video frames and the gaze samples are timestamped, which allows you to correlate them with one another. Given the differences in framerate you will end up with multiple gaze samples per video frame.

Today we coincidentally published a guide on how to manually sync data streams using timestamps in Python: https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/syncing-sensors/

Does this guide help you with your question, or were you looking for something else?

FYI: If you download a Pupil Invisible recording from Pupil Cloud, the gaze sampling frequency will be even higher, at 200 Hz.
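A minimal sketch of that matching in Python, assuming the gaze.csv and world_timestamps.csv files from a Cloud raw-data download and their "timestamp [ns]" columns (adjust the names if your export differs):

```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")                # gaze samples, "timestamp [ns]" column
world = pd.read_csv("world_timestamps.csv")   # scene video frames, "timestamp [ns]" column

gaze_ts = gaze["timestamp [ns]"].to_numpy()
world_ts = world["timestamp [ns]"].to_numpy()

# For every gaze sample, find the index of the closest scene video frame.
idx = np.searchsorted(world_ts, gaze_ts)
idx = np.clip(idx, 1, len(world_ts) - 1)
left, right = world_ts[idx - 1], world_ts[idx]
idx -= (gaze_ts - left) < (right - gaze_ts)   # step back where the earlier frame is closer

gaze["frame_index"] = idx
# Multiple gaze rows now share each frame_index, reflecting the higher gaze sampling rate.
```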

user-9429ba 09 August, 2022, 15:13:00

@marc Could you reply to this message from the software-dev channel please: https://discord.com/channels/285728493612957698/446977689690177536/1006566797006360576

> Hi! Could you please release an updated client (https://github.com/pupil-labs/pupil-cloud-client-python) for API version 2 (https://api.cloud.pupil-labs.com/v2)? Or is it possible that I can generate the Python client code myself with the swagger.json? And is there any possibility to export videos via the API? If not, could you please implement this?

user-057596 10 August, 2022, 13:52:47

Two questions. Firstly, after applying the Marker Mapper enrichment, if the markers are only intermittently detected in the video, will fixations still be identified within the area of interest defined when setting up the enrichment, or only those fixations during which the markers are detected? Also, is it possible to download video of the fixation overlays together with the gaze overlays?

marc 11 August, 2022, 05:36:09

Hi @user-057596! There is the "raw" fixations signal, containing fixations in the scene camera, which would be included in the recording download. This data stream will always contain fixations throughout the recording.

And there is "mapped" fixations, which are the same fixations mapped onto the surface tracked by the marker mapper. Those would be included in the marker mapper download. In order to map a fixation to a surface, the surface needs to be detected. So if the detection failed for a couple seconds, the fixations within those seconds would not be mapped.

It is currently not possible to download the fixation scanpath as video. The only thing I can point you to is this Python example script of how to render the scanpath yourself: https://gist.github.com/marc-tonsen/d8e4cd7d1a322a8edcc7f1666961b2b9

user-057596 11 August, 2022, 07:18:34

Thanks Marc

user-0f58b0 11 August, 2022, 09:14:42

Hi Pupil friends. We were wondering what the "Eye Video Transcoding" feature in the Companion app does.

marc 11 August, 2022, 10:11:08

Hi @user-0f58b0! Transcoding the eye video essentially means saving it in a more efficient format, which saves memory but requires additional processing. On OnePlus 8 devices the available processing power is sufficient to allow transcoding, but on OnePlus 6 devices it is disabled by default.

user-ccf2f6 11 August, 2022, 10:36:53

Hi, does someone know how Pupil Invisible recordings are upscaled to 200 Hz in Cloud, since the data recorded on the device is around 120 Hz? Is there a software upscale applied to the recorded data when it is uploaded to Cloud, or is there some metadata used to generate the higher sampling rate?

marc 11 August, 2022, 10:41:03

The eye video is recorded at 200 Hz on the phone. The only reason the real-time gaze signal is lower is that the phone's processing power does not suffice. In Cloud, the gaze signal is calculated again from scratch using the full video framerate. So the 200 Hz signal is not just interpolated or upscaled, but is based on actual video frames at that frequency.

user-4f01fd 11 August, 2022, 17:51:34

Hello community! I have a problem with the Invisible calibration: I can't find the red gaze circle, it doesn't show up while I try to do the setup of the device. But if I record a sequence, it appears, and also the bindings, but of course they are not aligned with where I actually look. What could be happening? I did all the OnePlus 8 upgrades.

marc 12 August, 2022, 06:52:10

Hi @user-4f01fd! So if I understand you correctly you do not see a gaze point in the live preview of the Companion app, but you do see it when playing back a recording in Pupil Cloud.

Could you please let me know the Android version installed on the phone (see system settings) and the version of the Companion app (tap and hold the app icon -> app info)?

user-4f01fd 11 August, 2022, 22:42:00

Chat image Chat image

user-4f01fd 12 August, 2022, 11:31:48

@marc Thank you for your reply, Android is 12. Invisible Companion Version is 1.4.25-prod.

marc 12 August, 2022, 12:18:43

Thanks! In theory this combination of versions should work.

In the screenshots you shared it looks like the gaze circle is visible in the live preview in the top left corner. Is the gaze circle stuck in this position?

Also, can you confirm that no error messages are shown by the app?

Would you be able to share an example recording of ~1 min length which is affected by the issue? If this recording is uploaded to Cloud, it would suffice for you to let me know the recording ID and to give us explicit permission to access it to investigate.

user-413ab6 17 August, 2022, 10:28:19

Hi! My setup is as follows: I have a camera fixed on the dashboard of the car and the driver is wearing a Pupil Invisible device. I have the gaze with respect to the world camera of the Invisible device. I want the gaze point with respect to the dashcam. How do I do this?

papr 17 August, 2022, 10:48:14

How much of the car interior is visible in the dashboard camera?

papr 17 August, 2022, 10:49:57

Generally, this is a very difficult task as you need to estimate the physical relationship between scene and dashboard camera for every scene video frame.

user-413ab6 17 August, 2022, 10:49:21

I can share sample images if needed

user-413ab6 17 August, 2022, 10:50:26

What do you think about attaching some markers to the Pupil Invisible?

user-eb13bc 18 August, 2022, 10:07:36

How can I export a movie in Recordings (Pupil Cloud) including gaze location, fixations and saccades (like in the preview)? I am only able to export raw data.

marc 18 August, 2022, 10:10:45

Hi @user-eb13bc! You can export a video with gaze overlay only using the Gaze Overlay enrichment: https://docs.pupil-labs.com/invisible/explainers/enrichments/#gaze-overlay

Downloading a rendering of fixations (or the fixation scanpath as you can enable it in the Cloud player) is currently not directly possible in Cloud. The only thing I can point you to is this Python example script of how to render the scanpath yourself using the raw data: https://gist.github.com/marc-tonsen/d8e4cd7d1a322a8edcc7f1666961b2b9

user-057596 18 August, 2022, 10:25:04

Hi, I'm trying to live stream to my iPad using the QR code, which worked the last time in the same setting, but each time it says Safari could not open the page because the server stopped responding. Thanks, Gary

marc 18 August, 2022, 14:47:22

Hi @user-057596! In theory this should work. The most likely explanation would be some kind of network issue. Did you make sure that the Companion device and the iPad are connected to the same wifi? Could you try accessing the Monitor app from a laptop instead and see if you have more success? Note that live streaming requires mDNS and UDP traffic to be allowed on the network. In larger public wifis, e.g. at universities, this type of traffic is often forbidden. Could this be the issue here?

user-cd3e5b 18 August, 2022, 14:38:37

Hello, I was wondering if there was a way to pull surface gaze data using the updated API? I recall this was possible with the previous network API framework using zmq. I wasn't very successful finding anything that might say it's possible in the API documentation.

marc 18 August, 2022, 14:54:29

For Pupil Core and the old NDSI (the old real-time API), real-time surface tracking was available. This is because the surface tracking could be computed in Pupil Capture, which then published it to the network. However, the Companion phone currently cannot take over the duty of calculating the surface tracking, so this is not immediately possible.

I can point you to this demo script, however, which uses the API to receive gaze and scene video data on a computer, and uses the open-source surface tracking code to do real-time surface tracking on that computer. This does not make the surface tracking data available on the network, but on the computer that is executing the script. Note that the README file is slightly out of date: you no longer have to create an API token for this script to work and you can skip the corresponding setup steps. https://gist.github.com/papr/40d332498bfacb5980a754c5692068ec

user-057596 18 August, 2022, 14:48:47

Hi Marc, you were spot on: there was an undetected problem with the university's internet system, which has now been rectified. Thanks for your help.

user-26b243 19 August, 2022, 16:43:23

Hello @papr, the AprilTags we have placed on a physical board are not being recognized by our world view camera, despite them working just fine yesterday. We have gone through the documentation and have ensured that there is enough white space around each tag. Do you have any suggestions on how to resolve this issue?

papr 19 August, 2022, 18:31:14

Just as a sanity check: Is the surface tracker running?

user-413ab6 22 August, 2022, 11:34:54

Hi. I downloaded a recording from Pupil Cloud and dragged it into the Pupil Player and I get the following message.

Chat image

papr 22 August, 2022, 12:12:52

Please delete the corresponding offline_data folder and try again 🙂

user-ace7a4 23 August, 2022, 10:16:52

Is there a way to calculate or get the number, and maybe even the duration, of saccades that were made within a recording?

marc 23 August, 2022, 12:43:46

There is no algorithm that explicitly detects saccades for Pupil Invisible. If the subjects in your recordings maintain a stable head pose though, one can argue that the gaps between fixations correspond to saccades. In that case every "fixation end" would correspond to a "saccade start", and one could parse saccades from the fixations file.
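A rough sketch of that parsing, assuming the fixations.csv from a Cloud download with "start timestamp [ns]" and "end timestamp [ns]" columns and a stable head pose (rename the columns if your export differs):

```python
import pandas as pd

fixations = pd.read_csv("fixations.csv")  # column names assumed from the Cloud export

# Treat the gap between one fixation's end and the next fixation's start as a saccade.
saccades = pd.DataFrame({
    "start timestamp [ns]": fixations["end timestamp [ns]"].iloc[:-1].to_numpy(),
    "end timestamp [ns]": fixations["start timestamp [ns]"].iloc[1:].to_numpy(),
})
saccades["duration [ms]"] = (
    saccades["end timestamp [ns]"] - saccades["start timestamp [ns]"]
) / 1e6

# Long gaps likely contain blinks or head movement rather than a single saccade,
# so a coarse duration filter is advisable (the threshold here is a placeholder).
saccades = saccades[saccades["duration [ms]"] < 200]
```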

user-ced35b 23 August, 2022, 18:15:55

Hello, I understand that the cameras don't start at the exact same time and record at different sampling rates. However, can I assume that the timestamp (and frame index) given by the annotation plugin corresponds to the nearest timestamp (and world index) in the pupil_positions.csv file? For example, If the annotation for the onset of an image starts at 7100.4801 (frame index 617), does the pupil diameter around that time (given by the timestamp column in pupil_positions) correspond to the onset of the image (given by the annotation marker) - is that pupil diameter time-locked to that annotation event? Thanks for your help!

marc 24 August, 2022, 07:05:07

Hi @user-ced35b! First off, are you using Pupil Invisible (rather than Pupil Core) for your recordings? In case you do, please note that measuring pupillometry data with Pupil Invisible in Pupil Capture is not recommended and that you will most likely get low quality data.

Besides that, all data streams made by Pupil Invisible and Core are 1) independent and 2) timestamped. "Independent" means, as you mention, that the according sensors run separately with different start times and sampling rates. Thus, samples from different data streams are never made at the exact same time. The timestamps allow you to match the data up to the level of tolerance you desire. E.g. given the 200 Hz pupil signal and the 30 Hz world video you could match all pupil samples to the closest respective video frame. Then, every frame would match with a set of pupil samples.

The world index columns provided by Pupil Capture do exactly this: they tell you which world video frame is the closest available one. So if you are interested in the pupil diameter value at a specific point in time, and you have annotated that point, such that you know the corresponding world frame, you are correct in selecting the pupil diameter value with the same world frame index. However, there will be multiple pupil diameter values associated with the same frame. You have to choose whether you want to take their mean, or compare the timestamps in more detail to find the closest single pupil diameter sample.

Please see also Pupil Captures export documentation here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv

And this how-to guide on sensor syncing and timestamp matching, which also discusses this problem: https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/syncing-sensors/
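A short pandas sketch of both options, reusing the frame index and timestamp from the example above and the column names of a Pupil Player pupil_positions.csv export:

```python
import pandas as pd

pupil = pd.read_csv("pupil_positions.csv")   # Pupil Player export
event_frame = 617                            # world frame index from the annotation
event_time = 7100.4801                       # annotation timestamp in seconds

samples = pupil[pupil["world_index"] == event_frame]
# Optionally filter by eye_id and confidence before aggregating.

# Option 1: average all pupil samples matched to that world frame.
mean_diameter = samples["diameter"].mean()

# Option 2: take the single sample closest in time to the annotation.
closest = samples.iloc[(samples["pupil_timestamp"] - event_time).abs().argmin()]
print(mean_diameter, closest["diameter"])
```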

user-87b92a 24 August, 2022, 07:33:01

Hello everyone! My name is Idan and I'm part of Menuling. Our goal is to improve profits for restaurants by engineering the menu according to behavioral economics models and other statistical methods. We are looking for the best ways to use the glasses for this: performing experiments with the glasses on subjects reading a menu for the first time and analyzing gaze focus and heatmaps, to know where they looked first, where they focused the most, and how long they stayed on each focus point, for example. Has anyone had experience analyzing gaze on text or something similar?

marc 24 August, 2022, 07:44:35

Hi @user-87b92a! If you are interested, feel free to reach out to [email removed] to schedule a meeting with one of our product specialists to discuss your use case and how it might be possible with our products in detail.

I can point you to a few resources that might be helpful. These blog posts are about an example study we did in an art gallery. It's analysing gaze behaviour on paintings rather than menus, but the metrics calculated are somewhat similar: https://pupil-labs.com/blog/news/demo-workspace-walkthrough-part1/ https://pupil-labs.com/blog/news/demo-workspace-walkthrough-part2/

The key tool in the analysis process would likely be the Reference Image Mapper, which would allow you to map gaze data to your menus: https://docs.pupil-labs.com/invisible/explainers/enrichments/#reference-image-mapper

user-2bd5c9 24 August, 2022, 08:35:33

Hi all, yesterday I did a measurement with Pupil Invisible and during this measurement the cable between the phone and glasses disconnected. After this happened, I reconnected the cable and started a new measurement. Right now I can see that the duration of that measurement was 14 minutes (which is correct), but Pupil Player only shows me 34 seconds. Can this recording still be saved, or do I only have 34 seconds now?

papr 24 August, 2022, 08:39:19

Can you clarify if you explicitly stopped the running recording and started a new one when the cable disconnected?

user-ace7a4 24 August, 2022, 09:16:26

Hi! I have two questions. Is there an advantage to using ns over regular seconds in eye tracking data? For starters I would prefer using seconds; is there an option to convert the time before downloading the csv files? My second question is the following: I have 2 defined events, start and end of recording. However, I cannot find either timestamp in my gaze.csv file. How can that be?

papr 24 August, 2022, 09:26:52

Hi 👋 For Pupil Invisible, we wanted to use the Unix epoch as the clock start. This makes it easy to sync with other data. Now, you need a data structure to store the timestamp values. Unfortunately, 64-bit floats cannot represent seconds since the Unix epoch with the precision that we wanted. 64-bit integers can though, even when using nanoseconds.

There is no option to change the time unit before the download. But you can easily convert ns to seconds by dividing by 1e9 (1_000_000_000) post-hoc.
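For example, with the gaze.csv from a Cloud download (column name assumed from the export), the conversion could look like this:

```python
import pandas as pd

gaze = pd.read_csv("gaze.csv")

# Convert nanoseconds since the Unix epoch to seconds (float) post-hoc.
gaze["timestamp [s]"] = gaze["timestamp [ns]"] / 1e9

# Or keep the integer precision and get human-readable datetimes directly.
gaze["datetime"] = pd.to_datetime(gaze["timestamp [ns]"], unit="ns")
```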

user-ace7a4 24 August, 2022, 09:36:44

Ah yes, this makes sense. Do you have an idea regarding the events though? The respective timestamps do not appear in the gaze data file; is this due to the recording frequency?

papr 24 August, 2022, 09:42:46

recording.start and recording.stop are the timestamps of when you tap the ui button. The gaze sensor is recording independently of these two events.

user-ace7a4 24 August, 2022, 11:14:21

Okay, I understand. But even when defining a random event after a few seconds, I cannot find the exact time within the time column. What could be the mistake that I made?

papr 24 August, 2022, 11:17:09

Do you use Pupil Cloud to define the event? These events are defined based on the corresponding scene video frame timestamp.

Rather than looking for the exact timestamp, I recommend finding the gaze entry with that smallest timestamp that is still larger than the event timestamp. In Python, you can use bisect or numpy.searchsorted to find it efficiently.
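A small sketch of that lookup, assuming the events.csv and gaze.csv files from a Cloud download and their "timestamp [ns]" columns:

```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
events = pd.read_csv("events.csv")

gaze_ts = gaze["timestamp [ns]"].to_numpy()     # already sorted ascending
event_ts = events["timestamp [ns]"].to_numpy()

# For each event, index of the first gaze sample at or after the event timestamp.
events["first_gaze_row"] = np.searchsorted(gaze_ts, event_ts, side="left")
```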

user-b5a4f6 25 August, 2022, 20:50:16

hi Pupil Labs team - I wanted to follow up on an old topic thread and see if anyone has thoughts: https://discord.com/channels/285728493612957698/633564003846717444/996770110750601267

At a high level, I am hoping to enable the OnePlus hotspot and interface with the RTSP API from a second battery-operated device without any public internet connection or a third device (e.g. a personal cell phone) serving as a router/hot spot. It sounds like the challenge might be related to DNS, in which case I'd be happy with a solution that just uses an IP address. Even device discovery isn't entirely necessary if the Companion app can report its IP.

From the message I was replying to, it sounds like other users may also have this need. Thank you for your consideration!

papr 26 August, 2022, 07:04:14

I responded in the linked thread 🙂

marc 29 August, 2022, 08:58:25

@user-4f01fd Thank you for the workspace invite! We believe we were able to locate the issue. It seems like the "camera intrinsic values" saved in the app are broken, which causes the live preview to not show gaze properly. In Pupil Cloud, the correct intrinsic values are used, so everything works fine there.

To confirm that this is the problem, we'd like to inspect what values the Companion app has saved. To facilitate this, could you please collect and share this data with us? The steps for doing so are as follows: 1) Connect the phone to a computer via USB cable. See also this guide on how this works: https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/transfer-recordings-via-usb.html

2) Navigate to Phone storage/Documents/Pupil Invisible. This folder should contain one or multiple folders whose names start with persist e.g. persist, persist.1, persist.2. Share all of those folders with us.

Once we have confirmed that this is the issue, it will be easy to fix!

user-4f01fd 29 August, 2022, 22:27:31

Hello Marc, I created a Drive folder to share with you, containing the files you need. Can you please give me an email address so I can give you access? Many thanks!

user-dad280 29 August, 2022, 11:46:36

Hi guys, new to Pupil Labs and just bought a pair of Invisibles. Does anyone know if it is possible to use two recorder units (OnePlus 8) and one pair of glasses? I am afraid the battery will run out if I just use one unit while doing fieldwork. Cheers, Jørn

marc 29 August, 2022, 11:51:32

Hi @user-dad280! Yes, that is possible and it's the recommended approach when concerned about battery runtime. With one full battery you should get ~3h of recording time. You can swap between Companion devices whenever you want to. You simply need to have the Companion app installed on both devices and be logged in on both.

In case it's relevant: using a USB-C hub, you could also attach a power bank or power cable and the Pupil Invisible glasses at the same time. I can give you more details if this is relevant.

user-c1e127 29 August, 2022, 13:44:36

@user-ace7a4 There are various algorithms that can be used for segmentation of fixations and saccades, but during my two years of work in eye tracking I have never seen a paper explicitly describe any algorithm as very accurate. This paper might help: https://www.researchgate.net/publication/220811146_Identifying_fixations_and_saccades_in_eye-tracking_protocols

user-c1e127 29 August, 2022, 13:46:20

@user-ace7a4 I myself take the raw trajectories and use an algorithm called I-VT for the segmentation. It also depends on what your use case requires; I use it for biometrics based on eye movement data.
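For anyone curious, a toy velocity-threshold (I-VT style) sketch over the Cloud gaze export; this is not the Pupil Cloud algorithm, it ignores head motion, and the threshold plus the "azimuth [deg]"/"elevation [deg]" column names are assumptions to adapt to your data:

```python
import numpy as np
import pandas as pd

def ivt_label(gaze: pd.DataFrame, threshold_deg_s: float = 100.0) -> pd.DataFrame:
    """Label each gaze sample as 'fixation' or 'saccade' with a plain velocity threshold."""
    t = gaze["timestamp [ns]"].to_numpy() / 1e9          # seconds
    az = gaze["azimuth [deg]"].to_numpy()
    el = gaze["elevation [deg]"].to_numpy()

    dt = np.maximum(np.diff(t), 1e-9)                    # guard against duplicate timestamps
    velocity = np.hypot(np.diff(az), np.diff(el)) / dt   # approximate angular velocity, deg/s
    velocity = np.append(velocity, velocity[-1])         # pad back to the original length

    labeled = gaze.copy()
    labeled["label"] = np.where(velocity > threshold_deg_s, "saccade", "fixation")
    return labeled
```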

marc 29 August, 2022, 13:51:19

Thanks a lot for the input @user-c1e127! The fixation detection algorithm used in Pupil Cloud is actually a derivative of the I-VT algorithm too! It adds head movement compensation, trying to make it more robust against VOR in dynamic settings. We are currently in the process of publishing a white paper on the algorithm and now I wish I could already link to it 😅

user-ace7a4 29 August, 2022, 13:53:52

Thank you both so much! I will look into everything mentioned and will probably return!

user-e0a93f 29 August, 2022, 19:41:22

Hi guys 🙂

I have problems with blink detection. When there are quick vertical eye movements ("saccades"), they are often counted as blinks. I am guessing this is because the AI is not trained for these types of acrobatic movements. Am I right? Is there something I can do about it?

I attached a graph of the detected blinks (only look at the black line, where 1 = blink and 0 = no blink detected); the associated video is here: https://we.tl/t-WKDGYd7NMn. When you compare both, you see that the first blink is correctly detected, whereas the subsequent blinks are just eye movements that should not be categorized as blinks.

Chat image

marc 30 August, 2022, 12:18:17

Hi @user-e0a93f! Unfortunately the blink detector does indeed not work well in highly dynamic settings. While doing sports it is common to get lots of false positives due to fast vertical eye movements.

The blink detection algorithm we use is a machine learning model, but a very simple one. Unfortunately there isn't anything you can do to improve its performance. Blink detection in a sports setting is really difficult and the algorithms that are currently available do not really suffice. We are working on novel algorithms in this area, but no release can be expected in the foreseeable future.

So I think the only thing you could do is switching to manual annotation or manual removal of the false positives.

user-b5a4f6 30 August, 2022, 01:35:06

ndsi is a whole different API and has

End of August archive