Thanks, but the issue is that it is not sold officially in Japan. There are many vendors importing it for sale in Japan.
As far as I understand, there is no penalty for selling it, but there is one for using it. I really hope you can solve this so Invisible can be used properly in Japan…
In addition, there is a way to obtain permission to use it for 180 days if you apply for temporary use: https://www.cps.bureauveritas.com/needs/japan-market-access-compliance-wireless-type-approvals
Hi all! I have set the glasses up to record normal video, but is there any in-depth documentation on setting up heat mapping? We are testing this with a user sitting in front of 6 monitors (static images, mainly Excel spreadsheets) and want to track which screens/sheets they look at the most during a 30 minute window. TIA
Hi @user-efdf9e! If I understand you correctly, you would like to visualize heatmaps on images/screenshots of your 6 monitors. To do this, the first step is mapping the gaze data onto the monitors. We offer two enrichments to help you with that:
1. Reference Image Mapper For this enrichment you need to define a reference image of your object of interest (and a scanning recording; see docs for details), onto which gaze will be mapped. So here you would input an image of your 6 monitors and gaze will be mapped onto this image.
Screen-based environments can be difficult for this algorithm though. Please see the setup requirements and limitations in the docs and ideally trial this algorithm in your setup before committing to it! https://docs.pupil-labs.com/invisible/explainers/enrichments/#reference-image-mapper
2. Marker Mapper For this enrichment you need to equip the surfaces you want to track, i.e. the monitors, with physical markers. Those markers will be detected by the algorithm and used to track the monitors in the scene camera to map gaze onto them.
While the addition of markers can be an issue in some setups, this approach allows you to track surfaces very robustly. https://docs.pupil-labs.com/invisible/explainers/enrichments/#marker-mapper
Heatmap: Both enrichments allow you to draw a heatmap. Simply right-click on the calculated enrichment and select "Heatmap".
Let me know if you have further questions!
Hi all, I have a problem with getting my last measurements with Pupil Invisible to play in Pupil Player. This is the notification I've got. Do you know how to solve this?
Hi! In the past, we had multiple issues that were caused by the application or recording folder being located in the OneDrive folder. Please move the recording outside of the OneDrive folder and try opening it again. If this does not resolve the issue, please delete the recording's offline_data folder and try again.
Ah yes I see, this resolved the problem. Thank you!
Just for the record, could you please clarify which of the two steps solved the issue?
Moving it out of the OneDrive folder; the offline_data folder is still in it.
Thank you!
Thanks, this would be good to know. The use case I have in mind is three people viewing a large display, each wearing pupil invisible. The aim is to plot gaze position on the display for each person in real time. Would this require the wifi approach?
Using wifi you can expect something like 10-100 ms of transmission delay depending on your setup. I.e. if a subject changes their gaze point on the screen, it would take 10-100 ms for the gaze point visualization on the screen to update. For most applications, including things like gaze interaction, this should be sufficient. Whether it suffices for your use case depends on what exactly you are trying to achieve.
Thanks Marc, what I had in mind was three observers of a single large display. The display starts out blank, but each observer's gaze position reveals the underlying image in a spatial Gaussian pattern with a temporal decay. Does that sound feasible?
I see! I'd recommend piloting it just to be safe, but yes, that sounds feasible to me!
Hey, how can I buy this device in the UAE?
Hi @user-619a15! Please send an email to [email removed] including your billing/shipping address, telephone number, desired product(s) and quantity.
Hello, I am using Ubuntu 18.04 and I want to use pupil-labs-realtime-api in my code, but I'm getting this error when I try to install it. What should I do?
Please make sure to use Python 3.7 or higher.
Hi, I'm not able to find any devices using pupil-labs-realtime-api. The Companion phone and the laptop are on the same network. I also checked that the RTSP streaming is working, but when I search for devices using the Python lib, no devices are found.
Hey!
"I also checked that the RTSP streaming is working" - How did you check that?
can anyone help with this please?
I have a question: what is the best compatible EEG to integrate with Invisible for neuromarketing?
Yes, I need a device to record EEG signals. Please recommend a device that can be integrated with yours. Are the EPOC Flex or Enobio 8 devices suitable?
Hi @user-619a15 π. It looks like both the Epox Flex (Emotiv software) and Enobio 8 (NIC2 controller) have support for Lab Streaming Layer (LSL). We maintain an LSL relay for publishing data from Pupil Invisible in real-time using the LSL framework: https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/. This should enable unified and time-synchronised collection of measurements. For turnkey solutions to EEG and eye tracking fusion, we can also recommend checking out our official partners/integrations page: https://pupil-labs.com/partners-resellers/
By going to the address in the browser.
Did you enter the IP address or the dns name (pi.local)?
the ip address
What happens if you enter the dns name instead?
I can still access the stream
@user-ccf2f6 If that does not work, it is likely that your network does not support the mDNS service. Typically, that happens if you host a hotspot on the Companion device or are in a restricted network, e.g. a university network.
It works. I've also tried multiple networks, but the API still cannot find any devices.
Does this alternative work for you? https://discord.com/channels/285728493612957698/633564003846717444/999952090707267634
This is unusual. What is the exact API call that you use for discovery?
I've tried discover_one_device/discover_devices/wait_for_new_device calls from the async code examples on pupil docs
Have you tried passing a longer timeout?
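For example, something like this (a minimal sketch with the simple API; the 30 s search duration and the fallback IP address/port are placeholders for your setup):
```python
# Sketch: search longer for the device, or skip mDNS discovery entirely
# and connect to the phone's IP address directly.
from pupil_labs.realtime_api.simple import Device, discover_one_device

device = discover_one_device(max_search_duration_seconds=30.0)

if device is None:
    # placeholder IP -- replace with the address shown in the Companion app
    device = Device(address="192.168.1.23", port="8080")

print(device.phone_name)
```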
yes, I've also tried indefinite search
it results in AttributeError: 'Device' object has no attribute '_event_manager'
That means that it cannot set up the connection (this is consistent with the Python program being unable to find the device). Just to confirm, you are running the Python program on the same computer as the browser, correct? There are no VMs, Docker images, Windows Subsystem for Linux, etc. involved, correct?
yes, I'm using Anaconda Powershell on the same computer. No VM etc
Hello everyone! For the analysis of pupil data I tried running the code from https://pyplr.github.io/cvd_pupillometry/04d_analysis.html, but I am not able to run it. I am getting an error in the PLR processing and I don't know what to do. I want to preprocess the pupil diameter data but am not able to. Does anyone have a solution for this?
Hi @user-e91538! Unfortunately, Pupil Invisible does not provide pupillometry data. The eye cameras are located so far off to the side that a lot of the time the pupil is not visible at all in the image, so the available data streams are limited to gaze, fixations and blinks. To measure the pupil diameter you'd have to use Pupil Core.
Hello Marc, I am using Pupil Core only, and from that I am getting the pupil_positions.csv file. In that file I get many columns, of which I just need a few, like pupil_timestamp, diameter, method, diameter_3d, pupil id and confidence.
Hi, I used a Pupil Invisible during a car drive to check whether the person observed the traffic signs while driving. I got the video and the person's gaze data, which I want to map to each other, but found out that the frequencies of the two data streams are different. The video frequency is 30 frames per second and the gaze frequency is 50 samples per second. I would like to know how the Pupil software deals with this problem and maps the two together, so that I can do the same.
I see! I'll forward your message to the core channel then, so my colleagues can help you there!
Thank you!
Hi @user-d119ac! Both the video frames and the gaze samples are timestamped, which allows you to correlate them with one another. Given the differences in framerate you will end up with multiple gaze samples per video frame.
Today we coincidentally published a guide on how to manually sync data streams using timestamps in Python: https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/syncing-sensors/
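As a minimal sketch of that matching step (assuming the Pupil Cloud raw data export with gaze.csv and world_timestamps.csv and their "timestamp [ns]" columns; adjust names to your export):
```python
# Sketch: assign each gaze sample to the scene video frame it falls into,
# using the nanosecond timestamps from the Cloud raw data export.
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
frames = pd.read_csv("world_timestamps.csv")

frame_ts = frames["timestamp [ns]"].to_numpy()
gaze_ts = gaze["timestamp [ns]"].to_numpy()

# index of the last frame that started at or before each gaze sample
# (samples recorded before the first frame end up with index -1)
gaze["frame_idx"] = np.searchsorted(frame_ts, gaze_ts, side="right") - 1

# e.g. all gaze samples belonging to scene frame 100
print(gaze[gaze["frame_idx"] == 100])
```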
Does this guide help you with your question, or were you looking for something else?
FYI If you download a Pupil Invisible recording from Pupil Cloud the gaze sampling frequency will be even higher at 200 Hz.
Thank you Marc, this is exactly what I was looking for.
@user-4a6a05 Could you reply to this message from the software-dev channel please: https://discord.com/channels/285728493612957698/446977689690177536/1006566797006360576
Hi! Could you please release an updated client (https://github.com/pupil-labs/pupil-cloud-client-python) for API version 2 (https://api.cloud.pupil-labs.com/v2)? Or is it possible for me to generate the Python client code myself with the swagger.json? And is there any possibility to export videos via the API? If not, could you please implement this?
Two questions. Firstly, after applying the Marker Mapper enrichment: if the markers are only intermittently identified in the video, will fixations still be identified within the area of interest defined when setting up the enrichment, or only those fixations made while the markers are identified? Also, is it possible to download a video with the fixation overlays together with the gaze overlays?
Hi @user-057596! There is the "raw" fixations signal, containing fixations in the scene camera, which would be included in the recording download. This data stream will always contain fixations throughout the recording.
And there is the "mapped" fixations signal, which contains the same fixations mapped onto the surface tracked by the Marker Mapper. Those would be included in the Marker Mapper download. In order to map a fixation to a surface, the surface needs to be detected. So if the detection failed for a couple of seconds, the fixations within those seconds would not be mapped.
It is currently not possible to download the fixation scanpath as video. The only thing I can point you to is this Python example script of how to render the scanpath yourself: https://gist.github.com/marc-tonsen/d8e4cd7d1a322a8edcc7f1666961b2b9
Thanks Marc
Hi Pupil friends, we were wondering what the "Eye Video Transcoding" feature in the Companion app does.
Hi @user-0f58b0! Transcoding the eye video essentially means saving it in a more efficient format, which saves storage space but requires additional processing. On OnePlus 8 devices the available processing power is sufficient to allow transcoding, but on OnePlus 6 devices it is disabled by default.
Hi, does someone know how Pupil Invisible recordings are upscaled to 200 Hz in Cloud, since the data recorded on the device is around 120 Hz? Is there a software upscaling step applied to the recorded data when it is uploaded to Cloud, or is there some metadata used to generate the higher-rate signal?
The eye video is recorded at 200 Hz on the phone. The only reason why the real-time gaze signal is lower is that the phone's processing power does not suffice. In Cloud the gaze signal is calculated again from scratch using the full video framerate. So the 200 Hz signal is not just interpolated or upscaled, but is based on actual video frames at that frequency.
understood, that is very helpful, thanks!
Hello community! I have a problem with the Invisible calibration: I can't find the red circle, it does not show while I try to do the setup of the device. But if I record a sequence, it appears, as do the bindings, but of course they are not aligned with where I actually look. What could be happening? I did all the OnePlus 8 upgrades.
Hi @user-4f01fd! So if I understand you correctly you do not see a gaze point in the live preview of the Companion app, but you do see it when playing back a recording in Pupil Cloud.
Could you please let me know the Android version installed on the phone (see system settings) and the version of the Companion app (tap and hold the app icon -> app info)?
@user-4a6a05 Thank you for your reply, Android is 12. Invisible Companion Version is 1.4.25-prod.
Thanks! In theory this combination of versions should work.
In the screenshots you shared it looks like the gaze circle is visible in the live preview in the top left corner. Is the gaze circle stuck in this position?
Also, can you confirm that no error messages are shown by the app?
Would you be able to share an example recording of ~1 min length which is affected by the issue? If this recording is uploaded to Cloud, it would suffice for you to let me know the recording ID and to give us explicit permission to access it to investigate.
Hi, was there a solution to this? After searching the forum I can't seem to find any other mention of calibration issues with the Pupil Labs Invisible. My issue is that the gaze overlay circle seems offset to the left. Are there any tips or tricks to help with this? I can't seem to find any relevant settings to change.
Hi! My setup is as follows: I have a camera fixed on the dashboard of the car and the driver is wearing a Pupil Invisible device. I have the gaze with respect to the world camera of the Invisible device. I want the gaze point with respect to the dashcam. How do I do this?
Generally, this is a very difficult task as you need to estimate the physical relationship between scene and dashboard camera for every scene video frame.
How much of the car interior is visible in the dashboard camera?
I can share sample images if needed
what do you think about attaching some marker to the Pupil Invisible?
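For example, with an AprilTag attached to the glasses frame, each dashcam frame could be searched for the tag to estimate the headset pose relative to the dashcam. A rough sketch (the tag family/size, dashcam intrinsics and video file are placeholders for your setup):
```python
# Rough sketch: detect an AprilTag attached to the glasses in each dashcam
# frame and estimate its pose relative to the dashcam.
# Assumptions: tag36h11 family, 3 cm tag, placeholder dashcam intrinsics/file.
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")
fx, fy, cx, cy = 1000.0, 1000.0, 960.0, 540.0  # replace with calibrated values

cap = cv2.VideoCapture("dashcam.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for det in detector.detect(
        gray,
        estimate_tag_pose=True,
        camera_params=(fx, fy, cx, cy),
        tag_size=0.03,
    ):
        # det.pose_R / det.pose_t give the tag pose in dashcam coordinates;
        # a fixed tag-to-scene-camera transform is still needed to map the
        # gaze ray from the scene camera into the dashcam frame.
        print(det.tag_id, det.pose_t.ravel())
cap.release()
```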
How can I export a movie in Recordings (Pupil Cloud) including gaze location, fixations and saccades (like in the preview)? I am only able to export raw data.
Hi @user-eb13bc! You can export a video with gaze overlay only using the Gaze Overlay enrichment: https://docs.pupil-labs.com/invisible/explainers/enrichments/#gaze-overlay
Downloading a rendering of fixations (or the fixation scanpath as you can enable it in the Cloud player) is currently not directly possible in Cloud. The only thing I can point you to is this Python example script of how to render the scanpath yourself using the raw data: https://gist.github.com/marc-tonsen/d8e4cd7d1a322a8edcc7f1666961b2b9
Hi, I'm trying to live stream to my iPad using the QR code, which worked the last time in the same setting, but each time it says Safari could not open the page because the server stopped responding. Thanks, Gary
Hi @user-057596! In theory this should work. The most likely explanation would be some kind of network issue. Did you make sure that the Companion device and the iPad are connected to the same wifi? Could you try accessing the Monitor app from a laptop instead and see if you have more success? Note that live streaming requires mDNS and UDP traffic to be allowed on the network. In larger public wifis, e.g. at universities, this type of traffic is often forbidden. Could this be an issue here?
Hello, I was wondering if there was a way to pull surface gaze data using the updated API? I recall this was possible with the previous network API framework using zmq. I wasn't very successful finding anything that might say it's possible in the API documentation.
Hi Marc, you were spot on; there was an undetected problem with the university's internet system, which has now been rectified. Thanks for your help.
For Pupil Core and the old NDSI (the old real-time API), real-time surface tracking was available. This is because the surface tracking could be computed in Pupil Capture, which then published it to the network. However, the Companion phone currently cannot take over the duty of calculating the surface tracking, so this is not immediately possible.
I can point you to this demo script however, which uses the API to receive gaze and scene video data on a computer, and uses the open-source surface tracking code to do real-time surface tracking on the computer. This would not make the surface tracking data available on the network, but on the computer that is executing the script. Note that the README file is slightly out of date: you no longer have to create an API token for this script to work and you can skip the corresponding setup steps. https://gist.github.com/papr/40d332498bfacb5980a754c5692068ec
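The receiving side of that script essentially boils down to something like this (a sketch using the simple variant of the realtime API; the surface tracking from the gist would then run on each frame/gaze pair locally):
```python
# Sketch: receive time-matched scene frames and gaze on the computer;
# the surface tracking from the gist would then run on each pair locally.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
try:
    while True:
        frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
        # frame.bgr_pixels is the scene image, gaze.x / gaze.y the gaze point
        print(gaze.x, gaze.y, frame.timestamp_unix_seconds)
finally:
    device.close()
```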
Thank you so much @user-4a6a05! This was very helpful. Given the time crunch, I'll keep the old method for now, but I think the new implementation will be useful (it doesn't rely on having Pupil Capture up and running).
Hello @user-e3f20f, the AprilTags we have placed on a physical board are not being recognized by our world view camera despite them working just fine yesterday. We have gone through the documentation and have ensured that there is enough white space around each tag. Do you have any suggestions on how to resolve this issue?
Just as a sanity check: Is the surface tracker running?
Thank you for checking in, I am not sure what happened yesterday, but it is working fine now! I did, however, come across another question. We have two square AOIs set on our board (one on the left and one on the right) and in the center of the AOI squares there is a target light. We recorded some data where the user looked at the center of the right side AOI square and then looked at the left side AOI square all in one go. After processing our data and creating two respective surfaces for the right and left sides we exported the data. When opening the gaze_position_on_surface_Left and Right files I noticed that the right side file had around 250 data points and the left side file had only 35 data points. I am not sure why the right-side surface file would have more data points than the left-side surface file since it is pulling data all from the same recording. We are specifically interested in the on_surf column of these files as we want to see when the gaze of the participant falls out/in the AOIs.
These files only contain gaze data from the time that the surfaces were actually detected. Please ensure that the marker detection works well. Also, before exporting, wait for the marker detection to finish (you can see the progress in the marker cache timeline)
Hi. I downloaded a recording from Pupil Cloud and dragged it into the Pupil Player and I get the following message.
Please delete the corresponding offline_data folder and try again.
Is there a way to calculate or get the amount, and maybe even the duration, of saccades that were made within a recording?
There is no algorithm that is explicitly detecting saccades for Pupil Invisible. If the subjects in your recordings maintain a stable head-pose though, one can argue that the gaps between fixations correspond to saccades. In that case every "fixation end" would be an according "saccade start" and one could parse saccades from the fixations file.
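As a rough sketch of that parsing step (assuming the fixations.csv from the Cloud raw data export with its "start timestamp [ns]" and "end timestamp [ns]" columns; only meaningful with a stationary head-pose):
```python
# Sketch: treat the gaps between consecutive fixations as saccades
# (only reasonable when the head-pose is stationary, see above).
import pandas as pd

fixations = pd.read_csv("fixations.csv").sort_values("start timestamp [ns]")

saccade_start = fixations["end timestamp [ns]"].to_numpy()[:-1]
saccade_end = fixations["start timestamp [ns]"].to_numpy()[1:]

saccades = pd.DataFrame({
    "start timestamp [ns]": saccade_start,
    "end timestamp [ns]": saccade_end,
    "duration [ms]": (saccade_end - saccade_start) / 1e6,
})
print(len(saccades), "saccade candidates")
```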
Hello, I understand that the cameras don't start at the exact same time and record at different sampling rates. However, can I assume that the timestamp (and frame index) given by the annotation plugin corresponds to the nearest timestamp (and world index) in the pupil_positions.csv file? For example, If the annotation for the onset of an image starts at 7100.4801 (frame index 617), does the pupil diameter around that time (given by the timestamp column in pupil_positions) correspond to the onset of the image (given by the annotation marker) - is that pupil diameter time-locked to that annotation event? Thanks for your help!
Hi @user-ced35b! First off, are you using Pupil Invisible (rather than Pupil Core) for your recordings? In case you do, please note that measuring pupillometry data with Pupil Invisible in Pupil Capture is not recommended and that you will most likely get low quality data.
Besides that, all data streams made by Pupil Invisible and Core are 1) independent and 2) timestamped. "Independent" means, as you mention, that the corresponding sensors run separately with different start times and sampling rates. Thus, samples from different data streams are never made at the exact same time. The timestamps allow you to match the data up to the level of tolerance you desire. E.g. given the 200 Hz pupil signal and the 30 Hz world video you could match all pupil samples to the closest respective video frame. Then, every frame would match with a set of pupil samples.
The world index columns provided by Pupil Capture do exactly this: they tell you which world video frame is the closest available one. So if you are interested in the pupil diameter value at a specific point in time, and you have annotated that point, such that you know the corresponding world frame, you are correct in selecting the pupil diameter value with the same world frame index. However, there will be multiple pupil diameter values associated with the same frame. You have to choose whether you want to take their mean, or compare the timestamps in more detail to find the closest single pupil diameter sample.
Please see also Pupil Captures export documentation here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv
And this how-to guide on sensor syncing and timestamp matching, which also discusses this problem: https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/syncing-sensors/
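For the "take their mean" option mentioned above, a small sketch (assuming the pupil_positions.csv export documented above; the 0.6 confidence cutoff and the frame index 617 from your example are illustrative):
```python
# Sketch: average the 3D pupil diameter per world video frame.
import pandas as pd

pupil = pd.read_csv("pupil_positions.csv")

# keep only reasonably confident samples from the 3d detector
# (the 0.6 cutoff is a judgment call, not an official recommendation)
pupil = pupil[(pupil["confidence"] > 0.6) & pupil["method"].str.contains("3d")]

diameter_per_frame = pupil.groupby("world_index")["diameter_3d"].mean()

# e.g. the mean pupil diameter (mm) at the annotated frame index 617
print(diameter_per_frame.loc[617])
```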
Thank you this is very helpful! Yes I'm using Core (meant to post this in the core channel). I'll definitely consider taking the mean of the multiple pupil diameter values for a given frame.
Hello everyone! My name is Idan and I'm part of Menuling. Our goal is to improve profits for restaurants by engineering the menu according to behavioral economics models and other statistical methods. We are looking for the best ways to use the glasses for this: performing experiments with the glasses on subjects reading a menu for the first time and analyzing gaze focus and heatmaps, to know where they looked first, where they focused the most, and how long they stayed on each focus point, for example. Has anyone had any experience analyzing gaze on text, or something similar?
Hi @user-87b92a! If you are interested, feel free to reach out to [email removed] to schedule a meeting with one of our product specialists to discuss your use case and how it might be possible with our products in detail.
I can point you to a few resources that might be helpful. These blog posts are about an example study we did in an art gallery. It analyses gaze behaviour on paintings rather than menus, but the metrics calculated are somewhat similar: https://pupil-labs.com/blog/news/demo-workspace-walkthrough-part1/ https://pupil-labs.com/blog/news/demo-workspace-walkthrough-part2/
The key tool in the analysis process would likely be the Reference Image Mapper, which would allow you to map gaze data to your menus: https://docs.pupil-labs.com/invisible/explainers/enrichments/#reference-image-mapper
Hi all, yesterday I did a measurement with Pupil Invisible and during this measurement the cable between the phone and the glasses disconnected. After this happened, I reconnected the cable and started a new measurement. Right now I can see that the duration of that measurement was 14 minutes (which is correct), but Pupil Player only shows me 34 seconds. Can this recording still be saved, or do I only have 34 seconds now?
Can you clarify if you explicitly stopped the running recording and started a new one when the cable disconnected?
This is what it shows in Pupil Cloud. So the duration is right (the top one is the important one), but when replaying it I can only see 34 seconds.
I am not sure if I stopped the recording myself, I think it was stopped when the cable disconnected but I am not sure. I did specifically start a new recording to be sure
I think what happened is the following: Generally, the app is able to handle disconnects of the glasses. In some cases, there can be USB errors on the operating system level which are difficult to catch for the Android application. As a result, the scene video recording might have been aborted, leaving the file in a corrupted state. Could you please share the broken recording and the following working recording with [email removed]? It might be possible to recover the corrupted video file.
Ah okay, I will send an email. Thanks for the quick response!
Hi! I have two questions: Is there an advantage to using ns over regular seconds in eye tracking data? For starters I think I would prefer using seconds; is there an option to convert the time before downloading the csv files? My second question is the following: I have 2 defined events, start and end of recording. However, I cannot find both timestamps in my gaze.csv file. How can that be?
Hi! For Pupil Invisible, we wanted to use the Unix epoch as the clock start. This makes it easy to sync it to other data. Now, you need a data structure to store the timestamp values. Unfortunately, 64-bit floats cannot represent seconds since the Unix epoch with the precision that we wanted. 64-bit integers do, though, even when counting nanoseconds.
There is no option to change the time unit before the download. But you can easily convert ns to seconds by dividing by 1e9 (1_000_000_000) post-hoc.
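To illustrate both points (the timestamp value below is made up):
```python
# A made-up Unix timestamp in nanoseconds (int64 keeps full precision)
t_ns = 1_660_000_000_123_456_789

# converting to float seconds loses the nanosecond digits,
# which is why the export stores integer nanoseconds instead
t_s = t_ns / 1e9
print(t_s)                     # the trailing ns digits are rounded away
print(int(t_s * 1e9) == t_ns)  # False: the round trip is not exact
```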
Ah yes, this makes sense! Do you have an idea regarding the events though? The respective timestamps do not appear in the gaze data file; is this due to the recording frequency?
recording.start and recording.stop are the timestamps of when you tap the ui button. The gaze sensor is recording independently of these two events.
Okay, I understand. But even when defining a random event after a few seconds, I cannot find the exact time within the time column. What could be the mistake that I made?
Do you use Pupil Cloud to define the event? These events are defined based on the corresponding scene video frame timestamp.
Rather than looking for the exact timestamp, I recommend finding the gaze entry with that smallest timestamp that is still larger than the event timestamp. In Python, you can use bisect or numpy.searchsorted to find it efficiently.
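For example (a sketch assuming the gaze.csv and events.csv files from the Cloud download and their current column names):
```python
# Sketch: for each event, find the first gaze sample after its timestamp.
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
events = pd.read_csv("events.csv")

gaze_ts = gaze["timestamp [ns]"].to_numpy()

for _, event in events.iterrows():
    # first gaze timestamp strictly larger than the event timestamp
    idx = np.searchsorted(gaze_ts, event["timestamp [ns]"], side="right")
    if idx < len(gaze_ts):
        print(event["name"], "->", gaze_ts[idx])
```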
Hello Marc, yes, I confirm there are no error messages shown by the app. Sometimes the gaze point appears in the preview of the Companion app like in the picture, but you cannot do any calibration with it and it is stuck in this place; sometimes it does not appear on the screen at all. How can I give you permission to see it in my Cloud app?
Simply write a message here in chat saying something like "I give you permission to access my recording with the ID <insert_ID_here> to investigate the issue I have reported"! Some sort of written consent is required for us before accessing user recordings due to our privacy policy.
"I give you permission to Pupil Labs access my recording with the ID [email removed] investigate the issue I have reported"
Thank you! I had one more thought that would make this process a lot easier for me: you could simply add me as a user to your workspace temporarily. Then we do not have to jump through the hoops of providing access exceptions to user data on our side. To do this you'd have to add my user account [email removed] to your workspace in the workspace settings.
hi Pupil Labs team - I wanted to follow up on an old topic thread and see if anyone has thoughts: https://discord.com/channels/285728493612957698/633564003846717444/996770110750601267
At a high level, I am hoping to enable the OnePlus hotspot and interface with the RTSP API from a second battery-operated device without any public internet connection or a third device (e.g. a personal cell phone) serving as a router/hot spot. It sounds like the challenge might be related to DNS, in which case I'd be happy with a solution that just uses an IP address. Even device discovery isn't entirely necessary if the Companion app can report its IP.
From the message I was replying to, it sounds like other users may also have this need. Thank you for your consideration!
I responded in the linked thread π
So a stable head-pose would be equal to a stationary head-pose? I am not sure if I understood this correctly :). Would it be an issue for calculating saccades if the participants are moving their heads?
Let's assume the following eye movements are what is possible:
Fixations: when the eye "stands still" and looks at a fixed point.
Saccades: when the eye jumps between fixations.
Smooth pursuit: When the eye is smoothly following a moving object.
VOR Fixations: When the eye is moving to counter body movement in order to maintain a fixation.
Our fixation detection algorithm is good at detecting Fixations. Unlike most other algorithms, it is also decent at detecting VOR fixations, but not perfect. Smooth pursuit movements can sometimes also be detected as fixations by our algorithm, but this is rather inconsistent.
One could now argue that every eye movement that is not a Fixation, Smooth Pursuit or VOR Fixation must be a Saccade. If we could guarantee that Smooth Pursuits and VOR Fixations do not occur in the recording setting, we could argue that everything that is not a fixation must be a saccade.
In a setting where the head-pose is stationary (a much better word than "stable"!), there should be no VOR fixations. Unless there are stimuli triggering them, there should also be no smooth pursuits. So in summary, in a setting where the head-pose is stationary one can assume that the gaps between fixations are saccades. In other settings it is more complicated because VOR makes things a lot more dynamic and difficult to classify.
@user-4f01fd Thank you for the workspace invite! We believe we were able to locate the issue. It seems like the "camera intrinsic values" saved in the app are broken, which causes the live preview to not show gaze properly. In Pupil Cloud, the correct intrinsic values are used, so everything works fine there.
To confirm that this is the problem, we'd like to inspect what values the Companion app has saved. To facilitate this, could you please collect and share this data with us? The steps for doing so are as follows: 1) Connect the phone to a computer via USB cable. See also this guide on how this works: https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/transfer-recordings-via-usb.html
2) Navigate to Phone storage/Documents/Pupil Invisible. This folder should contain one or multiple folders whose names start with persist e.g. persist, persist.1, persist.2.
3) Share all of those folders with us.
Once we have confirmed that this is the issue, it will be easy to fix!
Hello Marc, I created a Drive folder to share with you containing the files you need. Can you please give me an email address to grant you access? Many thanks!
Hi guys, new to Pupil Labs and just bought a pair of Invisibles. Does anyone know if it is possible to use two recording units (OnePlus 8) with one pair of glasses? I am afraid the battery will run out if I just use one unit while doing fieldwork. Cheers, Jørn
Hi @user-dad280! Yes, that is possible and it's the recommended approach when concerned about battery runtime. With one full battery you should get ~3h of recording time. You can swap between Companion devices whenever you want to. You simply need to have the Companion app installed on both devices and be logged in on both.
In case it's relevant, using a USB-C hub you could also attach a power bank/power cable and the Pupil Invisible Glasses at the same time. I can give you more details if this is relevant.
Yes please, if you could shed some light on how to set up a USB-C hub as well that would be great. And also any tips on which USB-C hub would be great!
Okay, in principle the setup is easy: you attach a USB-C hub to the phone and then the Pupil Invisible Glasses are plugged into the hub, as well as the power source. There are a couple of caveats though:
1) Compatibility of USB-C hubs and Android is a bit weird and some hubs are incompatible for no apparent reason. If the description of the hub includes Android or OTG/On-The-Go it should be compatible.
2) The hub needs to be a "powered hub", i.e. it needs to support plugging in a power source and supplying that power to other connected devices. Note that the female USB-C socket you connect the power source to is usually not capable of transferring data as well. I.e. if you connect the Pupil Invisible Glasses using this socket, no data can be transferred; it's only good for power.
Most hubs that meet the above requirements should work. We have successfully tested e.g. this model: https://www.amazon.com/gp/product/B08HZ1GGH9/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
This makes a lot of sense! I think I get it now. Is there a way to still calculate saccades in dynamic, i.e. non-stationary, scenarios? Maybe via a plugin or a pipeline? Or even by setting events in the recordings, meaning saccades are detected manually? I am fairly new to eye tracking data and the Invisible glasses, so please forgive me the many questions. Best!
I am happy to answer any questions! Detecting saccades and fixations in dynamic settings is an open research problem. When considering the effects of VOR it is not even clear what the exact definition of these movements is. So for a precise answer you'd have to dive into the literature on how the detection algorithms work, but AFAIK there is no generally accepted method for detecting fixations and saccades in dynamic settings. Historically, research has focused on stationary settings.
Maybe someone else in the community here with more expertise on eye movement classification has additional input?
Manually annotating saccades using events is of course an option. It is labor intensive, but allows you to detect saccades in any setting using the definition you see fit. Event annotations you make in the project editor will be included in the Raw Data Exporter enrichment's download.
@user-ace7a4 There are various algorithms that can be used for the segmentation of fixations and saccades, but during my two years of work in eye tracking I have never seen a paper explicitly describing any algorithm as very accurate. This paper might help: https://www.researchgate.net/publication/220811146_Identifying_fixations_and_saccades_in_eye-tracking_protocols
@user-ace7a4 I myself take the raw trajectories and use an algorithm called I-VT for the segmentation. It also depends on what your use case requires; I use it for biometrics using eye movement data.
Thanks a lot for the input @user-e91538! The fixation detection algorithm used in Pupil Cloud is actually a derivative of the I-VT algorithm too! It adds head movement compensation, trying to make it more robust against VOR in dynamic settings. We are currently in the process of publishing a white paper on the algorithm and now I wish I could already link to it!
Thank you both so much! I will look into everything mentioned and will probably return!
Hi guys!
I have problems with blink detection. When there are quick vertical eye movements ("saccades"), they are often counted as blinks. I am guessing it is because the AI is not trained for these types of acrobatic movements. Am I right? Is there something I can do about it?
I attached a graph of the detected blinks (only read the black line, where 1 = blink, 0 = no blink detected) and the associated video is here: https://we.tl/t-WKDGYd7NMn. When you compare both, you see that the first blink is correctly detected, whereas the subsequent blinks are just eye movements that should not be categorized as blinks.
Hi @user-e0a93f! Unfortunately the blink detector does indeed not work well in highly dynamic settings. While doing sports it is common to get lots of false positives due to fast vertical eye movements.
The blink detection algorithm we use is a machine learning model, but a very simple one. Unfortunately there isn't anything you can do to improve its performance. Blink detection in a sports setting is really difficult and the algorithms that are currently available do not really suffice. We are working on novel algorithms in this area, but no release can be expected in the foreseeable future.
So I think the only thing you could do is switch to manual annotation or manual removal of the false positives.
ndsi is a whole different API and has
Great, please share it with [email removed]. Thanks!
Already sent; I hope you can fix this issue! Many thanks!
Thanks for sharing the folder! This does confirm the issue. The fix to the problem is for you to
1) delete the persist folder
2) restart the app by going to App Info (Tap and hold the app icon -> App Info) and select "force stop". Then open the app again.
This will trigger the app to create the folder again and re-download the correct intrinsic values. You may have to renew your login to the app, but besides that nothing should change, e.g. no recording data will be deleted.
We are back in action!!! Muchas gracias!!!
Thank you so much Marc, I will try to fix the problem today and get back to you with some news!
Ok, thanks. Are you planning on enlarging the database with sports data for AI training? If yes, is there a way we can share our data with you to help improve the product in the future?
Access to real eye tracking data in a sports context would absolutely be valuable for product development in many ways. If you would be interested in sharing your data with us or collaborating with us somehow in this regard, I'd be very happy to discuss this in more detail!
Regarding the blink detector specifically, only adding sports data to the training data would not suffice to make it robust in such a setting. The current blink detection algorithm is very simple and not really an AI: it's not a neural network, but a much more basic machine learning algorithm (an XGBoost classifier).
To get a more robust algorithm we'd have to develop a new algorithm that utilizes more complex machine learning and computer vision tools. A larger dataset would for sure help with this! But the blink detection problem is less trivial than one might think and is not easily solvable by "throwing enough data at the neural network", which sometimes is enough for other problems.
So in summary, a collaboration allowing us to access additional sports data would be super helpful, but even then a more robust algorithm would still take some time and effort to develop.