Hey guys, working on some blink rate code for Pupil Invisible. Has anyone produced something to plot/determine when in a recording the highest number of blinks occurred?
Hey, are you basing your work on Cloud's blink detection output?
No way I missed a feature in Pupil Cloud, right?
Yeah, the code is pulling from a raw data export - it's using blinks.csv and events.csv. Most recent developments are in #1047595857610022972, but I can't for the life of me get the code that either of us made to plot the way I need it to
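For reference, here's a minimal sliding-window sketch of what I'm aiming for. It assumes the blinks.csv column names from the Cloud raw data export ("start timestamp [ns]"), and the 30 s window and 1 s step are arbitrary choices:

```python
# Sketch: count blinks in sliding windows and find the window with the most blinks.
# Assumes blinks.csv uses the Cloud raw-data-export column "start timestamp [ns]".
import pandas as pd
import matplotlib.pyplot as plt

blinks = pd.read_csv("blinks.csv")
t0 = blinks["start timestamp [ns]"].min()
# blink onsets in seconds relative to the first blink
onsets_s = (blinks["start timestamp [ns]"] - t0) / 1e9

window_s, step_s = 30.0, 1.0  # window length and step are arbitrary
starts = [i * step_s for i in range(int(onsets_s.max() // step_s) + 1)]
counts = [((onsets_s >= s) & (onsets_s < s + window_s)).sum() for s in starts]

peak = max(range(len(counts)), key=counts.__getitem__)
print(f"Most blinks ({counts[peak]}) in the window starting at {starts[peak]:.0f} s")

plt.plot(starts, counts)
plt.xlabel("time since first blink [s]")
plt.ylabel(f"blinks per {window_s:.0f} s window")
plt.show()
```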
Hey hey, I am trying to install and test Pupil Core on macOS Ventura. The problem is that the system sees your hardware as a keyboard and I have no clue how to change this. Is there perhaps a plug-in or other solution that can help?
Hey, that is new to me. What does this "seen as keyboard" behaviour look like?
When plugged in, a driver installation window pops up with a message similar to 'we're trying to install your keyboard'.
And when I start the program, alternating messages pop up (see screenshot)
Please simply close the "install keyboard" window. On macOS, Capture needs to run with admin rights to access the camera. See https://github.com/pupil-labs/pupil/issues/2240 for details
Aahhhh ok! Thank you very much for the super fast response
Hey, one more question. Thanks to your instructions all the cameras are on, but now I have problems with the world calibration (I went through the Getting Started section). I believe it is Gazer3D that is causing this problem:
WORLD: Calibration Failed!
WORLD: Not sufficient pupil data available.
WORLD: Plugin Gazer3D failed to initialize
Do you have the eye windows open? Without them, pupil detection is not running and there would be no pupil data, which explains the error above.
If you mean Pupil Capture eye 0 and Pupil Capture eye 1, then yes. Also, the calibration process itself seems to be fine, because I'm getting the green dots. The error above pops up right after.
ok ok! got it! sorry
Please also see the best practices for fitting the 3d eye model (before calibrating): https://docs.pupil-labs.com/core/best-practices/#pye3d-model
Hello everyone, I wanted to use pupil_apriltags to detect AprilTags in real time in two Pupil Invisible video streams at the same time. However, I encountered a segmentation fault. I was curious as to what was causing the problem. Is pupil_apriltags using just a single thread to detect tags in frames?
Hey, yes, unfortunately the module is not thread-safe.
Hey @papr, what is the eye0_lookup.npy file used for? In case it is not available in a certain recording, is there any way to restore it?
It is a cache file that helps Pupil Player seek efficiently. Player generates it based on the timestamp files if it is not present.
Thanks
Is there any way to work around the issue? At the moment I'm using a mutex, but I would like to hear your ideas.
A typical approach would be to create a multiprocessing Pool (https://docs.python.org/3.8/library/multiprocessing.html#multiprocessing.pool.Pool) that executes the detection function. Make sure to make the import part of the executed function. Do not import it globally.
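For illustration, a rough sketch of that pattern (detector settings and frame handling here are placeholders, not a definitive implementation):

```python
# Sketch: run AprilTag detection in a multiprocessing Pool and import
# pupil_apriltags *inside* the worker, so each process gets its own detector.
import multiprocessing as mp

import numpy as np

_detector = None  # one detector instance per worker process


def detect_tags(gray_frame: np.ndarray):
    # Import inside the function so the module is loaded per process,
    # not in the parent (it is not thread-safe when shared).
    global _detector
    if _detector is None:
        from pupil_apriltags import Detector
        _detector = Detector(families="tag36h11")
    detections = _detector.detect(gray_frame)
    return [(d.tag_id, d.center.tolist()) for d in detections]


if __name__ == "__main__":
    # placeholder frames; in practice, pass grayscale uint8 frames from each stream
    frames = [np.zeros((400, 400), dtype=np.uint8) for _ in range(4)]
    with mp.Pool(processes=2) as pool:
        results = pool.map(detect_tags, frames)
    print(results)
```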
Hello @papr
I added surface markers to my monitor's display and defined a surface with the resolution of my monitor. How do I map the coordinates from the surfaces.pldata file onto a screenshot taken of that monitor?
Thanks
That worked! Thanks @papr
Is it possible to stream 'x_scaled' and 'y_scaled' through LSL?
Hi, in my research environment I'm not allowed to use the Pupil Cloud. So I would like to create a pipeline that creates a recording, does the fixation extraction via Pupil code, then does further processing with my own code. As far as I understand if you want to have the Invisible recordings on your computer you should use Pupil Capture instead of the Companion App to do the recordings. But using Pupil Capture I can't automatically start and stop recordings via the realtime API. Any ideas how to resolve this?
Hey, unfortunately, due to the eye camera position, Pupil Invisible glasses are not meant to be used with the Pupil Core pipeline. The pupil detection just does not work well with it.
Got it, then I'm only able to do recordings via Companion and there is currently no automatable way to get recordings from the Companion device to a computer. Would it be feasible to create API methods to export recordings via the realtime API? Or maybe even transfer the recording via the API?
You could record the realtime streamed data yourself. But I do not recommend it. There might be data loss due to network bandwidth issues. Unfortunately, there is no good way to automate the local export of recordings.
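If you do want to go that route, recording the stream yourself could look roughly like the sketch below, using the pupil-labs-realtime-api simple interface. The gaze field names are assumed from its docs and may need adjusting, and again, samples can be lost over the network:

```python
# Sketch: receive gaze from the Companion device over the realtime API
# and write it to a CSV yourself. Not a substitute for the on-device recording.
import csv
import time

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # finds the Companion phone on the local network

with open("gaze_stream.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp_unix_seconds", "x_px", "y_px", "worn"])
    t_end = time.time() + 10.0  # record for 10 s as a demo
    while time.time() < t_end:
        gaze = device.receive_gaze_datum()  # blocks until the next sample arrives
        writer.writerow([gaze.timestamp_unix_seconds, gaze.x, gaze.y, gaze.worn])

device.close()
```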
Hello. I'm working on some research which requires me to read data from the surface tracker plugin API. I have a few questions surrounding the rate at which data comes in and the amount of data. Specifically, I need to check whether or not the user is looking at a surface. In each message, there are 6-10 boolean values that convey this. Is there any way to standardize the number of booleans per message? Furthermore, the rate at which the data comes in seems pretty variable. Is there any way to standardize the rate at which messages are sent? Finally, is there any way to speed up the rate of messages, or is the best option using a computer with more processing power?
Okay, also, as far as I know (I didn't test it), it is only the scene video that is streamed. Making the Export function of the Companion app available over the realtime API is also not possible for the moment, right? Then I could track changes in the "Pupil Invisible Export" folder of the Companion device
That won't be possible. Instead we are working towards making Cloud available for more use cases, even those with extraordinary data privacy requirements.
Okay, thanks for your input
Hello, could you help me? I'm building a scan path basically using the Pupil Labs example, but I have a problem: the generated plots always end up centered on the image
Hey, which file do you read the data from?
Also, not sure where I read your question, but you will need to multiply the normalized surface positions by the image size to get the corresponding pixel locations
I'm reading from fixations.csv, and I'm already multiplying by the size of the image
fixations.csv contains fixations in scene camera coordinates, not surface coordinates. Please check the surfaces export folder for surface-mapped data
So is it fixations_on_surface or gaze_positions_on_surface?
Gaze contains all samples, fixations only those that belong to a fixation. But yes, those are the correct files
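To tie it together, a minimal sketch of the mapping step (column names norm_pos_x / norm_pos_y / on_surf are assumed from the Player surface export, and the file/surface name is a placeholder). Surface coordinates have their origin at the bottom left while image pixels start at the top left, hence the y flip:

```python
# Sketch: draw surface-mapped fixations onto a screenshot of the monitor.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import pandas as pd

img = mpimg.imread("screenshot.png")
height, width = img.shape[:2]

fix = pd.read_csv("exports/000/surfaces/fixations_on_surface_Screen.csv")
fix = fix[fix["on_surf"] == True]  # keep only fixations that landed on the surface

# surface coords: (0, 0) is bottom-left, so flip y to get image pixel coords
x_px = fix["norm_pos_x"] * width
y_px = (1.0 - fix["norm_pos_y"]) * height

plt.imshow(img)
plt.scatter(x_px, y_px, s=80, c="red", alpha=0.5)
plt.plot(x_px, y_px, c="red", alpha=0.3)  # simple scan path connecting fixations
plt.axis("off")
plt.show()
```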