@user-8b0162 you will have to synchronize time between PsychoPy and Capture, and save the presentation timestamp for each video frame. Post-hoc, you can align the gaze and video frame timestamps for comparison
I've also been using Pupil with PsychoPy, although I'm just getting PsychoPy to send an annotation to Pupil Capture, so I have a start time for that annotation event which is common to both, as a way of synchronising between them. Precisely matching the gaze to specific frames is less important for my purpose, and perhaps there's a better way of synchronising.
@papr Thanks. I will look into this.
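Since the annotation-based sync came up: here is a minimal sketch of how PsychoPy (or any Python script) can ask a running Pupil Capture for its clock and publish an annotation over Pupil Remote. Port 50020 is Capture's default Pupil Remote port; the helper names are mine, and the message format follows the Pupil network API as I understand it, so treat this as a sketch rather than a drop-in solution.

```python
def make_annotation(label, timestamp, duration=0.0):
    """Build an annotation datum in the format Pupil Capture expects."""
    return {"topic": "annotation", "label": label,
            "timestamp": timestamp, "duration": duration}

def send_annotation(label, host="127.0.0.1", port=50020):
    """Publish an annotation to a running Pupil Capture via Pupil Remote."""
    # Imported here so make_annotation stays usable without pyzmq installed.
    import time
    import msgpack
    import zmq

    ctx = zmq.Context.instance()
    remote = ctx.socket(zmq.REQ)
    remote.connect(f"tcp://{host}:{port}")  # Pupil Remote's REQ port
    remote.send_string("t")                 # ask Capture for its current clock
    timestamp = float(remote.recv_string())
    remote.send_string("PUB_PORT")          # annotations go over the PUB socket
    pub_port = remote.recv_string()
    pub = ctx.socket(zmq.PUB)
    pub.connect(f"tcp://{host}:{pub_port}")
    time.sleep(0.3)  # give the PUB connection a moment before sending
    note = make_annotation(label, timestamp)
    pub.send_string(note["topic"], flags=zmq.SNDMORE)  # topic frame first
    pub.send(msgpack.packb(note, use_bin_type=True))   # then msgpack payload
```

If you record your PsychoPy clock reading at the same moment you send the annotation, the offset between the two time bases can be computed post-hoc.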
Hey all! New here, so apologies if this isn't in the right spot, but I'm looking to run Pupil from the source with Python, and I was wondering if it was possible to install/run it in Anaconda/Spyder?
@user-ccb076 You are in the right place. Technically, that would work, but you may run into some issues during installation, since the documentation assumes the dependencies are installed at the system level.
May I ask for your reason to run from source? Most of the stuff can be done by running from the bundled application.
Got it. I'll probably just download the virtual environment/python 3.6 then, that seems like it'll be simpler. And sure! My goal is less about running the Pupil Tracker and more that I'm pretty interested in taking a deep dive into the code, looking at reading/writing frames accurately and getting to understand how all of that works for my own research. I thought running from source would be the best option for that!
(also thanks for your response!)
Hi guys, what is the format of the output from the pupil labs tool?
@user-757e1e Checkout our export documentation https://docs.pupil-labs.com/core/software/pupil-player/#export
@papr Thank you!
Hello Everyone! I'm Tejas, a student at the University of California, San Diego. I'm currently working on a research project with an MIT professor, trying to track microsaccades (small involuntary eye movements) through a mobile phone camera.
To my knowledge, this has never been done before!
If possible, I would love to speak to you all about the technical limitations and considerations that I might need to address, e.g. resolution, sampling frequency, detection methods, etc.!
@user-8c3893 I am not an expert in this regard, but to my knowledge, microsaccades require a sampling frequency of at least 200 Hz and fairly high pupil detection accuracy in order to differentiate detection noise from microsaccades. Therefore, I believe it will be technically difficult to detect microsaccades with a mobile phone camera. But I would love to stand corrected, in case you are able to pull it off. 🙂 Personally, I would start with reading the microsaccade literature in order to find out more about their exact definition(s) and potential physiological limitations.
Thank you for your response @papr! I've been reading up a lot on microsaccades to understand their causes, uses and implications, and I would love to discuss that with you if you are interested.
Due to the issue with noise, we were considering using a VR headset (which has IR scanners), but ultimately went down the mobile phone route as it is far more scalable, so we can gather more data through it.
Since this project is still in its early stages, I was looking to get some advice on software considerations from an experienced person such as you! So far I've not found concrete information on how, and by how much, I can change the sampling frequency of a camera through a mobile app, and how that correlates with a change in resolution.
Right now, I'm looking to find the minimum sampling frequency and resolution required to get "good enough" microsaccade data. Later, I will focus on how to implement it and how to analyse the data to remove the noise.
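For the analysis side, a sketch of the velocity-threshold detector commonly used in the microsaccade literature (after Engbert & Kliegl) might help as a starting point. It assumes gaze positions in degrees sampled at a fixed rate; the multiplier λ = 6 is a conventional but tunable choice, and this is a simplified sketch, not a validated implementation.

```python
import numpy as np

def velocities(pos, fs):
    """Smoothed 5-point central-difference velocity of a 1D position trace."""
    v = np.zeros_like(pos)
    v[2:-2] = (pos[4:] + pos[3:-1] - pos[1:-3] - pos[:-4]) * (fs / 6.0)
    return v

def detect_microsaccades(x, y, fs, lam=6.0):
    """Flag samples whose 2D velocity exceeds a median-based noise threshold."""
    vx, vy = velocities(x, fs), velocities(y, fs)
    # Robust (median-based) estimate of the velocity noise per axis
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    # Elliptic threshold: lam sets how far above noise a movement must rise
    crit = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2
    return crit > 1.0
```

The key point for the sampling-rate question is in the velocity step: at low frame rates, a microsaccade spans only one or two samples, so its velocity peak gets smeared into the noise, which is why the literature pushes for 200 Hz and up.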
@user-8c3893 unfortunately, I won't be able to help you much with that as this server concentrates on head mounted eye tracking, specifically with Pupil Labs hardware and software. https://pupil-labs.com/
@papr Hi again! Code related question, so I'm putting it here this time. From what I've seen in the code, the pupil detector_3d_plugin alters its behavior if a serialized version of the 2d detector's output is provided to it, and uses that (plus a timestamp) to generate the 3d output, right?
@user-3cff0d correct
Okay, glad I understood that correctly. Since I'm modifying the 2d detection algorithm, I don't get a serialized output like the pupil-detectors' 2d detector provides. I'm getting around that by serializing the output I do have into the same form using cython, but I was wondering, might the upcoming feature you were talking about allowing custom pupil detection plugins also allow for a standard python dict or something similar to be passed into the 3d detector instead?
@user-3cff0d If you are running from source you can install https://github.com/pupil-labs/pye3d-detector/
And this detector uses a normal python 2d dict as input
Here's the corresponding Pupil plugin for the detector: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detector_plugins/pye3d_plugin.py
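For anyone following along: as far as I can tell from the Pupil source, the 2d datum handed to the 3d detector is a plain dict built around an ellipse fit, a confidence value, and a timestamp. The exact key set below is my reading of the pupil-detectors output, so double-check it against the plugin source before relying on it.

```python
def make_2d_datum(center, axes, angle, confidence, timestamp):
    """Assemble a 2d pupil datum as a plain dict.

    Key names follow the pupil-detectors 2d output as I read it in the
    Pupil source; verify against pupil_detector_plugins before relying on it.
    """
    return {
        "ellipse": {
            "center": center,   # (x, y) in eye-image pixels
            "axes": axes,       # (minor, major) axis lengths in pixels
            "angle": angle,     # ellipse rotation in degrees
        },
        "diameter": max(axes),      # convention: the major axis length
        "confidence": confidence,   # 0.0 (no pupil) .. 1.0 (certain)
        "timestamp": timestamp,     # in Pupil time, seconds
    }
```

A custom 2d detector could fill this dict directly from its own fit, without going through the serialized pupil-detectors format.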
I'll look into that, thank you!
Hi. Just want to ask if this software has an SDK that we can use to implement it in our Android app?
Are there any windows prebuilt wheels for the above pye3d detector? I'm having trouble building it due to some Eigen3 issues.
@user-3cff0d not yet, but we will work on that.
Thanks!
@user-3cff0d We have a pye3d branch that we use for automated deployment. It includes install instructions for eigen3 https://github.com/pupil-labs/pye3d-detector/blob/setup-travis/.travis/setup_win.sh#L70-L82
That's great, thanks so much!
Hi! I would like to use pupil labs code for a DIY kit that i am building using RPi NoiR and IR leds. Is it possible?
@user-40de8a Technically yes, but since the RPi uses the ARM architecture, you will have to compile a lot of modules from scratch. Also, its CPU is likely not powerful enough to run the pupil detection algorithm at full speed.
Oh I see, thanks! I'll look into it and see if I can compile these modules first 🙂
I am able to get the Pupil Labs software running on the RPi. Could you point me towards the exact Python file/method I could modify in order to get the image from the picamera instead of the Pupil Labs camera?
@papr Hi again, regarding the pye3d plugin, what sort of stage of development is it in? I've gotten some output from it that differs quite a bit from the output I'm getting from the default 3d plugin. Is it meant as a total replacement, or something more specialized, etc? If you're looking for more specific feedback, I can provide it with screenshots and such.
@user-3cff0d It will replace the current 3d detector. It is to be expected that it will provide different results as it includes refraction correction. https://www.researchgate.net/publication/333490770_A_fast_approach_to_refraction-aware_eye-model_fitting_and_gaze_prediction
This also means that the eyeball outlines are not comparable between the two eye models, as one is refraction corrected.
@user-3cff0d We will only apply refraction correction to 3d data, not back-projected 2d data: https://github.com/pupil-labs/pye3d-detector/pull/4
As a result, the 2d visualizations should be consistent with the current 3d detector.
Thanks for the update!
Hi, I am building Pupil from source. While installing pupil-apriltags via pip, it throws the error 'No matching distribution found for cmake==3.14.4'. When I tried to install that specific version of cmake via pip directly, it could not find the version either.
@user-40de8a Please try the following:
pip install [email removed]
@papr Thanks, it worked. However, when I tried installing pyndsi it throws up an error - 'error: command 'arm-linux-gnueabihf-gcc' failed with exit status 1'.
any idea how to resolve this? 🙂
@user-40de8a Could you share the complete logs with us? Also, please let us know the exact command(s) that you used to install it.
This is the complete log. The first line shows the command used. Thanks!
@user-40de8a It looks like you are trying to install pyuvc, not pyndsi, and libuvc is not correctly installed. Please read about the libuvc dependency here: https://github.com/pupil-labs/pyuvc#dependencies-linux