Hello everyone, I am currently working on my engineering thesis and I would like to create a Pupil Player plugin. To do this, I started by installing the necessary dependencies as described in the documentation. Being on macOS, I followed these instructions: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-macos.md. However, I am on a MacBook Pro with an M1 Max chip running macOS 12.4. I was able to install the Homebrew dependencies easily (cmake, pkg-config, libjpeg-turbo, etc.), but I run into problems with the pip install -r requirements.txt command (specifically when installing pupil-detectors). Has anyone already had the opportunity to work with Pupil on an M1, or do you know of dependencies that are not compatible with ARM architectures or the latest version of macOS?
Hi, pyav does not build on ARM. What I recommend is: 1. Install the Intel version of Homebrew (via Rosetta) 2. Install an Intel-version Python
With that setup you should be able to follow the instructions
Thanks for your advice, but I still have problems with av and uvc. Can you help me? Here is the error:
Hi, I was looking into head pose tracking with Pupil Capture, and I was wondering, does Pupil Capture rely on an external camera facing the person for head pose tracking?
Hi @user-1bda7f Yes, it uses the outward-facing world camera (the camera capturing the wearer/subject's FOV) plus fiducial markers in the scene to estimate pose. If you haven't already taken a look at this section of the docs, please check it out: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
For pyuvc, please install this branch https://github.com/pupil-labs/pyuvc/tree/no-kernel-detach-older-macos For pyav, what is the output of
brew --prefix ffmpeg
/usr/local/opt/ffmpeg
Additionally, to build pyuvc, you might need to add this prefix:
PKG_CONFIG_PATH=/opt/homebrew/opt/jpeg-turbo/lib/pkgconfig:~/work/libuvc-build/lib/pkgconfig
before the pip command. Note that you will need to adapt the libuvc path according to your installation
Thanks for your help, pyuvc is now installed 😀
Does that folder contain files?
yes
Hey @papr, about the error with pyav, do you think the problem is my version of FFmpeg or its installation folder? It looks like it can't find FFmpeg.
Hey, I am looking at that as we speak.
Please install the official pyav for now
pip install av
and use this Pupil branch https://github.com/pupil-labs/pupil/tree/official_pyav
Known issue: Playback in Player can be laggy/slow
OK, I did what you said, but I have a new error when running python3 main.py player (same error on master and official_pyav):
ImportError: ('Unable to load OpenGL library', "dlopen(OpenGL, 0x000A): tried: 'OpenGL' (no such file), '/usr/local/lib/OpenGL' (no such file), '/usr/lib/OpenGL' (no such file), '/Users/nathanbourquenoud/Documents/HEIAFR/TB/pupil/pupil_src/OpenGL' (no such file)", 'OpenGL', None)
Which python version are you using?
3.6.15
You might need to download this file and import it at the beginning of each process https://github.com/pupil-labs/pupil/blob/master/deployment/find_opengl_bigsur.py
OK... where do I have to import it exactly? I'm not sure I understand.
Save it into the pupil_src/launchables folder. In player.py, add import find_opengl_bigsur at line 34.
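For clarity, the top of player.py would then look roughly like this (a sketch; the exact line number and surrounding imports may differ in your checkout):

```python
# pupil_src/launchables/player.py (sketch, surrounding imports abridged)
# ... existing imports ...
import find_opengl_bigsur  # noqa: F401  # patches the OpenGL library lookup before PyOpenGL is imported
# ... rest of the module ...
```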
same error
OK, I think I have fixed this error with these instructions: https://github.com/PixarAnimationStudios/USD/issues/1372
But there is a new error...:
File "/Users/nathanbourquenoud/Documents/HEIAFR/TB/pupil-official-pyav/pupil_src/shared_modules/video_capture/file_backend.py", line 21, in <module>
    import av
File "/Users/nathanbourquenoud/.pyenv/versions/3.6.15/lib/python3.6/site-packages/av/__init__.py", line 9, in <module>
    from av._core import time_base, pyav_version as __version__
ImportError: dlopen(/Users/nathanbourquenoud/.pyenv/versions/3.6.15/lib/python3.6/site-packages/av/_core.cpython-36m-darwin.so, 0x0002): symbol not found in flat namespace '_avdevice_configuration'
Could you please run pip download av
and share the downloaded whl file with us?
I have not seen this issue before
It's probably because I'm using the latest version of pyav?
I think it might be related to using the intel python. But I am not sure
OK, that means you did not get the precompiled one, and it used your local Homebrew ffmpeg installation. Do you have both the Intel and the native Homebrew installed?
Yes, I have both, but I have configured my .zshrc to create an alias like this:
```sh
# rosetta terminal setup
if [ $(arch) = "i386" ]; then
    alias brew86="/usr/local/bin/brew"
fi
```
and I use the brew86 command instead of brew when I want the Intel version
@user-c661c3 Then the compiled version might be linking against the wrong ffmpeg. Try:
PKG_CONFIG_PATH="/usr/local/opt/ffmpeg/lib/pkgconfig" pip install av
https://gist.github.com/NaYthanB/e81a9ab21abaebeda19585e264779579
Right! The official pyav no longer supports Python 3.6. 😕
This issue does not happen on Python 3.7 or newer
OK, I will try it.
Hi! Is there a location where I can download a previous version of Pupil Capture/Player? I'm having trouble getting a third-party plugin to work with the current version of Pupil.
This is typical of all GitHub repos: the "Releases" section should have a history of prior releases.
Hi, sorry, I've asked here before, but I'm trying really hard to access all the data I can outside of the Pupil Capture app, with no success at all. My goal is to use the Pupil glasses in real time to track the gaze direction of a person in a 3D environment and use it to communicate with physical robots. It would also be great if I could stream the video capture data to my program (TouchDesigner) as well.
I've read through the documentation and tried setting up an environment to read/capture the data in Spyder. I'm having no success. I'm getting asyncio errors (which I thought were Jupyter-specific). I'm also getting package dependency warnings out the wazoo. I've tried but couldn't figure out how to use it with Jupyter. If I try using the site-packages of this env in TouchDesigner directly, it also freezes the program. I see the different methods of streaming as well (from here: https://pupil-labs-realtime-api.readthedocs.io/en/stable/guides/under-the-hood.html) but have no idea how to get started.
I'm not a very knowledgeable Python guy, but I am trying to learn. Is there any way I can get some help (a step-by-step guide) with accessing the live data through some kind of communication protocol? I don't mind how the data is extracted or how I need to set up my Python env to work, but I really need to be able to obtain data outside of the Capture app so I can use it for my research (recordings and CSV files are unusable in my situation).
https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/introduction/ <- this is the sample code I'm having trouble with.
It also says to download the code from the guide here: https://github.com/pupil-labs/pupil-docs/tree/master/src/invisible/how-tos/integrate-with-the-real-time-api/introduction/ but there is no code to download?
The README.ipynb is a Jupyter notebook that contains the tutorial code and can be executed locally. Please use the official Jupyter notebook [1] instead of Spyder. While it should be compatible, nest_asyncio is just a workaround for a specific issue and it was likely not tested in Spyder.
[1] https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html
Alternatively, I would recommend running the examples from a normal Python file without nest_asyncio. You can find them here: https://pupil-labs-realtime-api.readthedocs.io/en/latest/examples/simple.html
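To give an idea of what those examples look like, here is a minimal sketch using the simple (blocking) API. It assumes the pupil-labs-realtime-api package is installed and a Pupil Invisible Companion device is reachable on the local network; see the linked examples page for the authoritative version.

```python
# Minimal sketch based on the "simple" examples linked above (untested).
# Requires: pip install pupil-labs-realtime-api
from pupil_labs.realtime_api.simple import discover_one_device

# Blocks until a Pupil Invisible Companion device is found on the network
device = discover_one_device()
print(f"Connected to {device}")

# Receive a single gaze datum (blocking call, no asyncio/nest_asyncio needed)
gaze = device.receive_gaze_datum()
print(gaze.x, gaze.y, gaze.timestamp_unix_seconds)

device.close()
```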
If I try to combat the asyncio errors with the nest_asyncio package, using the same code from here (https://pupil-labs-realtime-api.readthedocs.io/en/stable/examples/simple.html#remote-control-devices) instead, I get this:
[SpyderKernelApp] ERROR | Exception in message handler:
Traceback (most recent call last):
File "C:\Users\User\anaconda3\envs\toio_td_37\lib\site-packages\spyder_kernels\comms\frontendcomm.py", line 164, in poll_one
asyncio.run(handler(out_stream, ident, msg))
File "C:\Users\User\anaconda3\envs\toio_td_37\lib\site-packages\nest_asyncio.py", line 33, in run
task = asyncio.ensure_future(main)
File "C:\Users\User\anaconda3\envs\toio_td_37\lib\asyncio\tasks.py", line 592, in ensure_future
raise TypeError('An asyncio.Future, a coroutine or an awaitable is '
TypeError: An asyncio.Future, a coroutine or an awaitable is required
[the same SpyderKernelApp traceback repeats]
And I've also set up a TCP listener on localhost:50020, which I believe is actively streaming from the Capture app, just in case anything comes through. But the only thing I ever receive is an occasional '.' or '...' or '....'; no legible data.
@user-78b456 Hey, I am not in the office today, so I cannot give a detailed answer. But please be aware that there are two different realtime APIs, each designed to work with a different product:
Pupil Capture is designed to work with the Pupil Core headset. Its realtime API is available via port 50020 (see the sketch further below).
Pupil Invisible Companion is designed to work with Pupil Invisible glasses. Its API is what you have been linking above.
Note that the two programs and APIs are not compatible and not meant to be used together.
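For reference, here is a minimal sketch of subscribing to gaze data from Pupil Capture's Network API on port 50020, based on the pupil-helpers examples. It assumes Pupil Capture is running locally with the default Pupil Remote settings.

```python
# Subscribe to gaze data from Pupil Capture via its ZMQ-based Network API.
# Requires: pip install pyzmq msgpack
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the subscription port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to all gaze topics
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.unpackb(payload)
    print(topic.decode(), gaze["norm_pos"], gaze["confidence"])
```

From a script like this you could forward the values to TouchDesigner via OSC or any other protocol you prefer.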
I will be able to look into the issues tomorrow
Thank you immensely for the help. I didn't realize I was switching between the two platforms. I am now able to access, and am starting to understand, most of the data being sent, using this format: https://docs.pupil-labs.com/developer/core/overview/#timing-data-conventions
Could you quickly clarify which of the two hardware models you are using?
I have a new question about the gaze points.
I believe I understand gaze normals 3d -> the unit vector giving the direction of your line of sight. Used for things like line-of-sight lasers, right?
Gaze point 3d -> basically a point mapped onto the "sphere" around the head of the user. Maybe it would be the point you are looking at if you were in the center of a big ball.
But to my knowledge, the Pupil Core headset doesn't(?) have accelerometers? This may be more of a question for VR, but say I calibrate looking forward. Works fine. I see my gaze normal in my virtual software; it looks pretty accurate. Now I tilt my head upward and look at a point on the ground. How would I adjust that "laser" to accommodate my head tilting up?
Sorry, yes, I didn't realize there was a Pupil Invisible. I've been using the Pupil Core headset, and here's the script I was able to run in Spyder (Python 3.7) to send the received data to my TouchDesigner program via OSC.
OK, thank you for clarifying. Does your TouchDesigner stimulus run on a desktop-based screen in front of the subject?
the "stimulus" part is going over my head, but yes... touchdesigner i'm programming on my desktop in front of me.
However, in my end goal, the user will be standing in, let's say, "the goalie box on a soccer field". I want to calibrate them from the center of the goalie box, and then, in a virtual environment, shoot lasers from their eyes onto wherever they look on the field.
Let's try to clarify the exact target use case first, then I will be able to give concrete suggestions. 🙂
Will the goalie box be in a virtual environment or will it be in the real world?
"stimulus" as a general term for something that is presented to the subject
Ahh, understood. There will be no screen-based stimulus for them; however, I'm not sure how to calibrate in an open field like that if there is no desktop-based stimulus.
I do have the single calibration marker, but would I need 5?
Real world. The user will be on a real field; however, I will simulate/mirror the real world in a virtual environment. For example, if I see that they are looking at the other goal, based on some virtual "collisions", then that real-world goal might light up.
How do you track the real world? And what do you track in it?
So the real world will mostly be a known fixed-size field. It won't require much tracking, although there will be many moving things on the field: 40+ robots, trapdoors, etc., coming from the walls.
Basically, my objective is to place the wearer at a known position on the edge of the field and calibrate them from this angle. Then, based on the gaze "lasers", divide the field into a grid in the VR space and actuate things on the field by "closest match". Hopefully this makes sense; if it helps, I can draw the concept.
Ok, I think I got it. The most important thing to understand is that gaze is calibrated and estimated in the scene camera coordinate system which is fixed to the subject's head. You will need to perform an additional mapping step from scene camera to real world coordinates.
Built-in features that do something similar to what you need:
- https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking maps gaze onto a flat area of interest, e.g. your screen
- https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking tracks the scene camera in a virtual coordinate system and allows mapping gaze into it
Both approaches require AprilTag markers for the tracking and are therefore not quite feasible for you.
What might work is an object detection algorithm that works on the 2d scene image to identify the object of interest. But this does not give you the 3d gaze.
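If you go the 2D route, here is a small sketch of how the mapping could look. The only Pupil-specific piece is the norm_pos convention (normalized coordinates with the origin at the bottom left); the detector, frame size, and bounding box are hypothetical placeholders.

```python
# Convert a 2D gaze norm_pos to pixel coordinates in the scene image and
# test it against a detected object's bounding box.
def denormalize(norm_pos, frame_width, frame_height):
    x, y = norm_pos
    # norm_pos origin is bottom-left; image pixel origin is top-left
    return int(x * frame_width), int((1.0 - y) * frame_height)

# Hypothetical usage with a 1280x720 scene frame and a bounding box
# (left, top, right, bottom) returned by some object detector:
px, py = denormalize((0.62, 0.41), 1280, 720)
left, top, right, bottom = 400, 200, 900, 600
looking_at_object = left <= px <= right and top <= py <= bottom
print(looking_at_object)
```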
Okay, I'll take some time to process this. Thanks for the head start!
Hi, I'm currently working with Pupil Capture and Player for head pose tracking, and now have .pldata files which I can't seem to read. According to the docs, msgpack can be used to read the files, but I'm not sure how. Is there a script, or any docs, on how to read the .pldata files?
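For what it's worth, a minimal sketch of reading a .pldata file with the msgpack package, assuming the layout used by Pupil's file_methods.py (a stream of (topic, payload) pairs where the payload is itself a msgpack-encoded dict). The paths below are placeholders, and the per-datum timestamps live in the companion <topic>_timestamps.npy file.

```python
# Read a Pupil .pldata file into a list of (topic, datum) pairs.
# Requires: pip install msgpack numpy
import msgpack
import numpy as np

def load_pldata(path):
    data = []
    with open(path, "rb") as fh:
        # The file is a stream of msgpack-encoded (topic, payload) tuples
        for topic, payload in msgpack.Unpacker(fh, raw=False, use_list=False):
            datum = msgpack.unpackb(payload, raw=False)
            data.append((topic, datum))
    return data

poses = load_pldata("recording/head_pose_tracker_poses.pldata")  # placeholder path
timestamps = np.load("recording/head_pose_tracker_poses_timestamps.npy")  # matching timestamps
print(len(poses), poses[0][0])
```

Alternatively, the load_pldata_file helper in pupil_src/shared_modules/file_methods.py in the Pupil repository does the same thing and can be reused directly.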
In Pupil Player, is there any way to reduce the Polyline display to a single point? Meaning, rather than drawing a polyline across 'x' amount of gaze points, only the current gaze point is rendered? The practical result would be that the polyline is just a circle rather than a line.
Hi, to only display the gaze point that is closest to the current frame, please use this plugin https://gist.github.com/papr/d364b379b1b311fdd185bc383f43ef95 In addition, you will need to disable the Vis Polyline plugin to remove the line.
perfect, thanks!
Hi, how exactly do I download the surface tracker plugin?
Given that you are writing in the 💻 software-dev channel, may I ask what you are working on?
Hey, there is no need to download it. This particular plugin is built into the Core applications.
We're trying to track pupil movement as a participant works on pegboard exercises, and so we're trying to define the parameters of the pegboard. Where can the plugin be found? I thought the plugin was represented by an "S" on the left-hand side between the R and the A, although our version only has C, T, and R.
You need to enable the plugin in the Plugin Manager menu on the right.
Ah ok great, thank you!
Hi guys, this has probably been asked hundreds of times: is there a MATLAB function to read .pldata? I only found one for .npy.
I don't have a ready-to-use solution, but technically it should be possible to reimplement the Python version in MATLAB with one of the several msgpack implementations for MATLAB.
Hi. I want to stream the videos of the user's eyes to a UI that runs in a browser. Is there a different way to access the eye frames than via ZMQ (https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py)? Do you have any experience regarding the delay of frames published via ZMQ?
Hey, yeah, zmq is not websocket compatible, i.e. there is no way to stream the video directly to the browser. The workaround would be to build a relay that receives the data via zmq and forwards it via websockets. I am not aware of a tool that does that already. But with a library like https://websockets.readthedocs.io/en/stable/ such a script should not be difficult to develop.
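As a starting point, here is a rough sketch of such a relay using pyzmq's asyncio support and the websockets library. It assumes Pupil Capture runs locally with the Frame Publisher plugin enabled and the default Pupil Remote port 50020; how the browser decodes the forwarded frame bytes is left out, and none of this is an official tool.

```python
# ZMQ -> WebSocket relay for Pupil Capture eye frames (sketch, untested).
# Requires: pip install pyzmq msgpack websockets
import asyncio
import msgpack
import zmq
import zmq.asyncio
import websockets

CAPTURE_HOST = "127.0.0.1"
REQ_PORT = 50020  # default Pupil Remote port

ctx = zmq.asyncio.Context()

async def relay(websocket):
    # Ask Pupil Remote for the SUB port of the IPC backbone
    req = ctx.socket(zmq.REQ)
    req.connect(f"tcp://{CAPTURE_HOST}:{REQ_PORT}")
    await req.send_string("SUB_PORT")
    sub_port = await req.recv_string()
    req.close()

    # Subscribe to eye camera frames (topics frame.eye.0 / frame.eye.1)
    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://{CAPTURE_HOST}:{sub_port}")
    sub.subscribe("frame.eye")

    try:
        while True:
            topic, payload, *frames = await sub.recv_multipart()
            meta = msgpack.unpackb(payload)
            # Forward metadata as text and the raw image bytes as binary
            await websocket.send(str(meta))
            if frames:
                await websocket.send(frames[0])
    finally:
        sub.close()

async def main():
    # recent versions of websockets accept a handler with a single argument
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```

The image payload format (raw BGR or JPEG) depends on the Frame Publisher settings, so the browser side needs to decode accordingly.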