Hi everyone,
I want to receive the original frames from the world camera alongside pupil data (3D gaze data) in my application. What is the best way to interface with the Pupil system to get this data in my C++ code?
cc @papr
(I can also update minor changes needed for compiling on fedora)
@user-4fb664 Hey, subscribe to the `frame.world` and `gaze.3d` topics. See these Python examples:
- https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
- https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
These are analogous to a possible C++ implementation.
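To make the flow in those two helpers concrete, here is a rough Python sketch of the same pattern before porting it to C++. It assumes Pupil Remote is listening on its default port 50020, and the payload keys (`format`, `gaze_point_3d`) are taken from the helper scripts — verify them against your Pupil version:

```python
def topic_matches(topic, subscriptions):
    # ZMQ SUBSCRIBE filters are plain prefix matches on the topic frame,
    # so subscribing to "gaze.3d" also delivers e.g. "gaze.3d.01."
    return any(topic.startswith(sub) for sub in subscriptions)


def stream_frames_and_gaze(host="127.0.0.1", remote_port=50020):
    # Third-party deps: pyzmq and msgpack (pip install pyzmq msgpack)
    import msgpack
    import zmq

    ctx = zmq.Context.instance()

    # Ask Pupil Remote for the port of the IPC backbone's SUB socket
    req = ctx.socket(zmq.REQ)
    req.connect(f"tcp://{host}:{remote_port}")
    req.send_string("SUB_PORT")
    sub_port = req.recv_string()

    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://{host}:{sub_port}")
    for topic in ("frame.world", "gaze.3d"):
        sub.setsockopt_string(zmq.SUBSCRIBE, topic)

    while True:
        # Messages are multipart: topic, msgpack payload, and (for frame
        # topics) one extra part holding the raw image buffer
        parts = sub.recv_multipart()
        topic = parts[0].decode()
        payload = msgpack.loads(parts[1], raw=False)
        if topic.startswith("frame.world"):
            image_bytes = parts[2]  # pixel layout described by payload["format"]
            print(topic, payload.get("format"), len(image_bytes))
        else:
            print(topic, payload.get("gaze_point_3d"))
```

A C++ port would use the same two libraries via their C/C++ bindings (libzmq/cppzmq and msgpack-c); the message framing is identical.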
Hi, I am working on using Pupil Capture over a network to solve import issues. Is there a way to obtain the fixation point over a network?
Hi @user-b5c63f on your remote machine use pupil-helper script [1] and subscribe to fixations
sub.setsockopt_string(zmq.SUBSCRIBE, 'fixations')
[1] https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
Hi there, I'm going to use batch exporting. I want to extract more features than just the five provided, such as the X, Y, and Z locations. May I ask you to help me, please?
Hi @user-fbd5db, the "batch exporter" plugin is no longer available in Pupil Player. What you can do is open recordings individually, export the csv data, and then run a script to consume the exported data, as seen in [1]
@wrp Thanks, I have already run these two files and got the csv export as a batch, but not for all features. There are five features in the .csv file. I would like to get the other features by changing the code. I have attached the code that I used (which is available online) and my .csv output, which is incomplete.
source files
batch export but not complete for all features
could you please help me?
@wrp thanks 🙂
@user-fbd5db I or my team will take a look at your code later today, thanks for sharing
@wrp Thanks
Hi, how can I get the duration of the whole recording in seconds? Is it somewhere in g_pool?
@user-a6cc45
duration = g_pool.timestamps[-1] - g_pool.timestamps[0]
@papr Thanks!!! 🙂
Hi, I'm trying to get the fixation point in real-time for a plugin. Is there a function for this, please? 🙂
Hi, I have a question: when Pupil Service is running in the background, can I communicate with it via my custom C# application?
@user-b5c63f yes, you can get fixation position data in real-time by running a script and using the network-based API, as I noted yesterday: on your remote machine, use the pupil-helpers script [1] and subscribe to fixations
sub.setsockopt_string(zmq.SUBSCRIBE, 'fixations')
[1] https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
If you have follow-up questions about how this works, please feel free to list them here 😸
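As a concrete follow-up: each fixation message decodes to a dict, and pulling out the usual fields might look like the sketch below. The key names (`norm_pos`, `duration`, `confidence`) follow the Pupil docs, but treat them as assumptions and check `datum.keys()` against your version:

```python
def fixation_summary(datum):
    # norm_pos is (x, y) in normalized world-image coordinates:
    # (0, 0) bottom-left, (1, 1) top-right
    x, y = datum["norm_pos"]
    return {
        "x": x,
        "y": y,
        "duration_ms": datum.get("duration"),
        "confidence": datum.get("confidence"),
    }
```
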
@user-09f6c7 You can communicate with Pupil Capture and/or Pupil Service in real-time over the network. Please see the following: 1. https://docs.pupil-labs.com/#interprocess-and-network-communication 2. https://github.com/pupil-labs/pupil-helpers/tree/master/python (I know this is not C#, but the minimal examples should be helpful for you to understand the communication pattern using messagepack + zmq)
@wrp Thank you for the info. I'm trying it out.
@wrp thank you 🙂 this time I'm trying to do it locally and not over a network
you can use localhost - what is the application that you are trying to develop/what is the goal?
I'm trying to create a plugin similar to the calibration, where there are targets. Each time the user fixates on a target (fixationPoint == targetPosition), a new target automatically appears and the old one disappears
@papr Is the java/android Zyre implementation used in Pupil Mobile available anywhere?
@user-09f6c7 Take a look at the hmd_eyes plugin for Unity (C#), this might be some code you're looking for.
@user-54376c Oh I got it! Thanks!
oh, I just realized there's a software dev channel as well: is there any working CLI for Pupil Capture that runs on Linux, and if not, what's the setup process for starting a recording?
@user-8e47a4 you might want to see: https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_remote_control.py - while there is no CLI (headless) version of Pupil software, you can communicate with Pupil Capture, Player, and Service via messages over the network. Would this work for you?
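The remote-control protocol in that helper is just short command strings sent over a zmq REQ socket. A minimal sketch (the single-character commands are taken from `pupil_remote_control.py`; the function names here are my own illustration):

```python
def remote_command(action, session_name=None):
    # Single-character Pupil Remote commands, per pupil_remote_control.py
    commands = {
        "start_recording": "R",   # optionally "R <session name>"
        "stop_recording": "r",
        "start_calibration": "C",
        "stop_calibration": "c",
        "get_pupil_time": "t",
    }
    cmd = commands[action]
    if action == "start_recording" and session_name:
        cmd += f" {session_name}"
    return cmd


def send_command(cmd, host="127.0.0.1", port=50020):
    # Third-party dep: pyzmq. REQ/REP means every send must be
    # followed by a recv before the next send.
    import zmq

    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REQ)
    sock.connect(f"tcp://{host}:{port}")
    sock.send_string(cmd)
    return sock.recv_string()  # acknowledgement string from Pupil Remote
```

For example, `send_command(remote_command("start_recording", "trial_01"))` would start a recording named `trial_01` on the machine running Capture.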
@wrp Yes, it's a very good first step, thank you! I'll have to test if the pi can handle all that, we planned to just capture the data and do all calculations offline
@user-8e47a4 I am pretty certain that the pi is not powerful enough to do the pupil detection in realtime for the full 200Hz frame rate
@papr yes, I'd doubt that too, but wouldn't simply forwarding the recorded frames via zmq do? and then detect the pupils later?
@user-8e47a4 either that, or if you have a fast enough sd card, you could save the recording on the pi. The advantage is that this will create a valid Player recording. If you forward the frames and record it remotely, you will have to make additions to make it compatible with Player.
@papr wait, there already is an implementation of simply recording to disk and then calculating everything later? Then switching out the save-to-disk part for some networking should be an easy task, right?
@user-8e47a4 Capture's task is two-fold: publish data via zmq, and save the data to disk during a recording. So technically, yes, you could write a simple script that listens to the data stream and saves that data itself. As I said, you just need to record the data in a specific format such that it is compatible with Player
@user-8e47a4 Are you aware of Pupil Mobile? It looks like it already does what you want to do.
@papr no I was not. Interesting. I'll have to check with my teammates if we can just use that. Problem is, we need to synchronize measurements of multiple biometric sensors and be able to start and stop together. However, since all pupil measurements are neatly time stamped anyway, that might not be a problem at all.
@papr Capture/Mobile come with a time sync protocol, in case you want to sync multiple sensors. https://github.com/pupil-labs/pupil-helpers/tree/master/network_time_sync
@papr wow, that's great! For our first test run in 2 weeks we plan to just do a simple recording and networking test run. So no focus on synchronization yet. I'll check out pupil mobile and otherwise I'll hack the save to disk function and add a networking part there. Thanks for the help and pointers to resources, you guys really have outstanding support!
Where can I find documentation about the individual data processing and writing steps? As far as I can see, the data producers only serialize and send their frames to the IPC, which is just a wrapper around the functionality provided by zmq_tools.py. Since for my use case I only need data grabbing from the sensors and format-compliant storage, splitting the code across two devices makes sense at the inter-process communication step. However, making out the data flow isn't as straightforward as I'd hoped.
Can anyone please point me towards the relevant files for data producers and capture+storage for 200Hz binocular + world capture and proper processing of frames and timestamps?
[email removed] I or my team will take a look at your code later today, thanks for sharing
Please let me know if there are any updates on the code. I aim to do batch exporting. Thanks.
I'm trying to use a different world camera (Logitech Brio 4k) in Pupil, which works fine in [email removed] When I try to increase the resolution (1080p, 1440p, 4k), the camera image displayed in Pupil Capture just freezes, and it keeps logging these messages:
world - [INFO] video_capture.uvc_backend: Hardware timestamps not supported for Logitech BRIO. Using software timestamps.
world - [INFO] camera_models: No user calibration found for camera Logitech BRIO at resolution (1600, 896)
world - [INFO] camera_models: No pre-recorded calibration available
world - [WARNING] camera_models: Loading dummy calibration
world - [WARNING] uvc: Could not set Value. 'Absolute Exposure Time'.
world - [WARNING] uvc: Could not set Value. 'White Balance temperature'.
When I lower the fps I can even use the 1080p mode. The camera is connected via USB 3, so it doesn't seem like a bandwidth issue.
As it turns out, your libuvc probably just doesn't support it properly. There is a patch for libuvc that adds support for newer camera models (including the 4k Logitech Brio). Did you consider integrating that patch, or has anyone successfully used the Logitech Brio 4k at [email removed] or higher resolution/fps?
@user-54376c Could you please create a corresponding PR? Then we can see if we can integrate the changes
Hi, how do I obtain the current gaze position or fixation from the events dict?
@user-b5c63f If there is gaze or fixation data, then it is placed under the `gaze` and `fixation` keys, if I remember correctly. You can always use the dict's `keys()` method to list all entries
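Inside a plugin's `recent_events(events)` callback, that lookup might be sketched like this (a minimal illustration; whether the fixation key is singular or plural depends on your Pupil version, so check `events.keys()` first):

```python
def latest_datum(events, key):
    # events maps keys like "gaze" or "fixations" to lists of datums
    # received for the current world frame; the newest datum is last
    data = events.get(key, [])
    return data[-1] if data else None
```
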
Thanks :)
Hi, if I have an Android device running Pupil Mobile with an eye tracking headset plugged in via USBC, is it possible to access a stream of the pupil data from a custom Unity application installed on the same Android device? I have already been doing a similar process on my PC with the hmd-eyes Unity plugin, which allows me to access eye tracking data in Unity while Pupil Capture runs in the background.
@user-32853a I have no idea if this works, but the concept should be implementable if it doesn't exist yet. There's a network protocol called Lab Streaming Layer that's primarily used for data acquisition across multiple devices, but can also be used for real-time visualization by accessing the streams (https://github.com/sccn/labstreaminglayer/wiki). There is an LSL plugin for Pupil, there's also an LSL plugin for Unity (LSL4Unity) and android apps (liblsl-Android/AndroidStudio). The missing piece is putting them all together. I've tested the utility of using LSL for pupil to Unity (hmd-eyes), but my setup is on the PC and the deployment target is PC/SteamVR. I'm sure it's possible, but it may take time to become familiar with how the streams in LSL works.
Hi everyone, we're looking to have a plugin licensed so that it is open-source (can be modified, etc.) but cannot be used for commercial use. Does anyone have licensing suggestions for this, please?
@user-b5c63f You have likely already visited: https://choosealicense.com/licenses/ - but why not just a strong "copy left" license which allows for commercial use but requires that changes to source code be published/disclosed?
Hi @wrp thanks for the response 🙂 That sounds good, but my team would prefer a license that prevents commercial use 🙂
I think this is kind of against the general ethos/framework of open source (so maybe you're looking for a license, but not necessarily an open source license). Perhaps there is a license out there that suits your purpose that I am not aware of that is not usually applied to software. Would be interested to hear what you decide on and the motivation behind the decision.
Which lens parameters are used for the 100-degree and 60-degree world cams? Can I use an additional lens, e.g. with an 80-degree FOV?
@user-1d894f I do not know the answer to that. @user-755e9e are you able to help here?
@user-1d894f Which world cameras are you referring to exactly? Are you talking about Intel Realsense 3D cameras?
No, standard high-speed
@user-1d894f I see. Pupil headsets are shipped with two different lenses, a narrow and a wide field of view lens. Pupil Capture uses the camera intrinsics for the lens that is attached by default. You can use the Camera Intrinsics Estimation plugin to reestimate the camera intrinsics after changing the lens.
Yes I know 🙂 So my question is: what focal length are they, and can I use my own lenses (with intrinsics estimation)?
After running the intrinsics estimation, the results will be saved to a `<camera name>.intrinsics` file within the `pupil_capture_settings` folder. You can read it using:
import msgpack
with open(file_path, "rb") as fh:
data = msgpack.unpack(fh, raw=False)
hi, is there a Docker image somewhere that will run Pupil Player on Windows? Or is it possible to build such an image?
@user-a6cc45 you can just download the app bundle from https://github.com/pupil-labs/releases/latest and run Player on Windows
@wrp I know but I'm making custom plugin and I need to run from source :/
I followed instructions on https://github.com/pupil-labs/pupil/pull/1455 but when it comes to build the compiled modules I get error after
python setup.py install --install-lib ..\
(I also followed instructions how to remove error related to _ENABLE_EXTENDED_ALIGNED_STORAGE)
Error message is:
LINK : fatal error LNK1104: cannot open file 'boost_pythonPY_MAJOR_VERSIONPY_MINOR_VERSION-vc141-mt-x64-1_67.lib'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x64\\link.exe' failed with exit status 1104
@user-a6cc45 can you add the plugin at runtime? Or running from source is a hard requirement for you?
@wrp I'm using Matplotlib in my plugin and I've been told that running from source is the only option. Two months ago (before the new Pupil release and cboulay's instruction update) I succeeded in running from source, but now I want to run it on another computer, and with the new instructions it doesn't work :/
@user-a6cc45 if you have everything set up on the first computer, you could build your own bundle including matplotlib, and distribute to other computers
@papr how can I build my own bundle? Is there any instruction?
@user-a6cc45 https://github.com/pupil-labs/pupil/tree/master/deployment
Use the x64 Native Tools Command Prompt for VS 2017, change the working directory to the `deployment` folder, and execute `bundle.bat`
I am expecting this will not work smoothly though. Good luck!
@papr Thanks! I'll try that 🙂
@papr can anyone help me with the Pupil apps installation on a Mac? I have entered all the code listed in the docs into my terminal, but I still get a message that the apps are damaged and can't be opened. Surely someone else has come across this?
@user-f3a0e4 Running from bundle is different than from source. The error regarding the damaged application might be due to a corrupted upload. We will re-upload as soon as possible.
Regarding installing the source dependencies: on macOS you might need slightly different install instructions for libuvc. If it is not built correctly, pyuvc will fail as well.
I will let you know about the different install instructions as soon as I am on my computer.
Thank you. Yes, the main issue appears to be with installing pyuvc, where I get errors from the terminal relating to libuvc/libusb
@user-f3a0e4 Please rerun the libuvc installation and try to reinstall pyuvc afterwards
git clone --single-branch --branch build_fix_mac https://github.com/pupil-labs/libuvc
cd libuvc
mkdir build
cd build
cmake ..
make && make install
@papr I get the following error message when attempting to rerun the libuvc installation:
[ 11%] Linking C shared library libuvc.dylib
ld: library not found for -lusb-1.0
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [libuvc.0.0.9.dylib] Error 1
make[1]: *** [CMakeFiles/uvc.dir/all] Error 2
make: *** [all] Error 2
@user-f3a0e4 have you followed the instructions from above?
I think so. I simply entered the above code into my terminal
Okay, I have just re-entered it and it seems to have worked! I will update you on the success of the remaining installation process 🙂
@user-f3a0e4 great!
Okay, now I am getting more errors when trying to install the python packages with pip...
The error messages are long...would it be worth pasting the errors into here?
from what I can tell (not great at this), the main error now appears to be: fatal error: 'gl.h' file not found
@user-f3a0e4 Yeah, this is a problem on newer versions of macOS. Please install Xcode. It comes with the required header files.
I am working on an update for the docs right now
@user-f3a0e4 Also, please use `pip3 install msgpack==0.5.6` to install msgpack
okay, it's a relief to know I wasn't doing anything stupid 🙂
will I have to do anything once Xcode is installed?
@user-f3a0e4 No, the docs were just out of date. Apologies for that. As soon as Xcode is installed, rerun the pyglui installation line
Perfect, thank you!
@papr Okay that has definitely worked! I now get through the terminal processes without any errors! However, I don't know exactly where Pupil Capture/Player has been installed on my device? 🙂 I tried searching but cannot find it!
@user-f3a0e4 you need to start it from the terminal. As I said, running the bundled application is different from running from source. They are independent from each other.
Ooh right sorry. So the error I got when trying to open the "pupil_v1.13-29-g277ac8c3_macos_x64.zip" file is independent from any errors relating to the terminal processes?
@user-f3a0e4 correct!
So am I right in thinking that there's nothing I can do about the bundles application until a new patch is released? Also, would you mind letting me know how to start pupil apps from terminal?
@user-f3a0e4 you are correct on that. Navigate to the `pupil_src` directory and use `python3 main.py` to start Capture.
Append the `player` argument to start Player.
Right, sorry again for asking simple questions, but I cannot find the pupil_src directory
@user-f3a0e4 did you git clone the Pupil repository?
Please check the beginning of the developer documentation for details
Okay, I'm starting to realise my errors. I thought the terminal could be used as a secondary means to download the pupil labs applications, and bypass the error I got with the bundled version.
The bundle just comes with all dependencies and executes the above command internally.
@papr Okay I have finally got it working and have managed to load pupil capture! Can you please let me know what you mean by append the player argument to start player?
@user-f3a0e4
python3 main.py player
or
python3 main.py player <path to recording>
Amazing, thanks @papr all sorted. Thanks a million for your help!
Hi, I am experiencing difficulties with the latest release. I installed the deb packages on Linux Mint without problems, but Pupil Capture won't start up. I don't have any console output to provide...
Good day! Please tell me, is there an API for working with the glasses in C++ (for Windows)? We need to integrate the glasses into our program without using additional software.
@user-98f03c Unfortunately, there is no such "headless" library. The primary way to access real-time data is through our network api while running Pupil Capture or Service.
Ok, thank you!
@user-42995d you should have access to the log file at ~/pupil_capture_settings/capture.log
@user-f3a0e4 I do not get any errors with the zip file. Could you try to redownload it?
https://github.com/pupil-labs/hmd-eyes/releases/tag/v0.61
Is this the latest version of the Pupil software that can be used with the HoloLens?
@user-adf88b If I remember correctly, yes
Awesome ty
Hi @papr I have just tried again and got the same error message. If it helps, I am working on a macbook air (macOS Mojave 10.14.5)
Hello there, we also have some issues with the installation of the new version of Pupil Capture. Here is the screenshot. We tried to follow the instructions to install it via the terminal, but we were not successful. Is it possible to get an older version? Thank you in advance for the help.
@user-f3a0e4 @user-76a6ff could you try copying the application to the Applications folder before running the application?
@user-f3a0e4 @user-76a6ff We were able to replicate the error and are working on a fix
Okay great. Opening through terminal is working for me so no worries!
@user-f3a0e4 @user-76a6ff The release has been updated. Please redownload the bundles and try again.
@papr is there a datasheet where I can find the specifications of the camera used in the 120Hz Pupil eye camera, such as the sensor dimensions?
How similar are the components used in the Pupil Labs eye tracking module to the ones mentioned in the bill of materials here: https://github.com/pupil-labs/pupil-docs/blob/master/pupil-hardware/pupil-diy.md ?
@user-28fba0 You can estimate the eye camera intrinsics yourself by activating the eye camera in Capture's world process and running the Camera Intrinsics Estimation plugin. You will need to print out the circle pattern such that it is visible in IR light. The pattern shown on the screen is not visible in IR.
okay thanks
@papr where can I find the latest version? I have tried downloading the latest from ...pupil-labs/pupil/releases again, but it still fails with the same error. It also says this version was last updated 3 days ago? Perhaps I am downloading the wrong version?
@user-f3a0e4 https://github.com/pupil-labs/pupil/releases/download/v1.13/pupil_v1.13-29-g277ac8c3_macos_x64_fixed.zip
Hi @user-dd52c0 unfortunately I am still getting the same error: "pupil player is damaged and can't be opened. You should move it to the trash"
Sorry * @papr
@papr can I please ask for help with a new issue? When controlling the start and end of a recording using an Arduino (to synchronise with motion capture), the recorded video is always missing a random amount of data. For example, the motion capture is set to record for a 25 second period, but the recorded video will randomly only save somewhere between 22.5 and 24.9 seconds of data. The strange part, which makes me think it is not due to the Arduino script, is that on the first frame of the video, fixations and trails will often appear across the screen - presumably reflecting all eye data occurring before the video started recording? Have you come across this before, and do you know how I might fix it?
@papr we copied the application into the Applications folder, but unfortunately it is still not working. It gives the same error as @user-f3a0e4. I am sorry. Thank you for your help.
@user-f3a0e4 @user-76a6ff ok, looking into it
@user-f3a0e4 Yes, this is expected, since the cameras are controlled by different processes and it is not guaranteed that all processes start recording at the exact same time. When opening a recording in Player, each gaze datum is assigned to its closest world frame. If the world recording started later than the gaze recording, there will be multiple gaze points assigned to the first frame, as you described.
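That closest-frame assignment can be sketched with a plain nearest-timestamp search (my own illustration of the idea, not Player's actual implementation):

```python
from bisect import bisect_left


def closest_frame_index(frame_timestamps, gaze_timestamp):
    # frame_timestamps must be sorted ascending, which holds for a
    # recording's world timestamps. Returns the index of the frame
    # whose timestamp is nearest to gaze_timestamp.
    i = bisect_left(frame_timestamps, gaze_timestamp)
    if i == 0:
        return 0
    if i == len(frame_timestamps):
        return len(frame_timestamps) - 1
    before, after = frame_timestamps[i - 1], frame_timestamps[i]
    return i - 1 if gaze_timestamp - before <= after - gaze_timestamp else i
```

If the world recording starts late, every gaze timestamp earlier than `frame_timestamps[0]` maps to index 0, which is exactly why the first frame accumulates all the pre-start fixations.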
Okay I see. From your experience, is there any way to reduce the extent of this lag? Sometimes it is up to 2.5 seconds, which is a lot to try and correct for.
@user-f3a0e4 Do you have a custom plugin running in Capture?
@user-f3a0e4 @user-76a6ff Please try to redownload the updated release: https://github.com/pupil-labs/pupil/releases/download/v1.13/pupil_v1.13-31-ge54e13f4_macos_x64_signed.service_player_capture.zip
There shouldn't be any more signing related issues.
@papr Many thanks. Now it works properly
@papr Yep working now! I'll also find out today about the custom plugin