hello~ I'm trying to run from source with Python on Ubuntu 18.04... I installed the Linux dependencies for 18.04 and pyuvc (https://github.com/pupil-labs/pyuvc), but the problem is that there is no uvc.py file in the pyuvc folder, so I'm stuck on 'ImportError: No module named uvc'. What can I do to fix it?
@user-09f6c7 Did you successfully build it with pip3 install git+https://github.com/pupil-labs/pyuvc ?
@user-97591f you gave me a nice point! thanks a lot
@user-09f6c7 You're welcome. I might put together some instructions for building and running from source on Ubuntu (or other distros) using Anaconda/Miniconda. It's already been done for Windows and Mac. Using a virtual environment makes version control easier and won't mess up other system installs that use pip3 or python3.
@papr We do not have a custom plugin. We are interacting with Pupil Capture via Pupil Remote (https://github.com/N-M-T/Pupil-Labs-Vicon-trigger/blob/master/Trigger_Connect.py, lines 64 onwards). The rest of the code is in development, but timing tests show it to be fairly accurate when used with other devices (not Pupil Capture). Thanks
@papr With your comment about the process, I suppose it would be more accurate to record a unix timestamp on the machine running Pupil capture when we start and stop a motion capture recording, then use that to sync the pupil recording with our mocap?
@papr That is, during a continuing pupil recording
We're looking for ways to integrate the pupil data output with ROS. In particular, we need to make the world camera video available as a ROS node. I've found a couple of existing implementations, but they receive the video over ZMQ and then send it back out over ROS (i.e. two separate TCP connections), and I'm worried about latency.
Questions: (1) Do y'all know of a plugin-based implementation rather than a ZMQ interface? (2) Is my understanding correct that a plugin-based bridge would eliminate sending the video over ZMQ and thus remove one of the TCP connections, so should reduce latency?
(If (2) is yes and (1) is no, I may build one -- it looks pretty straightforward. I'd be curious, though, if other people would be interested in this feature.)
@user-3a93aa (1) not that I know of. (2) is correct.
@papr sorry to keep bothering you, but did you manage to have a look at my above messages?
@user-f3a0e4 Sorry, I have been quite busy the last days. I will have a look at it in the coming week.
Can anyone help me with parsing the streamed data? I am using zeromq library to receive Pupil Capture application's streaming data.
@user-72cb32 In which programming language?
@user-72cb32 Btw, the data is serialized with msgpack. You might need to get a msgpack decoder for your programming language.
Hi! Thanks for the quick reply!
my current code is in java
and it is in following scheme:
ZMQ.Context context = ZMQ.context(1);
requester = context.socket(ZMQ.REQ);
requester.connect("tcp://127.0.0.1:5001");
requester.send("SUB_PORT".getBytes());
String sub_port = requester.recvStr();
subscriber = context.socket(ZMQ.SUB);
subscriber.connect("tcp://127.0.0.1:" + sub_port); // was hardcoded to 52686; use the queried port
subscriber.subscribe("".getBytes());
String data = subscriber.recvStr();
So, when I print the last string 'data', i do get some information, but I am not sure how to parse them.
@user-72cb32 The network API publishes ZMQ messages with at least two frames: 1. the topic as a UTF-8-encoded string, 2. a msgpack-encoded dictionary.
Calling recvStr() might only receive the first frame; calling it a second time might give you the second frame.
Check your library's documentation for receiving multi-part data.
@papr Thank you for your help! Does that mean that first frame and second frame contain different data?
@user-72cb32 correct, and only the second one needs decoding with msgpack
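To illustrate the two-frame format papr describes (UTF-8 topic string followed by a msgpack-encoded dictionary), here is a toy Python sketch that hand-decodes a tiny msgpack subset (fixmap, fixstr, float64). This is purely for illustration of the wire format; a real client should use an actual msgpack library for its language (e.g. msgpack-java, as linked below). The topic string and payload field are made-up examples.

```python
# Toy decoder for a tiny msgpack subset -- illustration only, not a
# replacement for a real msgpack library.
import struct

def decode_msgpack_subset(buf, pos=0):
    """Decode fixmap, fixstr and float64 values; returns (value, next_pos)."""
    b = buf[pos]
    if 0x80 <= b <= 0x8F:          # fixmap: low 4 bits = number of key/value pairs
        n, pos, out = b & 0x0F, pos + 1, {}
        for _ in range(n):
            key, pos = decode_msgpack_subset(buf, pos)
            val, pos = decode_msgpack_subset(buf, pos)
            out[key] = val
        return out, pos
    if 0xA0 <= b <= 0xBF:          # fixstr: low 5 bits = byte length
        n = b & 0x1F
        return buf[pos + 1:pos + 1 + n].decode("utf-8"), pos + 1 + n
    if b == 0xCB:                  # float64, big-endian
        return struct.unpack(">d", buf[pos + 1:pos + 9])[0], pos + 9
    raise ValueError(f"unsupported type byte: {b:#x}")

# A fake two-frame message as it might arrive from the SUB socket:
frame1 = b"gaze.3d.0."            # frame 1: topic, plain UTF-8
frame2 = (bytes([0x81])           # frame 2: map with one entry ...
          + bytes([0xAA]) + b"confidence"            # ... fixstr key
          + bytes([0xCB]) + struct.pack(">d", 0.97)) # ... float64 value

topic = frame1.decode("utf-8")
payload, _ = decode_msgpack_subset(frame2)
print(topic, payload)  # gaze.3d.0. {'confidence': 0.97}
```

The key point is that only frame 2 needs msgpack decoding; frame 1 is a plain string you can use to filter subscriptions.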
Is there any documentation for decoding scheme?
@user-72cb32 There are implementations for it in many languages. Check out the link above
@papr I will try! Thank you!
@papr Dear papr, I've tried to receive the incoming streaming data, but it seems like I need to know more about the data sent by Pupil Capture. For instance, I need to know the order of the data (eye gaze, diameter, and so on).
Is there any documentation or explanation regarding this?
@user-72cb32 You mean the exact fields of the msgpack-encoded frame? These can vary. Please check out the hmd-eyes project if you need to know the exact types of the message in advance. Usually, msgpack decoders are able to determine the fields and their types during decoding. The result is a nested mapping from keys to objects.
@user-72cb32 https://github.com/msgpack/msgpack-java/blob/develop/msgpack-jackson/README.md this document should be helpful
@papr Ok!!! Thanks! I will try!
@user-f3a0e4 I finally had time to have a look at the code that you linked above. The code looks generally correct. One note though: you are not calling vicon.close() before exiting in Recorder.connectionFailed(). Not sure if this is intended.
I am not sure where the 2.5 s lag between the processes comes from. I would recommend saving the vicon data together with the pupil timestamp* when receiving its data (vicon.read()). This way you can correlate your vicon and pupil data after the fact, without having to care about the recording start lag.
* You can estimate this by recording unix time (time()) and pupil time (socket.send("t"); socket.recv()), and compensating for the synchronization delay by subtracting half the round trip time.
Hi, when using Pupil Mobile in different light environments, is there a way to set auto exposure for the eye cameras?
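The half-round-trip compensation mentioned above can be sketched as follows. This is a minimal illustration, not Pupil's actual time sync code; `request_pupil_time` is a stand-in for sending "t" on the Pupil Remote REQ socket and reading the reply.

```python
# Estimate the offset between the local unix clock and the pupil clock,
# correcting for transport delay by subtracting half the round trip.
import time

def estimate_offset(request_pupil_time, clock=time.time):
    t0 = clock()
    pupil_time = request_pupil_time()   # stand-in for socket.send("t"); socket.recv()
    t1 = clock()
    round_trip = t1 - t0
    # The received pupil time corresponds roughly to the midpoint of the request:
    return pupil_time - (t0 + round_trip / 2.0)

# Fake clocks for illustration: the pupil clock runs exactly 100 s ahead,
# and the request takes 0.2 s of (symmetric) round trip.
local = iter([10.0, 10.2]).__next__
offset = estimate_offset(lambda: 110.1, clock=local)
print(offset)  # ~ 100.0
```

With the offset in hand, `unix_time + offset` converts local timestamps into pupil time, so externally recorded data can be aligned with the recording.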
@user-14d189 Unfortunately, this feature needs to be implemented in software for the eye cameras, which we did not do for Pupil Mobile, as far as I know.
Hi, I have a query regarding the compatibility of Pupil Capture/Player. I am planning to upgrade macOS to Catalina (beta version). So I just wanted to know if anyone has tried that version and if there are any compatibility issues with Pupil Player... Thanks
@user-331121 Not specific to Capture/Player, but I read that the Catalina beta is much less stable than the betas of recent years (user data getting lost, etc.). If you decide to upgrade anyway, please let us know if there are any problems with Capture/Player on Catalina.
Hi all, I'm somewhat new to the community. Are there any modules that have been developed for real-time (or slightly buffered real-time) analysis of on-going pupil data?
@user-6bb2e5 I’m no expert but the hmd-eyes GitHub would be the place to start
@user-adf88b @user-6bb2e5 the hmd-eyes project does not do any analysis but is a baseline implementation for a client of the network api. You can access the network real-time api with other languages as well. Regarding analysis, Capture does some analysis on its own, e.g. Blink and fixation detection. This data is available through the same api. Higher level analysis would have to be done based on data published by the api in a custom script/client.
Great, thank you both! One more question and apologies if this is not the right thread: I am recording audio alongside the pupil data but when I load the recording directory into Pupil Player and use the export raw data function I do not get any output audio or audio timestamps. Do I need to use the file ops to read the raw data into python or am I missing something about Pupil Player?
@user-6bb2e5 The raw data exporter just exports pupil and gaze data as csv files. If you just need the audio and its timestamps, you can copy them from the original recording folder. If you use the world video exporter, you get the world video with synced audio.
Hi,
Recently I downloaded Pupil software (v. 1.13).
I defined surfaces in Capture, and later in Pupil Player I got warning that my definitions of surfaces are deprecated: player - [WARNING] surface_tracker.surface: You have loaded an old and deprecated surface definition! Please re-define this surface for increased mapping accuracy!
How is that possible since I used the latest software version? 🤔
@user-a6cc45 are you sure that you did not make the recording with an older version of Capture? You can check the info csv file. If it was made with v1.13, it is a bug and we will try to reproduce it.
@papr
@papr there is one more problem: when I export raw data I cannot see csv files with prefix "fixations_on_surface_" . Did you remove them in last release or is this a bug?
@user-a6cc45 yes, we did, because we originally planned to remove the old fixation detector in favor of the eye movement detector. This plugin assigns each gaze datum to an eye movement. Afterwards, one can use it to find the eye movement for each gaze_on_surface. Unfortunately, when we decided to keep the old fixation detector for now, we forgot to put in the explicit fixation_on_surface functionality. For now, you need to check if any of the gaze_on_surface is within one of the fixations found in fixations.csv
This will be fixed in the next release
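The workaround papr describes — checking whether a gaze_on_surface datum falls within a fixation from fixations.csv — can be sketched like this. Column names and units are assumptions (fixation `duration` here is taken to be in seconds; check your export and convert if it is in milliseconds):

```python
# Flag which gaze-on-surface timestamps fall inside a fixation interval.
# fixations: iterable of (start_timestamp, duration) pairs, as one might
# read them from fixations.csv.
def gaze_in_fixations(gaze_timestamps, fixations):
    intervals = [(start, start + duration) for start, duration in fixations]
    return [any(start <= t <= end for start, end in intervals)
            for t in gaze_timestamps]

# Made-up example data: two fixations, three gaze timestamps.
flags = gaze_in_fixations([0.5, 1.2, 3.0], [(0.4, 0.3), (2.9, 0.2)])
print(flags)  # [True, False, True]
```

For large exports you would sort both lists and sweep through them once instead of checking every interval per gaze point, but the idea is the same.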
Hey. How can we make pupillabs run on Arm64 Linux?
Hi @user-94ac2a You will have to follow the developer instructions and install all dependencies from source. https://docs.pupil-labs.com/#linux-dependencies
We upgraded to the new 200 Hz eye tracker for the Vive
Since the upgrade, the get_pupil_timestamp call from the reference solution takes 550 ms every 2 to 3 seconds.
The old 120Hz eye tracker does not experience this problem.
Pupil Capture v1.13
https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/hmd_calibration_client.py
@user-82f8e3 Hey, how often do you call this method? Also, at which frame rate do you run the eye cameras?
We call it 60 to 90 times a second and the eye camera runs at 200Hz
Hi! I've installed Pupil Capture on Linux following the steps on the website, but the video capture cannot be initialized (Capture starts in ghost mode, for both the world camera and the eye cameras). I've tested the cameras in other programs and they worked, so I think the driver is fine. Can anyone tell me what else I need to check? Thanks a lot.
@user-9c3078 Are the cameras listed as unknown in the UVC manager?
Yes, they are unknown.
@user-9c3078 In this case, you need to add your user to the plugdev user group. Afterwards, you might need to log out for the new user rights to apply.
But I've checked the plugdev and the cameras are in the list.
Hi, I'm a new user of Pupil Labs and we'd like to send Remote Annotations from Unity (C#, not Python). The docs link an example in Python which, instead of sending a string to the socket, seems to send a binary-serialized object. How could I achieve the same thing from within Unity (i.e. send the 'equivalent' info) over a TCP socket?
I've searched this discord and found 2-3 discussions about Annotations and Unity, but couldn't find a lead on how to do it.
Hello, is there a safety statement for the Pupil HMD?
I mean, is there an assessment of the IR camera according to ICNIRP or something similar?
Hello. Can anyone tell me when/under what circumstances the software will attempt to do anything via powershell command? I'm not a user of this software, but the antivirus administrator who noticed that a PS command (presumably for updating drivers) got blocked from execution on a computer that has Pupillabs installed. Also, is there another way to update drivers that doesn't use PS? What impacts are there if this PS command remains blocked? Thanks.
@user-9c3078 Sorry, but I do not understand. What is the output of the groups command if you run it in a terminal?
@user-b08428 You will need to use an existing C# implementation for zeromq and msgpack to reproduce the behaviour.
Sorry I misunderstand before. Now it works perfectly. Thank you so much!!!
@user-dcc24b You can install drivers manually. Without them, Capture won't be able to access the cameras. https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
Hey! Is there any file or Doc Page where I can learn about the events dictionary and its members?
Right now I'm struggling to understand what exactly is in the events.get("pupil") list. EDIT: I realized that they are just (x, y) values for the respective pupils. I guess I was confused as to which eye each (x, y) value belongs to.
Hi! When Auto Exposure mode is set to 'aperture priority' for the UVC source world cam, is there any way to query what the current Absolute Exposure Time is being set to? When checking the current exposure time, it never changes.
@papr I get these errors when installing pyndsi on Arm64 Linux. Is something wrong?
Is it possible to use pupil to record winks as well as blinks, ie only launch commands if one eye is closed?
Looks like it won't, I guess I'll have to make my own plugin
Any tips on modifying the blink script for winking?
@papr Thanks for your response. How often will the software attempt to run powershell commands?
@user-dcc24b On Windows, every time a UVC camera is initialized, we try to verify the drivers, i.e. at least once for each process (world, eye0, eye1).
@user-adf88b You would have to modify the blink detector itself. It is based on the assumption that the confidence shortly drops in both eyes during a blink. A wink would only drop confidence in a single eye.
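The distinction papr describes — both eyes' confidence dropping during a blink, only one eye's during a wink — can be sketched as a simple per-sample classifier. The threshold and the lack of temporal filtering are simplifications for illustration; the real blink detector works on filtered confidence over a time window, not single samples.

```python
# Classify a pair of per-eye confidence values as blink / wink / open.
# Threshold (0.5) is an arbitrary illustration, not the detector's value.
def classify(conf_eye0, conf_eye1, threshold=0.5):
    low0, low1 = conf_eye0 < threshold, conf_eye1 < threshold
    if low0 and low1:
        return "blink"       # confidence dropped in both eyes
    if low0:
        return "wink_eye0"   # confidence dropped in eye0 only
    if low1:
        return "wink_eye1"   # confidence dropped in eye1 only
    return "open"

print(classify(0.1, 0.2))  # blink
print(classify(0.9, 0.1))  # wink_eye1
```

A real wink detector built on this idea would additionally require the one-eye drop to persist for some minimum duration, to avoid triggering on momentary detection failures.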
@user-94ac2a The issue is that the compiler is not able to find the declaration of AVCodecParameters. It is defined in <libavcodec/avcodec.h>. Please make sure that the ffmpeg headers are installed correctly. On Ubuntu you can do this with: sudo apt install -y libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libavresample-dev
@user-e594a2 In this case, @user-894365 might be able to help you 👍
Hi @user-e594a2 The Absolute Exposure Time shown above the slider is not the actual value set to UVC. In fact, there is no way to access the actual Absolute Exposure Time when the Auto Exposure mode is set to aperture priority.
Thanks @user-894365! I was afraid this might be the case. My workaround is going to be to control the exposure time manually and build some simple auto-exposure logic; then we'll know the current exposure time since we set it ourselves. I noticed that even in manual mode, the Pupil custom world camera's resulting brightness jumps (up or down) every 16 steps when moving the slider in 'capture' mode (same when setting the UVC camera's Exposure Time directly).
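The "simple auto-exposure logic" mentioned above could look roughly like a proportional controller that nudges the manually set exposure toward a target mean image brightness. The gain, target, and clamp values here are arbitrary illustrations, not Pupil's actual controller (which lives in the eye process, see the next message):

```python
# One step of a proportional auto-exposure loop: given the current
# exposure setting and the mean brightness of the latest frame (0-255),
# return the next exposure value to apply.
def next_exposure(current, mean_brightness, target=128.0,
                  gain=0.1, lo=1, hi=500):
    error = target - mean_brightness              # positive -> image too dark
    new = current * (1.0 + gain * error / target) # scale exposure proportionally
    return max(lo, min(hi, round(new)))           # clamp to the camera's range

print(next_exposure(100, 64))   # image too dark -> 105 (exposure increases)
print(next_exposure(100, 128))  # on target -> 100 (unchanged)
```

In practice you would also rate-limit the updates (e.g. one adjustment per few frames) so the loop does not oscillate with the camera's own latency.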
@user-e594a2 the eye process has already some auto exposure code that we use for the 200hz cameras. I am currently on mobile, so I can't link the file. I will be able to do so tomorrow.
in a post from 2019 there was a mention of auto exposure code for the 200hz cameras. Would it be possible to direct me to this link? I was not able to find the link in the existing messages. Thank you very much for your help.
Dear Team, I am Raam. I am working on a React Native app that must have an eye tracking feature. This app is used by people with physical disabilities. I checked GitHub and found Python and C++ projects. I wanted to check with you whether there is a repository for this project in JS?
Please help
@user-ac6fcd The project does not have a library that you can integrate into your software directly. There is a stand-alone software running (Pupil Capture) which provides a network interface to receive the eye tracking data in real time. The api uses zeromq and msgpack. You should not have a problem to find appropriate bindings for these written in JS.
@user-ac6fcd What type of eye tracker do you plan to use?
Thank you @papr
Initially I tried using tobii but it is not compatible with mobile devices so I was trying to build this feature to be used with device camera.
Do you think this is a feasible option?
Which device are we talking about? Phones?
yes, both android and iPhones
Is the react app running on the same phone?
yes it is
Has anyone written a drift correction algorithm that I might be able to use?
time sync
@user-f3a0e4
Hello all. I'm attempting to build from source in a Windows 10 virtual machine. I'm interested in learning how the pupil detection works.
I'm running into issues with the GLFW windows creation. Can someone confirm for me: You cannot run Pupil software in a Hyper-V VM, because the processor doesn't support OpenGL, right? That is, unless you use Discrete Device Assignment?
Hi everyone, I'm currently working on a research paper with a Pupil device. I was wondering if anyone has previously written a Matlab script to import gaze and pupil position automatically from Pupil Capture. Any help is appreciated.
@user-139bd4 https://github.com/pupil-labs/pupil-helpers/tree/master/matlab
Thank you!
I'm trying to run the receive-message script but I'm encountering this error. Can anyone help out?
Hi @papr, we have been trying to use the latest Eye Movement Detector plugin to analyse our data. However, the plugin produces vastly different results on Windows and Mac, with more detailed and accurate results on the Windows system. Do you know why this would be, or how I might go about getting it to work on Mac? Thanks!
@user-f3a0e4 What is your setup to compare both? Do you have a single recording that you opened in Player on Windows and Mac?
Yes exactly 😃
One system reports 32 segments, the other 109
@user-f3a0e4 could you share the recording with [email removed]
@user-139bd4 unfortunately, I do not have a Matlab environment to reproduce the error 😕
@papr I have just emailed the recording
@user-f3a0e4 Thanks, I got it. I will come back to you as soon as it is investigated.
Thank you!!
@papr its fine
I have an additional question: I'm looking through the open source code of Pupil to find which file converts the gaze position from pixels to coordinates. Can someone guide me?
Hi! Can anyone tell me what 'pso' is?
Also, what is the standard used to define the class? I thought it's not the duration. Is that right?
Hi @user-9c3078, pso = post-saccadic oscillation. For definitions and more information on the Naive Segmented Linear Regression (NSLR) method for eye movement classification, please see the paper in Nature: https://www.nature.com/articles/s41598-017-17983-x
Hi all - has anyone done work with OpenCV in a plugin? I’m working on a plugin, part of which is to display a video using OpenCV. The plugin seems to run fine, the video plays, and the actual eye tracking process still records eye movements. But the world view camera freezes, and also the surface tracking freezes. (Running from source) When the video ends/closes, the world view and surface tracker kick in again. Wondering if there’s an easy way to play an opencv video without pausing those processes? Or if anyone has an idea of the right direction to look for a solution?
@user-40ad0b Plugins are not supposed to call blocking functions. Ideally, recent_events does whatever is needed to update the OpenCV UI and returns as quickly as possible.
The eye processes are separate, therefore they do not freeze. But all plugins run in the world process. Therefore, if one plugin freezes, the world process freezes.
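The non-blocking pattern papr describes — doing one small unit of work per recent_events() call instead of looping until the video ends — can be sketched like this. The class below is a stand-in for a real Pupil plugin (which would subclass the Plugin base class), and the display callback stands in for the cv2.imshow/cv2.waitKey(1) calls:

```python
# Sketch: advance exactly one video frame per recent_events() call, so
# the world process (and every other plugin) keeps running in between.
class NonBlockingVideoPlugin:
    def __init__(self, frames):
        # `frames` could be a generator reading from cv2.VideoCapture.
        self._frames = iter(frames)
        self.done = False

    def recent_events(self, events):
        # Do one unit of work, then return immediately -- never loop
        # here until the video finishes, or the world process freezes.
        try:
            frame = next(self._frames)
            self.show(frame)
        except StopIteration:
            self.done = True

    def show(self, frame):
        pass  # placeholder for cv2.imshow(...) + cv2.waitKey(1)

# The event loop drives playback one frame at a time:
p = NonBlockingVideoPlugin(range(3))
for _ in range(4):
    p.recent_events({})
print(p.done)  # True
```

The design point is that the plugin stores its playback position as state between calls, rather than holding the world process hostage inside a playback loop.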
Perfect, thanks! Definitely on the right path now