software-dev


user-09f6c7 02 July, 2019, 03:15:05

hello~ I'm trying to run Pupil with Python on Ubuntu 18.04... I installed the Linux dependencies for 18.04 and pyuvc (https://github.com/pupil-labs/pyuvc), but the problem is that there is no uvc.py file in the pyuvc folder, so I'm getting 'ImportError: No module named uvc'. What can I do to fix it?

user-97591f 02 July, 2019, 05:17:43

@user-09f6c7 Did you successfully build it with pip3 install git+https://github.com/pupil-labs/pyuvc ?

user-09f6c7 02 July, 2019, 05:43:19

@user-97591f you gave me a good pointer! thanks a lot

user-97591f 02 July, 2019, 20:39:08

@user-09f6c7 You're welcome. I might put together some instructions for building and running from source on Ubuntu (or other distros) using Anaconda/Miniconda. It's already been done for Windows and Mac. Using a virtual environment makes version control easier and won't mess up other system installs that use pip3 or python3.

user-f3a0e4 03 July, 2019, 12:53:37

@papr We do not have a custom plugin. We are interacting with Pupil Capture via Pupil Remote (https://github.com/N-M-T/Pupil-Labs-Vicon-trigger/blob/master/Trigger_Connect.py), lines 64 onwards. The rest of the code is in development, but timing tests show it to be fairly accurate when used for other devices (not Pupil Capture). Thanks

user-f3a0e4 03 July, 2019, 12:57:36

@papr With your comment about the process, I suppose it would be more accurate to record a unix timestamp on the machine running Pupil capture when we start and stop a motion capture recording, then use that to sync the pupil recording with our mocap?

user-f3a0e4 03 July, 2019, 12:58:23

@papr That is, during an ongoing Pupil recording

user-3a93aa 03 July, 2019, 20:27:32

We're looking for ways to integrate the Pupil data output with ROS. In particular, we need to make the world camera video available as a ROS node. I've found a couple of existing implementations, but they receive the video over ZMQ and then send it back out over ROS (i.e. two separate TCP connections), and I'm worried about latency.

user-3a93aa 03 July, 2019, 20:28:25

Questions: (1) Do y'all know of a plugin-based implementation rather than a ZMQ interface? (2) Is my understanding correct that a plugin-based bridge would eliminate sending the video over ZMQ and thus remove one of the TCP connections, so should reduce latency?

user-3a93aa 03 July, 2019, 20:28:45

(If (2) is yes and (1) is no, I may build one -- it looks pretty straightforward. I'd be curious, though, if other people would be interested in this feature.)

papr 04 July, 2019, 07:12:34

@user-3a93aa (1) not that I know of. (2) is correct.

user-f3a0e4 06 July, 2019, 09:52:09

@papr sorry to keep bothering you, but did you manage to have a look at my above messages?

papr 07 July, 2019, 18:37:56

@user-f3a0e4 Sorry, I have been quite busy the last few days. I will have a look at it in the coming week.

user-72cb32 08 July, 2019, 07:48:11

Can anyone help me with parsing the streamed data? I am using zeromq library to receive Pupil Capture application's streaming data.

papr 08 July, 2019, 07:48:32

@user-72cb32 In which programming language?

papr 08 July, 2019, 07:49:11

@user-72cb32 Btw, the data is serialized with msgpack. You might need to get a msgpack decoder for your programming language.

papr 08 July, 2019, 07:50:42

https://msgpack.org/

user-72cb32 08 July, 2019, 07:52:05

Hi! Thanks for the quick reply!

user-72cb32 08 July, 2019, 07:52:28

my current code is in java

user-72cb32 08 July, 2019, 07:53:01

and it follows this scheme:

ZMQ.Context context = ZMQ.context(1);
// Ask Pupil Remote for the subscription port
requester = context.socket(ZMQ.REQ);
requester.connect("tcp://127.0.0.1:5001");
requester.send("SUB_PORT".getBytes());
String sub_port = requester.recvStr();
// Subscribe to all topics on the port returned by Pupil Remote
subscriber = context.socket(ZMQ.SUB);
subscriber.connect("tcp://127.0.0.1:" + sub_port);
subscriber.subscribe("".getBytes());
String data = subscriber.recvStr();

user-72cb32 08 July, 2019, 07:53:25

So, when I print the last string 'data', I do get some information, but I am not sure how to parse it.

papr 08 July, 2019, 07:56:34

@user-72cb32 The network API publishes ZMQ messages with at least two frames: 1. the topic as a UTF-8-encoded string, 2. a msgpack-encoded dictionary

papr 08 July, 2019, 07:57:51

Calling recvStr() might only receive the first frame; calling it a second time might give you the second frame.

papr 08 July, 2019, 07:58:38

Check your library's documentation for receiving multi-part data.

user-72cb32 08 July, 2019, 08:00:45

@papr Thank you for your help! Does that mean that first frame and second frame contain different data?

papr 08 July, 2019, 08:01:54

@user-72cb32 correct, and only the second one needs decoding with msgpack
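
For reference, the same two-frame pattern in Python (a minimal sketch, assuming pyzmq and msgpack-python are installed and Pupil Remote runs on its default port 50020):

import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the subscription port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to gaze data on the returned port
subscriber = ctx.socket(zmq.SUB)
subscriber.connect("tcp://127.0.0.1:" + sub_port)
subscriber.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

while True:
    topic, payload = subscriber.recv_multipart()   # frame 1: topic, frame 2: payload
    message = msgpack.unpackb(payload, raw=False)  # only the payload is msgpack-encoded
    print(topic.decode(), message)

In jeromq, the equivalent should be to call recvStr() for the topic frame and then recv() for the payload frame while hasReceiveMore() returns true.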

user-72cb32 08 July, 2019, 08:03:27

Is there any documentation for the decoding scheme?

papr 08 July, 2019, 08:03:49

@user-72cb32 There are implementations for it in many languages. Check out the link above

user-72cb32 08 July, 2019, 08:20:43

@papr I will try! Thank you!

user-72cb32 08 July, 2019, 15:36:41

@papr Dear Papr, I've tried to receive the incoming streaming data, but it seems like I need to know more about the data sent from Pupil Capture. For instance, I need to know the order of the data fields (eye gaze, diameter and more).

user-72cb32 08 July, 2019, 15:36:55

Is there any documentation or explanation regarding this?

papr 08 July, 2019, 16:57:48

@user-72cb32 you mean the exact fields of the msgpack-encoded frame? These can vary. Please check out the hmd-eyes project if you need to know the exact types of the message in advance. Usually, msgpack decoders are able to determine the fields and their types during decoding. The result is a nested mapping from keys to objects.
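
For orientation, a decoded gaze datum is roughly the following nested mapping (an illustrative sketch only; the exact fields vary with the mapping mode and Pupil version):

{
    "topic": "gaze.3d.01.",
    "norm_pos": [0.45, 0.52],   # gaze position, normalized to the world image
    "confidence": 0.98,
    "timestamp": 12034.56,      # pupil time in seconds
    "base_data": [ ... ],       # the pupil datum(s) this gaze datum was mapped from
}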

papr 08 July, 2019, 17:01:10

@user-72cb32 https://github.com/msgpack/msgpack-java/blob/develop/msgpack-jackson/README.md this document should be helpful

user-72cb32 09 July, 2019, 08:11:02

@papr Ok!!! Thanks! I will try!

papr 09 July, 2019, 13:48:03

@user-f3a0e4 I finally had time to have a look at the code that you linked above. The code looks generally correct. One note though: You are not calling vicon.close() before exiting in Recorder.connectionFailed(). Not sure if this is intended.

I am not sure where the 2.5 s lag between the processes comes from. I would recommend saving the Vicon data together with the Pupil timestamp* when receiving it (vicon.read()).

This way you can correlate your Vicon and Pupil data after the fact, without having to care about the recording start lag.

  * which can be calculated as the offset between the local clock (time()) and Pupil time (socket.send("t"); socket.recv()), compensating for the synchronization delay by subtracting half the round-trip time.
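
A minimal sketch of that offset calculation over Pupil Remote in Python, assuming pyzmq and the default Pupil Remote port 50020 (adjust host/port to your setup):

import time
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

t_before = time.time()
pupil_remote.send_string("t")                  # ask Pupil Remote for the current pupil time
pupil_time = float(pupil_remote.recv_string())
t_after = time.time()

round_trip = t_after - t_before
# Offset between the local clock and pupil time, compensated by half the round-trip time
offset = (t_before + round_trip / 2) - pupil_time

def to_pupil_time(local_ts):
    # Convert a local timestamp (e.g. taken when vicon.read() returns) into pupil time
    return local_ts - offset
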
user-14d189 10 July, 2019, 00:03:39

Hi, when using pupil mobile in different light environments, is there a way to set auto exposure for the eye cameras?

papr 10 July, 2019, 06:55:56

@user-14d189 Unfortunately, this feature needs to be implemented in software for the eye cameras, which we did not do for Pupil Mobile, as far as I know.

user-331121 10 July, 2019, 14:57:14

Hi, I have a query regarding the compatibility of Pupil Capture/Player. I am planning to upgrade macOS to Catalina (beta version), so I just wanted to know if anyone has tried that version and if there are any compatibility issues with Pupil Player... Thanks

papr 10 July, 2019, 14:59:28

@user-331121 Not specific to Capture/Player, but I read that the Catalina beta is much less stable than the betas of the last years (user data getting lost, etc.). If you decide to upgrade anyway, please let us know if there are any problems with Capture/Player on Catalina.

user-6bb2e5 11 July, 2019, 19:07:10

Hi all, I'm somewhat new to the community. Are there any modules that have been developed for real-time (or slightly buffered real-time) analysis of on-going pupil data?

user-adf88b 11 July, 2019, 19:07:59

@user-6bb2e5 I’m no expert but the hmd-eyes GitHub would be the place to start

papr 11 July, 2019, 19:15:11

@user-adf88b @user-6bb2e5 the hmd-eyes project does not do any analysis but is a baseline implementation for a client of the network API. You can access the real-time network API from other languages as well. Regarding analysis, Capture does some analysis on its own, e.g. blink and fixation detection. This data is available through the same API. Higher-level analysis would have to be done in a custom script/client, based on data published by the API.

user-6bb2e5 11 July, 2019, 21:30:12

Great, thank you both! One more question and apologies if this is not the right thread: I am recording audio alongside the pupil data but when I load the recording directory into Pupil Player and use the export raw data function I do not get any output audio or audio timestamps. Do I need to use the file ops to read the raw data into python or am I missing something about Pupil Player?

papr 11 July, 2019, 21:53:35

@user-6bb2e5 the raw data exporter just exports pupil and gaze data as CSV files. If you just need the audio and its timestamps, you can copy them from the original recording folder. If you use the world video exporter, you get the world video with synced audio.

user-a6cc45 14 July, 2019, 12:40:55

Hi, recently I downloaded the Pupil software (v1.13). I defined surfaces in Capture, and later in Pupil Player I got a warning that my surface definitions are deprecated: player - [WARNING] surface_tracker.surface: You have loaded an old and deprecated surface definition! Please re-define this surface for increased mapping accuracy! How is that possible, since I used the latest software version? 🤔

papr 14 July, 2019, 12:46:34

@user-a6cc45 are you sure that you did not make the recording with an older version of Capture? You can check the info csv file. If it was made with v1.13, it is a bug and we will try to reproduce it.

user-a6cc45 14 July, 2019, 13:07:55

@papr

Chat image

user-a6cc45 14 July, 2019, 13:10:42

@papr there is one more problem: when I export raw data I cannot see CSV files with the prefix "fixations_on_surface_". Did you remove them in the last release or is this a bug?

papr 14 July, 2019, 13:24:55

@user-a6cc45 yes, we did, because we originally planned to remove the old fixation detector in favor of the eye movement detector. That plugin assigns each gaze datum to an eye movement; afterwards, one can use it to find the eye movement for each gaze_on_surface. Unfortunately, when we decided to keep the old fixation detector for now, we forgot to put back the explicit fixation_on_surface functionality. For now, you need to check whether each gaze_on_surface datum falls within one of the fixations found in fixations.csv
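
A rough sketch of that check with pandas; the file and column names below are assumed from a typical v1.x export and may need adjusting to your actual files:

import pandas as pd

gaze = pd.read_csv("gaze_positions_on_surface_<surface name>.csv")  # hypothetical file name
fixations = pd.read_csv("fixations.csv")

def fixation_id_for(ts):
    # A gaze datum belongs to a fixation if its timestamp lies inside the fixation interval
    start = fixations["start_timestamp"]
    end = start + fixations["duration"] / 1000.0   # assuming duration is given in ms
    hits = fixations[(start <= ts) & (ts <= end)]
    return hits["id"].iloc[0] if not hits.empty else None

gaze["fixation_id"] = gaze["gaze_timestamp"].map(fixation_id_for)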

papr 14 July, 2019, 13:25:13

This will be fixed in the next release

user-94ac2a 15 July, 2019, 08:30:31

Hey. How can we make the Pupil Labs software run on Arm64 Linux?

papr 15 July, 2019, 08:34:07

Hi @user-94ac2a You will have to follow the developer instructions and install all dependencies from source. https://docs.pupil-labs.com/#linux-dependencies

user-82f8e3 15 July, 2019, 08:36:03

We upgraded to the new 200 Hz eye tracker for the Vive
Since the upgrade, the call to get_pupil_timestamp from the reference solution takes 550 ms every 2 to 3 seconds. The old 120 Hz eye tracker does not experience this problem. Pupil Capture v1.13 https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/hmd_calibration_client.py

papr 15 July, 2019, 08:36:58

@user-82f8e3 Hey, how often do you call this method? Also, at which frame rate do you run the eye cameras?

user-82f8e3 15 July, 2019, 08:38:30

We call it 60 to 90 times a second and the eye camera runs at 200Hz

user-9c3078 15 July, 2019, 11:08:02

Hi! I've installed Pupil Capture on Linux following the steps on the website, but the video capture cannot be initialized (Capture starts in ghost mode, for both the world camera and the eye cameras). I've tested the cameras in another program and they worked, so I think the driver is fine. Can anyone tell me what else I need to check? Thanks a lot.

papr 15 July, 2019, 11:35:31

@user-9c3078 Are the cameras listed as unknown in the uvc manager?

user-9c3078 15 July, 2019, 13:36:08

Yes, they are unknown.

papr 15 July, 2019, 13:37:21

@user-9c3078 In this case, you need to add your user to the plugdev user group. Afterwards, you might need to log out to apply the new user rights.

user-9c3078 15 July, 2019, 13:37:56

But I've checked the plugdev and the cameras are in the list.

user-b08428 15 July, 2019, 13:43:03

Hi, I'm a new user of Pupil Labs and we'd like to send Remote Annotations from Unity (C#, not Python). The docs link an example in Python which, instead of sending a string to the socket, seems to send a binary-serialized object. How could I achieve the same thing from within Unity (i.e. send the 'equivalent' info) over a TCP socket?

user-b08428 15 July, 2019, 13:43:31

I've searched this discord and found 2-3 discussions about Annotations and Unity, but couldn't find a lead on how to do it.

user-6fdb19 15 July, 2019, 13:46:38

hello, is there a safety statement for the Pupil HMD add-on?

user-6fdb19 15 July, 2019, 13:47:22

I mean, is there an IR exposure calculation for the camera according to ICNIRP or something similar?

user-dcc24b 15 July, 2019, 13:56:11

Hello. Can anyone tell me when/under what circumstances the software will attempt to do anything via powershell command? I'm not a user of this software, but the antivirus administrator who noticed that a PS command (presumably for updating drivers) got blocked from execution on a computer that has Pupillabs installed. Also, is there another way to update drivers that doesn't use PS? What impacts are there if this PS command remains blocked? Thanks.

papr 15 July, 2019, 13:58:55

@user-9c3078 Sorry, but I do not understand. What is the output of the groups command if you run it in a terminal?

papr 15 July, 2019, 14:01:27

@user-b08428 You will need to use an existing C# implementation for zeromq and msgpack to reproduce the behaviour.
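
The Python example from the docs boils down to the steps below; a C# zeromq + msgpack client would mirror them. This is a sketch only, and the label shown is hypothetical:

import time
import zmq
import msgpack

ctx = zmq.Context()

# 1. Ask Pupil Remote for the PUB port and connect a PUB socket to it
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect("tcp://127.0.0.1:" + pub_port)

# 2. Build the annotation as a dictionary and send it as two frames: topic + msgpack payload
annotation = {
    "topic": "annotation",
    "label": "trial_start",    # hypothetical label
    "timestamp": time.time(),  # should be converted to pupil time in a real setup
    "duration": 0.0,
}
pub_socket.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub_socket.send(msgpack.packb(annotation, use_bin_type=True))

Note that the Annotation plugin should be running in Capture for these messages to be recorded.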

user-9c3078 15 July, 2019, 14:07:35

Sorry, I misunderstood before. Now it works perfectly. Thank you so much!!!

papr 15 July, 2019, 14:22:35

@user-dcc24b You can install drivers manually. Without them, Capture won't be able to access the cameras. https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md

user-76e81f 15 July, 2019, 18:07:54

Hey! Is there any file or Doc Page where I can learn about the events dictionary and its members?

user-76e81f 15 July, 2019, 18:08:37

Right now I'm struggling to understand what exactly is in the events.get("pupil") list. EDIT: I realized that they are just (x, y) values for the respective pupils. I guess I was confused as to which eye the respective (x, y) value belongs to.
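
For reference, each pupil datum carries more than a position. A rough sketch of reading it from the events dictionary passed to a plugin's recent_events (field names as used by recent Pupil versions; worth verifying against your install):

def handle_pupil_data(events):
    # events is the dictionary passed to a plugin's recent_events()
    for datum in events.get("pupil", []):
        eye_id = datum["id"]          # 0 or 1: which eye process produced this datum
        x, y = datum["norm_pos"]      # position normalized to the eye image
        confidence = datum["confidence"]
        print(eye_id, x, y, confidence)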

user-e594a2 16 July, 2019, 01:05:08

Hi! When Auto Exposure mode is set to 'aperture priority' for the UVC source world cam, is there any way to query what the current Absolute Exposure Time is being set to? When checking the current exposure time, it never changes.

user-94ac2a 16 July, 2019, 04:07:13

@papr I get these errors when installing pyndsi on Arm64 Linux. Is something wrong?

Chat image

user-adf88b 16 July, 2019, 16:44:28

Is it possible to use Pupil to record winks as well as blinks, i.e. only launch commands if one eye is closed?

user-adf88b 16 July, 2019, 16:53:50

Looks like it won't, I guess I'll have to make my own plugin

user-adf88b 16 July, 2019, 18:45:36

Any tips on modifying the blink script for winking?

user-dcc24b 16 July, 2019, 20:53:13

@papr Thanks for your response. How often will the software attempt to run powershell commands?

papr 17 July, 2019, 09:46:39

@user-dcc24b On Windows, every time a UVC camera is initialised, we try to verify the drivers, i.e. at least once for each process (world, eye0, eye1).

papr 17 July, 2019, 09:48:07

@user-adf88b You would have to modify the blink detector itself. It is based on the assumption that confidence drops briefly in both eyes during a blink. A wink would only drop confidence in a single eye.
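
One possible starting point, sketched as a standalone check rather than the actual blink detector code: keep a short per-eye confidence history and treat it as a wink when only one eye's confidence drops. The window and threshold values are assumptions:

import collections

WINDOW = 0.2      # seconds of history to keep (assumed)
THRESHOLD = 0.5   # confidence below this counts as "eye closed" (assumed)

history = {0: collections.deque(), 1: collections.deque()}  # per-eye (timestamp, confidence)

def add_pupil_datum(datum):
    eye, ts, conf = datum["id"], datum["timestamp"], datum["confidence"]
    history[eye].append((ts, conf))
    # drop samples older than the window
    while history[eye] and ts - history[eye][0][0] > WINDOW:
        history[eye].popleft()

def is_wink():
    # True if exactly one eye's recent confidence is consistently low
    low = {eye: bool(h) and all(c < THRESHOLD for _, c in h) for eye, h in history.items()}
    return low[0] != low[1]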

papr 17 July, 2019, 09:53:00

@user-94ac2a The issue is that the compiler is not able to find the declaration of AVCodecParameters. It is defined in <libavcodec/avcodec.h>. Please make sure that the ffmpeg headers are installed correctly. On Ubuntu you can do this with: sudo apt install -y libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libavresample-dev

papr 17 July, 2019, 09:54:33

@user-e594a2 In this case, @user-894365 might be able to help you 👍

user-894365 17 July, 2019, 12:16:01

Hi @user-e594a2 The Absolute Exposure Time shown above the slider is not the actual value set to UVC. In fact, there is no way to access the actual Absolute Exposure Time when the Auto Exposure mode is set to aperture priority.

user-e594a2 17 July, 2019, 19:30:19

Thanks @user-894365 ! I was afraid this might be the case. My workaround is going to be to manually control the exposure time by building some simple auto-exposure logic; then we'll know the current exposure time since we set it ourselves. I noticed that even in manual mode, the Pupil custom world camera jumps in brightness (up or down) every 16 steps when you move the slider in Capture (the same happens when setting the UVC camera's Exposure Time directly).
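
A rough sketch of the kind of brightness-based controller described above, assuming you can read frames and set the camera's absolute exposure time yourself (e.g. via pyuvc); the target, step, and limits are all assumed values:

import numpy as np

TARGET_MEAN = 128   # desired mean gray value (assumed)
STEP = 16           # matches the observed 16-step granularity
DEAD_BAND = 10      # ignore small errors to avoid oscillation (assumed)

def next_exposure(frame_gray, current_exposure, min_exp=1, max_exp=500):
    # Nudge exposure one step toward the target mean brightness
    error = TARGET_MEAN - float(np.mean(frame_gray))
    if abs(error) < DEAD_BAND:
        return current_exposure
    step = STEP if error > 0 else -STEP
    return int(np.clip(current_exposure + step, min_exp, max_exp))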

papr 17 July, 2019, 19:32:07

@user-e594a2 the eye process already has some auto-exposure code that we use for the 200 Hz cameras. I am currently on mobile, so I can't link the file. I will be able to do so tomorrow.

user-10fa94 25 February, 2021, 20:04:22

In a post from 2019 there was a mention of auto-exposure code for the 200 Hz cameras. Would it be possible to direct me to this link? I was not able to find the link in the existing messages. Thank you very much for your help.

user-ac6fcd 18 July, 2019, 14:41:04

Dear Team, I am Raam. I am working on a React Native app that must have an eye tracking feature in it. This app is used by people with physical disabilities. I checked GitHub and found Python and C++ projects. I wanted to check with you whether there is a JS repository for this project?

user-ac6fcd 18 July, 2019, 14:41:09

Please help

papr 18 July, 2019, 14:42:58

@user-ac6fcd The project does not have a library that you can integrate into your software directly. There is a stand-alone application (Pupil Capture) which provides a network interface to receive the eye tracking data in real time. The API uses zeromq and msgpack. You should have no problem finding appropriate JS bindings for these.

papr 18 July, 2019, 14:44:15

@user-ac6fcd What type of eye tracker do you plan to use?

user-ac6fcd 18 July, 2019, 14:46:47

Thank you @papr
Initially I tried using Tobii, but it is not compatible with mobile devices, so I was trying to build this feature to be used with the device camera. Do you think this is a feasible option?

papr 18 July, 2019, 14:47:31

Which device are we talking about? Phones?

user-ac6fcd 18 July, 2019, 14:47:50

yes, both android and iPhones

papr 18 July, 2019, 14:49:00

Is the react app running on the same phone?

user-ac6fcd 18 July, 2019, 14:49:07

yes it is

user-ffdd08 18 July, 2019, 21:05:31

Has anyone written a drift correction algorithm that I might be able to use?

user-f3a0e4 22 July, 2019, 15:50:57

time sync

user-f3a0e4 22 July, 2019, 15:51:08

@user-f3a0e4

user-5402ef 24 July, 2019, 15:45:14

Hello all. I'm attempting to build from source in a Windows 10 virtual machine. I'm interested in learning how the pupil detection works.

I'm running into issues with the GLFW window creation. Can someone confirm for me: you cannot run the Pupil software in a Hyper-V VM, because the processor doesn't support OpenGL, right? That is, unless you use Discrete Device Assignment?

user-139bd4 24 July, 2019, 19:19:35

Hi everyone, I'm currently working on a research paper with the Pupil device. I was wondering if anyone has previously written a Matlab script to automatically import gaze and pupil positions from Pupil Capture. Any help is appreciated.

papr 24 July, 2019, 19:38:20

@user-139bd4 https://github.com/pupil-labs/pupil-helpers/tree/master/matlab

user-139bd4 24 July, 2019, 19:43:26

Thank you!

user-139bd4 24 July, 2019, 19:53:31

I'm trying to run the receivemessage script but I'm encountering this error. Can anyone help out?

Chat image

user-f3a0e4 25 July, 2019, 14:20:34

Hi @papr we have been trying to use the latest Eye-movement detector plugin to analyse our data. However, the plugin provides vastly different results when used on either Windows or Mac, with more detailed and accurate results evident on the Windows system. Do you know why this would be, or how I might go about getting it to work on Mac? Thanks!

papr 25 July, 2019, 14:25:42

@user-f3a0e4 What is your setup to compare both? Do you have a single recording that you opened in Player on Windows and Mac?

user-f3a0e4 25 July, 2019, 14:38:05

Yes exactly 😃

user-f3a0e4 25 July, 2019, 14:38:26

One system reports 32 segments, the other 109

papr 25 July, 2019, 14:43:02

@user-f3a0e4 could you share the recording with [email removed]

papr 25 July, 2019, 14:43:54

@user-139bd4 unfortunately, I do not have a Matlab environment to reproduce the error 😕

user-f3a0e4 25 July, 2019, 14:59:32

@papr I have just emailed the recording

papr 25 July, 2019, 15:08:30

@user-f3a0e4 Thanks, I got it. I will come back to you as soon as it is investigated.

user-f3a0e4 25 July, 2019, 15:09:01

Thank you!!

user-139bd4 25 July, 2019, 19:50:58

@papr its fine

user-139bd4 25 July, 2019, 19:52:23

I have an additional question: I'm looking through the open-source Pupil code to find which file does the conversion of the gaze position from pixels to coordinates. Can someone guide me?

user-9c3078 29 July, 2019, 09:05:58

Hi! Can anyone tell me what 'pso' is?

user-9c3078 29 July, 2019, 09:08:08

Also, what is the criterion used to define each class? I thought it's not the duration. Is that right?

wrp 30 July, 2019, 09:45:47

Hi @user-9c3078 pso = post saccadic oscillation. For definitions and more information on the Naive Segmented Linear Regression method for eye movement classification (NSLR method) please see the paper in Nature: https://www.nature.com/articles/s41598-017-17983-x

user-40ad0b 30 July, 2019, 16:14:28

Hi all - has anyone done work with OpenCV in a plugin? I'm working on a plugin, part of which is to display a video using OpenCV. The plugin seems to run fine, the video plays, and the actual eye tracking process still records eye movements. But the world camera view freezes, and the surface tracking freezes too (running from source). When the video ends/closes, the world view and surface tracker kick in again. Is there an easy way to play an OpenCV video without pausing those processes? Or does anyone have an idea of the right direction to look for a solution?

papr 30 July, 2019, 16:17:23

@user-40ad0b plugins are not supposed to call blocking functions. Ideally, recent_events does whatever is needed to update the OpenCV UI and returns as quickly as possible.

papr 30 July, 2019, 16:18:21

The eye processes are separate, therefore they do not freeze. But all plugins run in the world process. Therefore, if one plugin freezes, the world process freezes.
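
A rough sketch of the non-blocking pattern, assuming the standard Plugin base class; the class name and video path are illustrative:

import cv2
from plugin import Plugin

class VideoOverlay(Plugin):
    def __init__(self, g_pool, video_path="/path/to/video.mp4"):
        super().__init__(g_pool)
        self.cap = cv2.VideoCapture(video_path)

    def recent_events(self, events):
        # Advance at most one frame per call and return immediately,
        # instead of looping until the video ends.
        ok, frame = self.cap.read()
        if ok:
            cv2.imshow("stimulus video", frame)
            cv2.waitKey(1)  # 1 ms: pumps the OpenCV UI without blocking the world process

    def cleanup(self):
        self.cap.release()
        cv2.destroyAllWindows()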

user-40ad0b 30 July, 2019, 17:18:39

Perfect, thanks! Definitely on the right path now

End of July archive