software-dev


user-5d12b0 01 May, 2019, 14:29:36

@papr Have you encountered crashing on this line before? https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detectors/detector_3d.pyx#L64 It's quite a bit of work to debug the C++ code so I'm hoping that you may know the problem.

Further details: Both @user-d28f08 and my student have the exact same crash, but I don't have the crash on either of my Windows computers. The pupil source is on the same git commit in all cases. It crashes whether running from within PyCharm or from the Anaconda Prompt. One thing that is probably different is that I configured my vcpkg and conda environments weeks ago, but they did theirs recently, so some of the dependencies are probably different versions.

Please let me know if you have any suggestions.

user-54376c 02 May, 2019, 07:02:53

Could someone maybe look into this? https://github.com/pupil-labs/pupil-mobile-app/issues/32#issuecomment-487842904 Would be very nice if Pupil Mobile worked on Android 9.

user-d28f08 02 May, 2019, 10:29:40

I have a problem with pupil-helpers plugins https://github.com/pupil-labs/pupil-helpers/tree/master/python When I try to load them, the pupil capture program doesn't start. There is no error or anything, it just seems to stop loading at world - [INFO] cython_methods.build: Building extension modules... The only plugin that doesn't interrupt opening is pupil_remote_control, but I've noticed that the code looks very similar to pupil_remote (older version of the plugin?), but the plugin itself doesn't work. There is no difference if pupil remote plugin is enabled or disabled while opening the program. Can someone help me with this issue?

papr 02 May, 2019, 11:02:39

@user-d28f08 Please be aware that the helpers are not meant as plugins. You are meant to run them explicitly in a terminal, next to Capture.

papr 02 May, 2019, 11:08:20

There are some Cython extensions that are built on the first run. This usually should not take longer than one minute. Please remove the helper scripts from the plugin directory, since any Python scripts there will be imported and therefore executed.

user-d28f08 02 May, 2019, 11:12:18

The program works without these files. I didn't know I should run them explicitly; I'll check that.

user-54376c 02 May, 2019, 11:15:45

@papr Does that like mean that someone will look at it? 🤔

papr 02 May, 2019, 11:24:55

@user-54376c Yes

user-92713f 02 May, 2019, 12:03:40

Hello, I built a room-scale VR multiplayer in Unreal Engine and now basically got a Vive Pro with a Pupil Labs eye tracker dropped into my lap. I have at the very most two weeks to get it working, and a quick search turned up no Unreal support, only Unity. The best result so far is https://github.com/SysOverdrive/UPupilLabsVR but I'm unsure if this will do it. All help welcome ^_^"

papr 02 May, 2019, 12:04:41

@user-92713f What is your goal? What do you want to use the eye tracking for?

papr 02 May, 2019, 12:04:58

What kind of data are you interested in?

user-92713f 02 May, 2019, 12:07:12

Collaborative VR research experiments. So realtime eye animation. Stereo gaze would be nice but a basic "look at" vector would do it.

user-d28f08 02 May, 2019, 12:12:41

@papr Thank you for information, now everything works

papr 02 May, 2019, 12:14:59

@user-92713f I would recommend that you mirror the current structure of the hmd-eyes-alpha version. I do not know if the repository above is still compatible with the current version of Capture.

user-92713f 02 May, 2019, 12:20:51

Hmmm so there is no easy way, just good old code crunching. Thanks for the information/confirmation.

papr 02 May, 2019, 12:23:44

@user-92713f There are only examples on how to interact with the network api, yes.

papr 06 May, 2019, 12:29:31

@user-5d12b0 @user-d28f08 My first intuition would be to blame ceres or anything that is related to it (especially linking correctly to it)

papr 06 May, 2019, 13:22:03

@user-8779ef Re streaming video from unity to Capture: We did a proof-of-concept with uncompressed video (basically publishing in the same format as the frame publisher). It worked well between two computers connected via LAN

user-8779ef 06 May, 2019, 15:55:56

Awesome.

papr 06 May, 2019, 16:37:43

@user-8779ef Just tested. Setup on the same computer:
- Capture instance A is connected to a headset and publishes the world video via the Frame Publisher
- Capture instance B runs the new HMD Streaming backend and is subscribed to frame.world on its own IPC backbone
- A forwarding script subscribed to frame.world at A publishes any incoming frames to B

Therefore, B basically mirrors A.

With A, I recorded a timer on my phone. Then, I put A and B next to each other and made a screen recording, recording the displayed timer in both instances. The timer has a resolution of 0.01 seconds.

Attached, you can find the screen recording. I see a varying delay of 0.00-0.03 seconds.

papr 06 May, 2019, 16:40:00

In the actual use case of hmd-eyes, there is no forwarding, effectively reducing the delay. Also, the frames come with their original timestamp. Since the purpose of this is mostly for recordings, the only effective delay will be the time synchronization delay between hmd-eyes and Capture.

user-8779ef 06 May, 2019, 16:54:44

Very nice. So the scene video delay is well below the 90 Hz Unity update interval. Good test!

user-8779ef 06 May, 2019, 16:55:18

I'm still corresponding with Felix about his tests regarding real-time latency of the gaze data in Unity.

user-8779ef 06 May, 2019, 16:55:59

Really glad you guys are testing this stuff with the appropriate rigor 😃

user-8779ef 06 May, 2019, 16:57:43

How taxed is the GPU? What's your feeling here - will you need a separate PC for data logging with scene video?

user-8779ef 06 May, 2019, 16:57:54

Will it add latency to the gaze mapper?

user-8779ef 06 May, 2019, 16:58:29

Gaze mapping while rendering already suffers from a bit of latency.

papr 07 May, 2019, 06:49:19

@user-8779ef I do not think that the GPU is taxed by it, since it only involves memory copies. The receiver is implemented such that it actively drops all received frames but the most recent one. This is to avoid accumulating lag as soon as there are more frames than the receiver can handle. I think @fxlange and I will have to combine our tests at some point to answer the other questions.
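
The drop-everything-but-the-newest policy described here can be sketched in a few lines of plain Python. This is a simplification for illustration: the real receiver works on a ZMQ subscription socket, and the deque here is a hypothetical stand-in for the incoming frame queue.

```python
from collections import deque

def latest_frame(pending):
    """Drain a queue of received frames, keeping only the newest one.

    Stale frames are discarded so that lag cannot accumulate when frames
    arrive faster than the consumer can process them.
    """
    newest = None
    while pending:
        newest = pending.popleft()
    return newest

# Usage: three frames arrived since the last poll; only the last survives.
pending = deque(["frame_1", "frame_2", "frame_3"])
print(latest_frame(pending))  # frame_3
```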

papr 07 May, 2019, 06:50:13

My intuition is that on a heavily taxed Unity system it would be best to run Capture on a separate device connected via LAN.

papr 07 May, 2019, 06:51:03

Especially since Capture will have to encode the images when making a recording 🤔

user-26fef5 08 May, 2019, 10:36:08

Hi there, just a quick hardware-based question. We just received our new Pupil Labs binocular headset with a USB-C mount. We are using a RealSense D435i with it. I just figured out that when I use the USB-C to USB-A cable from Pupil Labs, the RealSense times out and won't get data through (neither when coupled through the headset mount nor when we connect it directly to the RealSense without the headset). It only works if I use the cable that came with the D435i camera. Is this a known issue?

user-bd8441 08 May, 2019, 18:01:36

Hi, I'm wondering whether pupil is a good candidate for hardware acceleration (e.g. FPGA, ASIC type stuff). Some high-level questions I have are: What/where are the most demanding computational tasks in the pupil processing pipeline? Which stage consumes the most memory bandwidth? What is the size of pupil's working set (perhaps as a function of frame size)? I'm new here, and before I begin to investigate these questions myself, I wanted to check with all of you experienced folks whether similar profiling work has already been done for pupil, or if the community has any insights into these. Thanks!

user-5d12b0 10 May, 2019, 20:10:27

@user-d28f08 @papr ceres-solver is now on conda-forge. I've updated the build instructions with vcpkg eliminated.

user-5d12b0 10 May, 2019, 20:10:39

This fixes the 3d detector problem.

user-78dc8f 15 May, 2019, 13:59:14

@papr Hi. I am working on pulling out video frames from the head cameras that are synced across a parent and child (close to the same timestamp). The goal is to pull out the same number of synced video frames and then use TensorFlow to recognize the objects in these frames. I've started by using MATLAB and ffmpeg; making progress, but this seems like something others might be working on. Any pointers toward useful plugins or otherwise that might be available from the Pupil community?

papr 15 May, 2019, 14:57:51

@user-78dc8f I can send you python code for time correlation and video frame decoding with pyav tomorrow.

user-78dc8f 15 May, 2019, 15:19:08

@papr thanks. That would be great.

papr 16 May, 2019, 09:35:10

@user-78dc8f

Time correlation:
- https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/player_methods.py#L136
- Usage: find_closest() takes two 1d-arrays of timestamps, one from video A and one from video B. You can use numpy.load() to read the *_timestamps.npy files. The function will return a list of indices with the same length as B, where the values are valid indices for A. So if result[i] == j, then A's j-th frame is closest to B's i-th frame.
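
As a pure-Python illustration of what find_closest() does, here is a simplified re-implementation using the bisect module. This is a sketch, not the actual Pupil source, which operates on NumPy arrays.

```python
import bisect

def find_closest_sketch(a, b):
    """For each timestamp in b, return the index of the closest timestamp in a.

    Assumes a is sorted in ascending order, as Pupil timestamp arrays are.
    """
    result = []
    for t in b:
        i = bisect.bisect_left(a, t)
        if i == 0:
            result.append(0)
        elif i == len(a):
            result.append(len(a) - 1)
        else:
            # pick whichever neighbour is closer to t
            result.append(i if a[i] - t < t - a[i - 1] else i - 1)
    return result

ts_a = [0.0, 1.0, 2.0, 3.0]  # e.g. world video timestamps
ts_b = [0.1, 1.6, 2.9]       # e.g. second video's timestamps
print(find_closest_sketch(ts_a, ts_b))  # [0, 2, 3]
```

So result[i] == j means A's j-th frame is the temporally closest match for B's i-th frame, as described above.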

The easiest way to extract frames is to use OpenCV:

import cv2

cap = cv2.VideoCapture('<path to video file>')

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # frame is the BGR pixel buffer of the current video frame

cap.release()

user-d6c80d 16 May, 2019, 14:31:27

Hi there, can anyone help me with setting Pupil Labs up in Unity?

wrp 17 May, 2019, 05:00:44

Hi @user-d6c80d I see that you re-posted your question in vr-ar - that channel is the appropriate place to discuss Unity/VR-AR related questions

user-78dc8f 17 May, 2019, 08:56:14

@papr Thanks for the reply yesterday. The time correlation is basically the same as what I have implemented in MATLAB (good confirmation there). Can you say more about the video extraction? I didn't see opencv in the player_methods file... the extraction is where we seem to be hitting errors.

user-78dc8f 17 May, 2019, 08:57:22

@papr Note that the opencv part of your reply was cut off--is there a way to open your full reply in Discord? (sorry I'm such a novice here...)

papr 17 May, 2019, 08:59:20

@user-78dc8f The opencv example is not cut off. The frame object is the BGR pixel buffer. Pupil uses pyav, not opencv, to decode videos. Accessing the image data frame by frame with opencv is simpler than with pyav; that is why I provided the opencv example. What kind of errors are you hitting?

papr 17 May, 2019, 09:00:19

You can always use ffprobe <video file> to check whether the video file is corrupted or not. ffprobe is usually installed next to ffmpeg.

user-78dc8f 17 May, 2019, 09:02:02

@papr when we use ffmpeg to extract a specific number of frames starting at the timestamp, it sometimes hits repeated frames and we get tons of warnings. What I want is just to extract the frames associated with the timestamps I have selected post-time-correlation. That way, I have a set of frames from each world cam that are sampling the world at the same moments in time.

papr 17 May, 2019, 09:03:04

Ok, I understand!

papr 17 May, 2019, 09:11:06

@user-78dc8f do you seek to a specific timestamp at first, or do you loop over all frames until you are at the correct index?

user-78dc8f 17 May, 2019, 09:15:29

@papr Here's an example call: 'ffmpeg -i 06NIHVWM074G_child_worldviz.mp4 -vf "select=gte(n\,4449)" -vframes 17593 06NIHVWM074G_childframes/06NIHVWM074G_Child%d.jpg'

user-78dc8f 17 May, 2019, 09:17:09

@papr so we are specifying the first frame (4449 in this example) and the number of frames we want thereafter (17593), which matches the number of frames we extracted from the timestamp data structure.

user-78dc8f 17 May, 2019, 09:18:28

@papr the assumption here is that the number of timestamps in 'timestamps.npy' and the number of frames in 'worldviz.mp4' should match.
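
As an aside, ffmpeg's select filter can also pick out an explicit set of frame indices instead of a contiguous range, which avoids pulling frames you did not select during time correlation. Here is a sketch that builds such a command; the file names, indices, and the helper function are all hypothetical.

```python
def build_extract_cmd(video, indices, out_pattern):
    """Build an ffmpeg command that extracts exactly the listed frame indices.

    Commas inside a filtergraph expression must be escaped as '\\,', as in
    the select=gte(n\\,4449) example above.
    """
    expr = "+".join(f"eq(n\\,{i})" for i in indices)
    return [
        "ffmpeg", "-i", video,
        "-vf", f"select='{expr}'",
        "-vsync", "0",  # passthrough: do not duplicate/drop frames to match a rate
        out_pattern,
    ]

# Hypothetical usage with three correlated frame indices:
cmd = build_extract_cmd("world.mp4", [4449, 4451, 4460], "frames/frame%d.jpg")
print(" ".join(cmd))
```

The single quotes around the select expression are for pasting the printed line into a shell; when invoking via subprocess without a shell you would drop them.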

papr 17 May, 2019, 09:23:01

Yes, that assumption should be correct. Have you tried applying the same command to the intermediate video files instead of those that were exported from Player?

user-78dc8f 17 May, 2019, 09:24:20

@papr No, haven't tried this. Which files are the 'intermediate' video files?

papr 20 May, 2019, 06:45:37

@user-78dc8f Those files that are recorded by Capture/Pupil Mobile. Meaning, not those that were exported by Player.

papr 20 May, 2019, 06:55:16

@user-90207c In your case, you can create a Pupil Player recording from the single video, run Offline Pupil Detection, and export the data as csv files.

Check out this Jupyter Notebook on how to create a recording from a single file: https://gist.github.com/papr/d3e9d3863b934d1d4893e91b3f935ed1

In order for your video to be detected as eye video, you will have to name it eye0.mp4 or eye1.mp4

papr 20 May, 2019, 06:57:23

@user-90207c Alternatively, @user-23fe58 has been working on a similar method to create recordings from a single file.

user-a6cc45 20 May, 2019, 20:07:48

Hi, I have a question: I'm making a custom plugin, class MyPlugin(Plugin):, and I don't understand why I have to pass radius, color, thickness and fill parameters to the __init__ function. Why can't I pass only the g_pool parameter?

papr 20 May, 2019, 20:16:57

@user-a6cc45 you can!

papr 20 May, 2019, 20:17:21

g_pool is the only required argument

user-a6cc45 20 May, 2019, 20:21:59

@papr You're right, but it worked after I removed user_settings_player 😄 before that, I got errors about providing too many arguments

papr 20 May, 2019, 20:24:20

@user-a6cc45 ah, be aware of get_init_dict(). Dicts returned by it will be passed to __init__ when restoring the session
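
The get_init_dict() round trip mentioned here can be illustrated with a minimal pure-Python sketch. The Plugin base class below is a stand-in for illustration only; the real one lives in Pupil's source, and the radius/color parameters echo the ones from the question.

```python
# Stand-in base class, mimicking the part of Pupil's Plugin API discussed here.
class Plugin:
    def __init__(self, g_pool):
        self.g_pool = g_pool

class MyPlugin(Plugin):
    # Every argument except g_pool has a default, so the plugin can be
    # created fresh or reconstructed from saved session settings.
    def __init__(self, g_pool, radius=20, color=(0.0, 0.5, 1.0, 0.8)):
        super().__init__(g_pool)
        self.radius = radius
        self.color = color

    def get_init_dict(self):
        # Whatever is returned here is passed back to __init__ as keyword
        # arguments when the session is restored.
        return {"radius": self.radius, "color": self.color}

g_pool = object()  # placeholder for the shared global container
p = MyPlugin(g_pool, radius=42)
restored = MyPlugin(g_pool, **p.get_init_dict())
print(restored.radius)  # 42
```

This is why stale user settings can cause "too many arguments" errors: the restored dict must match the current __init__ signature.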

user-b5c63f 26 May, 2019, 08:47:06

Hi, I'm trying to make a plugin, but the Pupil Labs software crashes if the plugin imports external libraries. Is there a way around this?

papr 26 May, 2019, 08:57:02

@user-b5c63f You are probably running from bundle. You either need to place all dependencies next to the plugin or run from source.

user-b5c63f 27 May, 2019, 10:10:52

@papr Thanks! I'll try that

user-a6cc45 28 May, 2019, 14:02:49

Hi, When I'm trying to enable Offline Surface Tracker in Pupil Player the app crashes and I get 2 errors:

C:\Users\Jolka\pupil\pupil_src\shared_modules\reference_surface.py:455: RuntimeWarning: invalid value encountered in subtract
  and np.mean(np.abs(corners_robust - self.old_corners_robust)) < 0.02
C:\Users\Jolka\pupil\pupil_src\shared_modules\reference_surface.py:455: RuntimeWarning: invalid value encountered in less
  and np.mean(np.abs(corners_robust - self.old_corners_robust)) < 0.02

What can be wrong?

papr 28 May, 2019, 14:05:51

@user-a6cc45 when did you pull the repository? Or better: At which commit are you?

papr 28 May, 2019, 14:06:10

git describe --long

user-a6cc45 28 May, 2019, 14:14:51

@papr v1.11-57-gf4d63ebc

papr 28 May, 2019, 14:17:59

@user-a6cc45 my best guess is that either corners_robust or self.old_corners_robust contains NaN values. Not sure how that can happen. We have recently restructured our surface tracker. You could pull the current master and try out the newest version. Or you could send the recording to data@pupil-labs.com and I will have a look at it on Thursday.

user-96755f 29 May, 2019, 14:28:24

hello! I'm finally switching from the DIY headset to the official Pupil Headset! Is there something that I should do to make Ubuntu recognize it, or is it plug & play and I'm doing something wrong?

papr 29 May, 2019, 14:30:27

@user-96755f you might need to add your user to the plugdev group

papr 29 May, 2019, 14:30:48

Are the cameras listed as Unknown?

user-96755f 29 May, 2019, 14:32:15

ok, the cameras are working!

user-96755f 29 May, 2019, 14:33:55

but the eye cam is flipped upside down

papr 29 May, 2019, 14:38:11

@user-96755f This is expected since the eye camera is physically flipped 🙂

user-96755f 29 May, 2019, 14:39:08

haha solved! sorry for the silly question

papr 29 May, 2019, 14:39:11

@user-96755f you can flip the visualization in the eye process. The image being flipped does not affect the gaze mapping

user-32853a 29 May, 2019, 14:41:09

Hi--I've been trying to set up the dependencies on macOS 10.14.5, but I'm running into problems installing libuvc. I have already installed libusb using Homebrew as the docs describe. I'm able to follow these commands up until the last one:

user-32853a 29 May, 2019, 14:41:31

Chat image

user-32853a 29 May, 2019, 14:41:47

Then I get the following error:

Chat image

user-32853a 29 May, 2019, 14:42:03

Any help would be greatly appreciated. I've been trying this for a while 😦

papr 29 May, 2019, 14:42:27

@user-32853a there is a related issue on GitHub. I can link it later when I am back at my laptop.

user-32853a 29 May, 2019, 14:44:46

@papr Is it this one? https://github.com/pupil-labs/pyuvc/issues/30 Unfortunately I tried those but I'm still getting the issue

papr 29 May, 2019, 14:45:50

@user-32853a yes

user-96755f 29 May, 2019, 14:52:58

another silly question: is there anything I can do to limit the fisheye effect?

papr 29 May, 2019, 14:54:55

@user-96755f the headset should have come with a narrow replacement lens

user-96755f 29 May, 2019, 14:55:28

ok the silver pack right?

papr 29 May, 2019, 14:58:08

@user-96755f correct

user-32853a 29 May, 2019, 15:01:39

@papr I'm still having trouble with the issue after trying the ideas on that GitHub issue again. Any other ideas?

papr 29 May, 2019, 15:03:11

@user-32853a I will come back to you as soon as I have my laptop

user-32853a 29 May, 2019, 15:07:07

@papr Thanks

papr 29 May, 2019, 15:48:58

@user-32853a What is your output when running make LIBRARY_PATH=/usr/local/lib && make install?

user-32853a 29 May, 2019, 15:56:20

I get this:

Chat image

papr 29 May, 2019, 16:04:41

Oh, OK, my bad. Misread the error message in the first place

papr 29 May, 2019, 16:08:17

@user-32853a What does ls -l /usr/local/include/*usb* output?

user-32853a 29 May, 2019, 16:14:40

lrwxr-xr-x 1 dh1044646 admin 42 May 29 10:26 /usr/local/include/libusb-1.0 -> ../Cellar/libusb/1.0.22/include/libusb-1.0

user-32853a 29 May, 2019, 16:14:46

@papr

papr 29 May, 2019, 16:17:01

@user-32853a Please try building libuvc again with:

make LIBRARY_PATH=/usr/local/lib INCLUDE_PATH=/usr/local/include && make install

user-32853a 29 May, 2019, 16:25:53

Unfortunately I'm still getting the same error @papr

papr 29 May, 2019, 16:28:09

@user-32853a ls -l /usr/local/include/libusb-1.0/ just to make sure the header file is actually there

papr 29 May, 2019, 16:30:28

@user-32853a Also, let's move this into a private conversation since this seems to be a setup issue. We can update the channel with a solution as soon as we have one.

user-fbd5db 29 May, 2019, 22:19:20

Hi there, I'm going to run BatchExport. Could you please tell me how I can run it in Jupyter? Thanks.

user-97591f 30 May, 2019, 15:56:05

@papr @user-32853a Last month I spent the majority of my time building Pupil from source on macOS / Linux / Windows. I could chime in and maybe smooth out some hiccups.

papr 30 May, 2019, 16:36:40

@user-97591f Feel free to bring in your newly gained knowledge, answer build related questions, and/or propose changes to the install instructions.

wrp 31 May, 2019, 09:13:26

@user-97591f PRs are welcomed at https://github.com/pupil-labs/pupil-docs

End of May archive