@papr Have you encountered crashing on this line before? https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detectors/detector_3d.pyx#L64 It's quite a bit of work to debug the C++ code so I'm hoping that you may know the problem.
Further details: Both @user-d28f08 and my student have the exact same crash, but I don't have the crash on either of my Windows computers. The pupil source is on the same git commit in all cases. It crashes whether running from within PyCharm or from the Anaconda Prompt. One thing that is probably different is that I configured my vcpkg and conda environments weeks ago, but they did theirs recently, so some of the dependencies are probably different versions.
Please let me know if you have any suggestions.
Could someone maybe look into this? https://github.com/pupil-labs/pupil-mobile-app/issues/32#issuecomment-487842904 Would be very nice if Pupil Mobile worked on Android 9.
I have a problem with pupil-helpers plugins https://github.com/pupil-labs/pupil-helpers/tree/master/python When I try to load them, the pupil capture program doesn't start. There is no error or anything, it just seems to stop loading at world - [INFO] cython_methods.build: Building extension modules...
The only plugin that doesn't interrupt opening is pupil_remote_control, but I've noticed that its code looks very similar to pupil_remote (an older version of the plugin?), and the plugin itself doesn't work. There is no difference whether the Pupil Remote plugin is enabled or disabled while opening the program. Can someone help me with this issue?
@user-d28f08 Please be aware that the helpers are not meant as plugins. You are meant to run them explicitly in a terminal, next to Capture
There are some cython extensions that are built on the first run. This usually should not take longer than a minute. Please remove the helper scripts from the plugin directory, since any Python scripts there will be imported and therefore executed.
The program works without these files. I didn't know I should run them explicitly; I'll check that.
@papr Does that mean that someone will look at it? 🤔
@user-54376c Yes
Hello, I built a room-scale VR multiplayer in Unreal Engine and now basically got a Vive Pro with a Pupil Labs eye tracker dropped into my lap. I have at most two weeks to get it working, and a quick search turned up no Unreal support, only Unity. The best result so far is https://github.com/SysOverdrive/UPupilLabsVR but I'm unsure if it will do the job. All help welcome ^_^"
@user-92713f What is your goal? What do you want to use the eye tracking for?
What kind of data are you interested in?
Collaborative VR research experiments. So realtime eye animation. Stereo gaze would be nice but a basic "look at" vector would do it.
@papr Thank you for information, now everything works
@user-92713f I would recommend mirroring the current structure of the hmd-eyes alpha version. I do not know if the repository above is still compatible with the current version of Capture.
Hmmm so there is no easy way, just good old code crunching. Thanks for the information/confirmation.
@user-92713f There are only examples on how to interact with the network api, yes.
@user-5d12b0 @user-d28f08 My first intuition would be to blame ceres or anything that is related to it (especially linking correctly to it)
@user-8779ef Re streaming video from unity to Capture: We did a proof-of-concept with uncompressed video (basically publishing in the same format as the frame publisher). It worked well between two computers connected via LAN
Awesome.
@user-8779ef Just tested. Setup on the same computer:
- Capture instance A is connected to a headset and publishes the world video via the Frame Publisher
- Capture instance B runs the new HMD Streaming backend and is subscribed to frame.world on its own IPC backbone
- A forward script subscribed to frame.world at A publishes any incoming frames to B
Therefore, B basically mirrors A.
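A forward script like the one described could be sketched as follows. This is a minimal sketch with pyzmq; the URLs and the max_msgs parameter are assumptions for illustration (in a real setup you would query each instance's Pupil Remote for its SUB/PUB ports):

```python
import zmq

def forward_frames(sub_url, pub_url, topic=b"frame.world", max_msgs=None):
    """Relay messages matching `topic` from one Capture IPC backbone to another.

    sub_url:  SUB port of instance A's backbone (where frames are published)
    pub_url:  PUB port of instance B's backbone (where we re-publish them)
    max_msgs: stop after this many messages (None = run forever);
              a hypothetical parameter added here for testability
    """
    ctx = zmq.Context.instance()
    sub = ctx.socket(zmq.SUB)
    sub.connect(sub_url)
    sub.setsockopt(zmq.SUBSCRIBE, topic)
    pub = ctx.socket(zmq.PUB)
    pub.connect(pub_url)

    forwarded = 0
    while max_msgs is None or forwarded < max_msgs:
        # Multipart message: [topic, payload, optional extra frames]
        parts = sub.recv_multipart()
        pub.send_multipart(parts)
        forwarded += 1
```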
With A, I recorded a timer on my phone. Then, I put A and B next to each other and made a screen recording, recording the displayed timer in both instances. The timer has a resolution of 0.01 seconds.
Attached, you can find the screen recording. I see a varying delay of 0.00-0.03 seconds.
In the actual use case of HMD Eyes, there is no forwarding, effectively reducing the delay. Also, the frames come with their original timestamp. Since the purpose of this is mostly for recordings, the only effective delay will be the time synchronization delay between HMD eyes and capture.
Very nice. So, the scene video recording is well below the 90 Hz Unity update speed. good test!
I'm still corresponding to Felix about his tests regarding real-time latency of the gaze data in Unity.
Really glad you guys are testing this stuff with the appropriate rigor 😃
How taxed is the GPU? What's your feel here: will you need a separate PC for data logging with scene video?
Will it add latency to the gaze mapper?
Gaze mapping while rendering already suffers from a bit of a latency.
@user-8779ef I do not think that the GPU is taxed by it since it only involves memory copies. The receiver is implemented such that it actively drops all received frames but the most recent ones. This avoids accumulating lag as soon as there are more frames than the receiver can handle. I think @fxlange and I will have to combine our tests at some point to answer the other questions.
My intuition is that on a heavily taxed Unity system it would be best to run Capture on a separate device connected via LAN.
Especially, since Capture will have to encode the images when making a recording 🤔
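The drop-all-but-newest receive strategy described above can be sketched like this (a minimal pyzmq sketch; the actual HMD Streaming implementation may differ):

```python
import zmq

def recv_latest(sub):
    """Block for at least one message on a SUB socket, then drain the
    queue and return only the most recent message, dropping the backlog.
    This prevents lag from accumulating when frames arrive faster than
    the receiver can process them."""
    msg = sub.recv_multipart()  # wait for at least one frame
    while True:
        try:
            # Is a newer one already queued?
            msg = sub.recv_multipart(flags=zmq.NOBLOCK)
        except zmq.Again:
            return msg  # queue drained; msg is the newest frame
```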
Hi there, just a quick hardware question. We just received our new Pupil Labs binocular headset with USB-C mount. We are using a RealSense D435i with it. I just figured out that when I use the USB-C to USB-A cable from Pupil Labs, the RealSense times out and won't get data through (neither when coupled through the headset mount nor when we directly connect it to the RealSense without the headset). It only works if I use the cable that came with the D435i camera. Is this a known issue?
Hi, I'm wondering whether pupil is a good candidate for hardware acceleration (e.g. FPGA, ASIC type stuff). Some high-level questions I have are: What/where are the most demanding computational tasks in the pupil processing pipeline? Which stage consumes the most memory bandwidth? What is the size of pupil's working set (perhaps as a function of frame size)? I'm new here, and before I begin to investigate these questions myself, I wanted to check with all of you experienced folks whether similar profiling work has already been done for pupil, or if the community has any insights into these. Thanks!
@user-d28f08 @papr ceres-solver is now on conda-forge. I've updated the build instructions with vcpkg eliminated.
This fixes the 3d detector problem.
@papr Hi. I am working on pulling out video frames from the head cameras that are sync'd across a parent and child (close to the same time stamp). The goal is to pull out the same number of sync'd video frames and then use TensorFlow to recognize the objects in these frames. I've started by using MATLAB and ffmpeg; making progress, but this seems like something others might be working on. Any pointers toward useful plugins or other resources that might be available from the pupil community?
@user-78dc8f I can send you python code for time correlation and video frame decoding with pyav tomorrow.
@papr thanks. That would be great.
@user-78dc8f
Time correlation:
- https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/player_methods.py#L136
- Usage: find_closest() takes two 1d-arrays of timestamps, one from video A and one from video B. You can use numpy.load() to read the *_timestamps.npy files. The function will return a list of indices with the same length as B, where the values are valid indices for A. So if result[i] == j, then A's j-th frame is closest to B's i-th frame.
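For context, here is a minimal re-implementation of the described semantics using numpy.searchsorted. This is a sketch, not Pupil's actual code (which lives in player_methods.py), and it assumes both timestamp arrays are sorted ascending:

```python
import numpy as np

def find_closest(a, b):
    """For each timestamp in `b`, return the index of the closest
    timestamp in `a`. `a` must be sorted ascending and have >= 2 entries.
    The result has len(b); result[i] == j means A's j-th frame is
    closest in time to B's i-th frame."""
    a = np.asarray(a)
    b = np.asarray(b)
    # Index of the first element in a that is >= each element of b,
    # clipped so both neighbors exist.
    right = np.clip(np.searchsorted(a, b), 1, len(a) - 1)
    left = right - 1
    # Pick whichever neighbor is closer in time.
    closer_right = (a[right] - b) < (b - a[left])
    return np.where(closer_right, right, left)
```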
The easiest way for the frame extraction is to use opencv:
import cv2
cap = cv2.VideoCapture('<path to video file>')
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break  # end of video
    # frame is a BGR image (numpy array)
cap.release()
Hi there, can anyone help me with setting Pupil Labs up in Unity?
Hi @user-d6c80d I see that you re-posted your question in vr-ar - that channel is the appropriate place to discuss Unity/VR-AR related questions
@papr Thanks for the reply yesterday. The time correlation is basically the same as what I have implemented in matlab (good confirmation there). Can you say more about the video extraction? I didn't see opencv in the player_methods file...the extraction is where we seem to be hitting errors.
@papr Note that your opencv part of the reply was cut off--is there a way to open your full reply in discord?? (sorry I'm such a novice here...)
@user-78dc8f The opencv example is not cut off. The frame object is the BGR pixel buffer. Pupil does not use opencv to decode videos but pyav. Accessing the image data frame by frame with opencv is simpler than with pyav. That is why I provided the opencv example. What kind of errors are you hitting?
You can always use ffprobe <video file> to check whether the video file is corrupted or not. ffprobe is usually installed next to ffmpeg.
@papr when we use ffmpeg to extract a specific number of frames starting at the timestamp, it sometimes hits repeated frames and we get tons of warnings. What I want is just to extract the frames associated with the timestamps I have selected post-time-correlation. That way, I have a set of frames from each world cam that are sampling the world at the same moments in time.
Ok, I understand!
@user-78dc8f do you seek to a specific timestamp at first, or do you loop over all frames until you are at the correct index?
@papr Here's an example call: 'ffmpeg -i 06NIHVWM074G_child_worldviz.mp4 -vf "select=gte(n\,4449)" -vframes 17593 06NIHVWM074G_childframes/06NIHVWM074G_Child%d.jpg'
@papr so we are specifying the first frame (4449 in this example) and the number of frames we want thereafter (17593) [which matches the number of frames we extracted from the timestamp data structure.
@papr the assumption here is that the number of timestamps in 'timestamps.npy' and the number of frames in 'worldviz.mp4' should match.
Yes, that assumption should be correct. Have you tried applying the same command to the intermediate video files instead of those that were exported from Player?
@papr No, haven't tried this. Which files are the 'intermediate' video files?
@user-78dc8f Those files that are recorded by Capture/Pupil Mobile. Meaning, not those that were exported by Player.
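One way to avoid the repeated-frame warnings from the ffmpeg select filter is to read the video sequentially and keep only the frames at the correlated indices. A sketch of the selection logic (it is decoder-agnostic; the commented OpenCV usage is just one possible way to feed it, and correlated_indices is a placeholder name for the indices obtained from the time correlation):

```python
def select_frames(frames, indices):
    """Yield (index, frame) pairs for the requested frame indices
    from a sequential frame iterator (e.g. a cv2/pyav decode loop)."""
    wanted = set(indices)
    last = max(wanted) if wanted else -1
    for i, frame in enumerate(frames):
        if i in wanted:
            yield i, frame
        if i >= last:
            return  # stop decoding once all wanted frames are seen

# Usage with OpenCV (sketch):
# cap = cv2.VideoCapture(video_path)
# frame_iter = iter(lambda: cap.read()[1], None)
# for i, frame in select_frames(frame_iter, correlated_indices):
#     cv2.imwrite("frame_%06d.jpg" % i, frame)
```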
@user-90207c In your case, you can create a Pupil Player recording from the single video, run Offline Pupil Detection, and export the data as csv files.
Check out this Jupyter Notebook on how to create a recording from a single file: https://gist.github.com/papr/d3e9d3863b934d1d4893e91b3f935ed1
In order for your video to be detected as an eye video, you will have to name it eye0.mp4 or eye1.mp4.
@user-90207c Alternatively, @user-23fe58 has been working on a similar method to create recordings from a single file.
Hi, I have a question:
I'm making a custom plugin (class MyPlugin(Plugin):) and I don't understand why I have to pass radius, color, thickness, and fill parameters to the init function. Why can't I pass only the g_pool parameter?
@user-a6cc45 you can!
G_pool is the only required argument
@papr You're right, it worked after I removed user_settings_player 😄 before that I got errors about providing too many arguments
@user-a6cc45 ah, be aware of get_init_dict(). Dicts returned by it will be passed to __init__ when restoring the session.
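The interaction between get_init_dict() and session restore can be illustrated with a self-contained sketch. Here a stub base class stands in for Pupil's real Plugin class, and the parameter names (radius, color) are purely illustrative:

```python
class Plugin:
    """Stub standing in for Pupil's real Plugin base class."""
    def __init__(self, g_pool):
        self.g_pool = g_pool

    def get_init_dict(self):
        # Keyword arguments returned here are persisted in the session
        # settings and passed back to __init__ on the next launch.
        return {}

class MyPlugin(Plugin):
    def __init__(self, g_pool, radius=20, color=(0.0, 0.5, 1.0, 0.8)):
        super().__init__(g_pool)
        self.radius = radius
        self.color = color

    def get_init_dict(self):
        # Persist only keys that __init__ accepts; otherwise session
        # restore fails with a "too many arguments" style error.
        return {"radius": self.radius, "color": self.color}

# Session restore is roughly: MyPlugin(g_pool, **saved_init_dict)
```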
Hi, I'm trying to make a plugin, but the Pupil Labs software crashes if the plugin imports external libraries. Is there a way around this?
@user-b5c63f You are probably running from bundle. You either need to place all dependencies next to the plugin or run from source.
@papr Thanks! I'll try that
Hi, When I'm trying to enable Offline Surface Tracker in Pupil Player the app crashes and I get 2 errors:
C:\Users\Jolka\pupil\pupil_src\shared_modules\reference_surface.py:455: RuntimeWarning: invalid value encountered in subtract
and np.mean(np.abs(corners_robust - self.old_corners_robust)) < 0.02
C:\Users\Jolka\pupil\pupil_src\shared_modules\reference_surface.py:455: RuntimeWarning: invalid value encountered in less
and np.mean(np.abs(corners_robust - self.old_corners_robust)) < 0.02
What can be wrong?
@user-a6cc45 when did you pull the repository? Or better: at which commit are you? (git describe --long)
@papr v1.11-57-gf4d63ebc
@user-a6cc45 my best guess is that either corners_robust or self.old_corners_robust contains NaN values. Not sure how that can happen. We have recently restructured the surface tracker. You could try pulling the current master and trying out the newest version. Or you could send the recording to data@pupil-labs.com and I will have a look at it on Thursday.
hello! i'm finally switching from the diy headset to the official Pupil Headset! Is there something that I should do to make ubuntu recognize it or is it plug & play and i'm doing something wrong?
@user-96755f you might need to add your user to the plugdev group
Are the cameras listed as Unknown?
ok, the cameras are working!
but the eye cam is flipped upside down
@user-96755f This is expected since the eye camera is physically flipped 🙂
haha solved! sorry for the silly question
@user-96755f you can flip the visualization in the eye process. The image being flipped does not affect the gaze mapping
Hi--I've been trying to set up the dependencies on MacOS 10.14.5, but I'm running into problems installing libuvc. I have already installed libusb using homebrew like the docs describe. I'm able to follow these commands up until the last one:
Then I get the following error:
Any help would be greatly appreciated. I've been trying this for a while 😦
@user-32853a there is a related issue on GitHub. I can link it later when I am back at my laptop.
@papr Is it this one? https://github.com/pupil-labs/pyuvc/issues/30 Unfortunately I tried those but I'm still getting the issue
@user-32853a yes
another silly silly question: is there anything that i can do to limit the fish eye effect?
@user-96755f the headset should have come with a narrow replacement lens
ok the silver pack right?
@user-96755f correct
@papr I'm still having trouble with the issue after trying the ideas on that GitHub issue again. Any other ideas?
@user-32853a I will come back to you as soon as I have my laptop
@papr Thanks
@user-32853a What is your output when running make LIBRARY_PATH=/usr/local/lib && make install?
I get this:
Oh, OK, my bad. Misread the error message in the first place
@user-32853a What does ls -l /usr/local/include/*usb* output?
lrwxr-xr-x 1 dh1044646 admin 42 May 29 10:26 /usr/local/include/libusb-1.0 -> ../Cellar/libusb/1.0.22/include/libusb-1.0
@papr
@user-32853a Please try building libuvc again with:
make LIBRARY_PATH=/usr/local/lib INCLUDE_PATH=/usr/local/include && make install
Unfortunately I'm still getting the same error @papr
@user-32853a ls -l /usr/local/include/libusb-1.0/ (just to make sure the header file is actually there)
@user-32853a Also, let's move this into a private conversation since this seems to be a setup issue. We can update the channel with a solution as soon as we have one.
Hi there, I'm gonna do BatchExport. could you please help how I can run it on Jupyter? Thanks.
@papr @user-32853a Last month I've spent the majority of my time building pupil from source on MacOS / Linux / Windows. I could chime in, and maybe smooth out some hiccups
@user-97591f Feel free to bring in your newly gained knowledge, answer build related questions, and/or propose changes to the install instructions.
@user-97591f PRs are welcomed at https://github.com/pupil-labs/pupil-docs