Hey guys,
I'm trying to run Pupil from source (tag v1.10) on macOS. I followed the setup guide, which is not entirely up to date (https://github.com/pupil-labs/pupil-docs/issues/233).
When I run python3 main.py
everything starts nicely, but when I start the calibration I get the following error:
world - [INFO] calibration_routines.screen_marker_calibration: Starting Calibration
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
File "/Users/mb/git/pupil/pupil_src/launchables/world.py", line 607, in world
p.recent_events(events)
File "/Users/mb/git/pupil/pupil_src/shared_modules/calibration_routines/screen_marker_calibration.py", line 306, in recent_events
self.markers = self.circle_tracker.update(gray_img)
File "/Users/mb/git/pupil/pupil_src/shared_modules/circle_detector.py", line 59, in update
markers = self._check_frame(img)
File "/Users/mb/git/pupil/pupil_src/shared_modules/circle_detector.py", line 99, in _check_frame
ellipses_list = find_pupil_circle_marker(img, scale)
File "/Users/mb/git/pupil/pupil_src/shared_modules/circle_detector.py", line 249, in find_pupil_circle_marker
min_ellipses_num=2,
File "/Users/mb/git/pupil/pupil_src/shared_modules/circle_detector.py", line 431, in find_concentric_circles
edge, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_TC89_KCOS
ValueError: not enough values to unpack (expected 3, got 2)
world - [INFO] launchables.world: Process shutting down.
Do you have any advice?
I came across a similar problem, but on Windows 10, with this line of code:
_, contours, hierarchy = cv2.findContours( edge, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_TC89_KCOS )
You can check the current documentation of the function, as its return signature changed between versions.
@user-c5bbc4 @user-54376c Please run
python3 -c "import cv2; print(cv2.__version__)"
4.0.1
I fixed this problem by simply deleting the leading `_,`
Mine is 3.4.3
and it is working fine now
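For reference, the incompatibility discussed above comes from `cv2.findContours` returning a 3-tuple `(image, contours, hierarchy)` in OpenCV 3.x but a 2-tuple `(contours, hierarchy)` in OpenCV 4.x. A minimal version-agnostic sketch; the simulated tuples below stand in for real `cv2` return values so no OpenCV install is needed to illustrate the unpacking:

```python
# cv2.findContours returns (image, contours, hierarchy) on OpenCV 3.x
# and (contours, hierarchy) on OpenCV 4.x. Taking the last two
# elements of the result works on both versions.

def unpack_contours(result):
    """Return (contours, hierarchy) from either OpenCV 3.x or 4.x."""
    return result[-2], result[-1]

# Simulated return values (stand-ins for real cv2 output):
cv3_result = ("image", ["c1", "c2"], "hierarchy")
cv4_result = (["c1", "c2"], "hierarchy")

print(unpack_contours(cv3_result))  # (['c1', 'c2'], 'hierarchy')
print(unpack_contours(cv4_result))  # (['c1', 'c2'], 'hierarchy')
```

With this helper, the call site becomes `contours, hierarchy = unpack_contours(cv2.findContours(edge, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_TC89_KCOS))`, which avoids pinning a specific OpenCV major version.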
Alright, I'll try opencv 3.x.
@papr Thanks for the hint. It works with opencv version 3.4.5 (brew install [email removed]), but it also needs the additional pip3 package opencv-contrib-python at version 3.4.5.20! A couple of days ago a version 4.x was released, which is now installed by default and causes a different error.
Thanks for letting us know. I will create a github issue for checking the opencv 4 compatibility.
Thanks. It would also be a help for newcomers to update the macOS part which is outdated in a few points (https://github.com/pupil-labs/pupil-docs/issues/233).
Hi guys. I am trying to use the developer version of pupil. I have installed all dependencies but I got this error when I ran run_service.bat
service - [ERROR] launchables.service: Process Service crashed with trace:
Traceback (most recent call last):
File "C:\Users\abiola\Documents\whl files\Clone Repo\pupil\pupil_src\launchables\service.py", line 91, in service
from file_methods import Persistent_Dict
File "C:\Users\abiola\Documents\whl files\Clone Repo\pupil\pupil_src\shared_modules\file_methods.py", line 25, in <module>
), "msgpack out of date, please upgrade to version (0, 5, 6 ) or later."
AssertionError: msgpack out of date, please upgrade to version (0, 5, 6 ) or later.
Hi guys, I solved the error above by uninstalling msgpack 0.6.1 which was installed by default. I installed version 0.5.6 and the error went away
Now I have the following error. I am trying my best to debug the error but no luck yet. I will appreciate any help you guys can render. Thanks.
cl : Command line warning D9025 : overriding '/W3' with '/w'
cl : Command line warning D9002 : ignoring unknown option '-std=c++11'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
File "C:\Users\abiola\Documents\whl files\Clone Repo\pupil\pupil_src\launchables\world.py", line 132, in world
import pupil_detectors
File "C:\Users\abiola\Documents\whl files\Clone Repo\pupil\pupil_src\shared_modules\pupil_detectors\__init__.py", line 18, in <module>
build_cpp_extension()
File "C:\Users\abiola\Documents\whl files\Clone Repo\pupil\pupil_src\shared_modules\pupil_detectors\build.py", line 32, in build_cpp_extension
ret = sp.check_output(build_cmd).decode(sys.stdout.encoding)
File "C:\Users\abiola\AppData\Local\Programs\Python\Python36\lib\subprocess.py", line 336, in check_output
**kwargs).stdout
File "C:\Users\abiola\AppData\Local\Programs\Python\Python36\lib\subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['C:\\Users\\abiola\\AppData\\Local\\Programs\\Python\\Python36\\python.exe', 'setup.py', 'install', '--install-lib=C:\\Users\\abiola\\Documents\\whl files\\Clone Repo\\pupil\\pupil_src\\shared_modules']' returned non-zero exit status 1.
world - [INFO] launchables.world: Process shutting down.
This error looks like you are not able to build the C++ pupil detector/calibration methods.
I didn't get any errors until I ran run_services
Are you saying I should start over again?
@user-a39804 Just to clarify, by run_services
you mean run_service.bat
?
Yes. That's exactly what I mean
@user-a39804 Please follow the debugging hints in the docs regarding compilation issues with pupil detector/calibration methods. They are compiled on demand when running any of the run_*.bat
files. In order to debug the issue, it is recommended to compile them manually. See the docs for details.
Oh ok. I will take a look at that. Thanks
@user-a39804 Also, in your original question, you mentioned an error with your msgpack version. I ran into that myself, where it asked for version 0.5.6 or later after I had version 0.6.1. I just went into file_methods.py and changed the assert to: assert msgpack.version[1] > 5 or (msgpack.version[1] == 5 and msgpack.version[2] >= 6)
@user-345bdf @user-a39804 the version check is there for a reason. There might be unforeseen issues if you use msgpack 0.6 or higher
Right, so I installed version 0.5.6
@papr thank you
@user-a39804 sorry, don't listen to me
I will try to @user-345bdf
Can we run the 3 camera streams (left, right, world) without using multiprocessing (one thread per camera) ? Anyone successful in doing that?
I want to blend the 3 frames into one and display them on the fly, but since each runs on a separate thread I am stuck on how to achieve it. Any suggestions, anyone?
@user-4a3b48 using the frame publisher plugin and its counterpart pupil helper script, you can receive the frame data from all three cameras over the network api.
@papr let me look into it will get back to you
@papr Was able to solve it using the Manager attribute of multiprocessing, but it is still too slow. You mentioned a "counterpart pupil helper script". Could you point me to the link?
@user-4a3b48 https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
What type of maximum delay are you looking for? And how did you measure it?
I was expecting it to be without any delay, but just from watching the video a good amount of delay can be seen. I have not recorded any specific numbers for the delay; let me measure it. Also, I am starting and terminating the processes for each camera in an infinite loop, which might be creating the lag. Let me do some study and I will keep you updated.
Yes, initializing the cameras is quite slow. And there is at least some delay for merging the images, right?
@papr exactly
@user-4a3b48 as a tip, use uvc.get_time_monotonic()
for measuring delays. The same time epoch is used for the uvc frame's timestamp.
Therefore, the baseline delay would be uvc.get_time_monotonic() - frame.timestamp
as soon as you received the image.
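The delay measurement papr describes above can be sketched as follows. Note this is an illustrative stand-in: `time.monotonic()` replaces `uvc.get_time_monotonic()` so the snippet runs without pyuvc or a camera; with real frames you would use the uvc clock and `frame.timestamp`, which share the same epoch.

```python
import time

# Sketch of the baseline-delay measurement:
#   delay = uvc.get_time_monotonic() - frame.timestamp
# time.monotonic() stands in for uvc.get_time_monotonic() here so the
# example runs without pyuvc hardware attached.

def baseline_delay(frame_timestamp, now=None):
    """Seconds elapsed between frame capture and 'now'."""
    if now is None:
        now = time.monotonic()  # stand-in for uvc.get_time_monotonic()
    return now - frame_timestamp

# Pretend a frame was captured 30 ms ago:
frame_timestamp = time.monotonic() - 0.030
delay = baseline_delay(frame_timestamp)
print(f"baseline delay: {delay * 1000:.1f} ms")
```

Computing this as soon as each frame arrives, before any merging, separates camera/transport latency from the cost of the concatenation step.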
thanks for the heads-up... will do it
I am trying to run this code but it gets stuck on this line https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
@user-a39804 what error do you see?
No error
Getting stuck on which line specifically in the notify
fn?
Not just moving past that line
I'm using Jupyter
I will use spyder and update you ASAP
Or just run locally from terminal if possible
Truth is, Python is not my niche
I am not sure why I am not getting any output...
@wrp what should I see when I run the code?
@user-a39804 just to clarify - you should also be running Pupil Capture at the same time as you run this helper script.
Yes, I am doing exactly that
I have Pupil Capture open and I am running the script in the editor. I was hoping a popup would show or something, but I am not getting any feedback
without the while loop as in the code here: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py#L66-L76 - nothing will happen
you can print the recent_world
or recent_eye0
to see the data
or add a line of code to use opencv to show the image frames
@user-a39804 to display the image with opencv from the script you could do something like the following:
while True:
    topic, msg = recv_from_sub()
    if topic == 'frame.world':
        print("received world frame")
        recent_world = np.frombuffer(msg['__raw_data__'][0], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)
        cv2.imshow("World", recent_world)
        cv2.waitKey(5)
this would use opencv to show the image in a new window called "World". cv2.waitKey is required by opencv; see docs here: https://docs.opencv.org/3.0-beta/modules/highgui/doc/user_interface.html?highlight=imshow#cv2.imshow
note, this is not optimized example code, but just to show you how you can display the world image frame as a subscriber
Ok. That makes sense
So help me understand something @wrp. I'm trying to display world camera feed remotely. Will this code point me in the right direction?
(you could take the print line out as well)
this example does exactly what you want.
define "remotely"
Remotely in this case means I connect the add on to PC A and I want to show the world camera feed on PC B
if on the same computer you do not need to change the code at all in the example. If remote (as in another computer on the same network), then you would need to change the address in L12 of the recv_world_video_frames.py
In the example it is set to localhost, but you need to set it to the IP address of "PC A" (this information is displayed in Pupil Capture's "Pupil Remote" plugin GUI).
addr = '127.0.0.1' # remote ip or localhost
Right. I have to use the IP of the host PC, right?
So do I need to run capture on the host PC or the remote PC ... Just to be sure
Pupil Capture is running on the machine that is connected to Pupil hardware. Pupil helpers script is running on the other machine not connected to Pupil hardware.
Now that makes sense
I'm trying to use a Raspberry pi 3 B+ instead of a host PC due to my use case
Do you think it's feasible?
RPI is connected to Pupil hardware?
Yes. So I will bypass PC. We want to use this on a construction site
@user-a39804 I do not have experience running Pupil hardware on RPI. In the past I recall that the community reported this to be possible, but low frame rate due to limited CPU power.
Right. The limited CPU power is something I've been trying to account for
But this is just a prototype so in the future we can go full scale
Let me try your suggestions and give you feedback. I'm grateful
sounds good
@user-a39804 in case you want to just send images, it would be easier (for the Pi, in terms of CPU) to have a script on the Pi send the images and have Capture receive them remotely.
I agree with @papr (this would make more sense for your use-case)
Hi Everyone! I am currently installing Windows dependencies as I must run the code from source. One problem I am getting is that
C:\work\boost\stage\lib
does not exist
So I cannot add it to PATH - am I missing something here?
@papr
@user-1582b2 Sorry, I do not know what the issue could be. Maybe a wrong version of boost? Make sure to follow the source instructions exactly.
@wrp I figured out where the code is getting stuck. The code is not moving past sub_port = req.recv_string()
@user-a39804 did you adjust the IP?
No, I am doing a local connection (same PC) so I assume the IP should be localhost
So, why is the code not moving past that stage?
I think I have zmq installed
Did you verify that the port is equivalent to the port shown in the Pupil Remote menu?
Hmm, actually pupil remote menu port shows 62983
Restart with defaults in the general settings
You are a genius!
It's working now!!!!!!!!!!!!!!!
Not sure why the port changed. Great to hear
You guys rock
So I'm gonna do the remote now
Hi - Is it possible to run the Pupil Capture software on Windows and also access the world camera video stream in another program, to apply, for example, computer vision techniques?
@user-1582b2 yes, this is definitely possible.
Just use the windows bundle (no source dependency installation) and use the network interface to receive the video data
Hi @papr thanks for this. I would like to design a plugin for the capture software on my windows PC that will pipe gaze coordinates out to a raspberry pi through USB. I am having difficulties running the capture software from source - is it essential to run from source when designing a plugin?
@user-1582b2 no, you can add plugins even if you run the bundled application
But I do not understand the setup. Your PI is connected to the PC via USB?
You can run a simple python script on the PI that receives the gaze data via the network api. https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
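For illustration, the subscription pattern behind filter_messages.py boils down to a zmq SUB socket with a topic-prefix filter. Below is a self-contained sketch: a local PUB socket stands in for Pupil Capture's IPC backbone, and json replaces msgpack to keep it dependency-light; both substitutions are demo assumptions. In the real helper you would first request SUB_PORT from Pupil Remote (default tcp://127.0.0.1:50020) and decode payloads with msgpack.

```python
import json
import time
import zmq

# Local publisher standing in for Pupil Capture, so the example
# runs without hardware or a running Capture instance.
ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)
port = pub.bind_to_random_port("tcp://127.0.0.1")

# Subscriber as in filter_messages.py: a plain string prefix filter.
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze")  # topic prefix filter

time.sleep(0.3)  # give the subscription time to propagate

# Publish one gaze and one pupil message; only gaze passes the filter.
pub.send_multipart([b"gaze.3d.01.",
                    json.dumps({"norm_pos": [0.5, 0.5]}).encode()])
pub.send_multipart([b"pupil.0",
                    json.dumps({"diameter": 40}).encode()])

topic, payload = sub.recv_multipart()
print(topic.decode(), json.loads(payload))
```

On the Pi, the same receive loop would feed the gaze coordinates straight into whatever consumes them (servo control, logging, etc.), with zmq handling reconnects over the network.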
@papr My first idea was to use the Pupil on the Pi and get the gaze coordinates directly but after reviewing it could be difficult to do so. Now I will run the pupil on a laptop and project the gaze coordinates to the Pi somehow - maybe through USB?
Whatever you want to do on the Pi can be done on the laptop as well if it is connected via USB. The advantage of the network connection is that you can connect via wifi and move the Pi independently from the laptop.
What os do you use on the PI?
@papr Yes, but this also means I must be connected to wifi, which isn't 100% ideal for this project. The OS I am using is Raspbian, but I can also use Ubuntu on another SD card. Maybe some background on the project will help: I would like to find where the user is looking using the Windows laptop and give this info to the Pi; the Pi will then move some servo motors using PWM according to the gaze position
@user-1582b2 OK. The problem with USB is that you will have to implement your own serialization. Does your laptop have an ethernet port? You could directly connect the Pi and the laptop via ethernet.
@papr That is a great idea, I did not think of that. I could use the network api to transfer the data via ethernet?
Yes, correct. I would always try to reduce the amount of custom code as much as possible.
@papr Fantastic, thank you for the help!
Just a quick one: will the developer setup for Linux Ubuntu 16 also work for Ubuntu Core?
@user-a39804 I have not tried that yet. It might be required to install further dependencies.
Oh ok
I also noticed apt-get is not supported on core. So I have to use snap. Whew
@papr I was able to solve the latency issue. I used a multiprocessing.Queue() to store the frame output of every process separately, with a separate function handling the concatenation by popping elements from the queues.
Hello, people. I want to subscribe to fixation topics, but nothing is published. I used Pupil Capture to make sure that fixations happened, and I subscribed with the empty topic (i.e. all topics); fixations are not among them. Any idea?
@user-033747 Did you enable the fixation detector plugin? If yes, which version on which OS are you using?
Hi, @papr , I enabled fixation detector, I could see the fixation in pupil capture. And I have v1.9-7 running on ubuntu 16.04.
Can you receive anything else with the script? e.g. gaze?
yes. I can receive gaze, frame.world, frame.eye, pupil, etc
Mmh, interesting. Could you checkout v1.10 and see if the issue still exists?
Ok. I will try with v1.10.
Hi, @papr , I tried with v1.10, still can't get fixation topic
ok, I will try to replicate the issue
thanks.
Hi there, I have a question about the world camera image. Is it somehow possible to get access to the raw data of the image, even at a slower frame rate? I would be interested in a spectral analysis of the image.
@user-14d189 do you mean the rgb data? Yes, of course. Either write a plugin with direct access to the data or use the frame publisher plugin to access the image data over the network api
@papr raw as in the direct output of the recording CCD. This data is then converted to RGB and compressed video formats. Raw image formats are intended to capture the radiometric characteristics of the scene and contain more information about the spectrum of the incident light; RGB is just the colors humans are most sensitive to.
@user-14d189 ah, I understand. We only get mjpeg out of the cameras. As far as I know, there is no way to get the raw data unless with a custom firmware.
@papr I thought so. The conversion from raw to mjpeg is done at the camera level, and what you get out of the USB is mjpeg, isn't it?
@user-14d189 correct, that is what I meant.
@papr I just found this here in the docs: "Our Pupil camera can supply frames in various resolutions and rates uncompressed (YUV) and compressed (MJPEG)." I'll have a read about YUV to see if that is sufficient.
thank you
I think there is an old pupil or libuvc branch that implemented this. But YUV is compressed similarly to BGR; I don't think that it contains enough data for your type of analysis
But let me know if I am wrong.
@papr looks like you are right. I might have read it before: is it possible to replace the world cam stream with a generic image and have Pupil just report the gaze coordinates? Then I could try to access the camera with a custom firmware. I do not need the world cam image content after calibration, nor a high frequency of raw data; something like 5 per second is heaps.
Hi, @papr , I made a stupid mistake in my script, everything is fine with subscribing fixation. Thanks for the help.
Hi, I found one issue. In file_methods.py (https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L23) an assertion fires claiming my msgpack package is out of date. But I actually have version (0, 6, 1), which is newer than required. So maybe this line should be changed?
Thank you @user-5a3c68 , yes, you may have a look at https://github.com/pupil-labs/pupil/issues/1419. We still need to test the new version.
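For context on why (0, 6, 1) trips an assertion that asks for "(0, 5, 6) or later": Python compares tuples lexicographically, so a check written as a plain tuple comparison accepts any newer release. The sketch below illustrates that comparison; it is not the maintainers' chosen fix, since (per the issue above) msgpack >= 0.6 compatibility still needs testing.

```python
# Python compares tuples element by element, left to right, so
# (0, 6, 1) >= (0, 5, 6) is True: exactly the case the original
# assertion rejected. A forward-compatible minimum-version check:

MIN_MSGPACK = (0, 5, 6)

def is_supported(version, minimum=MIN_MSGPACK):
    """True if a (major, minor, patch) tuple meets the minimum."""
    return tuple(version) >= minimum

print(is_supported((0, 5, 6)))  # True
print(is_supported((0, 6, 1)))  # True, newer than required
print(is_supported((0, 5, 5)))  # False
```

Whether to use such a check or keep a hard pin is a policy choice: the pin guards against untested newer releases, at the cost of rejecting versions that might work fine.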
Hi, I am new to using Pupil. I am working on mapping the gaze positions to a 16-bit depth value and would like to obtain this as output, rather than the (x, y) coordinates in the RGB image. I am using a RealSense D415 and wanted to know which file to edit in order to get these outputs when I press capture. Thank you for your help
alternatively, can I record the 16-bit depth stream rather than the 8-bit depth stream?
Hi, I just saw that you released Pupil v1.10 at the beginning of Jan. I'm excited to try it. Is the integration of the RealSense D400 into Capture the main new feature?
FYI PMD has some interesting products in their time of flight sensor range. Similar 3D data. Pico Flexx - nice and light, small, might be interesting for you! https://pmdtec.com/picofamily/
@user-a69044 By writing a custom plugin, you can access the 16bit depth values through the depth_frame
entry in the recent_events()
dictionary.
@user-14d189 From an end user's point of view, yes, the realsense d400 support is the most significant feature. The release also includes improvements to the network api.
We are looking into building a lightweight integration for the Pico Flexx. If it comes to that, it will only be supported as a third-party plugin, since PMD does not allow the redistribution of their software, if I understand correctly.
Thank you @papr, just to confirm: is it possible to do this without running from source, as I am failing to install the boost dependencies?
@papr where would I find the recent_events() dictionary? thanks for your help once again
@user-a69044 yes, check out the developer docs on how to write plugins. It will also tell you how to install a plugin when running the bundle.
@user-a69044 Turns out that I had a template for your exact use case lying around: https://gist.github.com/papr/0f13943e2aebd768ab6b1508d466caae
Hi, I have recorded data using the Pupil headset but with my own software. Is it possible to import AVI files into Pupil Player to apply object detection?
@user-e34013 when you say object detection, are you referring to marker tracking/surface tracking?
You will need more than just video files in order to load recordings in Pupil Player
Yes. Is it possible to import videos and add markers to them retrospectively? What would I need to do this if the recordings were not captured in Pupil Capture, or is that not possible?
@user-e34013 by adding markers, do you mean detecting markers?
Check out the documentation on the recording format. You should find everything you need to know to make your recording compatible with Player.
Ok, will do, thanks. We have already recorded, so it may be too late for this data, but we can check. Thanks.
If your recorded scene video does not contain any surface/square markers, then surface tracking will not work.
Yes, that's fine, thanks. We can do this going forward.
Hello,
I was reading about the calibration procedure and started looking into the source code, and I would like to better understand the calibration process. Calibration here could mean a few things since there is the eye camera and world camera. Let's first refer to the eye camera and calibration of the eye position. Correct me if I am wrong, but I believe calibration would involve mapping the position of the detected pupil in pixels to known positions of the eye. For example, 5-point and 9-point calibration procedures are popular. Here the center point should approximately correspond to the forward-looking center of gaze position of the subject (wearer of the goggles). If we know the distance between the points in physical space, and we know the distance from the center point to the center of the subject's eyes, then we know let's say in degrees the position the eye should be in, and we can measure the pupil center in each of these positions in pixels. In the end we could map the space from pupil position in pixels to position of the eye in degrees.
My question is ultimately what does the Pupil Labs system do for calibration? If using a manual calibration, unless I missed it, at no point is the system given the parameters I have mentioned in the previous paragraph. Unless I'm mistaken, someone would be moving the position of the marker to some "random" locations (at least random from the perspective of the camera system). This would have me ask how does it perform calibration in comparison to the method I've described? I was initially under the impression that it was using an eye globe model, but after looking at the source code, it seems that the system is using a 2D polynomial surface fitting model.
I'd appreciate clarification on this. Thanks in advance.
How can we setup a patient database with EventIDE?
@user-87fec3 The calibration reference locations can indeed be random within the world/scene camera's field of view. In 2d calibration/mapping, a 2d pixel position (pupil center) is mapped to a 2d pixel position in the world camera's frame using polynomial regression. In 3d mode, we build a 3d eye model from a series of pupil positions (eye cam coordinate system). During calibration, we learn the geometrical relationships between the eye cams and the world camera. Afterwards, we can map the 3d eye model vectors from the eye camera coordinate system into the world camera coordinate system.
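The 2d calibration papr describes (polynomial regression from pupil pixel coordinates to world-camera pixel coordinates) can be illustrated with an ordinary least-squares fit. This is a toy sketch on synthetic data; the actual polynomial terms and fitting code in Pupil's calibration routines may differ, so treat the feature set below as an assumption chosen for illustration.

```python
import numpy as np

# Map pupil positions (x, y) to scene-camera positions (X, Y) by
# least squares over quadratic polynomial terms. Reference locations
# can be "random" in the scene camera's field of view: the regression
# only needs matched (pupil, reference) pairs, not a known grid.

def design_matrix(pts):
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_mapping(pupil_pts, ref_pts):
    """Fit coefficients mapping pupil coords to scene coords."""
    A = design_matrix(pupil_pts)
    coeffs, *_ = np.linalg.lstsq(A, ref_pts, rcond=None)
    return coeffs

def apply_mapping(coeffs, pupil_pts):
    return design_matrix(pupil_pts) @ coeffs

# Synthetic calibration data: scene position is an affine function of
# pupil position plus a slight quadratic distortion.
rng = np.random.default_rng(0)
pupil = rng.uniform(0, 1, size=(50, 2))
scene = 0.1 + 0.8 * pupil + 0.05 * pupil**2

coeffs = fit_mapping(pupil, scene)
pred = apply_mapping(coeffs, pupil)
print("max error:", np.abs(pred - scene).max())  # close to zero here
```

This also shows why no physical distances or eye-to-screen geometry need to be supplied in 2d mode: everything is learned in pixel space from the matched pairs, which is what distinguishes it from the angle-based procedure described in the question.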
Is it possible to stream the pupil camera remotely, so I can run the Pupil wirelessly and then run Capture on a PC that's on the WiFi or something?
@papr
@user-21d960 Yes, this is possible using the NDSI protocol: https://github.com/pupil-labs/pyndsi/
Pupil Mobile makes use of this to remote control and stream Pupil headsets that are connected to Android phones.
Need help with setting up pupil lab with matlab
@user-c22e3a have you already read the readme here: https://github.com/pupil-labs/pupil-helpers/tree/master/matlab
Yes I did, but since I'm running it on Windows I'm having the following issue with ZMQ:
Invalid MEX-file 'D:\Tools\MATLAB Additional Toolbox\matlab-zmq-master\lib+zmq+core\ctx_new.mexw64': Missing dependent shared libraries: 'libzmq-v120-mt-4_0_4.dll' required by 'D:\Tools\MATLAB Additional Toolbox\matlab-zmq-master\lib+zmq+core\ctx_new.mexw64'
Looks like zmq might not be installed?
@wrp Yeah, I already did, but it wasn't working. I managed to solve the issue by copying the missing DLL from ZeroMQ into the system folder (System32 or SysWOW64)
Hi guys. I am compiling from source on Ubuntu MATE. I keep getting a msgpack error even when I specify the Ubuntu version to install. The error says msgpack is out of date, upgrade to version (0, 5, 6) or later
@user-a39804 Please use pip3 install msgpack==0.5.6 -U
to install 0.5.6
Oh thanks
I was using pip instead of pip3
I have another error now regarding opencv but I think it's not properly installed on the Raspberry Pi
Thanks guys
@user-a39804 Ah, sure. I just added the 3
since the default ubuntu instructions do so as well.
Regarding opencv: Find the cv2.*.so
file and make sure that its containing folder is part of your python path. An easy way would be to create a symlink in your pip site-packages folder
Oh. I will Google how to create symlink. I'm remaking opencv2. I will try that once it's done
@user-a39804 ln -s <original cv2 so file> <pip site packages folder>
Thanks man
hey there pupil-dev, I don't know if this is the right place or if I should go directly to GitHub issues. But today I tried to open a recording folder that was on an external USB HDD, and it gave me the following: "player - [ERROR] launchables.player: Could not generate world timestamps from eye timestamps. This is an invalid recording." When I put this folder on my desktop and try to open it, it works... I'm running Windows 10. Just wanted to share my experience very quickly
@user-e91538 Thank you for reporting. I can't reproduce it on my macOS. @papr, would you mind having a look at it?
@user-e91538 Would you please post a screenshot from the recording folder and its contents while it is on the usb hdd?
@user-f27d88 maybe it has something to do with the folder name? a part of it has square brackets around text (like F:[UNI]\project\recordings\008)
I also worked from a USB stick, where the Pupil Player executables were located as well as the recordings, and I did not have the problem there
@user-e91538 So it is only a problem if recording and executable are on two different drives?
@papr no, just tried it for that specific ext hdd...copied pupil player to the top level folder of all recordings, and it does not work...
Alright, it is a problem with the folder name... As I mentioned above, a part of it uses square brackets. When I remove them, it works.
Ok, good to know!
@user-e91538, just curious: why does the folder name contain square brackets? Did you set it on purpose, or did it come with the USB HDD?
(seems like the user set this dir name)