I started working on issue #1986 (https://github.com/pupil-labs/pupil/issues/1986). When I try to run some tests, I get the error
Error with ./deployment/deploy_service/version.py: Could not import version_utils
Error with ./deployment/deploy_player/version.py: Could not import version_utils
Error with ./deployment/deploy_capture/version.py: Could not import version_utils
any tips on how to resolve it?
@user-5c1712 Hi, from where / in which context are you trying to run version.py?
from the pupil folder
Any chance this software works with webcams, or is this software only compatible with Pupil products or dedicated eye tracking products?
@user-b24afd Please check out this response and its corresponding question. I think it is related to yours. https://discord.com/channels/285728493612957698/285728493612957698/772742754047885323
@user-5c1712 Please be more specific. What is the command that you are executing that results in this error message?
Thanks I really appreciate your help. Have a great day!
$ python3 deployment/deploy_capture/version.py
Traceback (most recent call last):
File "deployment/deploy_capture/version.py", line 14, in <module>
from version_utils import *
ImportError: No module named version_utils
@user-5c1712 This file is meant to be executed within the deployment/deploy_capture folder. If you check the file, the previous line says sys.path.append(os.path.join("../../", "pupil_src", "shared_modules")), which makes version_utils.py available.
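For illustration, this is why the working directory matters here. `resolved_from` is a hypothetical helper, not part of the repository; it just shows where the appended relative path ends up pointing:

```python
import os

# version.py appends this path, which is relative to the *current working
# directory*, not to the location of version.py itself:
appended = os.path.join("../../", "pupil_src", "shared_modules")

def resolved_from(cwd):
    """Show where the appended sys.path entry points when running from `cwd`."""
    return os.path.normpath(os.path.join(cwd, appended))
```

Run from deployment/deploy_capture the entry resolves inside the repository; run from the repository root it resolves two levels above it, where version_utils does not exist.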
@user-5c1712 To be honest, I think the cleaner approach would be to get rid of version.py here and replace any imports from it with the appropriate version_utils import.
exactly, okay I thought so too. I'm on it!
@user-5c1712 Great! Thank you for working on this!
@user-5c1712 In this case, I think the best way would be to run a static checker like https://github.com/Microsoft/pyright and check for undefined variables.
I think I'm done. I just checked my code locally with mypy and pyright and for undefined attributes it only returned:
Module 'player_methods' has no attribute 'is_pupil_rec_dir'
which I think has nothing to do with my changes
@user-5c1712 Sounds good. Please create a pull request for us to review.
Hi, I tried to build ffmpeg3 from jonathan's PPA but it throws the error:
"aptsources.distro.NoDistroTemplateException: Error: could not find a distribution template for Raspbian/buster"
Is there any way to circumvent this / adapt it to Raspbian?
@user-40de8a There is no need to use ffmpeg3 anymore. You can use ffmpeg 4 instead
Sorry, I meant to write ffmpeg4*
@user-40de8a Can you try to install ffmpeg via apt without using jonathan's PPA? It looks like it does not provide builds for Raspbian/buster.
when I run black locally it doesn't see a problem with
from gl_utils import A
from gl_utils import B
from gl_utils import C
the format you want is
from gl_utils import (
A, B, C,
)
or
from gl_utils import (
A,
B,
C,
)
with line length 88?
@user-5c1712
from gl_utils import A
from gl_utils import B
from gl_utils import C
is technically fine but not consistent with other multi imports in our code base
Just put multi imports from the same module in the same line and let black do the rest
@papr thx for all the help!!! All travis checks passed finally :) have a good evening!
Hi! I'm fairly new to the Pupil Labs software as I am using it for an internship. I'm trying to use the Pupil VR add-on for the HTC Vive with the Unity SDK provided on the Pupil Labs website, and everything seems to work fine, except that the Pupil software keeps losing the connection to either the left or the right camera. Is anyone familiar with this problem or does anyone know a solution?
I think the issue is related to CPU usage, as it can spike above 100% on both cameras.
except for the fact that the pupil software keeps losing connection @user-521355 Are you referring to the eye video freezing in the "eye" windows? Do both previews freeze at the same time, or do they freeze separately? Or do you refer to the eye video preview in Unity?
Yes, one of the eye windows keeps freezing and it spits out an error about losing the connection to the camera. After a few seconds it just reconnects, and then it happens again. The eye video preview in Unity doesn't even show up.
@user-521355 ok, please contact [email removed] in this regard.
Alright will do, thanks!
Morning, I am trying to run the software from source and try to do pupil detection on a recorded video. When I run main.py I get the following error (log file attached). Any idea how to solve this? Thanks!
@user-40de8a Looks like one of these window hints does not exist on your platform https://github.com/pupil-labs/pupil/blob/1ca13889d9c8acb4902f387c9caef7c20dd7c1ab/pupil_src/shared_modules/service_ui.py#L55-L57
@user-40de8a I suggest commenting them out one by one and checking if the error goes away
Hi Pupil Labs team, I want to do pupil detection on a video that is streamed from a Raspberry Pi (it streams the world view and the eye view). So making a plugin would be the option. Can I show the streamed video in the world view and launch the eye detection plugin? Where do I start?
@user-d8853d Do you want to do the detection on the realtime stream or post-hoc on the recorded video?
real time
@user-d8853d You will have to write a custom video backend, similar to our HMD Streaming backend, to receive and display the video streams: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/hmd_streaming.py
@user-d8853d Alternatively, you might be able to use it directly by using zmq to publish the data on the RPI.
I have used zmq to publish the world view and eye view in real time.
@user-d8853d Ah nice. Maybe you are already able to use the HMD_Streaming_Source then. For reference, this is the hmd-eyes code that sends the data: https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/ScreenCast.cs
Ah, that means I would be able to run pupil detection and gaze estimation using the HMD backend?
@user-d8853d You should be able to receive the streamed video via the existing HMD video backend. This does not require you to use an actual HMD. The detection and gaze mapping should work in the same way as if you had connected the Pupil headset to the computer via USB.
oh nice. That is what I have been looking for. Thank you.
that means I need to find a way to replace uvc_backend.py with hmd_streaming.py
@user-d8853d You can start hmd_streaming via a notification. You mostly need to make sure that your video stream server behaves like hmd_streaming expects it. This is where the hmd-eyes (ScreenCast) implementation comes into play.
Also, you need to start the plugin 3x, once for each process, each time with a custom message topic. The default is topics=("hmd_streaming.world",). I suggest using hmd_streaming.eye0 and hmd_streaming.eye1 for the other two. You can pass this keyword argument as args in the start_plugin/start_eye_plugin notifications.
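As a hedged, plain-Python sketch, the three notifications described in this thread could be built like this (field names follow the conventions discussed here, including the "target" field that comes up later for the eye processes; `make_streaming_notifications` is a hypothetical helper):

```python
def make_streaming_notifications():
    """Build the three start-plugin notifications: world, eye0, eye1."""
    world = {
        "subject": "start_plugin",
        "name": "HMD_Streaming_Source",
        "args": {"topics": ("hmd_streaming.world",)},
    }
    eyes = [
        {
            "subject": "start_eye_plugin",
            "target": f"eye{i}",  # which eye process should start the plugin
            "name": "HMD_Streaming_Source",
            "args": {"topics": (f"hmd_streaming.eye{i}",)},
        }
        for i in (0, 1)
    ]
    return [world] + eyes
```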
Is there documentation where the video backend is already implemented? I see Dummy_Camera in hmd_streaming.py. I guess I need to replace the Dummy_Camera with my zmq video source. I think I am going in the wrong direction; I am new to the Pupil Labs world.
@user-d8853d You don't need to modify any of the pupil source. You need to adjust the RPI side of the code.
I have installed the Pupil Capture software on Ubuntu
RPI/hmd-eyes are the sending components. Capture is the receiving component.
I'm sorry, what does RPI stand for? (Is it Raw Pupil Image)?
@user-40de8a Raspberry Pi. Not sure what the official abbreviation is.
I'm trying to run pupil player on my RPi but I get the following error: glfw.GLFWError: (65539) b'Invalid window hint 139276' I have attached the error log with this message.
@user-40de8a Please see my previous response to this issue: https://discord.com/channels/285728493612957698/446977689690177536/773851621187649577
This time you need to remove the according lines in player.py
Thanks a lot. I'm sorry I hadn't seen your previous response to this, my bad!
@user-40de8a No worries 🙂
Now I am getting this error:
player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
  File "/home/pi/pupil/pupil_src/launchables/player.py", line 898, in player_drop
    content_scale = gl_utils.get_content_scale(window)
  File "/home/pi/pupil/pupil_src/shared_modules/gl_utils/utils.py", line 264, in get_content_scale
    return glfw.get_window_content_scale(window)[0]
AttributeError: module 'glfw' has no attribute 'get_window_content_scale'
It indeed looks like glfw does not have this attribute. Did I install the package incorrectly?
@user-40de8a Mmh, I do not know. It is possible that the underlying libglfw library is not up-to-date. Could you let us know which glfw version you have installed?
I have the 2.0.0 version
@user-40de8a mmh, that should be correct. Looks like there are some parts of the software that are not supported on the Raspberry Pi. I fear you need to remove any content-scale related functionality. You should be able to use a content scale of 1.0 to replace it.
Hello, I have glasses of the type PUPIL W120 E200B. This morning I turned on the glasses and one eye camera does not work while the other does. What can we do?
@user-90a55d Please contact info@pupil-labs.com in this regard.
Hi @papr, I implemented the video backend and the client to check if I am receiving anything from the RPI. I used your zmq_tools.py for the implementation. Here is the code: https://github.com/Lifestohack/pupil-video-backend
Now the problem is that the client is not receiving anything. If block_until_connected = True in Msg_Receiver, then I get a "ZMQ connection failed" exception.
am I missing something?
Even though block_until_connected = False, nothing is received.
@user-d8853d The issue is that you are connecting the Msg_Streamer (PUB socket) in the backend to Pupil Remote, a REQ-REP type socket. You need to request the pub port and connect to it instead. Check out the Publisher class in the hmd-eyes project: https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/ScreenCast.cs#L56
thanks for your time @papr. I am not connecting to Pupil Remote. I am making my own zmq server to publish the data from the RPI, as seen in video_backend.py. With video_backend_client.py I wanted to subscribe to the video published by video_backend.py.
Ah, ok, I see the issue. One of the two sides needs to bind to the port, normally the server. The other can then connect.
Capture has a "message backend" that binds to :50020 (for Pupil Remote) and to random ports for subscription and publishing, and allows all scripts to connect to these ports
oh yeah, that works. Thanks :)
For me to understand, so that I can do pupil detection on my laptop with the video published from the RPI: Step 1: Publish the video from the Raspberry Pi (done). Step 2: Install hmd-eyes and Unity. Step 3: Change the code such that https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/ScreenCast.cs receives the video from the RPI, and change this to publisher.Send(topic, payload, frame_from_raspberrypi). Step 4: Pupil Capture will automatically show the video source as an HMD streaming device.
@user-d8853d Step 2 is not necessary. Your RPI code takes the role of hmd-eyes. The Hmd_Streaming Capture plugin takes the role of your video_backend_client
The hmd-eyes ScreenCast is basically a reference implementation for your RPI code. You are not supposed to use it directly. 🙂
oh, that makes it easy. 🙂
that means it should already work, because https://github.com/Lifestohack/pupil-video-backend/blob/master/video_backend.py is implemented.
Don't forget to start the plugin
on which port is the hmd_streaming plugin listening?
Is the hmd_streaming plugin the same as the HoloLens relay?
It listens to the IPC backend. The backend is like an echo chamber: you can either yell something into it, or listen to what comes out. The hmd_streaming plugin listens for what comes out; you need to yell into it. Request the PUB_PORT from Pupil Remote.
The hololens relay is unrelated.
"PUB_PORT"
Ah something like this:
context = zmq.Context()
addr = '127.0.0.1' # remote ip or localhost
req_port = "50020" # same as in the pupil remote gui
req = context.socket(zmq.REQ)
req.connect("tcp://{}:{}".format(addr, req_port))
req.send_string('SUB_PORT')
sub_port = req.recv_string()
sub = context.socket(zmq.SUB)
sub.connect("tcp://{}:{}".format(addr, sub_port))
@user-d8853d PUB instead of SUB. You are on the sending side
oh yeah
thanks 🙂 Let me try.
So, I implemented that. https://github.com/Lifestohack/pupil-video-backend/blob/master/video_backend.py
I get the port from the pupils remote and then the video is published to that port.
@user-d8853d You are just missing the start_plugin notification to start the hmd_streaming backend 🙂
I added this:
# send notification:
import msgpack as serializer  # msgpack is used to serialize notification payloads

def notify(notification):
    """Sends ``notification`` to Pupil Remote"""
    topic = "notify." + notification["subject"]
    payload = serializer.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()

# Start the hmd_streaming plugin
notify({"subject": "start_plugin", "name": "hmd_streaming", "args": {}})
When I start the video_backend.py, I see this message on pupil capture software:
"World: Attempt to load unknown plugin: hmd_streaming"
Use HMD_Streaming_Source instead of hmd_streaming
"args": {"topics": ("hmd_streaming.world",)}
"args": {"topics": ("hmd_streaming.eye0",)}
"args": {"topics": ("hmd_streaming.eye1",)}
For the different video sources
Changed to :
notify({"subject": "start_plugin", "name": "HMD_Streaming_Source", "args": {"topics": ("hmd_streaming.world",)}})
@user-d8853d Looks good at first sight. Does it work? If not, can you check the Capture logs for any info about it?
step 1: I open pupil capture software. Step 2: I start video_backend.py
if I am, for example, in the Plugin Manager, then it automatically switches to Video Source
but nothing happens
Looks like the plugin is being started correctly
Do I need pyrealsense2 for this to work?
@user-d8853d no, definitely not 🙂
🤣
@user-d8853d Please check ~/pupil_capture_settings/capture.log for possible failure reasons
Ok, streaming source is started correctly
@user-d8853d Oh, I know the issue. The HMD_Streaming_Source only supports rgb as the format
it works.
🙂
I will give you credit in my thesis.
This will probably be useful for other people.
@user-d8853d I agree! It might need some cleanup, but as a standalone Python module with proper documentation, it could work wonders for people with custom cameras that are supported by OpenCV but not by our pyuvc library! Feel free to add your project to https://github.com/pupil-labs/pupil-community
I will do that. I will add support for two cameras, one for world view and one for eye0. Do some clean up and add it to pupil community.
Thank you very much for your help.
@user-d8853d That would be great!
Hi all, I'm new here and I'm trying to map 2D coordinates on to the video exported by PupilPlayer. My validation here is whether the 2D coordinate matches the green dot recorded in the video output, and so far I haven't had luck finding a solution or a previous comment looking at the same thing.
Here is the code I'm using to translate normalized X Y back to pixel space. Since the eye tracker records multiple observations per frame, I'm averaging these observations across frames and then comparing to the frames of the exported world_clip.mp4 video. Here's my code and attached is the result. In the result images, the green circle is the gaze data from the video itself, and the blue dot is what I'm calculating from the 2D coordinates. Just as a sanity check, the first row of numbers in the result is the normalized 2D coordinates (straight from Pupil Player) and the second row is my translation to pixels... hopefully I didn't do something dumb in the math. You can clearly see in the result that the user is making a saccade that's captured in the video data but not by the 2D coordinates.
import cv2
import pandas as pd
import matplotlib.pyplot as plt

def norm_to_pixel(i, axis):
    axis = axis.lower()
    if axis not in ['x', 'y']:
        raise ValueError('axis must be "X" or "Y"')
    if axis == 'x':
        return i * 1280
    # subtracting because cv2 starts pixel 0 at the top of the image
    return abs((i * 720) - 720)

cap = cv2.VideoCapture('world_clip.mp4')
d = pd.read_csv('gaze_positions.csv')
# average the multiple gaze observations per world frame
by_frame = d[['world_index', 'norm_pos_x', 'norm_pos_y']].groupby('world_index').agg('mean')

for i in range(8):
    ret, frame = cap.read()
    X = norm_to_pixel(by_frame.loc[i, 'norm_pos_x'], 'X')
    Y = norm_to_pixel(by_frame.loc[i, 'norm_pos_y'], 'Y')
    print([by_frame['norm_pos_x'].values[i], by_frame['norm_pos_y'].values[i]])
    print([X, Y])
    implot = plt.imshow(frame)
    plt.scatter([X], [Y])
    plt.show()
I'm assuming software-dev is the right channel for this question but not sure so sorry if I get that wrong.
@user-90000e it is the right channel 🙂 Unfortunately, OpenCV does not extract frames frame-accurately.
Please check out our tutorial on how to extract frames https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb
(and plot gaze)
Wow, thank you very much -- it would've taken me forever to realize that OpenCV was the problem 🙂
@user-90000e yeah, it took us a while as well
Update on the Pupil video backend: I am successfully publishing video to the Pupil Capture software, but only the world view. If I try eye0 and eye1, it does not work.
@user-d8853d You need to start the hmd_streaming_source as eye processes as well
Oh, I am not finished explaining. 🙂
The topic is start_eye_plugin instead of start_plugin. Also, you need to add a "target": "eye0" field ("eye1" respectively).
@user-d8853d Oh, sorry. Bad habit of mine, jumping to conclusions too quickly. 🙂
I get the following error in pupil capture software: Eye0: Process Eye0 crashed with trace
but you are right. I looked at the logs and it says KeyError: 'target'
I have the topic as start_eye_plugin for eye0 and eye1, but I did not have target
{"subject": start_eye_plugin, "target": "eye0", "name": "HMD_Streaming_Source", "args": {"topics": (hmd_streaming.eye0,)}}
is this correct?
start_eye_plugin and hmd_streaming.eye0 should be strings, but yes, generally this looks correct
yeah, that's true. I copy-pasted wrong from my code.
Now I have some other error log.
ok, looks like the backend is not yet compatible with the eye process
@user-d8853d Are you running Capture from source or from bundle?
yes I downloaded the latest version from here: https://github.com/pupil-labs/pupil/releases/tag/v2.5#user-content-downloads
Version 2.5
@user-d8853d ok, I think I can get this fixed for our next release this week
Oh cool. Shall I create a new ticket on github?
Yes, please.
Also, could you please add support for other formats like YUV, not only RGB?
is it possible?
mmh, not necessarily for the next release
ok
what exact yuv format do you need?
Can you dump the byte buffer of two such yuv frames to a binary file and send it to me? This way I can test it locally
Because I am sending the image over WiFi, I want to send the frames as fast as possible: low latency and as many fps as possible. For pupil tracking I don't think I need RGB. I was tinkering around, and just the luma component (the brightness, Y) from the YUV was enough for 2D pupil detection. I did the experiment here: https://github.com/Lifestohack/smartglass
so you want to send a uint8 buffer of shape height, width?
The Raspberry Pi camera supports YUV420 (planar). So if I could just send one channel out of the three and do the pupil detection on that, it would be great.
so you want to send a uint8 buffer of shape height, width? @papr yes
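To illustrate the idea, here is a sketch assuming the standard planar YUV420 layout, where the full-resolution Y plane comes first, followed by the quarter-resolution U and V planes (`y_plane` is a hypothetical helper, not part of any of the projects above):

```python
import numpy as np

def y_plane(yuv420_buffer: bytes, width: int, height: int) -> np.ndarray:
    """Extract the luma (Y) plane from a planar YUV420 frame buffer.

    In planar YUV420 the first width*height bytes are the full-resolution
    Y plane; the U and V planes (quarter resolution each) follow. Keeping
    only Y yields a uint8 grayscale image of shape (height, width), which
    is enough for 2D pupil detection.
    """
    expected = width * height * 3 // 2
    if len(yuv420_buffer) != expected:
        raise ValueError(f"expected {expected} bytes, got {len(yuv420_buffer)}")
    y = np.frombuffer(yuv420_buffer, dtype=np.uint8, count=width * height)
    return y.reshape(height, width)
```

Sending only this plane cuts the payload to two thirds of the full YUV420 frame and avoids any color conversion on the Raspberry Pi.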
@user-d8853d ok, I will have a look tomorrow
Cool, thanks bro.
When doing the pupil detection, what is the size of the frame that the Pupil Capture software uses? Does it use RGB or something else?
I saw somewhere 192x192. Is it true?
We use the Y channel as well 🙂
192x192 is the default for our 200hz cams, correct
Oh great.
thanks for the information.
New issue created: https://github.com/pupil-labs/pupil/issues/2049
Step 1: started the Pupil Capture software. Step 2: clicked on "Detect eye 0"; a small window pops up where the eye0 video feed is supposed to be shown. Step 3: started the pupil video backend. But the pupil detection eye 0 window closes automatically and Pupil Capture shows the error: Process Eye0 crashed with trace.
this was regarding the ticket @papr opened.
@user-d8853d Let's try to keep log files and steps for reproducing issues in their corresponding Github issues. That should help us to keep track of them.
of course. 🙂
Created an issue for time sync: https://github.com/Lifestohack/pupil-video-backend/issues/3
Is this all for time sync? pupil_remote.send_string("T {}".format(time.time()))
@user-d8853d This sets Pupil time to Unix time. But for best compatibility, you want to adopt Pupil time in your own application. In other words, the other way around.
The easiest but least accurate way is pupil_remote.send_string("t")
You want to estimate the network delay as well, e.g. like in hmd-eyes https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/TimeSync.cs#L66-L71
Basically, calculate the offset between your own clock and the Pupil clock, without the network delay. One measurement is easy, but it is better to take multiple and average them to account for variation in the network delay.
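The approach can be sketched as follows. `estimate_clock_offset` is a hypothetical helper, not the hmd-eyes API; it assumes the remote timestamp is taken roughly midway through the request/reply round trip, so the midpoint of the local send/receive times cancels a symmetric network delay:

```python
import statistics

def estimate_clock_offset(local_clock, remote_clock, n_samples=10):
    """Estimate the (remote - local) clock offset, compensating network delay.

    local_clock: callable returning the local time in seconds.
    remote_clock: callable performing the request/reply round trip and
        returning the remote (Pupil) time in seconds.
    """
    offsets = []
    for _ in range(n_samples):
        t_send = local_clock()
        t_remote = remote_clock()
        t_recv = local_clock()
        midpoint = (t_send + t_recv) / 2.0  # remove symmetric network delay
        offsets.append(t_remote - midpoint)
    # Average multiple measurements to smooth out delay jitter.
    return statistics.mean(offsets)
```

Adding the resulting offset to your local clock gives timestamps in Pupil time, which is what the payload timestamps should use.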
@papr Thank you for your nice explanation; it is really thorough. I get the monotonic time from the Pupil Capture software. I calculated the mean monotonic offset and the offset jitter, added the absolute values of both, and added that to my local monotonic time. Also, while sending the payload I need to send the timestamp. Was I supposed to send the local monotonic time after sync (which would be the same as the Pupil Capture monotonic time) instead of my local epoch time?
I tested version 2.6 for GitHub issue #2049. I was not able to get it working.
@user-d8853d Do you have the detector debug window opened?
@user-d8853d Looks like your focal length is zero. You are sending invalid intrinsics https://github.com/Lifestohack/pupil-video-backend/blob/f8f4d9b3f104ae92cb9e84951c35d3bdd5397b47/payload.py#L5-L9
Actually, this explains another issue that we encountered in pye3d the other day while using your backend 🙂
Oh I am sorry.
No worries. The software is a bit more stable thanks to your code now 🙂
I need to find out intrinsics for my cameras.
let me try with dummy intrinsics.
No worries. The software is a bit more stable thanks to your code now 🙂 @papr nice to hear that. Glad I could support that.
https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L735-L739
use something like this for now
ok let me try that.
it's working. Thank you @papr for your hard work.
I have a development challenge that (based on my reading of previous conversation here) I don't think can be solved from PupilLabs technology but I'm wondering if anyone's found a successful approach. I'm trying to do what I've seen referred to in this discord as "markerless area of interest tracking", where there's some data that's already been collected (ie, without using any surface tracking technology), and I want to ad-hoc compute whether fixations are landing in particular areas of interest (eg computer screens / controls). I've attached a sample image from our study. I've thought about two possible solutions: (a) trying to stitch images together to make some type of panoramic image, upon which I could define an absolute coordinate system and then try to do some sub-image identification to translate the fixation coordinates to absolute space, or (b) implementing some type of ML object detection to classify bounding boxes around objects in the images. Both of these approaches seem quite challenging and prone to error, and I'm just wondering whether anyone has come up with a better solution to this problem. Thanks!
Hi, I'm new to Pupil and just checked out the repo. The readme says there is a C++ version as well, but I can't find it. Can you help me find it?
@user-f22600 Not exactly. Parts of the code are written in C++. Most of the application is written in Python.
@papr Thanks, can you point me to which part is written in C++? There are only .py files.
@user-f22600 I think the most prominent c++ usage is in here https://github.com/pupil-labs/pupil-detectors/
@papr Many thanks!!
Hi @papr, is there a way to set the HWM for the eye process or world process while using ZMQ? Or have you implemented something where video frames are not queued, and if a frame is queued, the oldest frame is dropped?
{"subject": "start_eye_plugin", "target": "eye0", "name": "HMD_Streaming_Source", "args": {"topics": ("hmd_streaming.eye0",), "hwm": 1}}
Something like this to set the hwm?
@user-d8853d Unfortunately, zmq does not allow dropping the oldest frames. Once they are in the queue, they need to be processed. The HWM only affects newly added frames, i.e. if the HWM is one and there is already one frame in the queue, all new frames will be dropped until the queued frame is processed. Currently, HMD_Streaming_Source does not implement custom HWMs, but this can easily be done. Feel free to make a PR that forwards an HWM argument from HMD_Streaming_Source to Msg_Receiver (which has an hwm=None argument already).
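A tiny plain-Python illustration of the HWM behaviour described above; `push_with_hwm` is a hypothetical stand-in, not zmq API:

```python
from collections import deque

def push_with_hwm(queue: deque, frame, hwm: int) -> bool:
    """Mimic ZeroMQ's high-water-mark behaviour:
    once `hwm` frames are queued, *new* frames are dropped;
    frames already in the queue are never evicted."""
    if len(queue) >= hwm:
        return False  # frame dropped, like zmq at the HWM
    queue.append(frame)
    return True
```

With hwm=1 and one unprocessed frame in the queue, every newly arriving frame is dropped until the consumer pops the queued one, which is exactly why the oldest frame cannot be discarded via the HWM alone.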
Thanks @papr for the guidance. Here I have updated the code on the branch hwmdev on my fork. https://github.com/Lifestohack/pupil/blob/hwmdev/pupil_src/shared_modules/video_capture/hmd_streaming.py#L132
Example of usage: {"subject": "start_eye_plugin", "target": "eye0", "name": "HMD_Streaming_Source", "args": {"topics": ("hmd_streaming.eye0",), "hwm": 1}}
I think for a merge into the main repository you would need two changes: 1) define the argument explicitly instead of using kwargs, and 2) return it in get_init_dict.
You mean {"subject": "start_eye_plugin", "target": "eye0", "name": "HMD_Streaming_Source", "hwm": 1, "args": {"topics": ("hmd_streaming.eye0",)}}
Instead of passing it as an args define it explicitly
I pushed the change to return hwm in get_init_dict.
@user-d8853d No, the original notification was correct
I meant in the class __init__ definition
umm, I misunderstood. 🙂
but without using kwargs I could not get the hwm value from notification
- def __init__(self, g_pool, topics=("hmd_streaming.world",), *args, **kwargs):
+ def __init__(self, g_pool, topics=("hmd_streaming.world",), hwm=None, *args, **kwargs):
yeah, that is what I thought initially.
This is equivalent to your implementation, but more explicit about the existence of this argument
cool @papr. But the value is still read from the kwargs.
I do not understand what you mean.
sorry for misunderstanding. Here I changed. https://github.com/Lifestohack/pupil/blob/hwmdev/pupil_src/shared_modules/video_capture/hmd_streaming.py#L127
You should be able to get rid of https://github.com/Lifestohack/pupil/blob/hwmdev/pupil_src/shared_modules/video_capture/hmd_streaming.py#L133-L134
As kwargs cannot include any arguments that are explicitly named in the signature
I removed it. Works like magic. 🙂
Shall I directly create a pull request to master?
To develop, please.
Ah, one more change please: change the argument order from hwm, topics to topics, hwm
Done
Btw, the implementation already tries to get the newest frame by receiving all buffered frames and discarding all but the newest one
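That drain-and-keep-newest pattern can be sketched like this (`recv_nonblocking` is a hypothetical stand-in for a non-blocking zmq recv that returns None when the queue is empty):

```python
def newest_frame(recv_nonblocking):
    """Drain a receive queue and return only the most recent frame.

    recv_nonblocking is assumed to return the next queued frame,
    or None when nothing is buffered.
    """
    newest = None
    while True:
        frame = recv_nonblocking()
        if frame is None:
            return newest
        newest = frame  # discard everything but the latest frame
```

Combined with a low HWM, this keeps the displayed frame close to real time even when the consumer is slower than the sender.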
Oh that's a relief.
Setting the HWM to 1 actually makes sense in this case, if we discard all older frames anyway. Feel free to make a second PR that changes the default HWM from None to 1.
let me change that.
I changed it to 1. here is the PR: https://github.com/pupil-labs/pupil/pull/2058
Sorry, I didn't read your last message to the end. I did not make a second PR but pushed it to the same branch on the same PR.
don't worry
Would have been cleaner, but it is fine as it is
@user-d8853d Please apply black on the existing PR
I did it and pushed the change. Thanks for the heads up.
@user-d8853d Thank you for your contribution!
@papr Welcome. My pleasure.
I was running the pupil video backend on the same computer as Pupil Capture, sending frames with shape (720, 1280, 3). I see the CPU going over 180%. Does CPU usage go over 100%? I thought 100% was the max. 🙂
@user-d8853d The theoretical maximum is 100% * number of CPU cores.
aah, that makes sense