software-dev


user-5c1712 02 November, 2020, 11:30:43

I started working on issue #1986 (https://github.com/pupil-labs/pupil/issues/1986). When I try to run some tests, I get the error

Error with ./deployment/deploy_service/version.py: Could not import version_utils 
Error with ./deployment/deploy_player/version.py: Could not import version_utils 
Error with ./deployment/deploy_capture/version.py: Could not import version_utils

any tips on how to resolve it?

papr 02 November, 2020, 11:39:47

@user-5c1712 Hi, from where / in which context are you trying to run version.py?

user-5c1712 02 November, 2020, 12:16:02

from the pupil folder

user-b24afd 02 November, 2020, 13:00:55

Any chance this software works with webcams or is this software only compatible with pupil products or dedicated eye tracking products?

papr 02 November, 2020, 13:03:33

@user-b24afd Please check out this response and its corresponding question. I think it is related to yours. https://discord.com/channels/285728493612957698/285728493612957698/772742754047885323

papr 02 November, 2020, 13:03:56

@user-5c1712 Please be more specific. What is the command you are executing that results in this error message?

user-b24afd 02 November, 2020, 13:04:59

Thanks I really appreciate your help. Have a great day!

user-5c1712 02 November, 2020, 16:06:53
$ python3 deployment/deploy_capture/version.py 
Traceback (most recent call last):
  File "deployment/deploy_capture/version.py", line 14, in <module>
    from version_utils import *
ImportError: No module named version_utils
papr 02 November, 2020, 16:11:44

@user-5c1712 This file is meant to be executed within the deployment/deploy_capture folder. If you check the file, the previous line says: sys.path.append(os.path.join("../../", "pupil_src", "shared_modules")) which makes version_utils.py available.
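
For context, a minimal sketch of the pattern version.py uses (the sys.path line is the one papr quotes above, the star import is the one from the traceback; the relative path only resolves when the working directory is deployment/deploy_capture):

import os
import sys

# Relative to deployment/deploy_capture/, this points at pupil_src/shared_modules.
# Run from anywhere else (e.g. the repository root), the relative path resolves to nothing useful.
sys.path.append(os.path.join("../../", "pupil_src", "shared_modules"))

from version_utils import *  # only importable once the path above is valid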

papr 02 November, 2020, 16:15:31

@user-5c1712 To be honest, I think the cleaner solution would be to get rid of version.py here and replace any imports from it with the appropriate version_utils import

user-5c1712 02 November, 2020, 16:16:55

exactly, okay I thought so too. I'm on it!

papr 02 November, 2020, 16:18:11

@user-5c1712 Great! Thank you for working on this!

papr 02 November, 2020, 20:16:41

@user-5c1712 In this case, I think the best way would be to run a static checker like https://github.com/Microsoft/pyright

papr 02 November, 2020, 20:16:55

And check for undefined variables

user-5c1712 02 November, 2020, 22:15:07

I think I'm done. I just checked my code locally with mypy and pyright and for undefined attributes it only returned:

Module 'player_methods' has no attribute 'is_pupil_rec_dir'

which I think has nothing to do with my changes

papr 02 November, 2020, 22:15:57

@user-5c1712 Sounds good. Please create a pull request for us to review.

user-40de8a 03 November, 2020, 14:48:30

Hi, I tried to build ffmpeg3 from jonathan's PPA but it throws the error:

"aptsources.distro.NoDistroTemplateException: Error: could not find a distribution template for Raspbian/buster"

Is there any way to circumvent this / adapt it to Raspbian?

papr 03 November, 2020, 14:48:56

@user-40de8a There is no need to use ffmpeg3 anymore. You can use ffmpeg 4 instead

user-40de8a 03 November, 2020, 14:50:14

Sorry, I meant to write ffmpeg4*

papr 03 November, 2020, 14:51:13

@user-40de8a Can you try to install ffmpeg via apt without using jonathan's PPA? It looks like it does not provide builds for Raspbian/buster

user-5c1712 03 November, 2020, 17:02:54

when I run black locally it doesn't see a problem with

from gl_utils import A
from gl_utils import B
from gl_utils import C

the format you want is

from gl_utils import (
  A, B, C,
  )

or

from gl_utils import (
  A,
  B,
  C,
  )

with line length 88?

papr 03 November, 2020, 17:03:22

@user-5c1712

from gl_utils import A
from gl_utils import B
from gl_utils import C

is technically fine but not consistent with other multi imports in our code base

papr 03 November, 2020, 17:04:33

Just put multiple imports from the same module on the same line and let black do the rest
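
For example, with hypothetical import names, black (default 88-character line length) reflows a combined import that is too long into the parenthesized one-name-per-line form used elsewhere in the codebase:

from gl_utils import adjust_gl_view, basic_gl_setup, clear_gl_screen, make_coord_system_norm_based

# after running black:
from gl_utils import (
    adjust_gl_view,
    basic_gl_setup,
    clear_gl_screen,
    make_coord_system_norm_based,
)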

user-5c1712 03 November, 2020, 17:39:07

@papr thx for all the help!!! All travis checks passed finally :) have a good evening!

user-521355 04 November, 2020, 10:49:52

Hi! I’m fairly new to the pupil labs software as I am using it for an internship. I’m trying to use the pupil vr add on for the htc vive with the unity sdk provided on the website of pupil labs and everything seems to work fine, except for the fact that the pupil software keeps losing the connection to either the left or the right camera. Is anyone familiar with this problem or know a solution?

I think the issue is related to cpu usage as it can spike above 100% on both cameras.

papr 04 November, 2020, 10:52:33

except for the fact that the pupil software keeps losing connection @user-521355 Are you referring to the eye video freezing in the "eye" windows? Do both previews freeze at the same time, or do they freeze separately? Or do you refer to the eye video preview in Unity?

user-521355 04 November, 2020, 10:55:08

Yes, one of the eye windows keeps freezing and it spits out an error about losing the connection to the camera. After a few seconds it just reconnects and it happens again. The eye video preview in unity doesn’t even show up.

papr 04 November, 2020, 10:56:59

@user-521355 ok, please contact [email removed] in this regard.

user-521355 04 November, 2020, 10:57:17

Alright will do, thanks!

user-40de8a 05 November, 2020, 08:52:59

Morning, I am trying to run the software from source and do pupil detection on a recorded video. When I run main.py I get the following error (log file attached). Any idea how to solve this? Thanks!

log_error

papr 05 November, 2020, 10:10:03

@user-40de8a Looks like one of these window hints does not exist on your platform https://github.com/pupil-labs/pupil/blob/1ca13889d9c8acb4902f387c9caef7c20dd7c1ab/pupil_src/shared_modules/service_ui.py#L55-L57

papr 05 November, 2020, 10:10:24

@user-40de8a I suggest commenting them out one by one and checking if the error goes away

user-d8853d 09 November, 2020, 12:44:48

Hi Pupil Labs team, I want to do pupil detection on a video that is streamed from a Raspberry Pi (it streams a world view and an eye view). So making a plugin would be the way to go. Can I show the streamed video in the world view and launch the eye detection plugin? Where do I start?

papr 09 November, 2020, 16:12:13

@user-d8853d Do you want to do the detection on the realtime stream or post-hoc on the recorded video?

user-d8853d 09 November, 2020, 16:42:49

real time

papr 09 November, 2020, 16:43:57

@user-d8853d You will have to write a custom video backend, similar to our HMD Streaming backend, to receive and display the video streams: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/hmd_streaming.py

papr 09 November, 2020, 16:44:48

@user-d8853d Alternatively, you might be able to use it directly by using zmq to publish the data on the RPI.

user-d8853d 09 November, 2020, 16:45:46

I have used zmq to publish the world view and eye view in real time.

papr 09 November, 2020, 16:48:32

@user-d8853d Ah nice. Maybe you are already able to use the HMD_Streaming_Source then. For reference, this is the hmd-eyes code that sends the data: https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/ScreenCast.cs

user-d8853d 09 November, 2020, 18:16:25

Ah that means I would be able to use detect pupil and do gaze estimation using hmd?

papr 09 November, 2020, 18:20:25

@user-d8853d You should be able to receive the streamed video via the existing hmd video backend. This does not require you to use an actual hmd. The detection and gaze mapping should work in the same way as if you connected the Pupil headset via USB to the computer.

user-d8853d 09 November, 2020, 18:47:48

oh nice. That is what I have been looking for. Thank you.

user-d8853d 09 November, 2020, 18:59:39

that means I need to find a way to replace uvc_backend.py with hmd_streaming.py

papr 09 November, 2020, 19:01:35

@user-d8853d You can start hmd_streaming via a notification. You mostly need to make sure that your video stream server behaves like hmd_streaming expects it. This is where the hmd-eyes (ScreenCast) implementation comes into play.

papr 09 November, 2020, 19:02:47

Also, you need to start the plugin 3x: once for each process, each time with a custom message topic

papr 09 November, 2020, 19:03:44

Default is topics=("hmd_streaming.world",). I suggest using hmd_streaming.eye0 and hmd_streaming.eye1 for the other two. You can pass this keyword argument as args in the start_plugin/start_eye_plugin notifications

user-d8853d 09 November, 2020, 20:30:37

Is there documentation where the video backend is already implemented? I see Dummy_Camera in hmd_streaming.py. I guess I need to replace the dummy camera with my zmq video source. I think I am going in the wrong direction. I am new to the Pupil Labs world.

papr 09 November, 2020, 20:31:30

@user-d8853d You don't need to modify any of the pupil source. You need to adjust the RPI side of the code.

user-d8853d 09 November, 2020, 20:31:31

I have installed pupils capture software in ubuntu

papr 09 November, 2020, 20:32:16

RPI/hmd-eyes are the sending components. Capture is the receiving component.

user-40de8a 09 November, 2020, 21:43:09

I'm sorry, what does RPI stand for? (Is it Raw Pupil Image)?

papr 09 November, 2020, 21:44:26

@user-40de8a Raspberry Pi. Not sure what the official abbreviation is.

user-40de8a 10 November, 2020, 09:19:38

I'm trying to run pupil player on my RPi but I get the following error: glfw.GLFWError: (65539) b'Invalid window hint 139276' I have attached the error log with this message.

log_error

papr 10 November, 2020, 09:27:10

@user-40de8a Please see my previous response to this issue: https://discord.com/channels/285728493612957698/446977689690177536/773851621187649577

This time you need to remove the corresponding lines in player.py

user-40de8a 10 November, 2020, 09:33:39

Thanks a lot. I'm sorry I hadn't seen your previous response to this, my bad!

papr 10 November, 2020, 09:33:53

@user-40de8a No worries πŸ™‚

user-40de8a 10 November, 2020, 09:53:30

Now I am getting this error:

player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
  File "/home/pi/pupil/pupil_src/launchables/player.py", line 898, in player_drop
    content_scale = gl_utils.get_content_scale(window)
  File "/home/pi/pupil/pupil_src/shared_modules/gl_utils/utils.py", line 264, in get_content_scale
    return glfw.get_window_content_scale(window)[0]
AttributeError: module 'glfw' has no attribute 'get_window_content_scale'

It indeed looks like glfw does not have this attribute. Did I install the package incorrectly?

papr 10 November, 2020, 09:55:54

@user-40de8a Mmh, I do not know. It is possible that the underlying libglfw library is not up-to-date. Could you let us know which glfw version you have installed?

user-40de8a 10 November, 2020, 09:59:34

I have the 2.0.0 version

papr 10 November, 2020, 10:02:59

@user-40de8a mmh, that should be correct. Looks like there are some parts of the software that are not supported on the Raspberry Pi. I fear you need to remove any content-scale related functionality. You should be able to use a content-scale of 1.0 to replace it.
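
A sketch of that workaround; patching get_content_scale in pupil_src/shared_modules/gl_utils/utils.py (the location from the traceback above) is an assumption about where one would apply it:

# workaround sketch, not upstream code: hard-code the content scale on platforms
# where the installed glfw lacks get_window_content_scale (e.g. Raspbian)
def get_content_scale(window) -> float:
    return 1.0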

user-90a55d 10 November, 2020, 10:24:23

Hello, I have glasses of the type PUPIL W120 E200B. This morning I turned on the glasses and one eye camera does not work while the other does. What can we do?

papr 10 November, 2020, 10:39:59

@user-90a55d Please contact info@pupil-labs.com in this regard.

user-d8853d 10 November, 2020, 14:20:40

Hi @papr I implemented the video backend and a client to check if I am receiving anything from the RPI. I used your zmq_tools.py for the implementation. Here is the code: https://github.com/Lifestohack/pupil-video-backend

user-d8853d 10 November, 2020, 14:21:52

Now the problem is that the client is not receiving anything. If block_until_connected = True in Msg_Receiver, I get a "ZMQ connection failed" exception.

user-d8853d 10 November, 2020, 14:23:00

am I missing something?

user-d8853d 10 November, 2020, 15:03:05

Even with block_until_connected = False, nothing is received.

papr 10 November, 2020, 15:43:12

@user-d8853d The issue is that you are connecting the Msg_Streamer (PUB socket) in the backend to Pupil Remote, a REQ-REP type socket. You need to request the pub port and connect to it instead. Check out the Publisher class in the hmd-eyes project. https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/ScreenCast.cs#L56

user-d8853d 10 November, 2020, 16:01:59

Thanks for your time @papr. I am not connecting to Pupil Remote. I am making my own zmq server to publish the data from the RPI, as seen in video_backend.py. With video_backend_client.py I wanted to subscribe to the video published by video_backend.py.

papr 10 November, 2020, 16:04:23

Ah, ok I see the issue. One of both sides needs to bind to the port, normally the server.

papr 10 November, 2020, 16:04:29

And then the other can connect

papr 10 November, 2020, 16:05:02

Capture has a "message backend" that binds to :50020 (for Pupil Remote) and random ports for subscription and publishing, and allows all scripts to connect to these ports

user-d8853d 10 November, 2020, 16:20:45

oh yeah that works. Thanks :)

user-d8853d 10 November, 2020, 16:51:31

For me to understand, so that I can do pupil detection on my laptop with the video published from the RPI: Step 1: Publish the video from the Raspberry Pi (done). Step 2: Install hmd-eyes and Unity. Step 3: Change the code such that https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/ScreenCast.cs receives the video from the RPI, and change this to publisher.Send(topic, payload, frame_from_raspberrypi); Step 4: Pupil Capture will automatically show the video source as an hmd streaming device.

papr 10 November, 2020, 16:55:01

@user-d8853d Step 2 is not necessary. Your RPI code takes the role of hmd-eyes. The Hmd_Streaming Capture plugin takes the role of your video_backend_client

papr 10 November, 2020, 16:56:08

The hmd-eyes screencast is basically a reference implementation for your RPI code. You are not supposed to use it directly. πŸ™‚

user-d8853d 10 November, 2020, 18:55:46

oh, that makes it easy. πŸ™‚

user-d8853d 10 November, 2020, 18:59:40

that means it should already work because I

user-d8853d 10 November, 2020, 18:59:49

because https://github.com/Lifestohack/pupil-video-backend/blob/master/video_backend.py is implemented.

papr 10 November, 2020, 19:09:57

Don't forget to start the plugin

user-d8853d 10 November, 2020, 19:09:57

on which port is the hmd_streaming plugin listening?

user-d8853d 10 November, 2020, 19:11:10

Is the hmd_streaming plugin the same as the hololens relay, or?

papr 10 November, 2020, 19:11:12

It listens to the IPC backend. The backend is like an echo chamber. You can either yell something into it, or listen to what comes out. The hmd_streaming plugin listens for what comes out. You need to yell into it. Request the PUB_PORT from Pupil Remote

papr 10 November, 2020, 19:11:51

The hololens relay is unrelated.

papr 10 November, 2020, 19:12:36
  1. Connect to Pupil Remote
  2. Send "PUB_PORT"
  3. Receive port number
  4. Connect Streamer to pub port
user-d8853d 10 November, 2020, 19:14:46

Ah something like this:

context = zmq.Context()
addr = '127.0.0.1'  # remote ip or localhost
req_port = "50020"  # same as in the pupil remote gui
req = context.socket(zmq.REQ)
req.connect("tcp://{}:{}".format(addr, req_port))
req.send_string('SUB_PORT')
sub_port = req.recv_string()
sub = context.socket(zmq.SUB)
sub.connect("tcp://{}:{}".format(addr, sub_port))
papr 10 November, 2020, 19:15:13

@user-d8853d PUB instead of SUB. You are on the sending side
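
Applied to the snippet above, the sending side would then look roughly like this (same assumptions as before: Pupil Remote reachable at addr on port 50020):

req.send_string('PUB_PORT')
pub_port = req.recv_string()
pub = context.socket(zmq.PUB)  # publisher socket instead of a subscriber
pub.connect("tcp://{}:{}".format(addr, pub_port))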

user-d8853d 10 November, 2020, 19:15:21

oh yeah

user-d8853d 10 November, 2020, 19:15:29

Thanks πŸ™‚ Let me try.

user-d8853d 10 November, 2020, 20:36:28

So, I implemented that. https://github.com/Lifestohack/pupil-video-backend/blob/master/video_backend.py

user-d8853d 10 November, 2020, 20:36:54

I get the port from Pupil Remote and then the video is published to that port.

papr 10 November, 2020, 20:38:12

@user-d8853d I am just missing the start_plugin notification to start the hmd_streaming πŸ™‚

user-d8853d 10 November, 2020, 20:51:40

I added this:

user-d8853d 10 November, 2020, 20:52:10
# send notification:
def notify(notification):
    """Sends ``notification`` to Pupil Remote"""
    topic = "notify." + notification["subject"]
    payload = serializer.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()

# Start the annotations plugin
notify({"subject": "start_plugin", "name": "hmd_streaming", "args": {}})
user-d8853d 10 November, 2020, 20:53:47

When I start the video_backend.py, I see this message on pupil capture software:

user-d8853d 10 November, 2020, 20:55:28

"World: Attempt to load unknown plugin: hmd_streaming"

papr 10 November, 2020, 20:56:11

Use HMD_Streaming_Source instead of hmd_streaming

papr 10 November, 2020, 20:57:30

"args": {"topics": ("hmd_streaming.world",)} "args": {"topics": ("hmd_streaming.eye0",)} "args": {"topics": ("hmd_streaming.eye1",)}

For the different video sources

user-d8853d 10 November, 2020, 21:08:07

Changed to :

user-d8853d 10 November, 2020, 21:08:26
notify({"subject": "start_plugin", "name": "HMD_Streaming_Source", "args": {"topics": ("hmd_streaming.world",)}})
papr 10 November, 2020, 21:09:28

@user-d8853d Looks good at first sight. Does it work? If not, can you check the Capture logs if there is any info about it?

user-d8853d 10 November, 2020, 21:20:41

Step 1: I open the Pupil Capture software. Step 2: I start video_backend.py

user-d8853d 10 November, 2020, 21:21:22

If I am, for example, in the Plugin Manager, then it automatically switches to Video Source

user-d8853d 10 November, 2020, 21:21:25

but nothing happens

papr 10 November, 2020, 21:22:10

Looks like the plugin is being started correctly

user-d8853d 10 November, 2020, 21:22:43

Do I need pyrealsense2 for this to work?

papr 10 November, 2020, 21:23:08

@user-d8853d no, definitely not πŸ˜„

user-d8853d 10 November, 2020, 21:23:24

🀣

papr 10 November, 2020, 21:23:43

@user-d8853d Please check ~/pupil_capture_settings/capture.log for possible failure reasons

user-d8853d 10 November, 2020, 21:24:11

capture.log

papr 10 November, 2020, 21:25:57

Ok, streaming source is started correctly

papr 10 November, 2020, 21:28:25

@user-d8853d Oh, I know the issue. The HMD_Streaming_Source only supports rgb as format

user-d8853d 10 November, 2020, 21:28:51

it works.

user-d8853d 10 November, 2020, 21:29:06

😘

user-d8853d 10 November, 2020, 21:29:36

I will give you credit in my thesis.

user-d8853d 10 November, 2020, 21:34:25

This will probably be useful for other people.

papr 10 November, 2020, 21:36:04

@user-d8853d I agree! It might need some cleanup, but as a standalone python module with proper documentation, it could work wonders for people with custom cameras that are supported by opencv but not our pyuvc library! Feel free to add your project to https://github.com/pupil-labs/pupil-community

user-d8853d 10 November, 2020, 21:37:41

I will do that. I will add support for two cameras, one for world view and one for eye0. Do some clean up and add it to pupil community.

user-d8853d 10 November, 2020, 21:37:56

Thank you very much for your help.

papr 10 November, 2020, 21:38:27

@user-d8853d That would be great!

user-90000e 11 November, 2020, 19:50:03

Hi all, I'm new here and I'm trying to map 2D coordinates on to the video exported by PupilPlayer. My validation here is whether the 2D coordinate matches the green dot recorded in the video output, and so far I haven't had luck finding a solution or a previous comment looking at the same thing.

Here is the code I'm using to translate normalized X Y back to pixel space. Since the eye tracker is recording multiple observations per frame, I'm averaging these observations across frames and then comparing to the frames of the exported world_clip.mp4 video. Here's my code and attached is the result. In the result images, the green circle is the gaze data from the video itself, and the blue dot is what I'm calculating from the 2D coordinates. Just as a sanity check, the first row of numbers in the result are the normalized 2D coordinates (straight from PupilPlayer) and the second row is my translation to pixels... hopefully I didn't do something dumb in the math. You can clearly see in the result that the user is making a saccade that's captured in the video data but not by the 2D coordinates.

import cv2
import pandas as pd
import matplotlib.pyplot as plt

def norm_to_pixel(i, axis):
    axis = axis.lower()
    if axis not in ['x', 'y']:
        raise ValueError('axis must be "X" or "Y"')
    if axis == 'x':
        return i * 1280
    ## subtracting because cv2 starts pixel 0 at top of image
    return abs((i*720)-720)

cap = cv2.VideoCapture('world_clip.mp4')

d = pd.read_csv('gaze_positions.csv')
by_frame = d[['world_index', 'norm_pos_x', 'norm_pos_y']].groupby('world_index').agg('mean')

for i in range(8):
    ret, frame = cap.read()
    X = norm_to_pixel(by_frame.loc[i, 'norm_pos_x'], 'X')
    Y = norm_to_pixel(by_frame.loc[i, 'norm_pos_y'], 'Y')
    print([by_frame['norm_pos_x'].values[i], by_frame['norm_pos_y'].values[i]])
    print([X, Y])

    implot = plt.imshow(frame)
    plt.scatter([X], [Y])
    plt.show()

Chat image

user-90000e 11 November, 2020, 19:50:44

I'm assuming software-dev is the right channel for this question but not sure so sorry if I get that wrong.

papr 11 November, 2020, 19:51:38

@user-90000e it is the right channel πŸ‘ unfortunately, OpenCV frame extraction is not frame-exact.

papr 11 November, 2020, 19:52:15

Please check out our tutorial on how to extract frames https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb

papr 11 November, 2020, 19:52:28

(and plot gaze)
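
For illustration, a minimal sketch of a frame-exact alternative, decoding sequentially with PyAV instead of seeking with OpenCV (the linked tutorial covers the full gaze-overlay workflow):

import av  # PyAV decodes every frame in order, avoiding inexact seeking

container = av.open("world_clip.mp4")
for frame_index, frame in enumerate(container.decode(video=0)):
    img = frame.to_ndarray(format="bgr24")  # numpy array, BGR like OpenCV frames
    # look up the averaged gaze for frame_index here and plot it on img
    if frame_index >= 7:
        break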

user-90000e 11 November, 2020, 19:53:04

Wow thank you very much-- it would've taken me forever to realize that Opencv was the problem πŸ™

papr 11 November, 2020, 19:56:31

@user-90000e yeah, it took us a while as well

user-d8853d 11 November, 2020, 20:49:18

Update on the Pupil video backend: I succeeded in publishing video to the Pupil Capture software, but only the world view. If I try eye0 and eye1 it does not work.

papr 11 November, 2020, 20:50:00

@user-d8853d You need to start the hmd_streaming source in the eye processes as well

user-d8853d 11 November, 2020, 20:50:49

Oh I am not finished explaining.πŸ˜†

papr 11 November, 2020, 20:51:46

The topic is start_eye_plugin instead of start_plugin. Also, you need to add a "target": "eye0" field (and eye1 respectively)

papr 11 November, 2020, 20:52:31

@user-d8853d Oh, sorry. Bad habit of mine. Jumping to conclusions too quickly. πŸ™‚

user-d8853d 11 November, 2020, 20:52:33

I get the following error in pupil capture software: Eye0: Process Eye0 crashed with trace

user-d8853d 11 November, 2020, 20:53:03

but you are right. I looked at the logs and it says KeyError: 'target'

user-d8853d 11 November, 2020, 20:54:15

I have the topic as start_eye_plugin for eye0 and eye1 but I did not have target

user-d8853d 11 November, 2020, 21:01:01
{"subject": start_eye_plugin, "target": "eye0", "name": "HMD_Streaming_Source", "args": {"topics": (hmd_streaming.eye0,)}}
user-d8853d 11 November, 2020, 21:01:04

is this correct?

papr 11 November, 2020, 21:03:56

start_eye_plugin and hmd_streaming.eye0 should be strings

papr 11 November, 2020, 21:04:13

but yes, generally, this looks correct
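
Putting the corrections together, the three notifications would look roughly like this (a sketch reusing the notify() helper posted earlier; everything quoted as strings):

notify({"subject": "start_plugin", "name": "HMD_Streaming_Source",
        "args": {"topics": ("hmd_streaming.world",)}})
notify({"subject": "start_eye_plugin", "target": "eye0", "name": "HMD_Streaming_Source",
        "args": {"topics": ("hmd_streaming.eye0",)}})
notify({"subject": "start_eye_plugin", "target": "eye1", "name": "HMD_Streaming_Source",
        "args": {"topics": ("hmd_streaming.eye1",)}})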

user-d8853d 11 November, 2020, 21:04:26

yeah, that's true. I copy-pasted it wrong from my code.

user-d8853d 11 November, 2020, 21:10:26

Now I have some other error log.

user-d8853d 11 November, 2020, 21:11:28

capture.log

papr 11 November, 2020, 21:15:11

ok, looks like the backend is not yet compatible with the eye process

papr 11 November, 2020, 21:15:32

@user-d8853d Are you running Capture from source or from bundle?

user-d8853d 11 November, 2020, 21:16:51

yes I downloaded the latest version from here: https://github.com/pupil-labs/pupil/releases/tag/v2.5#user-content-downloads

user-d8853d 11 November, 2020, 21:17:02

Version 2.5

papr 11 November, 2020, 21:17:24

@user-d8853d ok, I think I can get this fixed for our next release this week

user-d8853d 11 November, 2020, 21:17:41

Oh cool. Shall I create a new ticket on github?

papr 11 November, 2020, 21:17:53

Yes, please.

user-d8853d 11 November, 2020, 21:18:28

Also could you please add support for other format like yuv and not only rgb.

user-d8853d 11 November, 2020, 21:18:58

is it possible?

papr 11 November, 2020, 21:19:15

mmh, not necessarily for the next release

user-d8853d 11 November, 2020, 21:19:20

ok

papr 11 November, 2020, 21:19:34

what exact yuv format do you need?

papr 11 November, 2020, 21:20:33

Can you dump the byte buffer of two such yuv frames to a binary file and send it to me? This way I can test it locally

user-d8853d 11 November, 2020, 21:24:13

Because I am sending the image over wifi, I wanted to send the frames as fast as possible: low latency and as high an fps as possible. For pupil tracking I don't think I need rgb. I was tinkering around, and just the luma component (the brightness) Y from the YUV was enough for 2d pupil detection. I did the experiment here: https://github.com/Lifestohack/smartglass

papr 11 November, 2020, 21:25:52

so you want to send a uint8 buffer of shape height, width?

user-d8853d 11 November, 2020, 21:29:17

The Raspberry Pi camera supports YUV420 (planar). So if I could just send one channel out of the three and do the pupil detection on that, it would be great.

user-d8853d 11 November, 2020, 21:29:28

so you want to send a uint8 buffer of shape height, width? @papr yes
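
For reference, pulling just the luma plane out of a planar YUV420 buffer is a one-liner (a sketch; assumes the standard planar layout with the full-resolution Y plane first):

import numpy as np

def y_plane(yuv420_bytes, width, height):
    # planar YUV420: the first width*height bytes are the full-resolution Y (luma) plane
    y = np.frombuffer(yuv420_bytes, dtype=np.uint8, count=width * height)
    return y.reshape(height, width)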

papr 11 November, 2020, 21:29:45

@user-d8853d ok, I will have a look tomorrow

user-d8853d 11 November, 2020, 21:29:58

Cool, thanks bro.

user-d8853d 11 November, 2020, 21:31:41

When doing the pupil detection, what is the size of the frame that the Pupil Capture software uses? Does it use rgb or do you use something else?

user-d8853d 11 November, 2020, 21:31:56

I saw somewhere 192x192. Is it true?

papr 11 November, 2020, 21:32:53

We use the Y channel as well πŸ™‚

papr 11 November, 2020, 21:33:14

192x192 is the default for our 200hz cams, correct

user-d8853d 11 November, 2020, 21:34:18

Oh great.

user-d8853d 11 November, 2020, 21:34:30

thanks for the information.

user-d8853d 11 November, 2020, 21:48:55

New issue created: https://github.com/pupil-labs/pupil/issues/2049

user-d8853d 12 November, 2020, 11:40:49

Step 1: Started the Pupil Capture software. Step 2: Clicked on "Detect eye 0" in Pupil Capture. A small window pops up where the eye0 video feed is supposed to be shown. Step 3: Started the pupil video backend. But the eye 0 detection window closes automatically, and Pupil Capture shows the error: Process Eye0 crashed with trace.

user-d8853d 12 November, 2020, 11:41:56

capture.log

user-d8853d 12 November, 2020, 12:02:15

this was regarding the ticket, @papr.

papr 12 November, 2020, 12:24:19

@user-d8853d Let's try to keep log files and steps for reproducing issues in their corresponding Github issues. That should help us to keep track of them.

user-d8853d 12 November, 2020, 12:55:01

of course. πŸ˜„

user-d8853d 13 November, 2020, 18:13:44

Created an issue for time sync. https://github.com/Lifestohack/pupil-video-backend/issues/3

user-d8853d 13 November, 2020, 20:18:58

Is this all for time sync? pupil_remote.send_string("T {}".format(time.time()))

papr 13 November, 2020, 20:19:58

@user-d8853d This sets Pupil time to unix time. But for best compatibility, you want to adopt Pupil time in your own application. In other words, the other way around.

papr 13 November, 2020, 20:20:35

The easiest but least accurate way is pupil_remote.send_string("t")

papr 13 November, 2020, 20:21:41

You want to estimate the network delay as well, e.g. like in hmd-eyes https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/TimeSync.cs#L66-L71

papr 13 November, 2020, 20:22:46

Basically, calculating the offset between your own clock and the Pupil clock, excluding the network delay. One measurement is easy, but it is better to do multiple and average them to account for variation in the network delay
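
A sketch of that averaging approach (assumes the pupil_remote REQ socket from the earlier snippets, and that the "t" request returns the current Pupil time as a string):

import time

def estimate_clock_offset(pupil_remote, samples=10):
    """Mean offset between Pupil time and the local clock, compensating network delay."""
    offsets = []
    for _ in range(samples):
        t_before = time.monotonic()
        pupil_remote.send_string("t")          # ask Capture for its current Pupil time
        pupil_time = float(pupil_remote.recv_string())
        t_after = time.monotonic()
        local_mid = (t_before + t_after) / 2   # assume a symmetric network delay
        offsets.append(pupil_time - local_mid)
    return sum(offsets) / len(offsets)

# local monotonic time + offset then approximates Pupil time, e.g. for frame timestamps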

user-d8853d 15 November, 2020, 00:38:28

@papr Thank you for your nice explanation. It is really thorough. I get the monotonic time from the Pupil Capture software. I calculated the mean monotonic offset and the offset jitter, and added the absolute values of both to my local monotonic time. Also, while sending the payload I need to send the timestamp. Was I supposed to send the local monotonic time (which after syncing would be the same as the Pupil Capture monotonic time) instead of my local epoch time?

user-d8853d 16 November, 2020, 18:44:41

I tested version 2.6 for GitHub issue #2049. I was not able to get it working.

user-d8853d 16 November, 2020, 18:44:48

capture.log

papr 16 November, 2020, 18:46:36

@user-d8853d Do you have the detector debug window opened?

papr 16 November, 2020, 18:50:08

@user-d8853d Looks like your focal length is zero. You are sending invalid intrinsics https://github.com/Lifestohack/pupil-video-backend/blob/f8f4d9b3f104ae92cb9e84951c35d3bdd5397b47/payload.py#L5-L9

papr 16 November, 2020, 18:50:45

Actually, this explains another issue that we encountered in pye3d the other day, while using your backend πŸ˜„

user-d8853d 16 November, 2020, 18:52:05

Oh I am sorry.

papr 16 November, 2020, 18:52:22

No worries. The software is a bit more stable thanks to your code now πŸ™‚

user-d8853d 16 November, 2020, 18:52:33

I need to find out intrinsics for my cameras.

user-d8853d 16 November, 2020, 18:53:00

let me try with dummy intrinsics.

user-d8853d 16 November, 2020, 18:53:25

No worries. The software is a bit more stable thanks to your code now πŸ™‚ @papr nice to hear that. Glad I could support that.

papr 16 November, 2020, 18:53:49

use something like this for now

user-d8853d 16 November, 2020, 18:54:19

ok let me try that.

user-d8853d 16 November, 2020, 19:02:35

it's working. Thank you @papr for your hard work.

user-90000e 18 November, 2020, 16:09:30

I have a development challenge that (based on my reading of previous conversation here) I don't think can be solved from PupilLabs technology but I'm wondering if anyone's found a successful approach. I'm trying to do what I've seen referred to in this discord as "markerless area of interest tracking", where there's some data that's already been collected (ie, without using any surface tracking technology), and I want to ad-hoc compute whether fixations are landing in particular areas of interest (eg computer screens / controls). I've attached a sample image from our study. I've thought about two possible solutions: (a) trying to stitch images together to make some type of panoramic image, upon which I could define an absolute coordinate system and then try to do some sub-image identification to translate the fixation coordinates to absolute space, or (b) implementing some type of ML object detection to classify bounding boxes around objects in the images. Both of these approaches seem quite challenging and prone to error, and I'm just wondering whether anyone has come up with a better solution to this problem. Thanks!

Chat image

user-f22600 18 November, 2020, 19:00:31

Hi, I'm new to pupil and just checked out the repo. The readme says there is a cpp version as well but I can't find it. Can you help me find it?

papr 18 November, 2020, 19:01:37

@user-f22600 Not exactly. Parts of the code are written in cpp. Most of the application is written in Python

user-f22600 18 November, 2020, 19:02:41

@papr Thanks, can you point me to which part is written in cpp? There are only py files.

papr 18 November, 2020, 19:03:50

@user-f22600 I think the most prominent c++ usage is in here https://github.com/pupil-labs/pupil-detectors/

user-f22600 18 November, 2020, 19:04:10

@papr Many thanks!!

user-d8853d 21 November, 2020, 18:54:45

Hi @papr is there a way to set the hwm for the eye or world process while using ZMQ? Or have you implemented something where the video frames are not queued, and if a frame is already queued, the oldest one is just dropped?

user-d8853d 21 November, 2020, 18:55:44

{"subject": "start_eye_plugin", "target": "eye0", "name": "HMD_Streaming_Source", "args": {"topics": (hmd_streaming.eye0,), "hwm":"1"}}

user-d8853d 21 November, 2020, 18:55:51

Something like this to set the hwm?

papr 23 November, 2020, 10:22:56

@user-d8853d Unfortunately, zmq does not allow dropping the oldest frames. Once they are in the queue, they need to be processed. The HWM only affects newly added frames, i.e. if the HWM is one, and there is one frame in the queue already, all new frames will be dropped until the queued frame is processed.

Currently, HMD_Streaming_Source does not implement custom HWMs but this can be easily done. Feel free to make a PR that forwards an HWM argument from HMD_Streaming_Source to Msg_Receiver (which has a hwm=None argument already)

user-d8853d 23 November, 2020, 12:34:08

Thanks @papr for the guidance. Here I have updated the code on the branch hwmdev on my fork. https://github.com/Lifestohack/pupil/blob/hwmdev/pupil_src/shared_modules/video_capture/hmd_streaming.py#L132

user-d8853d 23 November, 2020, 12:34:59

Example of usage: {"subject": "start_eye_plugin", "target": "eye0", "name": "HMD_Streaming_Source", "args": {"topics": (hmd_streaming.eye0,), "hwm":1}}

papr 23 November, 2020, 12:36:59

I think for a merge into the main repository you would need two changes: 1) define the argument explicitly instead of using kwargs, 2) return it in get_init_dict

user-d8853d 23 November, 2020, 13:21:28

You mean {"subject": "start_eye_plugin", "target": "eye0", "name": "HMD_Streaming_Source", "hwm":1, "args": {"topics": (hmd_streaming.eye0,)}}? Instead of passing it as an arg, define it explicitly?

user-d8853d 23 November, 2020, 12:53:26

I pushed the change returning hwm in get_init_dict.

papr 23 November, 2020, 13:21:55

@user-d8853d No, the original notification was correct

papr 23 November, 2020, 13:22:12

I meant in the class __init__ definition

user-d8853d 23 November, 2020, 13:47:30

umm. I misunderstood. πŸ˜…

user-d8853d 23 November, 2020, 13:47:53

but without using kwargs I could not get the hwm value from the notification

papr 23 November, 2020, 13:48:38
- def __init__(self, g_pool, topics=("hmd_streaming.world",), *args, **kwargs):
+ def __init__(self, g_pool, topics=("hmd_streaming.world",), hwm=None, *args, **kwargs):
user-d8853d 23 November, 2020, 13:49:23

yeah, that is what I thought initially.

papr 23 November, 2020, 13:51:16

This is equivalent to your implementation

papr 23 November, 2020, 13:51:21

but more explicit about the existence of this argument

user-d8853d 23 November, 2020, 13:51:50

Cool @papr. But the value is still read from kwargs.

papr 23 November, 2020, 13:52:18

I do not understand what you mean.

user-d8853d 23 November, 2020, 13:54:54

Sorry for the misunderstanding. Here is the change: https://github.com/Lifestohack/pupil/blob/hwmdev/pupil_src/shared_modules/video_capture/hmd_streaming.py#L127

papr 23 November, 2020, 13:55:52

As kwargs cannot include any arguments that are explicitly named in the signature
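
A one-line illustration of that point (hypothetical function, just to show how Python routes keyword arguments):

def start(g_pool=None, topics=("hmd_streaming.world",), hwm=None, **kwargs):
    print(topics, hwm, kwargs)

start(topics=("hmd_streaming.eye0",), hwm=1, extra=True)
# prints: ('hmd_streaming.eye0',) 1 {'extra': True}  ->  hwm never ends up in kwargs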

user-d8853d 23 November, 2020, 14:01:41

I removed it. Works like magic. πŸ˜€

user-d8853d 23 November, 2020, 14:04:17

Shall I directly create a pull request to master?

papr 23 November, 2020, 14:04:27

to develop please

papr 23 November, 2020, 14:04:43

ah one more change please

papr 23 November, 2020, 14:05:08

please change the argument order from hwm, topics to topics, hwm

user-d8853d 23 November, 2020, 14:07:27

Done

papr 23 November, 2020, 14:09:35

Btw, the implementation already tries to get the newest frame by receiving all buffered frames and discarding all but the newest one

user-d8853d 23 November, 2020, 14:13:47

Oh that's a relief.

papr 23 November, 2020, 14:12:17

Setting the hwm to 1 makes sense in this case actually, if we discard all frames anyway. Feel free to make a second PR that changes the default HWM from None to 1

user-d8853d 23 November, 2020, 14:15:11

let me change that.

user-d8853d 23 November, 2020, 14:18:53

I changed it to 1. here is the PR: https://github.com/pupil-labs/pupil/pull/2058

user-d8853d 23 November, 2020, 14:24:14

Sorry, I didn't read your last message to the end. I did not make a second PR but pushed it to the same branch in the same PR.

papr 23 November, 2020, 14:34:25

don't worry

papr 23 November, 2020, 14:34:43

Would have been cleaner, but it is fine as it is

papr 23 November, 2020, 14:36:05

@user-d8853d Please apply black on the existing PR

user-d8853d 23 November, 2020, 14:48:10

I did it and pushed the change. Thanks for the heads up.

papr 23 November, 2020, 15:00:33

@user-d8853d Thank you for your contribution!

user-d8853d 23 November, 2020, 15:02:40

@papr Welcome. My pleasure.

user-d8853d 25 November, 2020, 02:14:02

I was running the pupil video backend on the same computer as Pupil Capture, sending frames with shape (720, 1280, 3). I see the CPU going over 180. Does CPU usage go over 100? I thought 100 was the max. πŸ˜†

Chat image

papr 25 November, 2020, 07:01:58

@user-d8853d The theoretical maximum is 100% * number of CPU cores.

user-d8853d 25 November, 2020, 12:03:03

aah. it makes sense

End of November archive