software-dev


user-100d6e 09 August, 2022, 14:17:05

Hi! Could you please release an updated client (https://github.com/pupil-labs/pupil-cloud-client-python) for API v2 (https://api.cloud.pupil-labs.com/v2)? Or is it possible for me to generate the Python client code myself from the swagger.json?

user-9429ba 09 August, 2022, 15:14:39

Hi @user-100d6e I have asked my colleague to reply over in the invisible channel

user-100d6e 09 August, 2022, 14:21:08

And is there any possibility to export videos via the API? If not, could you please implement this?

user-349f2c 10 August, 2022, 10:59:49

Hi, I have a question regarding blink correction of Pupil Core data. I streamed the data via LSL and it contains NaNs. When I try to apply your offline blink detection I run into problems, as the filter response is heavily influenced by the NaNs. Any suggestions for a workaround?

user-9429ba 11 August, 2022, 13:25:59

Hi Jolius! The LSL relay for Pupil Core will produce NaNs for one eye where it is not possible to match pupil datums to gaze binocularly. The pairing of Pupil data to Gaze is accounted for directly in Pupil Capture. This could be affecting your use of the offline blink detector. You can read more about the matching process here: https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching
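
For illustration, one common workaround (not an official recommendation) is to bridge short NaN gaps before filtering, e.g. by linear interpolation, and to exclude longer gaps explicitly. A minimal pandas sketch, assuming the LSL-recorded confidence signal is in a Series:

import numpy as np
import pandas as pd

# hypothetical example signal with NaNs where no binocular match was possible
signal = pd.Series([0.9, np.nan, np.nan, 0.85, 0.8, np.nan, 0.9])

# bridge gaps of up to 5 samples by linear interpolation; longer gaps stay NaN
# and can be masked out before the filter is applied
filled = signal.interpolate(method="linear", limit=5, limit_direction="both")
print(filled)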

marc 10 August, 2022, 11:44:19

Hi @user-100d6e! Unfortunately we do not have the capacity to provide a client for v2 right now. Using the API via plain HTTP requests does work though.

What type of video are you interested in exporting? You probably have seen that exporting recordings including the scene video is possible. Are you looking for e.g. video with gaze overlay?
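
For reference, calling the v2 API with plain HTTP requests can look roughly like the sketch below. The /workspaces path and the api-key header are assumptions for illustration only; the authoritative endpoint list and authentication scheme are in the swagger.json served by the API.

import requests

API_BASE = "https://api.cloud.pupil-labs.com/v2"
API_TOKEN = "YOUR-DEVELOPER-TOKEN"  # generated in your Pupil Cloud account

# NOTE: endpoint path and header name are illustrative assumptions --
# check the swagger.json for the actual API surface.
response = requests.get(
    f"{API_BASE}/workspaces",
    headers={"api-key": API_TOKEN},
    timeout=10,
)
response.raise_for_status()
print(response.json())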

user-100d6e 11 August, 2022, 10:59:30

@marc Thank you for your feedback.

In fact, as you suggested in your example: I would like to start the export of a video with gaze overlay via the API and download the file afterwards.

Basically I want to automate the process, so I don't need to log in to the frontend.

marc 11 August, 2022, 11:28:36

I see! This would require you to create a gaze overlay enrichment, which currently is not possible via the API. We do have changes to how videos can be rendered and exported on the roadmap, and I'd expect API access to become possible then, but it's going to be a while until we get to it, I'm afraid.

user-100d6e 11 August, 2022, 12:02:30

Thank you! Then we will continue to process the video export manually.

On the other hand I was thinking to download the raw files from cloud and export them via batch_exporter.py commandline. There shouldn't be any problem with that?

Therefore I also wonder whether the export would be faster than in the cloud (of course depending on the local machine's resources).

marc 11 August, 2022, 12:45:49

Yes, using the raw files and the batch exporter should work. In terms of speed it should generally not be much faster. The computation that happens is basically the same in Player and Cloud, and in Cloud the degree of parallelization would be higher. For small job volumes things like initial waiting time in a queue in Cloud could make a difference though.

user-349f2c 11 August, 2022, 13:51:09

Thanks for the info. How can the datums not be matched? I always expected them to start at the same recording onset and produce samples one after another.

user-9429ba 11 August, 2022, 14:01:35

The cameras are free-running. We use Pupil Time to assign each video frame a timestamp which is guaranteed to be monotonically increasing. https://docs.pupil-labs.com/core/terminology/#_2-pupil-time
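
For illustration only (this is not the exact matching logic Capture uses), pairing two free-running timestamp streams offline usually comes down to a nearest-neighbour match on the timestamps:

import numpy as np

def match_nearest(ts_a, ts_b):
    """For each timestamp in ts_a, return the index of the closest
    timestamp in ts_b (both arrays sorted ascending)."""
    idx = np.searchsorted(ts_b, ts_a)
    idx = np.clip(idx, 1, len(ts_b) - 1)
    left, right = ts_b[idx - 1], ts_b[idx]
    # pick whichever neighbour is closer
    idx -= (ts_a - left) < (right - ts_a)
    return idx

# file names follow the Core recording layout
eye0_ts = np.load("eye0_timestamps.npy")
eye1_ts = np.load("eye1_timestamps.npy")
closest_eye1_for_eye0 = match_nearest(eye0_ts, eye1_ts)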

user-74ef51 18 August, 2022, 09:27:15

Hi, in a Pupil Core recording I took some time ago there is a synchronization error of about 120 ms between the right and left eye video. What could have caused this error? I do match the frames in eye0 and eye1 with the eye*_timestamps.npy files.

papr 18 August, 2022, 09:31:54

What do you use to extract the images from the video? Note, that opencv is known to be inaccurate in that regard. See this tutorial for more details https://github.com/pupil-labs/pupil-tutorials/blob/master/09_frame_identification.ipynb

user-74ef51 18 August, 2022, 09:39:41

I will check that tutorial. However, even though I use OpenCV, I can't see how that is the problem, because my code simply selects the two frames that are closest to a specified timestamp according to the eye*_timestamps.npy files and reads them with OpenCV.

papr 18 August, 2022, 09:40:21

Yes, exactly, opencv is known to not return the frames that you expect.

user-74ef51 18 August, 2022, 11:16:43

If I want to play the frames loaded with pyav as a video, should I use cv2.imshow()? Or does pyav have any functionality for this?

papr 18 August, 2022, 11:20:04

pyav does not have image display functionality. You can continue using opencv for that. The only thing you need pyav for is the frame extraction from the video file.

From the top of my head, the code should look like this

- frame.to_image()
+ img = frame.to_ndarray(format="bgr24")
+ cv2.imshow("Frame", img)
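
Put together, a minimal playback sketch (assuming an eye video from a Core recording folder, e.g. eye0.mp4) might look like this:

import av
import cv2

with av.open("eye0.mp4") as container:
    for frame in container.decode(video=0):
        img = frame.to_ndarray(format="bgr24")  # BGR array that OpenCV can display
        cv2.imshow("Frame", img)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to stop playback
            break

cv2.destroyAllWindows()
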
user-b91cdf 19 August, 2022, 07:50:02

Hi guys,

First of all I would like to thank you for the support during my master's thesis and the many questions you have answered, @nmt @papr.

Since I am now working as a research assistant at the TU Darmstadt, the projects continue directly.

In a future project, I would like to track the subject's gaze on other moving ROIs during night driving.

Would it be possible to place AR markers in the car that are always visible and then define an ROI on the road that can also change during the video?

Cobe ✌🏻

user-74ef51 19 August, 2022, 09:16:31

Is the code in the tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/09_frame_identification.ipynb outdated? I can't get seek to work

papr 19 August, 2022, 09:17:39

No, it should be up-to-date. Which OS, Python and pyav version are you using?

user-74ef51 19 August, 2022, 09:18:28

My code:

path = DATASETS_ROOT.joinpath("2022_03_04/019/eye1.mp4")

video = av.open(str(path))
eye0_timestamps = np.load(path.with_name("eye1_timestamps.npy"))
pts_of_1200th_frame = eye0_timestamps[1200]
print(pts_of_1200th_frame)

# seek requires a primitive Python int
video.seek(int(pts_of_1200th_frame), stream=video.streams.video[0])
frame = next(video.decode())
frame.to_image()

papr 19 August, 2022, 09:28:04

There is a difference between pts and timestamps. You are attempting to use a timestamp to seek, instead of a pts. Timestamp refers to the wall-clock time recorded by Capture. pts are presentation timestamps and refer to the media file's internal timestamps. You can easily extract them from the video using something like this:

with av.open(str(path)) as video_pts_extraction:
  pts = [
    packet.pts for packet in video_pts_extraction.demux(video=0)
    if packet is not None and packet.pts is not None
  ]
user-74ef51 19 August, 2022, 09:19:26

Ubuntu 22.04, Python 3.10.4, pyav latest

user-74ef51 19 August, 2022, 09:20:43

My timestamps are in .npy format not .csv

user-74ef51 19 August, 2022, 09:21:40

I always get the first frame no matter what

papr 19 August, 2022, 09:28:33

and then use

- pts_of_1200th_frame = eye0_timestamps[1200]
+ pts_of_1200th_frame = pts[1200]
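
Putting the two snippets together, a seek by pts could look roughly like this (a sketch using the same file layout as above):

import av

path = "eye1.mp4"  # eye video inside the recording folder

# 1) extract the media file's internal presentation timestamps (pts)
with av.open(path) as container:
    pts = [
        packet.pts for packet in container.demux(video=0)
        if packet is not None and packet.pts is not None
    ]

# 2) seek to the pts of the frame of interest and decode it
with av.open(path) as container:
    target_pts = pts[1200]
    container.seek(target_pts, stream=container.streams.video[0])
    frame = next(container.decode(video=0))
    print(frame.pts, frame.time)
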
papr 19 August, 2022, 09:30:11

Using markers usually only makes sense if they do not move in relation to your ROI. You will need some other way to track your ROI. Maybe there is already software to detect street outlines in a video?
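
Purely as an illustration of a generic (non-Pupil-Labs) approach, straight street or lane outlines are often approximated with edge and line detection in OpenCV; whether this holds up for night-driving footage would need testing:

import math
import cv2

frame = cv2.imread("world_frame.png")  # hypothetical scene-camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# probabilistic Hough transform to find roughly straight outlines
lines = cv2.HoughLinesP(edges, rho=1, theta=math.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=20)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("world_frame_outlines.png", frame)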

user-b91cdf 19 August, 2022, 09:39:01

Ok, I'll have a look!

user-74ef51 19 August, 2022, 10:24:54

Is this not a problem? From the docs:

After seeking, packets that you demux should correspond (roughly) to the position you requested. In most cases, the defaults of backwards = True and any_frame = False are the best course of action, followed by you demuxing/decoding to the position that you want. This is because to properly decode video frames you need to start from the previous keyframe.

papr 19 August, 2022, 10:31:31

I can double-check on Monday whether this is an issue.

papr 19 August, 2022, 10:30:58

Core recordings use the MJPEG format, and in MJPEG every frame is a key frame, to my knowledge.

user-74ef51 19 August, 2022, 10:33:43

When I iterate over frames by seeking as a test, nothing happens for a few values and then there is a jump.

papr 19 August, 2022, 11:23:59

Could you share the code with us?

nmt 19 August, 2022, 11:08:46

Hi @user-b91cdf 👋. That's no problem at all, and it's great to hear you're continuing the projects! Just to unpack this a bit, is the aim to have ROIs 'earth-fixed' on the road, or have the ROIs moving with the car, e.g. covering a given portion of the road ahead? I know that both cases are of interest in this kind of research

user-b91cdf 19 August, 2022, 11:45:41

Could be both. I'll write again after the kick-off meeting with the customer. Currently I am thinking of doing a video annotation in the world video.

user-74ef51 19 August, 2022, 12:16:24

Wait, I noticed that this is not a problem with the original video, only with the same video converted with the recommended settings: ffmpeg -i world.mp4 -vf format=yuv420p world_compatible.mp4

papr 19 August, 2022, 13:09:14

The converted video is h264, where only some frames are key frames. So that file might suffer from the issue described in the note you copied from the pyav docs.
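
If the converted file keeps its sparse key frames, one workaround (a sketch, not from the tutorial) is to seek and then decode forward until the requested pts is reached:

import av

def read_frame_at_pts(path, target_pts):
    """Seek to the preceding key frame, then decode forward to the
    frame with the requested pts (for h264 files)."""
    with av.open(path) as container:
        stream = container.streams.video[0]
        container.seek(target_pts, stream=stream)  # lands on the previous key frame
        for frame in container.decode(stream):
            if frame.pts is not None and frame.pts >= target_pts:
                return frame
    return None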

user-74ef51 19 August, 2022, 12:18:00
from pathlib import Path
import numpy as np
import pandas as pd
import cv2
# This hack prevents av from freezing
cv2.imshow("temp", np.zeros((4,4,3), dtype=np.uint8))
cv2.waitKey(1)
cv2.destroyAllWindows()
import av

video_dir = Path("formated/rig_data/2022_03_04/019") # Formated video
video_dir = Path("sources/rig_data/2022_03_04/019")  # Or original video

with av.open(str(video_dir.joinpath("eye1.mp4"))) as video:
    eye1_pts = [
        packet.pts for packet in video.demux(video=0)
        if packet is not None and packet.pts is not None
    ]

eye1_video = av.open(str(video_dir.joinpath("eye1.mp4")))


def add_text(frame, text, pos):
    return cv2.putText(frame, text, pos, cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA)


pts_idx = 0
while True:
    pts = eye1_pts[pts_idx]
    eye1_video.seek(int(pts), stream=eye1_video.streams.video[0])

    eye1_frame = next(eye1_video.decode())
    if eye1_frame is not None:
        frame = np.array(eye1_frame.to_image())


    add_text(frame, f"pts_idx: {pts_idx}", (4, 40))
    add_text(frame, f"pts: {pts}", (4, 80))
    cv2.imshow("frame", frame)

    key = -1
    while key < 0 and cv2.getWindowProperty("frame", cv2.WND_PROP_VISIBLE):
        key = cv2.waitKey(1)
    if key & 0xFF == 27 or key == ord("q") or not cv2.getWindowProperty("frame", cv2.WND_PROP_VISIBLE):
        break
    elif key == ord("a"):
        pts_idx = max(pts_idx - 1, 0)
    elif key == ord("d"):
        pts_idx = min(pts_idx + 1, len(eye1_pts) - 1)
    elif key == ord("z"):
        pts_idx = max(pts_idx - 10, 0)
    elif key == ord("c"):
        pts_idx = min(pts_idx + 10, len(eye1_pts) - 1)

cv2.destroyAllWindows()
papr 19 August, 2022, 13:13:07

And by nothing happens, you mean that video.decode() does return None, correct?

user-74ef51 19 August, 2022, 13:19:04

No, it just returns the same frame many many times

papr 19 August, 2022, 13:20:18

Interesting. Could you share the video with [email removed]

user-74ef51 19 August, 2022, 13:23:31

Sure. I experimented a bit too, and adding the flag -g 1 seems to solve the issue: ffmpeg -i eye1.mp4 -vf format=yuv420p -g 1 eye1_compatible.mp4 but I don't understand what that flag does.

papr 19 August, 2022, 13:29:27

That flag defines the key frame interval. With an interval of 1, every frame is a key frame. So the issue that you are experiencing is definitely key frame related.
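
One way to sanity-check the key frame structure of a converted file is to look at packet.is_keyframe while demuxing, e.g.:

import av

with av.open("eye1_compatible.mp4") as container:
    flags = [
        packet.is_keyframe for packet in container.demux(video=0)
        if packet.pts is not None
    ]

# with -g 1 every packet should be a key frame
print(f"{sum(flags)} key frames out of {len(flags)} packets")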

user-74ef51 19 August, 2022, 14:11:54

Do you not have this problem?

papr 19 August, 2022, 14:17:20

I didn't have any issues with it in the past. But I will revisit the tutorial on Monday and make sure it works as expected for both of your shared videos

papr 22 August, 2022, 10:14:37

Seeking within h264 videos

user-3cff0d 25 August, 2022, 16:52:54

Hi @papr , I appear to be having an issue with the calibration sequence in my Pupil Labs Core command line pipeline. I have each component of PLC functioning in a way where I can feed in VR headset eye videos, pass through custom preprocessing steps, apply timestamped calibration points from HMD-Eyes, and get gaze data... but there is a strange accuracy problem that arises no matter how tight we can get the precision.

The image attached is a visualization of gaze data obtained with the HMD-Eyes calibration sequence, where the big blue dots are the calibration points and the smaller, colored dots are the calculated gaze locations associated with each calibration point. Each small colored dot represents one timestamp that was actually used for the calibration sequence. (Also, the two numbers above each large blue calibration point dot represent accuracy in degrees and precision in degrees, respectively, from top to bottom.)

Do you know why there might be such an accuracy issue, despite this being the calibration sequence itself? As I understand it, this should be highly accurate, since it's the actual calibration sequence.

Chat image

papr 26 August, 2022, 07:17:09

Pupil Core Calibration Pipeline

user-3cff0d 26 August, 2022, 16:59:45

Also, for HMD-Eyes, was there a particular reason you chose the calibration point locations that you did?

papr 29 August, 2022, 06:01:54

We felt like they would cover the HTC Vive FOV fairly well. I don't recall any other specific reason.

End of August archive