import av
import numpy as np
from tqdm import tqdm


def get_timing_info(file_path):
    # Open the source video and read its timing metadata.
    container_in = av.open(file_path, mode="r")
    video_in = container_in.streams.video[0]
    time_base_out = video_in.time_base
    num_frames = video_in.frames
    average_fps = video_in.average_rate

    # Output stream mirrors the input's rate, pixel format, and time base.
    container_out = av.open("vid_out.mp4", mode="w")
    stream = container_out.add_stream("libx264", rate=average_fps)
    stream.options["crf"] = "20"
    stream.pix_fmt = video_in.pix_fmt
    stream.width = video_in.width
    stream.height = video_in.height
    stream.time_base = video_in.time_base

    packet_pts_out = []
    pts_out = []
    dts_out = []
    frame_time_base_out = []
    relative_time_out = []
    count = 0
    for raw_frame in tqdm(container_in.decode(video=0), desc="Working.", unit="frames", total=num_frames):
        pts_out.append(raw_frame.pts)
        dts_out.append(raw_frame.dts)
        frame_time_base_out.append(raw_frame.time_base)
        relative_time_out.append(np.float32(float(raw_frame.pts * raw_frame.time_base)))
        # Record how the encoder re-stamps each frame's pts.
        for packet in stream.encode(raw_frame):
            packet_pts_out.append((raw_frame.pts, packet.pts))
            # container_out.mux(packet)
        count = count + 1

    container_in.close()
    container_out.close()
    return pts_out, dts_out, frame_time_base_out, relative_time_out, packet_pts_out
The source vid, just in case.
Gotta run, but I'll check in later. Thanks again, Pablo.
hey, is there any way to get the gaze location on a 2D scene in Unity? Everything I could find was about 3D scenes in VR.
I don't work with Plabs, but I'll try to help. I assume you're referring to a panorama. Is this a static image, or a movie? If it's static, then you might consider https://docs.unity3d.com/ScriptReference/RaycastHit-textureCoord.html . You will have to figure out the relationship between UV (texture) coordinates and image location, because they are dependent on a few settings. If you're unsure if your solution is accurate, you might try it out on a test texture with a few fixation targets on a solid color background.
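The conversion itself is just scaling plus (usually) a vertical flip. Here is a rough sketch of the math in Python, assuming no texture tiling/offset and a top-left image origin; treat it as a starting point, not the exact mapping for your setup:

def uv_to_pixel(u, v, image_width, image_height, flip_v=True):
    # Unity UVs have their origin at the bottom-left; most image tools index
    # from the top-left, hence the optional flip.
    if flip_v:
        v = 1.0 - v
    return u * image_width, v * image_height

# e.g. a ray hitting the centre of a 4096x2048 panorama
print(uv_to_pixel(0.5, 0.5, 4096, 2048))  # -> (2048.0, 1024.0)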
If it's a movie, then you can take the same approach, but will need to be careful to account for timing. Perhaps save out unity timestamps, pupil timestamps, and movie timestamps on every frame, so you can sync gaze to unity, and then unity to the movie.
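For the sync step, something along these lines would work once everything is on one clock. The per-frame log and names here are hypothetical: one row per rendered frame with its Unity time and movie time, and gaze samples already converted from pupil time into Unity time:

import numpy as np

def gaze_to_movie_time(gaze_unity_time, frame_unity_time, frame_movie_time):
    # Hypothetical per-frame log saved from Unity: frame_unity_time[i] and
    # frame_movie_time[i] describe what was on screen at rendered frame i.
    frame_unity_time = np.asarray(frame_unity_time)
    frame_movie_time = np.asarray(frame_movie_time)
    # Map each gaze sample to the most recently rendered frame.
    idx = np.searchsorted(frame_unity_time, gaze_unity_time, side="right") - 1
    idx = np.clip(idx, 0, len(frame_unity_time) - 1)
    return frame_movie_time[idx]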
Hi @user-89d824 Since your question was related to HMD-eyes, let's move it to the VR-AR channel to keep this tidy.
The exception you are showing in the logs is from NetMQ (a ZMQ client written in C#), and it occurs when you attempt several sends without receiving a response in between. In principle, both sockets need to go through the full send/receive cycle as part of the communication. In simpler words, Capture and Unity need to reply to each other before another signal is sent.
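To make the cycle concrete (sketched here with pyzmq against Pupil Remote rather than NetMQ/C#, but the rule is identical): every send on a REQ socket must be answered by a recv before the next send.

import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

remote.send_string("R")      # request: start recording
print(remote.recv_string())  # must read the reply before sending again

remote.send_string("r")      # request: stop recording
print(remote.recv_string())  # complete the cycle again

# Two sends in a row without the recv in between raise a "current state"
# error on the REQ socket - the pyzmq counterpart of the NetMQ exception above.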
I recommend that you look at SetupOnSceneManagement.cs in the Scene Management Demo. It shows how the gazeCtrl gameObject is enabled once the new scene is loaded. The SubscriptionsController shows how the connection is made in OnEnable() and torn down in OnDisable(), so the NetMQ (ZMQ) sockets are disconnected and reconnected properly between scenes.
Why does it fail sometimes for you? Without context it is hard to say, since I don't know your configuration, which SceneManager you use, or how you check for the button click. My guess is that the button click is checked in an Update function, and since the RecordingController also checks in Update whether the Boolean has changed, you need a couple of frames for the recording to actually stop. The scene manager will load the next scene ASAP, so you might be changing the scene before having received the response from Capture.
In general, you can wait and check whether the recording has stopped with IsRecording, then disable the gameObjects to ensure that the sockets close, and then change the scene and re-enable them to reconnect.
Thank you for your reply.
After I posted that question, the error appeared during the first attempt at recording when I ran my next experiment... that was new.
I'm currently working on something else but will drop a question (or two) again if I need more help.
Thanks again!
Hey @papr , have you moved the postHocVr gazer to the dev branch?
yes
I stepped away from running from source for a bit, and now see that the Py 3.6 deprecation issue is preventing me from running the post-hoc-vr-gazer branch.
Thanks
I recommend using the latest develop branch + creating a new virtual environment based on the requirements.txt file
Trying with 3.9
Traceback (most recent call last):
File "D:\Github\pupil\pupil_src\main.py", line 36, in <module>
from version_utils import get_version
File "D:\Github\pupil\pupil_src\shared_modules\version_utils.py", line 37, in <module>
ParsedVersion = T.Union[packaging.version.LegacyVersion, packaging.version.Version]
AttributeError: module 'packaging.version' has no attribute 'LegacyVersion'
@papr Looks like they removed packaging.version.LegacyVersion
I changed to ParsedVersion = T.Union[packaging.version.Version, packaging.version.Version]
and it runs. Hope that doesn't screw anything up
Eww. Not working, but for unrelated reasons. I'll take this over to software-dev
Your develop branch is not up-to-date. Make sure to pull the latest changes and try again
The fix for this issue has been added already: https://github.com/pupil-labs/pupil/blob/develop/requirements.txt#L14
HRUM. I thought I was up to date. Thanks!
@papr
FileNotFoundError: Could not find module 'C:\Users\gabri\anaconda3\envs\PupilLabs_2\lib\site-packages\pupil_apriltags\lib\apriltag.dll' (or one of its dependencies). Try using the full path with constructor syntax.
Pupil_apriltags 1.0.4 has been installed
Try pip install pupil-apriltags -U
It looks like apriltag.dll exists in that location.
...so, it must be missing a dependency. Requirements.txt has been installed.
Just going to go ahead and comment out the related lines in player.py
@papr Getting a few of these with a file:
is this from an unmodified recording?
is this something I can ignore?
yes
This is about 50% through finding pupils.
these are decoding errors, prob. safe to ignore
thx. So, these are from the same file I have been trying to transcode
Well, same pupil folder. The errors here are related to the eye video, not to the world video.
FWIW, I've given up on my efforts to view the optic flow transcode in pupil player. It would be nice, but... oh well.
I'll assume gaze timings are consistent relative to the start of the video and will overlay gaze using my own code, independent of PPlayer. My only fear is that temporal mismatches will accrue over the 20-minute recording.
That is, the gaze overlay won't align with the visual information that was visible at the time the eye movements were made.
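The matching itself should be straightforward; roughly like this, assuming a Pupil Player export with gaze_positions.csv plus the recording's world_timestamps.npy (and a made-up resolution). If the transcode dropped or duplicated frames, this mapping is exactly where the drift would come from:

import numpy as np
import pandas as pd

# Assumed inputs: world_timestamps.npy from the recording folder and
# gaze_positions.csv from a Pupil Player export.
world_ts = np.load("world_timestamps.npy")
gaze = pd.read_csv("gaze_positions.csv")

# For each gaze sample, find the first world frame at or after its timestamp.
idx = np.searchsorted(world_ts, gaze["gaze_timestamp"].to_numpy())
gaze["frame_index"] = np.clip(idx, 0, len(world_ts) - 1)

# norm_pos_* is normalized with the origin at the bottom-left, so flip y
# before drawing on the video frame.
width, height = 1280, 720  # world video resolution of this recording (assumed)
gaze["x_px"] = gaze["norm_pos_x"] * width
gaze["y_px"] = (1.0 - gaze["norm_pos_y"]) * height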
Hi. I did a sample test using one of the demo scenes in the starter project with Pupil Capture, and I found that the recording contained data with a sampling frequency higher than the advertised 200 Hz. For many readings it was ~360 Hz. Is this possible for the hardware, or is it likely an error?
Hi @user-d0c376. The VR Add-on has free-running eye cameras, which has two implications for sampling frequency:
Pupil data - contains datums from both the left and right eye. If you don't filter for a specific eye, you can end up with more samples than you might initially have expected.
Gaze data - we employ a pupil matching algorithm that tries to match eye image pairs for gaze estimation. Read more about that here: https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching
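If it helps to sanity-check this, here is a minimal Python sketch, assuming a Pupil Player export and its pupil_positions.csv column names (pupil_timestamp, eye_id); each eye on its own should come out close to 200 Hz:

import pandas as pd

# Assumed input: pupil_positions.csv from a Pupil Player export.
pupil = pd.read_csv("pupil_positions.csv")

# Compute the effective sampling rate per eye instead of over the pooled data.
for eye_id, group in pupil.groupby("eye_id"):
    ts = group["pupil_timestamp"].sort_values()
    duration = ts.iloc[-1] - ts.iloc[0]
    rate = (len(ts) - 1) / duration if duration > 0 else float("nan")
    print(f"eye {eye_id}: {rate:.1f} Hz")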
Hello. The L/R diameter_3d values in our data output alternate but are rarely reported for the same timestamp. In other words, L eye timestamps are inconsistent with the R eye timestamps. Is there any way to set it up such that L/R diameter_3d values are both given for each sample with the exact same timestamp? More or less, we are wondering if it's possible to make both eye cameras capture data at the exact same time, which would result in the two circled numbers (and each subsequent pair of rows) being the identical timestamp? This is from the HTC Vive Binocular Add-on hardware running through pupil capture. Thanks!
The eye cameras are free running, i.e. not perfectly in sync. Read more about the data matching algorithm in my message above.
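Since the cameras can't be forced to capture at exactly the same instants, the usual workaround is a nearest-timestamp pairing in post-processing. A sketch, assuming the pupil_positions.csv export, pandas, and Pupil Core's eye_id convention (0 = right, 1 = left); the 5 ms tolerance is an arbitrary choice:

import pandas as pd

pupil = pd.read_csv("pupil_positions.csv")
cols = ["pupil_timestamp", "diameter_3d"]
left = pupil[pupil["eye_id"] == 1][cols].sort_values("pupil_timestamp")
right = pupil[pupil["eye_id"] == 0][cols].sort_values("pupil_timestamp")

# Pair each left-eye sample with the nearest right-eye sample in time,
# dropping pairs more than 5 ms apart.
paired = pd.merge_asof(
    left,
    right,
    on="pupil_timestamp",
    direction="nearest",
    tolerance=0.005,
    suffixes=("_left", "_right"),
)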
Is there any impact of putting high values of gravity and mass into the virtual object?
Does anyone have experience with preprocessing and plotting pupil diameter from the VR eye tracker?
Hey! Recommend checking out this message: https://discord.com/channels/285728493612957698/285728493612957698/869558681363177472
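And in case a minimal starting point is useful, a rough preprocessing/plotting sketch; column names assume the pupil_positions.csv export, and the 0.6 confidence cutoff and smoothing window are only illustrative choices, not a recommended pipeline:

import pandas as pd
import matplotlib.pyplot as plt

pupil = pd.read_csv("pupil_positions.csv")

# Keep one eye, drop low-confidence samples (blinks, poor detections), then smooth.
eye = pupil[(pupil["eye_id"] == 0) & (pupil["confidence"] >= 0.6)].copy()
eye["t"] = eye["pupil_timestamp"] - eye["pupil_timestamp"].iloc[0]
eye["diameter_smooth"] = eye["diameter_3d"].rolling(window=31, center=True).median()

plt.plot(eye["t"], eye["diameter_3d"], alpha=0.3, label="raw")
plt.plot(eye["t"], eye["diameter_smooth"], label="median-smoothed")
plt.xlabel("time (s)")
plt.ylabel("pupil diameter (mm)")
plt.legend()
plt.show()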
@nmt Thanks!
I can double check that, but I don't believe it was.
My default is always to opt for the larger recording
Sorry - you sent the message to an account I don't use much anymore.
I mostly use this one. bok_bok is on vacation.
Hope I'm asking in the right chat room. I'm looking to acquire the HTC Vive Pro 2 system and integrate eye tracking within the hardware. Is your HTC Vive add-on compatible with the Vive Pro 2 system? (https://docs.pupil-labs.com/vr-ar/htc-vive/)
Hi @user-31ef4c! This is the right channel. Unfortunately, our existing Vive add-on is not compatible with the Vive Pro 2. We can offer a de-cased version of our Add-on with cameras and cable tree for custom prototyping. If you are interested, please send an email to [email removed]