Hi, I request a consult regarding eye segmentation
Hi @user-97f0f6 If you are interested in having a research consultancy, please visit our webpage at https://pupil-labs.com/products/support/ to learn more and to schedule a consultation.
For the HTC Vive add-on: we are using it to build a VR headset that detects the pupil position. Since we don't have a world camera, we can't do the calibration. In this case, is the accuracy from the add-on still good enough? Can we rely on the confidence value to know how accurate the data is? Am I understanding this right?
Hi, in the VR use-case, you would need to perform a "HMD Calibration". That means that the VR application displays reference points to the subject and sends these to Pupil Capture. The HMD-eyes project provides a Unity plugin that implements this procedure. https://github.com/pupil-labs/hmd-eyes/
Without a calibration, you only get pupil data, monocular data within the independent eye cameras' coordinate systems.
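On the confidence question above: each pupil datum carries a `confidence` value between 0 and 1, and a common approach is to discard low-confidence samples before analysis. A minimal sketch, where the sample data and the 0.6 cutoff are illustrative assumptions, not values taken from this thread:

```python
# Hypothetical pupil datums: each sample has a "confidence" in [0, 1]
# and a "norm_pos" in the eye camera's normalized coordinate system.
samples = [
    {"timestamp": 0.00, "confidence": 0.95, "norm_pos": (0.48, 0.52)},
    {"timestamp": 0.01, "confidence": 0.20, "norm_pos": (0.10, 0.90)},  # likely bad detection
    {"timestamp": 0.02, "confidence": 0.88, "norm_pos": (0.49, 0.51)},
]

CONFIDENCE_THRESHOLD = 0.6  # illustrative cutoff; tune for your data

good = [s for s in samples if s["confidence"] >= CONFIDENCE_THRESHOLD]
print(len(good))  # 2
```

Note that confidence tells you how certain the detector is about the pupil fit; it is not a direct measure of gaze accuracy, which you can only quantify against calibrated reference targets.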
And if we want to collect gaze data, do I have to have a world camera?
Folks, I'm computing optic flow on a world vid, and I would like the output to match the format of the original world vid so that I can rename it world.mp4 and view it in pupil player. I am using pyAV. Can you help me ID the proper encoding settings?
Relevant code:
```python
import os
import av

container_out = av.open(os.path.join(self.video_out_path, video_out_name), mode="w")
stream = container_out.add_stream("libx264", rate=average_fps)
stream.options["crf"] = "0"  # lossless H.264

for frame in container_in.decode(video=0):
    # Here is where the optic flow is calculated. This is pseudocode.
    # The full code is visible in the repo above. It's concise.
    frame_out = compute_optic_flow(frame)
    for packet in stream.encode(frame_out):
        container_out.mux(packet)

# Flush the encoder so buffered frames are written, then finalize the file
for packet in stream.encode():
    container_out.mux(packet)
container_out.close()
```
Hi! Are you able to view the generated file with ffplay or VLC?
Hi! Yep. I can share a video if you would like.
Have you tried renaming and adding it to Player already? If so, what is the issue that you are encountering?
Thanks for asking. I'm being a bit proactive - my other work with eye videos has taught me how sensitive the framework can be to encoding/decoding settings - small issues will yield differences in frame counts, etc. I was a bit wary because one of my recent computations increased the video size by <2x. That may be due to the suitability of the video data for compression. Still, I had assumed that, because you folks also use pyAV, the settings were straightforward. If that is true, then I can simply use the same settings and rule out the possibility that they are introducing issues.
I should add that, ideally, the encoding/decoding process would be lossless.
The only possible issue that I see is that you do not set any packet/frame pts. It is possible that Player handles this, though, if you delete the corresponding lookup.npy file before opening the recording in Player.
Hurmn. Unfortunately, I'm not knowledgeable enough about video encoding to understand the issue here. I will eventually want to reprocess the video in a gaze-contingent manner. If you have any pointers, even if it's to third-party material or just Google search terms, I'm willing to see what I can learn.
I am still not sure what the issue is exactly. What happens if you rename the file to world.mp4, replace it in a recording, and open the recording in Player?
Ok, it loaded in without issue :).
Let me try this. You may be right that I'm creating an issue where there should be none. I'll let you know how it goes.
This will take some time - I've been processing small chunks of the video before.
In the meantime, you can use this script to generate a Player-compatible recording for the smaller generated files https://gist.github.com/papr/bae0910a162edfd99d8ababaf09c643a
Still having an issue here. I'm somewhat new to video decoding/encoding, so bear with me. My goal is to replace the world.mp4 with a modified version, and retain the mapping from gaze samples to (now modified) video frames. My assumption is that the frame rate of world.mp4 is dynamic, and so if I'm decoding frames from the vid, processing them, and then adding them to a new av container & stream, then I need each new frame to have the same pts and time_base as the original frame that I decoded and then modified. However, when trying to do that, I got this error:
"Application provided invalid, non monotonically increasing dts to muxer"
That's strange. Any idea why a Pupil Labs video would have invalid, non-monotonically increasing dts? ...or do you have another idea?
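That muxer error means successive packets' dts values in a stream must strictly increase; duplicated or reordered timestamps trigger exactly that message. A small checker for the invariant (my own helper, not part of pyAV):

```python
def first_non_monotonic(timestamps):
    """Return the index of the first timestamp that fails to increase, or None."""
    for i in range(1, len(timestamps)):
        if timestamps[i] <= timestamps[i - 1]:
            return i
    return None

print(first_non_monotonic([0, 33, 66, 66, 99]))  # 3  (duplicate at index 3)
print(first_non_monotonic([0, 33, 66, 99]))      # None
```

One common way this arises when copying pts between streams: if the output stream's time base is coarser than the input's, distinct source timestamps can collapse onto the same tick after rescaling, producing duplicates like the one above.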
Why only smaller files? Memory issues?
I mean the small chunks that you were referring to
Thanks 🙂