I downloaded native data and I noticed that, compared to other data, the manifest.json and manifest.json.crc files are missing. Does this affect my data in any way?
Hi @user-f0b6e1 ! Those are only internal files to guarantee data integrity when uploading to Cloud. You can completely ignore them.
(for a Neon Recording)
Hi everyone. I am currently using Neon Player and have a few questions about its usage. Surface definition: I want to define a screen as a surface. While the player does recognize the AprilTags, it is very laggy when I try to move the position of the markers, or it does not work at all. Do I have to change something with regard to height and width? I have already downloaded the newest version, so it cannot be that. Events: I have communicated certain timestamps to the eye tracker. However, I can only see a given timestamp when I am in that exact time frame. Is there another way to jump between events other than the annotation timeline?
Hi @user-85f075 , thanks, we received your email.
That is not exactly lag. When you edit a surface and move a corner, the system needs to re-process the Surface Definition and save it to disk. You might notice a brief message in the viewport that says "Surfaces changed. Saving to file." This process just takes a moment whenever you move a corner.
Hi @user-85f075 , could you share a video demonstrating the lagginess that you experience? You can share this via DM or with [email removed]
With respect to Events, when you activate the Annotation Player plugin, you will see a high-level overview of the Events in the Annotations timeline. Do you mean that you only see the Event name when you are at the exact frame?
Thank you for your quick reply. Yes, I do not have an overview of the events (or rather, I can't find one).
You may need to pull up the timeline. You can drag the divider at the bottom of the viewport.
There is a little tab that allows this.
Is it possible to programmatically pull in the eye videos using the v2 API?
Hi @user-3ff3c1 , the Pupil Cloud API does not currently provide that level of granularity. It is necessary to pull the whole recording file over the API and then extract the eye video from the resulting ZIP file.
If you'd like to see this added, please make a 💡 features-requests post.
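As a rough sketch of that workflow, you could download the recording archive and extract the eye videos with Python's zipfile module. The download URL and the filename filter below are placeholders/assumptions; the exact endpoint and file names depend on your workspace, recording, and device:
```python
import io
import zipfile

import requests

# Placeholders: use the Native Recording Data download URL for your workspace and
# recording, and your own API token.
DOWNLOAD_URL = "https://api.cloud.pupil-labs.com/v2/..."
API_TOKEN = "your-api-token"

response = requests.get(DOWNLOAD_URL, headers={"api-key": API_TOKEN})
response.raise_for_status()

# Extract only files that look like eye-camera videos; adjust the filter to match
# the actual file names inside your recording archive.
with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
    for name in archive.namelist():
        if "eye" in name.lower() and name.lower().endswith(".mp4"):
            archive.extract(name)
```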
Thanks for the response. I'm currently pulling the whole recording ZIP, but the only video in the ZIP is the scene recording; I don't see videos of the eyes themselves.
Is there a way to use an API to automatically export recordings from pupil_player? We have the 000 folder and can open pupil_player to export, but we have hundreds of files and want to automate that process instead of opening the GUI. I can't find any documentation on the website. Thanks for any help!
Hi @user-d34f33. Please see this message for reference on batch exporting data: https://discord.com/channels/285728493612957698/285728493612957698/1254146039515189278
Hi @user-3ff3c1 , it sounds like you are using the Timeseries CSV endpoint. Instead, you want to use the Native Recording Data endpoint.
That is the endpoint we're using. We only receive one MP4 file for each recording: the video that the glasses record. We're looking for the video of the eyes themselves, which appears as a smaller video when watching recordings on https://cloud.pupil-labs.com/
Hi @user-3ff3c1 , apologies for the delay.
If you just want the eye videos, then you can use this endpoint:
https://api.cloud.pupil-labs.com/v2/workspaces/{WORKSPACE_ID}/recordings/{recording_id}/files/{filename}
where filename can be replaced with eye.mp4.
You could wrap it up in a Python function like this:
```python
import requests

# WORKSPACE_ID and API_TOKEN must be defined elsewhere (e.g. loaded from your environment).

def get_individual_file(recording_id, filename):
    url = (
        f"https://api.cloud.pupil-labs.com/v2/workspaces/{WORKSPACE_ID}"
        f"/recordings/{recording_id}/files/{filename}"
    )
    response = requests.get(url, stream=True, headers={"api-key": API_TOKEN})
    save_path = f"{filename}"
    chunk_size = 128
    # Stream the response to disk in chunks rather than loading it all into memory.
    with open(save_path, "wb") as fd:
        for chunk in response.iter_content(chunk_size=chunk_size):
            fd.write(chunk)
    return response.status_code
```
And then call it as follows:
```python
def get_eye_video(recording_id):
    return get_individual_file(recording_id, "eye.mp4")
```
Thank you very much!! Tried that out and it works
Hello, I am using the latest Pupil Labs Realtime API for Python. I am enjoying the recently added audio support. Upon integrating audio support into my code base, I couldn't find a reliable method of detecting when the microphone is muted in the Companion mobile application. Thus, I implemented a timeout on the audio queue to detect this case.
• Am I correct that there isn't a way to reliably detect that the microphone is muted in the Companion mobile app?
• Looking at models.py, it appears as though if the world sensor is connected, the audio sensor's connected status is unconditionally set to True.
• If my assumptions are true, can a check be added to the API that reports the microphone status, or can the connected status of the audio sensor be updated accordingly?
Hi @user-937ec6 , there is no way to know if the microphone is muted via the standard API routines. Rather, if you try to request audio when the microphone is disabled, it should return an error. If you would like a change to the API, feel free to open a 💡 features-requests post.
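In the meantime, if you want to approximate mute detection on the client side with the timeout approach you described, a rough sketch could look like this (the queue, threshold, and function names here are illustrative, not part of the Realtime API):
```python
import queue

# Illustrative only: a queue that your audio-receiving code fills with incoming frames.
audio_queue: queue.Queue = queue.Queue()

MUTE_TIMEOUT_S = 2.0  # assumed threshold; tune to your audio frame rate


def next_audio_frame_or_none():
    """Return the next audio frame, or None if no audio arrived within the timeout."""
    try:
        return audio_queue.get(timeout=MUTE_TIMEOUT_S)
    except queue.Empty:
        # No audio within the timeout window; treat the microphone as muted/disabled.
        return None
```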
To add a bit to my colleague's response: the audio is multiplexed on the video RTSP stream, but in our client we generate a separate stream, since streaming only audio is a valid scenario.
The microphone status is not unconditionally set to True. However, since audio does not have its own RTSP stream and the status message does not report the mic's state, we check the SDP of the RTSP stream to determine whether it contains an audio track.
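For reference, detecting an audio track in an SDP description comes down to looking for an audio media section (m=audio). A minimal sketch, assuming you already have the SDP text from your RTSP DESCRIBE response:
```python
def sdp_has_audio(sdp_text: str) -> bool:
    """Return True if the SDP description contains an audio media section (m=audio)."""
    return any(line.strip().startswith("m=audio") for line in sdp_text.splitlines())
```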