There's a one-second delay between my Azure camera and Pupil Core (I noticed Pupil Capture is 1 second ahead of the Azure camera). I am using the Network API to send commands.
It's inevitable. The process of starting any recording is going to take some amount of time, and exactly how long it will take is going to be inconsistent and impossible to predict precisely. This is just the nature of multitasking operating systems, and there really isn't any way around it. You can try to time the recording commands so they align, and you might get it close enough for your needs, but it will never be consistently, perfectly aligned.
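If you need tighter alignment, a common approach is to measure the clock offset and align the two recordings post hoc rather than relying on both start commands landing at the same instant. Here is a minimal sketch with pyzmq and Pupil Remote; it assumes Pupil Capture runs on the same machine with Pupil Remote on its default port 50020, so adjust the address for your setup:
```python
import time
import zmq

# Connect to Pupil Remote (default port 50020; adjust the IP if Capture runs elsewhere).
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Measure the offset between your local clock and Pupil time so you can align
# the two recordings post hoc, instead of relying on both start commands
# arriving at the same moment.
t_before = time.monotonic()
pupil_remote.send_string("t")                  # ask Pupil Capture for its current time
pupil_time = float(pupil_remote.recv_string())
t_after = time.monotonic()
clock_offset = pupil_time - (t_before + t_after) / 2  # local clock + offset ≈ Pupil time

# Start the Pupil recording, then start the Azure recording and log its start
# time with the same local clock (time.monotonic()) used above.
pupil_remote.send_string("R")
print(pupil_remote.recv_string())              # "OK ..." on success
```
With the offset recorded, you can shift the Azure timestamps into Pupil time after the fact instead of chasing a perfectly simultaneous start.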
Any help would be greatly appreciated!
I have a problem accessing the Core front camera using pyuvc. Sometimes I get a "TimeoutError" that stops the execution of the file. I tried to comment out line 705 (I've attached an image of the error), but since it's a .pyx file I can't do that. I want to use the eye and front cameras without this error stopping the execution. How can I solve this? The most interesting thing is that I only get this error when I access the front camera, not the eye cameras. Just in case, I installed pyuvc following the steps on GitHub (https://github.com/pupil-labs/pyuvc).
Rather than commenting out the line that raises the exception, you may want to catch the exception in 3d_gaze_mapping_blink.py:314 by wrapping that function call in a try/except block.
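For example, a minimal sketch of that try/except wrapper, assuming your script grabs frames with pyuvc's Capture.get_frame(); exact attribute names can differ slightly between pyuvc versions, so adapt the device selection and frame handling to your code:
```python
import uvc  # pyuvc

# Pick the scene ("world") camera; adjust the name filter for your hardware
# (Pupil Core world cameras typically report "... ID2", eye cameras ID0/ID1).
devices = uvc.device_list()
world_uid = next(d["uid"] for d in devices if "ID2" in d["name"])
cap = uvc.Capture(world_uid)
# ...set cap.frame_mode here the same way you already do for the eye cameras...

while True:
    try:
        frame = cap.get_frame()  # the call that can raise TimeoutError
    except TimeoutError:
        # No frame arrived in time; skip this iteration instead of letting
        # the whole script die, then try again on the next loop.
        continue
    # ...process frame.img / frame.gray as before...
```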
thank you!
Hello, I keep getting an "unclosed client session" error when attempting to connect to the glasses and start a recording using the API code. Here is a screenshot. Can you help me with this? Thank you so much.
Hi, @user-86163a 👋🏽 - would you be able to share the code you're using? Also, can you tell us a little more about your network configuration (e.g., wifi at home, university network, work/corporate network, etc)? Lastly, which glasses are you using?
I tried using both a university network and personal hotspot, with both I get the error of an unclosed session and the recording never starts. I am using the Neon glasses.
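For reference, the "unclosed client session" warning usually means the HTTP session to the device was never closed. Here is a minimal sketch with the simple client from the pupil-labs-realtime-api package that starts and stops a recording and explicitly closes the device; adapt it to your own script:
```python
from pupil_labs.realtime_api.simple import discover_one_device

# Discover the Neon Companion device on the local network.
device = discover_one_device(max_search_duration_seconds=10)
if device is None:
    raise SystemExit("No device found; check that the phone and PC are on the same network")

try:
    recording_id = device.recording_start()
    print(f"Started recording {recording_id}")
    # ...run your task here...
    device.recording_stop_and_save()
finally:
    # Closing the device tears down the underlying HTTP session, which is
    # exactly what the "unclosed client session" warning is complaining about.
    device.close()
```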
Fixed this!
Is there any annotated pupil dataset captured from the Invisible, Core, and Neon devices? @user-53a8c4
Hi @user-25fb27! What do you mean by annotated?
Hi, I just wanted to follow up in this question since I didn't get a response: https://discord.com/channels/285728493612957698/446977689690177536/1124066479294398536
Essentially, I am looking for a way to use the Cloud API to download the files corresponding to "Download Timeseries Data + Scene Video" in the web interface. How do I go about that using the Cloud API?
Hi @user-5fd308! Sorry for the delayed response! Yes, there is an endpoint for it!
https://api.cloud.pupil-labs.dev/v2/workspaces/<workspace_id>/recordings:raw-data-export?ids=<recording_id_1>&ids=<recording_id_2>
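For example, a minimal sketch that downloads the export with Python and requests; the "api-key" header name and the placeholder IDs are assumptions based on the public Cloud API docs, so substitute your own token, workspace, and recording IDs:
```python
import requests

API_TOKEN = "<your-cloud-api-token>"   # created in your Pupil Cloud account settings
WORKSPACE_ID = "<workspace_id>"
RECORDING_ID = "<recording_id>"

url = (
    f"https://api.cloud.pupil-labs.dev/v2/workspaces/{WORKSPACE_ID}"
    f"/recordings:raw-data-export?ids={RECORDING_ID}"
)

# The token goes in the "api-key" header (assumption; adjust if your account
# uses a different auth scheme).
response = requests.get(url, headers={"api-key": API_TOKEN}, stream=True)
response.raise_for_status()

# The endpoint returns a zip archive with the timeseries data + scene video.
with open("raw-data-export.zip", "wb") as f:
    for chunk in response.iter_content(chunk_size=1 << 20):
        f.write(chunk)
```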
Hi 💻 software-dev, I'm looking for any information on using C++ to interact with the Neon real-time API. Is there any documentation for this, or anyone who has experience using C++ as opposed to Python? Python may not be sufficient for our needs.
Hi @user-ac085e! We don't have any documentation on using C++ with Neon's realtime API. It would be worth reading this section of the docs which explains what goes on 'under the hood', or in other words, what we've built our Python client around: https://pupil-labs-realtime-api.readthedocs.io/en/stable/guides/under-the-hood.html#. That said, may I ask why Python may not be sufficient for your needs?
Thank you, I will check it out. It is mostly due to Python not being a real-time language; we are concerned that using Python with the API to connect to our simulators will not offer accurate or high-enough-resolution timescales.
Bear in mind that our Python Client is just a client. It doesn't really handle the heavy lifting. For example, RTSP is used for real-time streaming of gaze data. What functionality of the API do you need for integration with your simulators?
My simulator team is going to reach out via email to discuss further, as I do not have a deep enough understanding of their concerns.
You of course have to insert the corresponding workspace and recording IDs!
Dear Pupil team, I just updated my Pupil Player to v3.5.1. Unfortunately, when exporting my videos, there is no fixations.csv as there was in the software version I used before. Can you please help me find/create/export the fixations.csv? Thanks in advance!
Please double-check that the Fixation Detector plugin is loaded; otherwise the fixations export will be empty.
I'm getting this error message when using the software. After Googling, my understanding is that it is caused by VPN software modifying Winsock. However, I'm currently visiting family in China and a VPN is a must-have, so resetting Winsock is a no-go. Is there anything I could do on the Pupil or Python end to make it work? (And reversible, too, as I will return to the U.S. in three weeks.) Thanks!
Bad address (C:\projects\libzmq\src\epoll.cpp:100)
Hey @user-5346ee 👋. Would you be able to confirm what software you're using?