Is there a way to bypass the connection requirement or trick the companion device into having an IP? My goal is for my app to be self-contained and usable on the go without a dedicated connection.
Hi @user-d801e5 , will you be carrying a personal cellphone with you?
Ideally not. Are you suggesting using a hotspot?
Yes, this is one solution; the hotspot would not need Internet access. If that's not applicable in your case, I can check with the team.
The context is that we would want to run this data viz program out in the field, and we want to minimize the number of components.
Hi @user-d801e5 , the team is now aware of your request. We could have a response to you by next week about this.
Happy to give more details about the project as needed
Hi, I had a question. I am using the Pupil Labs software, more specifically Pupil Capture. I am using a Raspberry Pi and trying to stream the feeds to Pupil Capture, but I am running into an issue where Pupil Capture is unable to detect my cameras. I was wondering if you had any suggestions on how I can do this. Thank you!
Hi @user-28b638 , apologies for the delay.
May I ask if you are using the pupil-video-backend? Or, do you have the Pupil Capture software running on the Raspberry Pi?
Short question: how is the bounding box for the face calculated in the face enrichment? There is a p1 x and p1 y for the starting point (which corner does that correspond to? top right?) and an ending point p2 x and p2 y [px]. Which points do these correspond to? Does the enrichment just assume symmetry? So no facial marks for the bounding box, and only for the facial landmarks such as eyes/mouth/nose?
Hi @user-5ab4f5!
The output of the face mapper includes the boundaries of the box (i.e., p1 x/y = x/y-coordinates of the starting point of the bounding box rectangle, and p2 x/y = x/y-coordinates of the ending point of the bounding box rectangle), along with the coordinates of the main facial landmarks. This data is provided in pixels in the coordinate system of the forward-facing camera.
In terms of computation, under the hood the tool leverages the RetinaFace algorithm. You can find more details here: https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/face-mapper/#face-mapper
May I ask what's your intended analysis?
@user-480f4c I simply wanted to plot the face coordinates myself in Python: plotting the face boundary, and then the eye, mouth, and nose coordinates within it.
thanks for clarifying @user-5ab4f5! This should be straightforward. In fact, we have a tutorial on our Alpha Lab page that maps gaze data on individual facial landmarks and draws the bounding boxes around the detected faces, along with areas of interest around each facial landmark: https://docs.pupil-labs.com/alpha-lab/gaze-on-face/
Specifically, for rendering the box around the face, this function might be helpful: https://github.com/pupil-labs/gaze-on-facial-landmarks/blob/main/src/pupil_labs/gaze_on_facial_landmarks/map_on_landmarks.py#L15
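If it helps, here is a minimal sketch of that plotting step using OpenCV. The file names and the column names ("p1 x [px]", "p1 y [px]", etc.) are assumptions based on the naming in your question and the enrichment export; double-check them against your actual CSV.

```python
# Minimal sketch: draw a face bounding box from the Face Mapper output.
# Assumes a CSV with columns "p1 x [px]", "p1 y [px]", "p2 x [px]", "p2 y [px]"
# and a scene-camera frame saved as an image; adjust names to your export.
import cv2
import pandas as pd

faces = pd.read_csv("faces.csv")        # Face Mapper export (assumed name)
frame = cv2.imread("scene_frame.png")   # a frame from the scene camera

row = faces.iloc[0]
p1 = (int(row["p1 x [px]"]), int(row["p1 y [px]"]))  # one corner of the box
p2 = (int(row["p2 x [px]"]), int(row["p2 y [px]"]))  # the opposite corner

# Coordinates are in scene-camera pixels, origin at the top left.
cv2.rectangle(frame, p1, p2, color=(0, 255, 0), thickness=2)
cv2.imwrite("frame_with_face.png", frame)
```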
Thank you so much @user-480f4c !
let me know if you need any further clarifications on the code - I'd be happy to help!
Hey as an update for this, I was reading the under the hood doc, which says "The necessary connection information is made available via the Sensor model as part of the Get Current Status and Websocket API," regarding the streaming API. Is there a code example of how to do this (i.e. without the python package)?
Hi @user-d801e5 , can you rephrase the question? I am not sure I follow. You want to stream the data over WebSockets?
I'm happy to stream over any protocol, I just want to understand how the API is operating so I can use it in Kotlin.
I see. At its core, it is RTSP and RTP/RTCP. The WebSocket endpoint for device status is not necessary for receiving the data.
Rather, you use a TCP socket to initiate the connection with the RTSP server and then send RTSP commands, such as DESCRIBE, SETUP, and PLAY. During this process, you provide the ports of two UDP sockets that you have bound; the RTP and RTCP packets, carrying the data and timing info, are then sent over those sockets.
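For concreteness, here is a rough Python sketch of that handshake (not official code). The port 8086 and the ?camera=gaze path follow the Under the Hood documentation, but verify them against the connection info your device reports; a production client would also parse the SDP and handle replies that span multiple packets.

```python
# Sketch of the RTSP handshake: DESCRIBE -> SETUP -> PLAY over a TCP socket,
# with RTP/RTCP arriving on two UDP sockets. IP, port, and path are assumptions.
import socket

DEVICE_IP = "192.168.1.42"  # placeholder; use your device's address
RTSP_PORT = 8086            # RTSP port per the Under the Hood docs
URL = f"rtsp://{DEVICE_IP}:{RTSP_PORT}/?camera=gaze"

# Bind UDP sockets for RTP (data) and RTCP (timing). Note: many servers
# expect an even RTP port with RTCP on the next odd port, so a robust client
# binds the pair accordingly rather than taking arbitrary ephemeral ports.
rtp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rtcp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rtp_sock.bind(("", 9000))
rtcp_sock.bind(("", 9001))

ctrl = socket.create_connection((DEVICE_IP, RTSP_PORT))

def request(lines):
    """Send one RTSP request and return the (single-packet) reply."""
    ctrl.sendall(("\r\n".join(lines) + "\r\n\r\n").encode())
    return ctrl.recv(4096).decode()

print(request([f"DESCRIBE {URL} RTSP/1.0", "CSeq: 1", "Accept: application/sdp"]))

reply = request([
    f"SETUP {URL} RTSP/1.0",
    "CSeq: 2",
    "Transport: RTP/AVP;unicast;client_port=9000-9001",
])
# Reuse the Session header from the SETUP reply for PLAY.
session = next(
    line for line in reply.splitlines() if line.lower().startswith("session")
).split(":", 1)[1].split(";")[0].strip()

print(request([f"PLAY {URL} RTSP/1.0", "CSeq: 3", f"Session: {session}"]))

# RTP packets now arrive on rtp_sock: a 12-byte RTP header, then the payload.
data, _ = rtp_sock.recvfrom(2048)
print(f"first RTP packet: {len(data)} bytes")
```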
We don't have a Kotlin example to share, but okhttp is also not relevant for the above process. The Python package and its Under the Hood documentation would be the de facto place to reference. You can also reference our Unity C# implementation, which is a bit more trimmed down. Please note that the Unity code uses RTSP over WebSockets, which is not necessary in your case, but the process for initiating the RTSP session is what you want to check.
I have been trying to use the okhttp package to connect to the API over WebSocket, but it seems I need more than just that to stream the data, according to the documentation
Happy to provide more details
Okay thank you I will take a look
This might be a silly question, but I am having trouble working out which path(s) in /api I need. I think /api/status is relevant, but I'm not sure how to use what it's returning (https://pupil-labs.github.io/realtime-network-api/#/status/get_status)
Hi @user-d801e5 , please see this message (https://discord.com/channels/285728493612957698/1395347903488200805/1395691362879541288) for a Kotlin implementation of a RTSP client for Neon's gaze signal.
Please note that it is not official code and is only lightly tested, but should be enough to get you and others up and running.
Hi @user-d801e5 , could you share the code for this step? To clarify, you don't need to use the /api HTTP endpoints to establish the RTSP connection and receive RTP/RTCP packets.
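That said, /api/status is still the convenient way to discover each sensor's connection info before opening the RTSP session. A small sketch, assuming the default HTTP port 8080 and the response shape from the OpenAPI spec linked above (a list of components under "result", each with "model" and "data" fields); adjust if your device reports differently:

```python
# Sketch: query the Companion device's status endpoint and list the sensors.
# The IP and the exact response shape are assumptions; see the OpenAPI spec.
import json
import urllib.request

with urllib.request.urlopen("http://192.168.1.42:8080/api/status") as resp:
    status = json.load(resp)

# Each sensor entry should carry its connection info (protocol, ip, port, ...),
# which is what you need to build the rtsp:// URL for streaming.
for component in status.get("result", []):
    if component.get("model") == "Sensor":
        print(component["data"])
```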
Hello, I'm making good use of your core product - thank you for providing it.
I'm writing to report an issue related to using your open-source Pupil code with pyinstaller to generate a .exe file.
Our goal is to send the gaze point (x, y, z) vector values via Windows Messages and receive them back to be used in a display module. The main Python script runs perfectly fine in our virtual environment, where the glfw module is properly installed using pip.
However, once we build the .exe file with pyinstaller, running it results in an error stating that the glfw module cannot be imported. It seems the .exe file cannot locate the glfw module, even though it works without issues in the Python environment.
Since the team working on the display module does not have experience with software development, we need to deliver the functionality as a .exe file. This is for academic research purposes in a graduate school setting.
We would really appreciate your kind help and advice on resolving this issue.
Best regards,
Hi @user-fce73e , please note that you need to use the WiX Installer on Windows, and this script.
@user-fce73e , my colleague, @user-cdcab0 , has reminded me that you can also simply make a Pupil Capture plugin to send the Windows Messages. Then, development should be easier and you can use the default, bundled EXEs on our webpage, without needing to deal with WiX or pyinstaller.
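In case it helps you get started, here is a minimal sketch of such a plugin (not official code). The Plugin base class and the events["gaze"] / "norm_pos" fields follow the Pupil plugin API; the Windows-message scheme (WM_APP, broadcasting, the coordinate scaling) is purely an assumption you would adapt to your display module.

```python
# Minimal sketch of a Pupil Capture plugin that forwards gaze positions as
# Windows messages. Drop it into pupil_capture_settings/plugins to load it.
# The message scheme below (WM_APP, broadcast, x10000 scaling) is hypothetical.
import ctypes

from plugin import Plugin

WM_APP = 0x8000
HWND_BROADCAST = 0xFFFF


class GazeMessenger(Plugin):
    def recent_events(self, events):
        for gaze in events.get("gaze", []):
            x, y = gaze["norm_pos"]  # normalized [0, 1] gaze coordinates
            # Pack the point into wParam/lParam (Windows only, via ctypes).
            ctypes.windll.user32.PostMessageW(
                HWND_BROADCAST, WM_APP, int(x * 10000), int(y * 10000)
            )
```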
Thank you, I'll try it
Hello! I am running into this error: ImportError: cannot import name 'builder' from 'google.protobuf.internal' (/usr/lib/python3/dist-packages/google/protobuf/internal/__init__.py) [ros2run]: Process exited with failure 1. In the past, upgrading protobuf worked: pip install --upgrade "protobuf>=3.20,<5". However, this time, when I followed the same step, the error persisted. Any suggestions?
Hi @user-2d2017 , this is to use the Real-time API? Can you try:
pip uninstall protobuf
pip install protobuf
and let us know how it goes. If it does not work, what version of Python and what version of the Real-time API are you using?
Yup, this is to use the Real-time API. I uninstalled and reinstalled, but I'm still experiencing the same issue. My Python version is 3.12.9
I was using conda with ROS2, and apparently, that caused an issue, although I was able to locate protobuf while being in the conda environment. Deactivating the conda env seemed to make it work.
@user-2d2017 do I correctly understand that you have resolved the issue? Or, do you mean that previously, deactivating Conda was the solution and that also no longer works?
Yes I have resolved the issue by deactivating the Conda environment!
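For anyone who hits the same conflict: a quick way to check which protobuf installation Python is actually importing (a diagnostic sketch; the paths in the comments are just examples):

```python
# Print the version and location of the protobuf package Python resolves.
# If the path points into /usr/lib/python3/dist-packages while you installed
# protobuf into a conda env (or vice versa), the two installations conflict.
import google.protobuf

print(google.protobuf.__version__)
print(google.protobuf.__file__)
```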
Hi there, I'm using Neon Player v5.0.4 to process data collected with Pupil Labs Neon child glasses, and we've encountered several issues that prevent us from opening certain sessions. Here's a breakdown:
1. Corrupted Scene Camera Video File Causing Crash: Some folders contain a corrupted Neon Scene Camera v1 ps1.mp4 file, which crashes Neon Player upon opening. Error message: Invalid data found when processing input: 'F:\03140324\2025-03-22\T16\neon\2025-03-22-18-04-27\Neon Scene Camera v1 ps1.mp4'; last error log: [mov,mp4,m4a,3gp,3g2,mj2] moov atom not found. This file cannot be opened in any video player either. Is there any recovery method for this file, or a workaround to process the session without it?
2. eye0.mp4 File Too Large - Neon Player Crashes: In the folder 2025-03-15-19-01-20\neonplayer\eye0.mp4, the file size seems to be extremely large, and Neon Player crashes instantly when attempting to open the folder. Is there a known file size limit? How should we handle this?
3. AttributeError on Scene Camera Matrix - File Won't Open: One folder fails to open with this error: AttributeError: 'NoneType' object has no attribute 'scene camera matrix'. What does this error indicate? Is it related to missing calibration or corrupt metadata?
4. ValueError: cannot convert float NaN to integer - Crash at the End of Download: During the download of one session, Neon Player crashes near the end with: ValueError: cannot convert float NaN to integer. This seems to happen after all video and data files are nearly done downloading. Is this a known bug?
Let me know if you need access to the specific folders or logs - I'd be happy to share more details. Thank you in advance for your support!
Hi, @user-03bf4a - sorry to hear you're having some troubles with your recordings. My responses are below:
2. You can try ffmpeg to re-encode the video (and thus, hopefully, reduce its size), but if you're seeing large eye videos on many recordings we may want to troubleshoot the cause.
3. (the no attribute 'scene camera matrix' error) It could be missing or corrupt data, but it's hard to say. We'd probably need the full log and/or a copy of your recording to troubleshoot. I have never encountered this error myself.
If your recordings are on Pupil Cloud, you can invite [email removed] to your workspace and let us know the recording IDs we should be looking at. Otherwise, you can upload any recordings you'd like to share to a file-sharing service of your choice and share them with [email removed]
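For the re-encoding step, a minimal sketch invoking ffmpeg from Python (ffmpeg must be on your PATH; the codec and CRF are just reasonable defaults, and the file names are placeholders):

```python
# Sketch: re-encode an oversized eye video with ffmpeg to shrink it.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "eye0.mp4",        # the oversized input file
        "-c:v", "libx264",       # re-encode the video stream as H.264
        "-crf", "23",            # quality/size trade-off (lower = bigger)
        "eye0_reencoded.mp4",
    ],
    check=True,
)
```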
Hi, @user-cdcab0 , thank you so much for the detailed response - it's super helpful!
1. Thanks for the suggestion! I'll try using it to recover the file and report back if it works. Fingers crossed.
2. The file is 1.28 GB. Other recordings from the same batch have even larger eye0 files, but don't trigger this issue, so we're not sure why this one is causing a crash.
3. Got it. I'll share the files later.
4. Apologies for the confusion earlier - the crash happens during the gaze video export process in Neon Player. Just when it's about to finish (with only a few seconds left), it throws the following error and crashes: ValueError: cannot convert float NaN to integer
We're not using Pupil Cloud at the moment, so I'll prepare the files and send a download link to data@pupil-labs.com soon. Thank you again for your support - really appreciate it!
I asked this in the thread I opened, but maybe someone else can help too:
Hi, so I've gotten the RTSP streaming up, but only when connected to a network. I believe the network requirement can be bypassed by turning on the device's hotspot, but when I do that, the companion app just says waiting for dns service. Am I doing something wrong? Do I need to enable anything?
Hi @user-d801e5! Yes, we have replicated this. We're looking into it and will update you when we have something. Thanks for your patience!
Pupil Capture in Ubuntu with multiple screen display
Saving the default settings on Pupil Capture for the experiment