Hi, I'm porting the Pupil source code to a Raspberry Pi 5. I was able to open the Sony PS3 camera using OpenCV in both VNC and MobaXterm, but when I ran Pupil Capture, neither MobaXterm nor VNC captured the camera. MobaXterm displays the software's UI framework, while VNC reports errors directly. Can someone help me?
The following error occurs when MobaXterm runs Pupil Capture:
(pupil_env) [email removed] $ python main.py capture
[15:44:18] ERROR world - video_capture.uvc_backend: Could not connect to device! No images will be supplied. uvc_backend.py:133
WARNING world - camera_models: No camera intrinsics available for camera (disconnected) at resolution [1280, 720]! Loading dummy intrinsics, which might decrease accuracy! Consider selecting a different resolution, or running the Camera Intrinsics Estimation! camera_models.py:532
The following error occurs when VNC runs Pupil Capture: [15:37:18] ERROR world - launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "/home/xhw/pupil_env/pupil/pupil_src/launchables/world.py", line 547, in world
    cygl.utils.init()
  File "src/pyglui/cygl/utils.pyx", line 63, in pyglui.cygl.utils.init
  File "src/pyglui/cygl/utils.pyx", line 80, in pyglui.cygl.utils.init
Exception: GLEW could not be initialized!
Hi @user-78da25. I've moved your message to this channel. Just a quick note: Core software isn't designed to run on ARM processors. However, some users have used the Pi to stream camera feeds over a network to a dedicated computer running the Capture software. There are several previous posts on that if you're interested.
That said, it sounds like you've had some initial success with a port. So you should know that the Capture video backend only supports cameras that are UVC compliant, as detailed here: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498
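If you want a quick way to check whether the backend can even see the camera, you could query it with pyuvc (the same library Capture's UVC backend uses). A minimal sketch, assuming pyuvc is installed in your environment:

import uvc  # pyuvc

# List the UVC-compliant cameras visible to the backend
for device in uvc.device_list():
    print(device["name"], device["uid"])

If the PS3 camera doesn't show up in that list, Capture won't be able to open it, even though OpenCV can.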
Hey! I managed to run Capture on a Pi 4 (no Sony PS3 camera, though). It caused a complete crash of the Pi when I tried to run the calibration, but with your more powerful Pi 5 you might be better off.
You can DM me if you want and I can explain what I did... it might just be a few days until I reply, because I'm on a short holiday.
In case you wanna try it sooner check out my fork of the pupil repo (https://github.com/paulijosey/pupil)
Thank you very much for your reply!
Hello, I've been using the Pupil Labs Core product for academic purposes, and it's been great. I'm currently examining the source code from Pupil Labs, specifically looking into how the center coordinates and normal vector of eye0 and eye1 in camera coordinates are transformed into world coordinates. I was wondering if there is any example code or reference I could use for this process. At the moment, I'm going through the calibrate_3d.py file in the shared module and hardcoding my way through it. I would like to debug this process and visualize it. Looking forward to your kind response!
Hi @user-fce73e! Glad to hear you're enjoying Core! When using the 3D pipeline, we use bundle adjustment to find the physical relation of the cameras to each other, and use this relationship to linearly map pupil vectors that are yielded by the 3D eye model. The code for this is indeed found in the calibrate_3d.py file. We don't have an easy way to visualise this process, though. What did you have in mind for 'debugging'?
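To make the mapping concrete, here is a minimal illustration (not the actual Core implementation; the rotation and translation values are placeholders standing in for the eye-to-world extrinsics that the bundle adjustment estimates):

import numpy as np

# Placeholder eye0-to-world extrinsics (rotation matrix and translation in mm)
R_eye0_to_world = np.eye(3)
t_eye0_to_world = np.array([20.0, -10.0, -5.0])

def eye_to_world(sphere_center, gaze_normal, R, t):
    # Points (the sphere center) are rotated and translated;
    # directions (the gaze normal) are only rotated.
    center_world = R @ np.asarray(sphere_center) + t
    normal_world = R @ np.asarray(gaze_normal)
    return center_world, normal_world

center_w, normal_w = eye_to_world(
    sphere_center=[2.1, 4.3, 35.0],   # made-up values in eye0 camera coordinates
    gaze_normal=[0.05, -0.1, -0.99],  # made-up unit vector in eye0 camera coordinates
    R=R_eye0_to_world,
    t=t_eye0_to_world,
)
print(center_w, normal_w)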
Hello, we are using the Pupil Neon. By emulating the data, we mean creating fake data so we can test our software even when we don't have access to the device.
Our current solution is to use the Pupil Cloud files instead of the API, but we don't know if there are other methods.
Hi @user-cf2c3e , do I understand correctly that your software makes exclusive use of the Real-time API? In other words, do you plan to simulate the real-time API in full, as in, simulate the streaming aspects?
Otherwise, using data from Pupil Cloud, as you are doing, can be handy for testing. There is also a sample recording in Native Recording Data format.
You can also write some code that emulates the data formats; they are specified here. I cannot really say if there is a best method with this last approach. So long as you can generate sensible data in the appropriate format, you should be good for running batches of tests.
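As a very rough sketch of that last approach (the CSV column names below are assumptions for illustration only; please align them with the documented format before relying on this in your tests):

import csv
import random
import time

# Assumed column names for illustration; check the format specification
FIELDS = ["timestamp [ns]", "gaze x [px]", "gaze y [px]", "worn"]

def write_fake_gaze(path, n_samples=2000, fps=200, width=1600, height=1200):
    t0 = time.time_ns()
    step = int(1e9 / fps)  # nanoseconds between samples at the given rate
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for i in range(n_samples):
            writer.writerow({
                "timestamp [ns]": t0 + i * step,
                "gaze x [px]": round(random.uniform(0, width), 3),
                "gaze y [px]": round(random.uniform(0, height), 3),
                "worn": 1,
            })

write_fake_gaze("gaze.csv")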
Hello,
I'm looking to streamline the process of exporting trimmed segments from the same 'world' recording based on fixations. Specifically, is there an existing plugin or feature in Pupil Labs that allows batch exporting of multiple trimmed videos centered around these fixation events? Additionally, is there a way to automate the trimming process around fixations, or is this something that requires manual setup?
Thanks for any guidance!
Hi @user-d7eed4! May I ask which eye tracker you're using?
Yep, Neon.
The lab uses it with VR, so I'm almost certain it's the Neon module. I reached out to them a few hours ago but haven't received a response yet. It's definitely used in a VR environment, though. I'm sorry if this doesn't provide much detail, but I thought that knowing it's used in VR might help identify which eye tracker it is?
Hello!
I am encountering a problem when trying to use Pupil Capture on a remote device (I do not need visualization and only want to run it to process data on an SBC).
It seems that even though I use the "--hide-ui" flag, the program still tries to do some visualization using the glfw library. This causes a crash because there is no X.Org server running (or that is my guess).
Has anyone encountered a problem like this or better, has anyone managed to run this on a remote machine?
The exact error I get is:
[09:06:33] ERROR world - launchables.world: Process Capture crashed with trace: world.py:837
Traceback (most recent call last):
File "/pupil_dev/pupil/pupil_src/launchables/world.py", line 525, in world
glfw.window_hint(glfw.SCALE_TO_MONITOR, glfw.TRUE)
File "/usr/local/lib/python3.10/dist-packages/glfw/__init__.py", line 1236, in window_hint
_glfw.glfwWindowHint(hint, value)
File "/usr/local/lib/python3.10/dist-packages/glfw/__init__.py", line 689, in errcheck
_reraise(exc[1], exc[2])
File "/usr/local/lib/python3.10/dist-packages/glfw/__init__.py", line 70, in _reraise
raise exception.with_traceback(traceback)
File "/usr/local/lib/python3.10/dist-packages/glfw/__init__.py", line 668, in callback_wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/glfw/__init__.py", line 912, in _handle_glfw_errors
raise GLFWError(message, error_code=error_code)
glfw.GLFWError: (65537) b'The GLFW library is not initialized'
Hi @user-b96a7d , if I remember correctly, your setup is:
Can I briefly confirm that first?
and maybe I should mention that it is working if I connect the Pi to a display and start everything from the Pi... the problem seems to be the SSH connection
yes
Ok, this is a rarely used combination, so I just wanted to be sure. I cannot promise that I can give the correct answer in this case.
The Pupil Capture software itself is not specifically designed for ARM processors; the situation is similar to the Apple Silicon note in the README. So, in that respect, you might have more success overall with a LattePanda (an x86-64 SBC), although I cannot make any guarantees about that.
Otherwise, yes, Pupil Capture must run in a graphical desktop environment. If you need to run it headless, then check out Pupil Service.
You can tunnel X through SSH, though I cannot promise that it will work as expected. You also lose some of the security provided by SSH, if that is important.
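Another thing some people try when there is no physical display is a virtual framebuffer, e.g. Xvfb driven from Python via pyvirtualdisplay. This is only a sketch under those assumptions, not something we have verified with Capture, and it may still fail at the GLEW/OpenGL step if no usable GL context is available:

# Sketch: launch Capture inside a virtual X display.
# Requires Xvfb and the pyvirtualdisplay package; untested with Pupil Capture.
import subprocess
from pyvirtualdisplay import Display

with Display(visible=0, size=(1280, 720)):
    subprocess.run(["python", "main.py", "capture", "--hide-ui"], check=True)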
Yeah the X11 tunneling through ssh I already do ... no effect sadly.
Pupil Service seems promising though! Thanks! One question... can you also use plugins with this?
With Pupil Service, you could edit the source to try to do that. More info is in the documentation and the corresponding source code
Thanks for confirming, @user-d7eed4! One more question, are your recordings stored in Pupil Cloud?
The data is for medical research purposes and therefore wasn't allowed to be stored in Pupil Cloud. However, if storing it there would help achieve the desired outcome, I can explore the possibility of storing only the VR world recordings in the cloud. Would that work?
Thanks for helping!
Edit: I just read the announcement about event automation using GPT, and I can see why using Pupil Cloud would be a better approach. I could use it later to crop and trim the videos for export. I haven't had a chance to explore the repository yet, but I'll look into whether there's a way to do it without relying on Pupil Cloud. In the meantime, I'll also check if I can use Pupil Cloud. Is that what you had in mind? Either way, I'd love to hear any additional insights or ideas you might have. Thanks!
Could you elaborate on what you mean by VR world recordings? Where are these data stored, and in what format exactly? It's just that Pupil Cloud is only compatible with recordings made on the Companion Device and uploaded directly from the phone. For example, it wouldn't be possible to upload something that was recorded on a VR platform like Unity.
They are stored on a regular hard drive (on my physical SSD and on the lab's as well) as MP4 files. Yes, they were recorded using the Unity VR platform.
Hi all! When trying to work with simple_api I receive the following error with Jupyter Notebook and Spyder: "asyncio.run() cannot be called from a running event loop"
I guess this means that with those two IDEs I have to fall back to the async API, since they use some sort of "interactive" kernels?
If so, I'd like to suggest adding this information to the documentation, as those are two beginner-friendly IDEs. At least when I was searching for either in the docs, I couldn't find it.
Even though you have a working solution already, do you mind trying something for me?
Put %autoawait asyncio
in the first cell, run it, and then try to run your cell(s) again
I figured that it can be circumvented by allowing nested event loops with
import asyncio
import nest_asyncio
nest_asyncio.apply()
...which has solved the problem for me
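Concretely, the notebook cell that works for me now looks roughly like this (combining the snippet above with the discovery call from the docs):

import nest_asyncio
nest_asyncio.apply()  # allow asyncio.run() inside the notebook's already-running loop

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device(max_search_duration_seconds=10.0)
print(device)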
Sure,
I opened a new notebook and put
%autoawait asyncio
from pupil_labs.realtime_api.simple import discover_devices, discover_one_device
print(discover_one_device(max_search_duration_seconds=10.0))
however in this case, the error "asyncio.run() cannot be called from a running event loop" appears again.
Hello, I was wondering if there is any way I can use the Pupil Core Network API from Unity? I was able to find an example script for Python but couldn't find anything for C#.
Hi @user-4ba9c4 , I've moved your message to the software-dev channel.
May I ask if you have seen hmd-eyes? While designed with VR/AR in mind, it shows how to receive Pupil Core's gaze data over the network in Unity with C#. Having said that, are you using a VR headset?
I haven't looked into hmd-eyes. I am working without any VR. So will that work without VR?
It works under the assumption that you are using a VR headset, but the code for receiving the gaze data could be reused for other purposes. May I ask what your exact plans are? What is the overall experimental design?
Sure, my project is to use fixations on a computer screen to select objects in a virtual environment. Right now, I have a Python script that extracts the fixation values over the Network API and then broadcasts them to Unity over Lab Streaming Layer. But this is not ideal, as it is hard to get the most current fixation value, and I would also prefer to contain everything within Unity.
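(For context, the extraction side of my script is essentially the standard Network API subscription pattern; the sketch below is simplified and assumes Pupil Remote on its default local port.)

import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the SUB port of the IPC backbone
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to fixation messages only
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("fixations")

while True:
    topic, payload = subscriber.recv_multipart()
    fixation = msgpack.loads(payload, raw=False)
    print(topic, fixation["norm_pos"], fixation["duration"])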
Ok, I see. So, the end goal is to cast the gaze ray into a 3D scene that is presented on your monitor?
I see. So you have mp4 files of the VR recording (made by unity), and I presume those also have a gaze point showing where the participants were looking in the VR scene. It won't then be possible to upload these to Pupil Cloud, as per my previous message. In terms of batch exporting of multiple trimmed videos centred around fixation events, we don't have a turnkey tool that could achieve this. I see you already found the activity recognition Alpha Lab article. This could do some of what you want - a bit of hacking would be necessary, though. It's currently designed to download scene video frames from Pupil Cloud, and then upload events that correspond to your prompted input back into Pupil Cloud to be used with enrichments. So ideally you'd want to somehow use the events generated to mark sections of your recordings. The next step would be to render out those sections. How many recordings do you have? I wonder whether doing this manually with video editing software would be more time efficient π€
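If you do end up scripting it, the rendering step itself can be fairly simple. Here's a rough sketch, assuming you have produced a per-recording CSV of segment start/end times in seconds (the column and file names are hypothetical) and have ffmpeg installed:

import csv
import subprocess

def export_segments(video_path, segments_csv, out_prefix):
    # Cut one clip per CSV row with ffmpeg, stream-copying to avoid re-encoding
    with open(segments_csv, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            start = float(row["start_s"])  # hypothetical column names
            end = float(row["end_s"])
            out_path = f"{out_prefix}_{i:03d}.mp4"
            subprocess.run(
                ["ffmpeg", "-y", "-i", video_path,
                 "-ss", str(start), "-to", str(end),
                 "-c", "copy", out_path],
                check=True,
            )

export_segments("world.mp4", "fixation_segments.csv", "clip")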
I'm working with a large collection of videos: 98 GB in total. There are around 100 test applicants, each with 5 videos, all just a few minutes long. This article has been really helpful. I plan to use the suggested method, either with an offline folder or by exploring a suitable cloud platform. I'll determine the best option. Thanks so much for your time and patience!
Hi Pupil developers, I'm working on porting this project to the Raspberry Pi 5 and I'm in the final stages, but I've run into a problem. On GitHub I found the files for 3D printing, but they cover only part of the headset. I was planning to buy one directly, but due to the exchange rate I simply can't afford it. So I can only ask for your help: is it possible to stick the cameras directly onto a pair of glasses without using the headset frame?
By the way, the section on changing the IR blocking filter of the Eye Camera in https://docs.pupil-labs.com/core/diy/#camera-assembly should refer to the World Camera, not the Eye Camera.
Hi @user-78da25 , that DIY Pupil Core step, which refers to removing the IR blocking filters, is actually for the eye cameras. IR light is not visible, so using IR LEDs to illuminate & make images of the eyes does not disturb one's vision while wearing the eyetracker. It also enables eyetracking when it is dark, for example.
Removing those filters from the World Camera, while certainly possible if your use case necessitates that, would typically lead to undesired distortions in images of the scene.
Referring to your previous question, as long as you have a way to stably mount the cameras and each eye camera has a good view of the eyes, then you should be good to go.
Depending on what you plan to do, you might need to account for any differences in eye + world camera placement from the standard setup, though.
Thank you for reminding me, but I can't find a video of removing the IR blocking filter from the HD-6000. The DIY documentation says to unscrew the lens from the bayonet mount. When I removed the C525 earlier, I was indeed able to unscrew the lens from the mount, but I couldn't unscrew the HD-6000's, which is very sturdy. I think I might need to remove the shell further?
Hi @user-78da25 , that video is here. The IR filter is not in the mount, which you already de-cased and show in your photos here, but rather the IR filter is in the cylindrical lens housing that you removed from the mount.