Hi there everyone! I am working on a student-based project for school, and I was interested in using Pupil Capture as the basis for my gaze detection. However, I am having a couple of issues with Pupil Capture finding and utilizing my cameras. I've pored over threads and can't find anything to help me, so I was wondering if someone could provide me a little bit of assistance with setting up Pupil Capture. If you have any questions or would like to help, please contact me! Thanks so much.
Hi @user-d6d8fd , are you using standard Pupil Core hardware or are you building a DIY headset?
Hello Pupil Labs, I'm trying to integrate the egocentric video output from an Insta360 with the eye tracker video output, so I am working with the Google Colab notebook shared for Pupil Neon. I have one doubt: do I need to record both videos with the same start and end time?
Hi @user-b4808a , they do not need to be recorded with exactly the same start & end times, but they should overlap. As mentioned in this section of the guide, it is easiest to do as follows:
Also, just for clarity, the user needs to wear both Neon and the Insta 360 on their head, as shown here.
Okay, I attached the Insta360 to a cap and recorded it; it gives a frame that is almost similar to the eye tracker's.
While running the last cell, this error appears:
Ok, I would double-check that these two initial cells at the top of the notebook were run as described in the instructions. The error indicates that not all of the necessary packages were installed into the notebook's virtual environment.
I ran all of these cells as given and didn't make any changes.
@user-b4808a Could you try adding the following line to the first code cell, before the last line:
!pip3 install kornia==0.8.1
as shown in the attached image.
Make sure to also click Cancel when the warning appears. That should fix it.
Hi @user-b4808a , I just gave it a try here and I think I see the issue. I will raise it with my colleague.
i will try
The "Before running the Egocentric Video Mapper, let's check the alternative egocentric video orientation" cell gives an output orientation like this, so I selected the 90-degree clockwise rotation option in the next cell. Is that the right way?
Hi @user-b4808a , as detailed in the instructions, it is easiest to record the Neon scene video and the alternative egocentric video with the same orientation. Since you already made your recordings, in your case it looks like you need a 90-degree counter-clockwise rotation.
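If it helps to sanity-check which direction is which outside the notebook, a frame rotation can be sketched with NumPy. This is just an illustrative snippet (the frame here is a dummy array, not from the actual recording); note that `np.rot90` rotates counter-clockwise by default:

```python
import numpy as np

# A dummy "landscape" frame: height 2, width 3, 3 color channels.
frame = np.zeros((2, 3, 3), dtype=np.uint8)

# np.rot90 rotates counter-clockwise by default (k=1).
rotated_ccw = np.rot90(frame, k=1)
print(rotated_ccw.shape)  # (3, 2, 3): height and width are swapped

# For a clockwise rotation instead, use k=-1 (or equivalently k=3).
rotated_cw = np.rot90(frame, k=-1)
```

A quick check on the resulting shape tells you whether the rotation ran in the axis order you expected.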
Hey there, so I'm currently working on a student-led project to create a DIY headset that utilizes gaze analysis. I decided on using Pupil Labs' open-source software, but I am having immense difficulty with Pupil Capture detecting my cameras. It usually says that the camera is either in use elsewhere or blocked. I've made sure that it isn't being used elsewhere, so my only guess is that it's blocked, but I am unsure of how to resolve that issue. Thank you for any help you can offer!
Hi @user-d6d8fd , thanks for the clarification. It is important to confirm that your cameras are UVC-compatible before they will be detected by Pupil Capture.
Both of the cameras I'm testing with are UVC-compatible, but they still aren't being detected. One is my laptop camera, while the other is just a USB camera.
Do you mean the camera built into the laptop screen?
Yes
Ok. Please note that the Pupil Capture software is designed with the expectation that the two eye cameras are head mounted and close to the eyes. It is unlikely that the camera built into the laptop screen will work as expected. If you are trying to use the laptop camera as a world camera, then having no experience with that, I also cannot confirm if it will work as expected.
With respect to the USB camera not being properly recognized, you might want to try running a basic pyuvc example to see if you can narrow down the root cause, since that is the code that is producing the error you are seeing.
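A first diagnostic step that pyuvc supports is simply enumerating devices: `uvc.device_list()` returns a list of dicts with keys such as `'name'` and `'uid'`. Here is a small sketch of picking a device by name from that list; the helper itself is pure Python, and the sample entries below are made up for illustration:

```python
def find_device(devices, name_substring):
    """Return the first UVC device dict whose 'name' contains name_substring.

    `devices` is expected to look like the output of pyuvc's
    uvc.device_list(): a list of dicts with at least 'name' and 'uid'.
    Returns None if no device matches.
    """
    for dev in devices:
        if name_substring.lower() in dev.get("name", "").lower():
            return dev
    return None


# Made-up entries mimicking uvc.device_list() output:
sample = [
    {"name": "Integrated Webcam", "uid": "1:2"},
    {"name": "Pupil Cam2 ID0", "uid": "1:5"},
]
print(find_device(sample, "pupil")["uid"])  # -> 1:5

# With pyuvc installed and a camera connected, you would then do:
#   import uvc
#   dev = find_device(uvc.device_list(), "pupil")
#   cap = uvc.Capture(dev["uid"])       # raises if the device is blocked or in use
#   frame = cap.get_frame_robust()
```

If `uvc.Capture` raises on a device that does appear in `uvc.device_list()`, that points to a permissions or driver issue rather than the camera being absent.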
That makes some more sense then. I want to ask: in the future, I plan on using two ESP32 cameras, which would produce a web feed for each camera. Is it possible to route the web feed into Pupil Capture so that it acts as the two pupil cameras? Furthermore, I wanted to use a Raspberry Pi to transmit another camera feed over WiFi; will Pupil Capture be able to use that as well?
You could try writing a Pupil Capture plugin that receives the streams and passes them on to the Pupil Capture estimation pipeline in the right format. Others have made custom video backends as plugins, which you could reference.
Okay awesome, thank you so much for the help!
And, at the least, you could acquire & try the cameras that we list in the DIY Pupil Core documentation.
You are welcome. For the Raspberry Pi, you may want to reference this work.
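As a starting point for receiving such a web feed: ESP32-CAM firmwares commonly serve MJPEG over HTTP, and the individual JPEG frames can be split out of the raw byte stream by scanning for the JPEG start/end markers (`0xFFD8` / `0xFFD9`). A minimal, hardware-free sketch of that parsing step follows; the stream bytes here are synthetic, and a real plugin would still need to decode each frame and hand it to the pipeline in the expected format:

```python
JPEG_SOI = b"\xff\xd8"  # JPEG start-of-image marker
JPEG_EOI = b"\xff\xd9"  # JPEG end-of-image marker


def extract_jpeg_frames(buffer):
    """Split raw MJPEG stream bytes into individual JPEG frames.

    Returns (frames, remainder): the complete frames found so far, plus
    any trailing partial frame to prepend to the next chunk of bytes.
    """
    frames = []
    while True:
        start = buffer.find(JPEG_SOI)
        if start < 0:
            return frames, b""
        end = buffer.find(JPEG_EOI, start + 2)
        if end < 0:
            return frames, buffer[start:]
        frames.append(buffer[start:end + 2])
        buffer = buffer[end + 2:]


# Synthetic stream: one complete "frame" plus the start of a second one,
# with multipart boundary noise in between.
stream = (b"--boundary\r\n" + JPEG_SOI + b"AAA" + JPEG_EOI
          + b"--boundary\r\n" + JPEG_SOI + b"BB")
frames, partial = extract_jpeg_frames(stream)
print(len(frames), partial.startswith(JPEG_SOI))  # 1 True
```

Keeping the partial remainder and prepending it to the next network chunk lets the parser work on a live socket where frame boundaries do not align with read sizes.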
Hi Pupil Labs Team, I wanted to test the new Segment Anything 2 integration and I am having trouble with the last step (Launch and Segment). The first two cells of the code work completely fine, but when launching the last cell, I get the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/gradio/queueing.py", line 759, in process_events
    response = await route_utils.call_process_api(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/gradio/route_utils.py", line 354, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/gradio/blocks.py", line 2202, in process_api
    data = await self.postprocess_data(block_fn, result["prediction"], state)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/gradio/blocks.py", line 1924, in postprocess_data
    self.validate_outputs(block_fn, predictions)  # type: ignore
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/gradio/blocks.py", line 1879, in validate_outputs
    raise ValueError(
ValueError: A function (load_recording) didn't return enough output values (needed: 9, returned: 6).
    Output components: [image, slider, textbox, textbox, textbox, state, state, textbox, state]
    Output values returned: [None, {'type': 'update'}, {'type': 'update'}, {'type': 'update'}, "Failed to initialize SAM3 predictor: name 'build_sam3_video_predictor' is not defined", <__main__.Session object at 0x7d840df5b470>]
Do you have any suggestions on what I can do?
Thank you for your help!
Thanks for reaching out, @user-880443, and for reporting this. I'm having a look now and will follow up as soon as I've fixed it!
@user-880443 - this should now be fixed; I just tested it with a sample recording. Can you give it another try and let me know if you experience any issues?
Hi Nadia, thank you for the quick response! I think the issue is fixed now, but I will test it more over the next few days.