💻 software-dev



user-d6d8fd 06 January, 2026, 15:50:10

Hi there everyone! I am working on a student project for school, and I was interested in using Pupil Capture as the basis for my gaze detection. However, I am having a couple of issues with Pupil Capture finding and utilizing my cameras. I've pored over threads and can't find anything to help me, so I was wondering if someone could provide me a little bit of assistance with setting up my Pupil Capture. If you have any questions or would like to help, please contact me! Thanks so much.

user-f43a29 06 January, 2026, 15:51:08

Hi @user-d6d8fd , are you using standard Pupil Core hardware or are you building a DIY headset?

user-b4808a 07 January, 2026, 08:59:04

Hello Pupil Labs, I'm trying to integrate egocentric video output from an Insta 360 with the eye tracker video output, so I am working with the Google Colab notebook shared by Pupil Neon. I have one doubt: do I need to record both videos with the same start and end times?

user-f43a29 07 January, 2026, 09:02:45

Hi @user-b4808a , they do not need to be recorded with exactly the same start and end times, but they should overlap. As mentioned in this section of the guide, it is easiest to do as follows:

  • Start the Neon recording first.
  • Then, start the Insta 360 recording.
  • Later, stop the Insta 360 first.
  • Then, finally, stop the Neon recording.

Also, just for clarity, the user needs to wear both Neon and the Insta 360 on their head, as shown here.
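
The ordering above means the Insta 360 interval sits entirely inside the Neon interval, and the usable footage is their overlap. A minimal sketch of computing that overlap (the timestamps here are made up for illustration, not from any actual recording):

```python
def overlap(a_start, a_end, b_start, b_end):
    """Return the overlapping time interval of two recordings, or None
    if they do not overlap at all."""
    start = max(a_start, b_start)
    end = min(a_end, b_end)
    return (start, end) if start < end else None

# Neon started first (t=0) and stopped last (t=120); the Insta 360
# ran from t=5 to t=110, so the overlap is the Insta 360's interval:
print(overlap(0.0, 120.0, 5.0, 110.0))  # (5.0, 110.0)
```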

user-b4808a 07 January, 2026, 09:10:36

Okay, I attached the Insta 360 to a cap and recorded it; it is giving almost the same frame of view as the eye tracker.

user-b4808a 07 January, 2026, 09:12:11

While running the last cell, this error is coming up:

message.txt

user-f43a29 07 January, 2026, 09:14:58

Ok, I would double-check that these two initial cells at the top of the notebook were run as described in the instructions. The error indicates that not all of the necessary packages were installed into the notebook's virtual environment.

Chat image
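
When an error points at missing packages, one quick way to check the notebook's environment is to test whether the expected packages are importable. A minimal sketch; the package names below are illustrative, not the notebook's exact dependency list:

```python
from importlib.util import find_spec

# Illustrative list of packages the notebook's cells might need.
required = ["numpy", "pandas", "kornia"]

# find_spec returns None when a package cannot be found in the environment.
missing = [name for name in required if find_spec(name) is None]

if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All required packages are importable.")
```

If anything shows up as missing, re-running the notebook's setup cells (or installing the package manually) should resolve it.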

user-b4808a 07 January, 2026, 09:23:52

I ran all these cells as given and didn't make any changes.

user-f43a29 07 January, 2026, 11:28:41

@user-b4808a Could you try adding the following line to the first code cell, before the last line:

!pip3 install kornia==0.8.1

as shown in the attached image.

Make sure to also click Cancel when the warning appears. That should fix it.

Chat image Chat image

user-f43a29 07 January, 2026, 09:29:30

Hi @user-b4808a , I just gave it a try here and I think I see the issue. I will raise it with my colleague.

user-b4808a 07 January, 2026, 11:41:49

I will try.

user-b4808a 07 January, 2026, 11:45:17

Before running the Egocentric Video Mapper, the instructions say to check the alternative egocentric video orientation. This cell gives an output orientation like this, so I selected the 90 degree clockwise rotation option in the next cell. Is that the right way?

Chat image

user-f43a29 07 January, 2026, 13:24:56

Hi @user-b4808a , as detailed in the instructions, it is easiest to record the Neon scene video and the alternative egocentric video with the same orientation. If you have already made your recordings, then in your case, it looks like you need 90 degrees counter-clockwise.
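
For reference, a 90 degrees counter-clockwise frame rotation can be sketched with NumPy's `rot90`; the frame dimensions below are a made-up landscape resolution, not taken from the actual recordings:

```python
import numpy as np

# A dummy landscape frame: height 1080, width 1920, 3 color channels.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)

# np.rot90 with k=1 rotates the first two axes 90 degrees counter-clockwise,
# so height and width swap.
rotated = np.rot90(frame, k=1)
print(rotated.shape)  # (1920, 1080, 3)
```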

user-d6d8fd 07 January, 2026, 15:39:19

Hey there, so I'm currently working on a student-led project to create a DIY headset that utilizes gaze analysis. I decided on using Pupil Labs' open source software, but I am having immense difficulty with Pupil Labs detecting my cameras. It usually says that the camera is either in use elsewhere or blocked. I've made sure that it isn't being used elsewhere, so my only guess is that it's blocked, but I am unsure of how to resolve that issue. Thank you for any help you can offer!

user-f43a29 07 January, 2026, 15:40:28

Hi @user-d6d8fd , thanks for the clarification. It is important to confirm that your cameras are UVC-compatible before they will be detected by Pupil Capture.

user-d6d8fd 07 January, 2026, 15:48:15

Both of the cameras I'm testing with are UVC-compatible, but they still aren't being detected. One is my laptop camera, while the other is just a USB camera.

user-f43a29 07 January, 2026, 15:51:57

Do you mean the camera built into the laptop screen?

user-d6d8fd 07 January, 2026, 15:54:11

Yes

user-f43a29 07 January, 2026, 15:58:26

Ok. Please note that the Pupil Capture software is designed with the expectation that the two eye cameras are head mounted and close to the eyes. It is unlikely that the camera built into the laptop screen will work as expected. If you are trying to use the laptop camera as a world camera, then, having no experience with that setup, I cannot confirm whether it will work as expected either.

With respect to the USB camera not being properly recognized, you might want to try running a basic pyuvc example to see if you can narrow down the root cause, since that is the code that is producing the error you are seeing.
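
A minimal probe along those lines could use pyuvc's `device_list` to see whether the camera is visible to the same library Pupil Capture uses. This is only a sketch and assumes pyuvc is installed; it degrades gracefully if not:

```python
def probe_uvc_devices():
    """List UVC cameras visible to pyuvc, or explain why none were found."""
    try:
        import uvc  # pyuvc, Pupil Labs' libuvc bindings (assumed installed)
    except ImportError:
        return "pyuvc not installed: pip install pyuvc"

    devices = uvc.device_list()
    if not devices:
        return "no UVC devices detected"

    # Each entry is a dict with at least a human-readable name and a uid.
    return "\n".join(f"{dev['name']} (uid={dev['uid']})" for dev in devices)

print(probe_uvc_devices())
```

If the USB camera does not appear here, the problem is below Pupil Capture (drivers, permissions, or the camera's UVC support) rather than in the application itself.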

user-d6d8fd 07 January, 2026, 16:02:28

That makes some more sense then. I want to ask: in the future I plan on using two ESP32 cameras, which would produce a web feed for each camera. Is it possible to route the web feed into Pupil Capture so that it acts as the two pupil cameras? Furthermore, I wanted to use a Raspberry Pi to transmit another camera feed over WiFi; would Pupil Capture be able to use that as well?

user-f43a29 07 January, 2026, 16:04:34

You could try making a Pupil Capture plugin that receives the streams and passes them into the Pupil Capture estimation pipeline in the right format. Others have made custom video backends as plugins, which you could reference.
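
As an illustration of one piece such a backend would need: ESP32-CAM web feeds are typically MJPEG streams, so the plugin has to pull individual JPEG frames out of the incoming bytes. A hypothetical sketch that scans for JPEG start/end markers (a robust implementation should parse the HTTP multipart boundaries instead; this is only a demonstration on synthetic data):

```python
def extract_jpegs(buffer: bytes):
    """Extract complete JPEG frames from a raw MJPEG byte stream by
    scanning for SOI (FF D8) and EOI (FF D9) markers."""
    frames = []
    pos = 0
    while True:
        soi = buffer.find(b"\xff\xd8", pos)   # start of image
        if soi == -1:
            break
        eoi = buffer.find(b"\xff\xd9", soi + 2)  # end of image
        if eoi == -1:
            break  # incomplete frame; wait for more data
        frames.append(buffer[soi:eoi + 2])
        pos = eoi + 2
    return frames

# Two fake "frames" embedded in one synthetic chunk:
chunk = b"--boundary\xff\xd8AAA\xff\xd9--boundary\xff\xd8BBB\xff\xd9"
print(len(extract_jpegs(chunk)))  # 2
```

Each extracted frame would then be decoded and handed to the video backend in the frame format Pupil Capture expects.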

user-d6d8fd 07 January, 2026, 16:05:59

Okay awesome, thank you so much for the help!

user-f43a29 07 January, 2026, 16:07:12

And, at the least, you could acquire & try the cameras that we list in the DIY Pupil Core documentation.

user-f43a29 07 January, 2026, 16:06:19

You are welcome. For the Raspberry Pi, you may want to reference this work.

user-880443 13 January, 2026, 10:08:38

Hi Pupil Labs Team, I wanted to test the new Segment Anything 2 integration and I am having trouble with the last step (Launch and Segment). The first two cells of the code work completely fine, but when launching the last cell, I get the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/gradio/queueing.py", line 759, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.12/dist-packages/gradio/route_utils.py", line 354, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.12/dist-packages/gradio/blocks.py", line 2202, in process_api
    data = await self.postprocess_data(block_fn, result["prediction"], state)
  File "/usr/local/lib/python3.12/dist-packages/gradio/blocks.py", line 1924, in postprocess_data
    self.validate_outputs(block_fn, predictions)  # type: ignore
  File "/usr/local/lib/python3.12/dist-packages/gradio/blocks.py", line 1879, in validate_outputs
    raise ValueError(
ValueError: A function (load_recording) didn't return enough output values (needed: 9, returned: 6).
Output components: [image, slider, textbox, textbox, textbox, state, state, textbox, state]
Output values returned: [None, {'type': 'update'}, {'type': 'update'}, {'type': 'update'}, "Failed to initialize SAM3 predictor: name 'build_sam3_video_predictor' is not defined", <main.Session object at 0x7d840df5b470>]

Do you have any suggestions on what I can do?

Thank you for your help!

user-480f4c 13 January, 2026, 10:40:23

Thanks for reaching out @user-880443, and for reporting this. I'm having a look now and will follow up as soon as it's fixed!

user-480f4c 13 January, 2026, 10:59:36

@user-880443 - this should now be fixed; I just tested it with a sample recording. Can you give it another try and let me know if you experience any issues?

user-880443 13 January, 2026, 13:29:44

Hi Nadia, thank you for the quick response! I think the issue is fixed now, but I will test it more in the coming days.

End of January archive