💻 software-dev


user-e962d7 09 June, 2025, 06:46:21

hi, I have been working on DIY eye-tracking projects recently. I can now see my eyes on screen via the eye camera in Pupil Capture. However, both of the soldered IR LEDs (4050-z) don't seem to work on the Microsoft HD-6000: I haven't detected any 850nm IR light while the camera is running, which really confuses me, because I followed all the procedures, from buying the parts to soldering the IR LEDs.

user-1391e7 10 June, 2025, 12:51:33

I'm not sure if this is the right place to ask, but in the cloud service, can I move recordings between workspaces? If I were to create a new one, could I move an existing recording from one workspace to another?

user-d407c1 10 June, 2025, 12:54:22

Hi @user-1391e7 👋 ! Moving recordings between workspaces is not currently possible. But you can upvote this feature request here: ⁠https://discord.com/channels/285728493612957698/1212410344400486481

user-1391e7 10 June, 2025, 12:57:20

thank you very much for the quick info! I have to say, you're one of the most helpful companies out there, truly.

user-065afe 11 June, 2025, 09:24:20

Hello, we are using the Invisible glasses for recordings, but we are having problems downloading the videos in the program. We can download videos, but then the fixation markers are no longer visible. Is there a special setting we need to use? Thank you in advance!

user-f43a29 11 June, 2025, 09:28:27

Hi @user-065afe , when you say "downloading the videos in the program", do you mean downloading the videos from Pupil Cloud or loading them into Pupil Player?

user-3c0808 12 June, 2025, 13:58:27

Hello, is it possible to trim videos directly in Pupil Cloud in order to create an "enrichment" using only certain portions? I noticed that there is a video duration limit (3 minutes) for processing. Thanks!

user-480f4c 12 June, 2025, 14:35:45

Hi @user-3c0808! Yes, using Events you can apply your enrichment only on specific sections of your recordings.

Simply add events to your recording(s), and when starting the enrichment, use the Temporal Selection option to specify the events that should define the time window for the enrichment to be applied.

Please note that the 3-minute duration limit you referred to applies to the "scanning recording" needed for applying a Reference Image Mapper enrichment. As a reminder, the Reference Image Mapper (RIM) needs 3 things:
- A Scanning Recording, where you hold the glasses and scan your scene from various viewpoints. This recording must be no longer than 3 minutes.
- An Eye Tracking Recording, where a person wore the glasses and did a task. This recording can be as long as you wish.
- A Reference Image, which is a snapshot of your scene from a viewpoint of interest, including the region where the participant looked during the task.

user-065afe 15 June, 2025, 10:54:16

I mean downloading the videos from Pupil Cloud :)

user-f43a29 16 June, 2025, 07:58:18

Hi @user-065afe , thanks for the clarification. In that case, you'll want to run the Video Renderer Visualization first and then download the result.

user-065afe 19 June, 2025, 09:41:11

Thank you, I can use the Video Renderer but can't save the video in the next step. I can't click the button (it's greyed out).

user-f43a29 20 June, 2025, 08:00:03

Hi @user-065afe , would you be willing to invite us as Collaborators on your Workspace? [email removed] Then we can take a closer look and provide better feedback. Just note that we will then be able to see all data in that Workspace until the investigation is complete.

user-d801e5 20 June, 2025, 13:58:38

Hi, I am using the Neon glasses and the real-time API to try to create an Android app that visualizes/records coordinate and IMU data. I am trying to run a Python script directly in the Kotlin app via Chaquopy, but there is a version conflict: Chaquopy doesn't support pydantic v2.x / pydantic-core. It does support pydantic v1.x, though, so I am wondering if there is a version of the real-time API that is compatible with Neon and pydantic v1.x.

user-f43a29 20 June, 2025, 14:03:08

Hi @user-d801e5 , ah, no, there is not.

May I ask what the end goal is, so that I can better assist? If it helps, Python is not necessary to use the Real-time API. The API is programming language agnostic and Kotlin is fully capable of receiving the data over the standard RTSP protocol.
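To illustrate (a minimal sketch in Python only because it's compact; the same two steps work from Kotlin with any HTTP/RTSP client): the device exposes a REST status endpoint over HTTP, and the returned JSON lists the RTSP stream URLs. The endpoint path below follows the Real-time API documentation and the IP is a placeholder, so double-check both against the docs and your Companion app version.

```python
# Minimal sketch: the Real-time API is plain HTTP + RTSP, so any language works.
# PHONE_IP is a placeholder for the Companion device's address.
import json
import urllib.request

PHONE_IP = "192.168.1.42"  # placeholder

# The REST endpoint returns the device status as JSON; the sensor entries
# in it include the RTSP URLs for the gaze and scene-camera streams.
with urllib.request.urlopen(f"http://{PHONE_IP}:8080/api/status") as resp:
    status = json.load(resp)

print(json.dumps(status, indent=2))
```

A Kotlin app can issue the same GET with its standard HTTP client and then open the advertised RTSP streams with any RTSP library.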

user-d407c1 20 June, 2025, 14:17:12

@user-d801e5 here stepping in for @user-f43a29

The use of pydantic in the API is minimal, mainly for the Templates feature, so if you want to keep using Chaquopy, you can fork the library and modify models.py.
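For example, a hypothetical pydantic v2 model (not the actual contents of models.py) can usually be downgraded to v1 syntax like this:

```python
# Hypothetical example of the kind of change such a fork would need;
# this is NOT the actual models.py from the real-time API client.
from pydantic import BaseModel, validator  # pydantic v1 imports


class TemplateItem(BaseModel):
    title: str

    # pydantic v1 uses @validator; the v2 equivalent would be
    # @field_validator("title") with an explicit @classmethod.
    @validator("title")
    def title_non_empty(cls, v: str) -> str:
        if not v.strip():
            raise ValueError("title must not be empty")
        return v
```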

user-d801e5 20 June, 2025, 14:26:40

Thanks for the response, I will follow up if I continue to run into problems!

user-d801e5 20 June, 2025, 14:43:16

I'll also take a suggestion here: for now, I want to make a real-time data-viz mobile application, running on the Companion device, that displays gaze and IMU data while the Neon is being used. Right now, I have a setup that relies on a LAN and a Python client on my desktop, but I want to create a version that's independent and can run on just the Companion device. I'm leaning toward forking the SDK, since it might be valuable to keep using Python if the project becomes more complicated.

user-065afe 23 June, 2025, 10:29:14

Yes, I invited you to our workspace. Thank you!

user-f43a29 23 June, 2025, 10:48:06

Thanks, @user-065afe , I've taken a look and sent you a DM to clarify one part. From what I can see, the Video Renderer has run successfully and the videos with gaze overlay are ready to download. The Video Renderer can take a variable amount of time to complete, depending on the number and length of recordings, so when the button was still grey, it was probably still processing. For future reference, the processing status can be seen in the general Visualizations tab.

Since you mentioned "seeing fixations" in your original request, please note that you also need to activate Show fixations before running Video Renderer, as shown in the attached image.

Chat image

user-c29368 24 June, 2025, 00:59:54

Hello,

I'm currently attempting to stream the video feed from the Neon glasses over long distances between two devices. The application I'm working on already supports local communication using the Neon API.

I was able to use ZeroTier, a service that allows long-distance communication between devices, to assign new IP addresses to the phone running the Neon Companion app and to the laptop running my application. With these new addresses, the two can stream data to each other over the local HTTP port 8080 as long as they are on the same network. This falls apart, however, when the devices are on two separate networks: they are then unable to connect to each other. My concern is that the Neon Companion app only serves connection requests on this specific local port, and a laptop on a different network cannot reach that port, which is required when using the Neon API to set up a device manually without the discover_devices function.

Does anyone know if it is possible to allow this local port to be accessed remotely, and if not, is there any known alternative way to go about this that is supported by Pupil Labs?
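For reference, here is roughly how I make the manual connection on the laptop side (a sketch; the address is a placeholder for the ZeroTier-assigned IP, using the simple API's direct constructor instead of discover_devices):

```python
# Sketch: connect directly to a known address instead of discovering devices.
# The address below is a placeholder for the ZeroTier-assigned IP.
from pupil_labs.realtime_api.simple import Device

device = Device(address="10.147.20.15", port="8080")  # placeholder
print(device.phone_name)  # works on the same network; fails across networks
device.close()
```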

user-d407c1 24 June, 2025, 09:17:16

Hi @user-c29368 ! May I ask, is ZeroTier something like a VPN?

user-c29368 24 June, 2025, 11:45:09

It's pretty similar to one, except it doesn't rely on a central server; it connects the devices directly, peer-to-peer.

user-5ab4f5 25 June, 2025, 06:22:54

Hi guys, out of necessity I am using one script with two different eye trackers: a Neon and an Invisible. I am tracking the eye movements of two people playing the piano.

Device Serial for Neon: 321970 (Phone Name: Neon Companion - Companion Device: rtwo_g)

Device Serial for Invisible: qc49m (One Plus 8T)

My Python script connects to each eye tracker via port 8080 (for both) and their different IP addresses. Pressing space starts the recording, and pressing space again stops it. In some cases I have no problems. In other cases, the Neon does not stop and I get the error AttributeError: 'tuple' object has no attribute 'tb_frame' from device2.recording_stop_and_save(). I then need to stop the recording manually by going to the phone and pressing stop. This is not only inconvenient but problematic, since it disrupts the experimental process.

Another issue that happens very often is that the Invisible simply does not record. I sometimes get a "400 already recording" in Python, so the script is not throwing an error. I opened the real-time API monitor for both the Invisible and the Neon (port 8080). If I manually click record for the Invisible, it repeats "already running!", but on the phone there is no sign of a recording. I then have to restart the program. For every run I restart the kernel to be sure. It worked well yesterday, for example, but today's recordings were absolutely horrendous. I am not sure how much is my script's fault and how much is a network issue or a network card issue on the laptop running the script. I'll send you my script attached.

Music_execute_2tracker.ipynb

user-f43a29 25 June, 2025, 06:59:10

Hi @user-5ab4f5 , the error:

AttributeError: 'tuple' object has no attribute 'tb_frame' 

appears to come from IPython, so from the process related to the Jupyter notebook.

Would it be possible to transfer your script to a standard .py Python script and see if that improves things?

Regarding the Pupil Invisible message, that does not seem to be an error so much as a likely result of stopping the script manually without fully closing the connection to the Pupil Invisible (device.close()). This can leave the device in a state where it reports 400 already recording when you then try to start a new recording.
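As a rough sketch of the pattern (device addresses are placeholders, and I've substituted a plain Enter prompt for your space-key handling), making sure both connections are always closed, even when the script errors or is interrupted, looks like this:

```python
# Sketch: always stop recordings and close both connections, even on errors
# or Ctrl+C, so a device is not left in an "already recording" state.
from pupil_labs.realtime_api.simple import Device

neon = Device(address="192.168.1.10", port="8080")       # placeholder address
invisible = Device(address="192.168.1.11", port="8080")  # placeholder address

try:
    neon.recording_start()
    invisible.recording_start()
    input("Recording... press Enter to stop.")
finally:
    for device in (neon, invisible):
        try:
            device.recording_stop_and_save()
        except Exception as exc:
            print(f"Could not stop recording cleanly: {exc}")
        finally:
            device.close()
```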

user-5ab4f5 25 June, 2025, 07:11:56

Thank you! I will try it out

user-5ab4f5 25 June, 2025, 09:26:24

Oddly, this issue occurred again. Less often than before, but I got the "already running" message

user-5ab4f5 25 June, 2025, 09:26:30

for the Invisible only, however

End of June archive