💻 software-dev


user-7ba095 04 August, 2025, 10:42:56

Hi. Fairly new to eye tracking systems. I'm looking for a mobile setup that can be deployed on a host like, e.g., an NVIDIA Jetson. Do Pupil Labs solutions benefit from GPUs in the first place? Any pointers to documentation that shows the hardware requirements for reliable eye tracking in such a use case?

user-f43a29 04 August, 2025, 10:48:36

Hi @user-7ba095 , are you looking to use our original eyetracker, Pupil Core, or our latest deep-learning powered eyetracker, Neon?

user-7ba095 04 August, 2025, 10:51:27

@user-f43a29 Haven't made a decision on products yet, to be honest. I would like to understand the whole system and its pros/cons beforehand.

user-f43a29 04 August, 2025, 10:55:56

Then, I would recommend considering Neon. Pupil Core is our original eyetracker, released 10 years ago, and requires calibration. Neon is calibration-free and headset slippage resistant.

Neon takes everything we learned from Pupil Core and improves on it in every way; it is also significantly easier to use and offers more data streams. You can see more details in these 2 messages:

If you are looking to integrate Neon with your own systems, then please see the Integrators page, which contains some of the details you are interested in.

Otherwise, another powerful feature of Neon is that it is designed to be mobile & modular. It by default attaches to an Android smartphone that we provide and you can mount it in a variety of pre-made frames that we offer or build your own, since we open-sourced the module geometry. You can then stream all of the data in real-time to a client, such as your Nvidia Jetson.
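
As a rough illustration (not a verified integration), receiving the streamed data on such a client with our realtime_api Python package could look roughly like the sketch below. The field names follow the package documentation, but double-check them against the version you install.

from pupil_labs.realtime_api.simple import discover_one_device

# Sketch only: assumes the client (e.g. a Jetson) and the Companion phone share
# a network where mDNS auto-discovery works; otherwise connect by IP address.
device = discover_one_device()

# One scene camera frame together with the gaze sample matched to it in time.
frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
print("gaze (px):", gaze.x, gaze.y, "scene frame shape:", frame.bgr_pixels.shape)

device.close()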

user-7ba095 04 August, 2025, 10:52:24

A deep-learning-powered eye tracker definitely sounds like something that could benefit from a host like an NVIDIA Jetson?

user-7ba095 04 August, 2025, 11:05:08

Ok, thanks, I will have a look at all that info. Regarding Neon: where is the eye tracking logic processed? On the Android phone, I assume?

user-f43a29 04 August, 2025, 11:14:16

Yes, NeonNet runs on the Android phone; i.e., the Companion Device.

user-7ba095 04 August, 2025, 11:13:42

@user-f43a29 Regarding "Integration -> Our sensors, your hardware": what are the possibilities / specs for "receive a NeonNet library optimized for your SoC"? What kind of hardware are we talking about?

user-f43a29 04 August, 2025, 11:17:17

If you are interested in that option, then please use the Schedule a call button at that link.

user-f43a29 04 August, 2025, 11:14:55

Hi @user-7ba095 , that is listed at the top right of that page under Platform compatibility:

  • Qualcomm XR & Snapdragon, Nvidia GPU & Jetson
user-7ba095 04 August, 2025, 13:54:55

Regarding: "Adverse lighting NeonNet is invariant to lighting conditions, ensuring accurate tracking indoors, outdoors, and in low-light environments." are you using IR illumination or some kind of "special" sensor? As far as I understand pupil labs core does use IR leds, but I don't see any such leds with neon platform

user-d407c1 04 August, 2025, 13:59:51

Hi @user-7ba095 ! There are IR illuminators, you can see their location in this overview.

user-741071 04 August, 2025, 14:28:10

Hi, I am trying to connect to Neon gaze tracking via the Python Simple API.

user-741071 04 August, 2025, 14:29:21

I am struggling to connect to the glasses. Both my computer (which runs the Python script) and the phone with Neon Companion are connected to the same local WiFi router (I've double-checked). Edit: the gaze tracking device is connected to the phone and running well (I've checked via the Companion app).

user-741071 04 August, 2025, 14:29:29

Are there known issues?

user-741071 04 August, 2025, 14:30:39

This line doesn't discover anything: device = discover_one_device(max_search_duration_seconds=20)

user-f43a29 04 August, 2025, 14:47:14

Hi @user-741071 , can you try using the IP address listed in the Companion app? In that case, you will want to use the Device constructor instead, imported with:

from pupil_labs.realtime_api.simple import Device
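
A minimal sketch, assuming the default port 8080 and using an example IP address (replace it with the one shown in the Companion app):

from pupil_labs.realtime_api.simple import Device

# Example address; use the IP listed in the Companion app.
device = Device(address="192.168.1.29", port="8080")
print(device.phone_name)

gaze = device.receive_gaze_datum()
print(gaze.x, gaze.y, gaze.timestamp_unix_seconds)

device.close()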

user-741071 04 August, 2025, 14:47:55

I'll try it. Is it known to be more stable?

user-f43a29 04 August, 2025, 14:49:20

It's not so much about stability, but rather that certain networks don't always support mDNS and device auto-discovery correctly. You might need to check the router's settings or ask the manufacturer about that.

user-741071 04 August, 2025, 14:48:09

I have additional information which might be useful:

user-741071 04 August, 2025, 14:50:32
  • the information button (i) at the top of Neon Companion lists the glasses' serial number and other information
  • the connect panel shows the IP, but lists "0 Devices" as connected devices
  • I sometimes can't connect to the web page given_IP_address:8080 => connection failed
user-f43a29 04 August, 2025, 14:52:49

Thanks. The test with the Device constructor, as listed above, will be more definitive, as it will display a proper error message if it does not work.

With respect to access in the browser, what Operating System and browser are you using?

user-741071 04 August, 2025, 14:54:09

Linux, Firefox. But I guess it is more of a network connection problem. I'll try with the Device constructor and check my router parameters.

user-741071 04 August, 2025, 14:54:28

Do you have any advice on the router parameters?

user-f43a29 04 August, 2025, 14:58:52

That will need to be clarified with the router manufacturer, as there is so much variability between devices. At the least, you want these ports open locally and not blocked by a firewall:

  • TCP - 8080 and 8086
  • UDP - 5353
user-f43a29 04 August, 2025, 14:55:19

Can you try with a Chromium based browser instead?

user-741071 04 August, 2025, 14:56:47

Chrome doesn't find it either. I don't actually have Chromium installed.

user-741071 04 August, 2025, 15:00:49

With the Device constructor:

user-741071 04 August, 2025, 15:00:55

File "/usr/lib/python3.10/asyncio/selector_events.py", line 541, in _sock_connect_cb raise OSError(err, f'Connect call failed {address}') OSError: [Errno 113] Connect call failed ('192.168.1.29', 8080)

user-f43a29 04 August, 2025, 15:02:11

Are you potentially using a university or work WiFi? Can you also try with a phone hotspot as a test?

user-741071 04 August, 2025, 15:01:49

I'll try your advice: open the ports, and come back to you.

user-741071 04 August, 2025, 15:02:01

thanks for your help

user-741071 04 August, 2025, 15:07:57

It is a personal home router (an Orange Livebox in France, not known to be open 😉).

user-741071 04 August, 2025, 15:08:19

I can use another phone as a router as well. Does that usually work?

user-f43a29 04 August, 2025, 15:20:30

I can’t speak for all phones, but it should.

user-f43a29 04 August, 2025, 15:42:47

it seems it doesn't on mine. Damned. I

user-7ba095 06 August, 2025, 13:26:41

Hi. I have a question regarding https://github.com/pupil-labs/pyuvc/tree/master/examples: is there any example available on how to set UVC controls, like the ones referenced in the Pupil Core code?

    controls_dict["Auto Exposure Priority"].value = 0
except KeyError:
    pass

try:
    controls_dict["Auto Exposure Mode"].value = 1
except KeyError:
    pass

try:
    controls_dict["Saturation"].value = 0
except KeyError:
    pass

try:
    controls_dict["Absolute Exposure Time"].value = 63
except KeyError:
    pass

try:
    controls_dict["Backlight Compensation"].value = 2
except KeyError:
    pass

try:
    controls_dict["Gamma"].value = 100
except KeyError:
    pass

I can iterate over the controls, but I'm having a hard time setting any values, e.g. "Absolute Exposure Time" to 63.

Absolute Exposure Time was: 32 Could not set Value. 'Absolute Exposure Time'.

user-f43a29 06 August, 2025, 13:44:31

Hi @user-7ba095 , we can provide details on this; may I first ask what the end goal is?

user-7ba095 06 August, 2025, 13:28:08

camera being used is a "Pupil Cam2 ID1"

user-7ba095 06 August, 2025, 13:28:38

host platform is osx

user-7ba095 06 August, 2025, 13:47:58

@user-f43a29 I'm evaluating a number of eye tracking solutions, including yours (Core, Neon). The end goal in this case is being able to change the controls of UVC cameras; I'm mainly testing with one of yours at the moment, but not exclusively.

user-f43a29 06 August, 2025, 14:03:38

Thanks, just to help, what is the purpose of changing the UVC controls from the default behaviour? Then, I can provide the best feedback & support.

user-7ba095 06 August, 2025, 14:07:18

@user-f43a29 It's really more of a technical question. While checking out your impressive products I found your pyuvc fork on GitHub, and its README says:

pyuvc: Python bindings for the Pupil Labs fork of libuvc with super fast JPEG decompression using libjpeg-turbo (utilizing the turbojpeg API).

  • cross-platform access to UVC capture devices
  • full access to all UVC settings (Zoom, Focus, Brightness, etc.)
  • full access to all stream and format parameters (rates, sizes, etc.)
  • enumerate all capture devices with device_list()
  • Capture instances will always grab MJPEG-compressed frames from cameras

user-f43a29 06 August, 2025, 14:10:49

Ok, I ask, because not all UVC cameras respond to all UVC commands, including when used in certain combinations. For example, in the case of Pupil Core, the Auto Exposure is controlled by a separate software routine running in the Pupil Capture software. See this message for a link to the relevant commit with more details: https://discord.com/channels/285728493612957698/446977689690177536/1356372095646437506
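
In case it helps, here is a rough pyuvc sketch for enumerating and writing controls. It is a sketch under assumptions, not a verified recipe: control names vary per camera, the API surface can differ slightly between pyuvc versions, and exposure-related controls may be overridden by Pupil Capture's own auto-exposure routine if it is running at the same time.

import uvc  # pupil-labs pyuvc

devices = uvc.device_list()
cap = uvc.Capture(devices[0]["uid"])

# Some controls only take effect once a frame mode / stream is active.
cap.frame_mode = cap.available_modes[0]

controls_dict = {c.display_name: c for c in cap.controls}

# Manual exposure mode typically needs to be selected before the absolute
# exposure time will accept a new value.
try:
    controls_dict["Auto Exposure Mode"].value = 1
    controls_dict["Absolute Exposure Time"].value = 63
except KeyError as err:
    print("Control not offered by this camera:", err)

cap.close()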

user-f43a29 06 August, 2025, 14:11:49

Since Pupil Capture already handles it automatically for you, it sounds rather like you are trying to implement a custom pipeline?

user-7ba095 06 August, 2025, 14:15:49

At this point I'm not thinking about implementing a custom pipeline; again, I'm evaluating some of your products. I'm mainly looking for a way to write UVC control settings. It would be quite useful.

user-9d4c5e 07 August, 2025, 07:52:07

Hi , I’m a Master’s student working with the Pupil Labs Neon eye tracker. I’d like to track the gaze point on the Unity screen, but I’m not using a VR headset.

I’m currently facing an issue — I haven’t been able to match the gaze point correctly with the Unity screen coordinates.

Any advice or suggestions would be greatly appreciated!

user-d407c1 07 August, 2025, 08:13:24

Hi @user-400de7 👋 ! I moved your question to 💻 software-dev as it's not specifically related to 🤿 neon-xr . I have also replied to your email, but in a nutshell, and such that others can benefit from the discussion here:

🤿 neon-xr was designed with XR applications in mind, where the relationship between Neon's module and the display is fixed, like inside a VR headset. The library includes tools to receive gaze data in real time in Unity, and utilities to calibrate the mount so that you can map gaze onto Unity’s world coordinates accurately. Outside of XR, however, that kind of setup doesn’t hold. Since the wearer can move or tilt their head freely, the spatial relationship between Neon's scene camera and the screen isn’t fixed anymore. This means the mount calibration routine won’t work, and there aren’t any ready-made prefabs or scripts in the XR library for that kind of workflow.

What we proposed for your use case was, similar to what was discussed here (https://discord.com/channels/285728493612957698/446977689690177536/1248580630430875661), to place Apriltag markers on your screen corners, stream gaze data and the scene camera video to your computer and remap gaze coordinates to screen coordinates, much like our Python library https://github.com/pupil-labs/real-time-screen-gaze or the surface mapper does.

If you were to do this in Unity entirely, you could use NeonXR to receive gaze in scene camera coordinates, but you would need to implement streaming of scene camera video and mapping to surface yourself, as these are not included in NeonXR.
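
For reference, a rough outline of the marker-based approach with the real-time-screen-gaze library; the names follow its README as far as I recall, and the marker corner coordinates and screen size below are placeholders:

from pupil_labs.realtime_api.simple import discover_one_device
from pupil_labs.real_time_screen_gaze.gaze_mapper import GazeMapper

device = discover_one_device()
gaze_mapper = GazeMapper(device.get_calibration())

# Placeholder values: AprilTag marker IDs with their corner positions in
# screen pixels, plus the screen resolution.
marker_verts = {
    0: [(32, 32), (128, 32), (128, 128), (32, 128)],
    1: [(1792, 32), (1888, 32), (1888, 128), (1792, 128)],
}
screen = gaze_mapper.add_surface(marker_verts, (1920, 1080))

frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
result = gaze_mapper.process_frame(frame, gaze)
for surface_gaze in result.mapped_gaze[screen.uid]:
    print("gaze on screen:", surface_gaze.x, surface_gaze.y)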

user-c0d491 08 August, 2025, 23:07:50

Hey, I'm kind of a newbie and have been lurking on the forums. I'm considering purchasing your Neon glasses, but I was wondering if the above design is available in your Alpha Labs, or if I would need to create my own version using the real-time API? Any chance this is built in Unity, or is it using Three.js? (Thought this feature was cool AF and want to try it out)

Chat image

nmt 11 August, 2025, 04:34:57

Hi @user-c0d491 👋. These visualisations are built on our real-time API. Aside from the scene video on the left, they're not publicly available. We do have an Alpha Lab in the works that will show users how to generate similar visualisations using Python tools, though. I expect that to be out in the next couple of weeks 🙂

user-3bcb3f 19 August, 2025, 12:24:30

Choosing specific screen for calibration

user-ffc425 21 August, 2025, 17:32:33

Hi, I have a setup controlling a Pupil Labs Core arm from a low-powered, offline machine using the Pupil Labs Python library. I am simply capturing .mp4 videos with this camera. I was wondering if there is any way I could use Pupil Player (or something equivalent) to do eye feature tracking post hoc instead of live? Then, I would like to export these features to another file. Ideally, I would also like to control this process programmatically. Naturally, the files I have do not fit the format to be dragged into Pupil Player, but I am open to any equivalents or solutions that would make it play nicely.

user-f43a29 22 August, 2025, 08:39:30

Hi @user-ffc425 , that would require replicating Pupil Capture's file format outputs & timestamp conventions. You can inspect a standard Pupil Capture recording to see how those are structured, or even inspect the source code of the Pupil Player software to see how they are loaded, if you want to go this route.

I am not exactly sure what is meant by "export the features to another file" or "control the process programmatically", but you can also try applying our Python library, pye3d, directly to your videos and see if that is sufficient for your use case. For example, this script.
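
In rough terms, running pye3d post hoc on an eye video could look like the sketch below. The camera intrinsics, frame rate, and file path are placeholders, and the exact pye3d / pupil-detectors field names should be verified against their documentation:

import cv2
from pupil_detectors import Detector2D
from pye3d.detector_3d import CameraModel, Detector3D, DetectorMode

# Placeholder intrinsics; use your eye camera's actual focal length & resolution.
camera = CameraModel(focal_length=561.5, resolution=(400, 400))
detector_2d = Detector2D()
detector_3d = Detector3D(camera=camera, long_term_mode=DetectorMode.blocking)

video = cv2.VideoCapture("eye.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 120.0
frame_index = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 2D pupil detection feeds the 3D eye-model fit.
    datum_2d = detector_2d.detect(gray)
    datum_2d["timestamp"] = frame_index / fps
    datum_3d = detector_3d.update_and_detect(datum_2d, gray)
    # datum_3d contains e.g. "confidence", "diameter_3d", "circle_3d".

    frame_index += 1

video.release()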

user-ffc425 22 August, 2025, 17:00:50

@user-f43a29 One follow-up. This was a great help and got me to my next point. Is there any way, using Pupil Labs code, to get eye opening / eyelid angle? And any way to identify the glint of the IR LEDs on the eye?

user-f43a29 22 August, 2025, 17:03:32

In principle, this can be done, but we only provide eye openness measures for Neon.

Identifying the glint is technically also possible, but it will require looking at the literature and third-party open-source implementations to see how others have achieved that.

user-ffc425 22 August, 2025, 17:03:54

Okay! Thank you so much for your fast response

user-f43a29 22 August, 2025, 17:05:13

No problem. Perhaps something like a blob or circle detector, after passing the image through a binary threshold stage, would be a start, but I am unable to provide more dedicated assistance on implementing glint detection.
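
For what it is worth, a hedged OpenCV sketch of that idea; the threshold value and blob parameters are arbitrary starting points, not tuned values:

import cv2

gray = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)

# Glints are small, very bright specular reflections, so a high fixed
# threshold isolates them from the rest of the eye image.
_, binary = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)

# Blob detector tuned for small, bright, roughly circular regions.
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255
params.filterByArea = True
params.minArea = 2
params.maxArea = 200
params.filterByCircularity = True
params.minCircularity = 0.5
detector = cv2.SimpleBlobDetector_create(params)

for kp in detector.detect(binary):
    print(f"glint candidate at {kp.pt}, size {kp.size:.1f}")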

user-eef9c9 27 August, 2025, 13:24:06

Hi, I've got a question regarding the eye gaze module cameras. After analyzing recordings we made, we noticed quite a few dropped frames reported during a 20-minute video. Is this normal? And would this interfere with the gaze data that is being calculated from the cameras?

user-09b7e2 27 August, 2025, 13:24:19

Hello, @user-23177e! Can you share some more details about the frame drops? Are you referring to gaze data?

user-bd860d 27 August, 2025, 13:24:36

Yes, referring to the gaze data camera recordings. Running an analysis on the recorded gaze data video with ffprobe reports several dropped frames throughout the recording. If this is indeed correct, is this being logged by the companion app somewhere with the gaze data?

user-484d65 27 August, 2025, 13:24:54

@nmt this is a report made from an analysis with ffprobe

analysis-director_gaze040725.json

nmt 27 August, 2025, 13:26:05

@user-23177e - moving to this channel. This doesn't look like a standard ffprobe output. Can you please elaborate your methodology?

user-23177e 27 August, 2025, 14:22:05

I've created a script that includes ffprobe for reporting:

user-23177e 27 August, 2025, 14:22:06

"ffprobe", "-v", "0", "-select_streams", "v:0", "-show_entries", "stream=r_frame_rate,avg_frame_rate,duration", "-of", "default=noprint_wrappers=1:nokey=1", video_file

user-23177e 27 August, 2025, 14:22:39

Then I try to extract timestamps:

user-23177e 27 August, 2025, 14:22:40

"ffprobe", "-v", "error", "-select_streams", "v:0", "-show_entries", "frame=pts_time", "-of", "csv=p=0", video_file

user-23177e 27 August, 2025, 14:24:52

and that list of timestamps is then being analysed for valid intervals

user-23177e 27 August, 2025, 14:25:05

Full script (with some help from ChatGPT):

user-23177e 27 August, 2025, 14:26:14

if this is not a correct way to determine frame drops, please let me know

process_video-frame_analysis.py

nmt 27 August, 2025, 15:01:10

Thanks for sharing. Using ffprobe is valid. However, it looks like you have some incorrect assumptions baked into the Python script. Briefly, the script calculates the time difference (interval) between each consecutive frame. It then analyses these intervals to find anomalies. The key one I think is that it determines the "normal" time between frames by calculating the median interval. It then flags any interval that is significantly longer than the expected one (e.g. 1.5x longer), which indicates one or more frames are missing. Actually, you might find a frame-to-frame variability of ~0.2 ms. So if some fast frames come in, the median will drop and valid frames will be incorrectly flagged as dropped. I think your logic should be adapted to account for this variability to yield a more accurate outcome.
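
One possible adaptation, sketched here for illustration: compare each gap against the nominal frame period rather than the median interval. The nominal frame rate below is an assumption (e.g. 200 Hz for the eye cameras), and the 1.5x tolerance mirrors the original script.

import subprocess

def frame_timestamps(video_file):
    # Presentation timestamps (seconds) of every video frame, via ffprobe.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pts_time", "-of", "csv=p=0", video_file],
        capture_output=True, text=True, check=True,
    ).stdout
    return [float(line) for line in out.splitlines() if line and line != "N/A"]

def count_dropped_frames(timestamps, nominal_fps=200.0, tolerance=1.5):
    # Using the nominal frame period keeps ~0.2 ms frame-to-frame jitter from
    # shifting the baseline, unlike a median-based estimate.
    expected = 1.0 / nominal_fps
    dropped = 0
    for prev, curr in zip(timestamps, timestamps[1:]):
        gap = curr - prev
        if gap > tolerance * expected:
            # Estimate how many frames are missing inside this gap.
            dropped += round(gap / expected) - 1
    return dropped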

user-ebd8d5 27 August, 2025, 22:00:12

Hi, sorry. I am trying to run the different scripts in the pyflux project. However, I am not able to determine which library the ExportPoissonMesh function in the run_nerfstudio script comes from. I assume it is from colmap. Installing colmap with the conda-forge command did not work, so I tried to install both pycolmap and colmap using vcpkg. Neither of them is working. Could you let me know how to fix this issue?

user-ebd8d5 27 August, 2025, 22:24:15

Edit: I managed to fix the import issues and run it. I'm facing other issues now, but thank you. It was from the nerfstudio folder.

user-23177e 28 August, 2025, 09:00:21

@nmt After some adjustments to the interval logic, the number of dropped frames is reduced, but still apparent. For the same video, there are now 371 frames reported as dropped out of a total of 510384. Is this within expected margins for the gaze module cameras?

nmt 28 August, 2025, 09:54:15

Can you please share your updated method?

user-23177e 28 August, 2025, 10:44:00

process_video-frame_analysis_updated-framecalc.py

nmt 01 September, 2025, 10:33:28

Hi, @user-23177e! Yes, I think this is closer to an accurate measurement and is within the expected tolerance.

End of August archive