Hi. Fairly new to eye tracking systems. I'm looking for a mobile setup that can be deployed on a host like, e.g., an NVIDIA Jetson. Do Pupil Labs solutions benefit from GPUs in the first place? Any pointers to documentation that shows the hardware requirements for reliable eye tracking in such a use case?
Hi @user-7ba095 , are you looking to use our original eyetracker, Pupil Core, or our latest deep-learning powered eyetracker, Neon?
@user-f43a29 Haven't made a decision on products yet, to be honest. I would like to understand the whole system, pros/cons, beforehand.
Then, I would recommend considering Neon. Pupil Core is our original eyetracker, released 10 years ago, and requires calibration. Neon is calibration-free and headset slippage resistant.
Neon takes everything we learned from Pupil Core and improves on it in every way; it is also significantly easier to use and offers more data streams. You can see more details in these 2 messages:
If you are looking to integrate Neon with your own systems, then please see the Integrators page, which contains some of the details you are interested in.
Otherwise, another powerful feature of Neon is that it is designed to be mobile & modular. By default it attaches to an Android smartphone that we provide, and you can mount it in a variety of pre-made frames that we offer or build your own, since we have open-sourced the module geometry. You can then stream all of the data in real time to a client, such as your Nvidia Jetson.
A deep-learning-powered eye tracker definitely sounds like something that could benefit from a host like an NVIDIA Jetson?
Ok, thanks, will have a look at all that info. Regarding Neon: where is the eye tracking logic processed? On the Android phone, I assume?
Yes, NeonNet runs on the Android phone; i.e., the Companion Device.
@user-f43a29 Regarding "Integration -> Our sensors, your hardware": what are the possibilities / specs for "receive a NeonNet library optimized for your SOC"? What kind of hardware are we talking about?
If you are interested in that option, then please use the Schedule a call button at that link.
Hi @user-7ba095 , that is listed at the top right of that page under Platform compatibility:
Regarding: "Adverse lighting NeonNet is invariant to lighting conditions, ensuring accurate tracking indoors, outdoors, and in low-light environments." are you using IR illumination or some kind of "special" sensor? As far as I understand pupil labs core does use IR leds, but I don't see any such leds with neon platform
Hi @user-7ba095 ! There are IR illuminators, you can see their location in this overview.
Hi, I am trying to connect to Neon gaze tracking via the Python Simple API.
I am struggling to connect to the glasses. Both my computer (which runs the Python script) and the phone with Neon Companion are connected to the same local WiFi router (I've double-checked). Edit: the gaze tracking device is also connected to the phone and running well (I've checked via the Companion app).
Are there known issues?
This line doesn't discover anything: device = discover_one_device(max_search_duration_seconds=20)
Hi @user-741071 , can you try using the IP address listed in the Companion app? Then, you instead want to use the Device constructor, imported with:
from pupil_labs.realtime_api.simple import Device
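A minimal sketch of connecting by IP (the address and port below are placeholders; use whatever the Companion app shows in its streaming section):

```python
from pupil_labs.realtime_api.simple import Device

# Placeholder address/port: copy the values shown in the Neon Companion app
device = Device(address="192.168.1.29", port="8080")

# Quick sanity check: fetch a single gaze sample
gaze = device.receive_gaze_datum()
print(gaze)

device.close()
```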
I'll try it. Is it known to be more stable?
It’s not so much about stability but rather that certain networks don’t always support mDNS and device auto-discovery correctly. You might need to check with the router’s settings or manufacturer about that.
I have additional information which might be useful:
Thanks. The test with the Device constructor, as listed above, will be more definitive, as it will display a proper error message, if it does not work.
With respect to access in the browser, what Operating System and browser are you using?
Linux, Firefox. But I guess it is more like a network connection problem. I'll try the Device constructor and check my router parameters.
Do you have any advice for the router parameters?
That will need to be clarified with the router manufacturer, as there is so much variability between devices. At the least, you want these ports open locally and not blocked by a firewall:
Can you try with a Chromium based browser instead?
Chrome doesn't find it either. I don't actually have Chromium installed.
With the Device constructor:
File "/usr/lib/python3.10/asyncio/selector_events.py", line 541, in _sock_connect_cb
    raise OSError(err, f'Connect call failed {address}')
OSError: [Errno 113] Connect call failed ('192.168.1.29', 8080)
Are you potentially using a university or work WiFi? Can you also try with a phone hotspot as a test?
I'll try your advice: open the ports, and come back to you.
thanks for your help
It is a personal box (an Orange Livebox in France, not known to be open 😉).
I can use another phone as a router as well. Does that usually work?
I can’t speak for all phones, but it should.
It seems it doesn't work on mine. Damn.
Hi. I have a question regarding https://github.com/pupil-labs/pyuvc/tree/master/examples. Is there any example available on how to set UVC controls? Like the ones referenced in the Pupil Core code?
controls_dict["Auto Exposure Priority"].value = 0
except KeyError:
pass
try:
controls_dict["Auto Exposure Mode"].value = 1
except KeyError:
pass
try:
controls_dict["Saturation"].value = 0
except KeyError:
pass
try:
controls_dict["Absolute Exposure Time"].value = 63
except KeyError:
pass
try:
controls_dict["Backlight Compensation"].value = 2
except KeyError:
pass
try:
controls_dict["Gamma"].value = 100
except KeyError:
pass
I can iterate over controls, but I'm having a hard time setting any values, e.g. "Absolute Exposure Time" to 63.
Absolute Exposure Time was: 32 Could not set Value. 'Absolute Exposure Time'.
Hi @user-7ba095 , we can provide details on this, may I first ask what the end goal is?
The camera being used is a "Pupil Cam2 ID1".
The host platform is macOS.
@user-f43a29 I'm evaluating a number of eye tracking solutions, including yours (Core, Neon). The end goal in this case is being able to change controls of UVC cameras; I'm mainly testing with one of yours at the moment, but not exclusively.
Thanks, just to help, what is the purpose of changing the UVC controls from the default behaviour? Then, I can provide the best feedback & support.
@user-f43a29 it's really more of a technical question. While checking out your impressive products I found your pyuvc fork github repository and in the readme it said:
pyuvc: Python bindings for the Pupil Labs fork of libuvc with super fast JPEG decompression using libjpeg-turbo (utilizing the turbojpeg API).
- Cross-platform access to UVC capture devices
- Full access to all UVC settings (Zoom, Focus, Brightness, etc.)
- Full access to all stream and format parameters (rates, sizes, etc.)
- Enumerate all capture devices with device_list()
- A Capture instance will always grab MJPEG compressed frames from cameras
Ok, I ask, because not all UVC cameras respond to all UVC commands, including when used in certain combinations. For example, in the case of Pupil Core, Auto Exposure is controlled by a separate software routine running in the Pupil Capture software. See this message for a link to the relevant commit with more details: https://discord.com/channels/285728493612957698/446977689690177536/1356372095646437506
Since Pupil Capture already handles it automatically for you, it sounds rather like you are trying to implement a custom pipeline?
At this point I'm not thinking about implementing a custom pipeline; again, I'm evaluating some of your products. Mainly looking for a way to write UVC control settings. It would be quite useful.
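For reference, here is a minimal pyuvc sketch for writing control values (assumptions: the pyuvc fork's device_list/Capture/controls API as used in Pupil Capture, a Pupil camera attached, and that Absolute Exposure Time may only accept a write after Auto Exposure Mode is switched to manual, as in the quoted snippet above):

```python
import uvc

# Open the first attached Pupil camera (assumes one is connected)
dev = next(d for d in uvc.device_list() if "Pupil Cam" in d["name"])
cap = uvc.Capture(dev["uid"])

# Build a name -> control mapping, like Pupil Capture does
controls_dict = {c.display_name: c for c in cap.controls}

try:
    # Switching to manual exposure first may be required before
    # Absolute Exposure Time accepts a new value on these cameras
    controls_dict["Auto Exposure Mode"].value = 1
    controls_dict["Absolute Exposure Time"].value = 63
    print("Absolute Exposure Time is now:", controls_dict["Absolute Exposure Time"].value)
except KeyError as err:
    print("Control not available on this camera:", err)

cap.close()
```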
Hi , I’m a Master’s student working with the Pupil Labs Neon eye tracker. I’d like to track the gaze point on the Unity screen, but I’m not using a VR headset.
I’m currently facing an issue — I haven’t been able to match the gaze point correctly with the Unity screen coordinates.
Any advice or suggestions would be greatly appreciated!
Hi @user-400de7 👋 ! I moved your question to 💻 software-dev as it's not specifically related to 🤿 neon-xr . I have also replied to your email, but in a nutshell, and such that others can benefit from the discussion here:
🤿 neon-xr was designed with XR applications in mind, where the relationship between Neon's module and the display is fixed, like inside a VR headset. The library includes tools to receive gaze data in real time in Unity, and utilities to calibrate the mount so that you can map gaze onto Unity’s world coordinates accurately. Outside of XR, however, that kind of setup doesn’t hold. Since the wearer can move or tilt their head freely, the spatial relationship between Neon's scene camera and the screen isn’t fixed anymore. This means the mount calibration routine won’t work, and there aren’t any ready-made prefabs or scripts in the XR library for that kind of workflow.
What we proposed for your use case was, similar to what was discussed here (https://discord.com/channels/285728493612957698/446977689690177536/1248580630430875661), to place Apriltag markers on your screen corners, stream gaze data and the scene camera video to your computer and remap gaze coordinates to screen coordinates, much like our Python library https://github.com/pupil-labs/real-time-screen-gaze or the surface mapper does.
If you were to do this in Unity entirely, you could use NeonXR to receive gaze in scene camera coordinates, but you would need to implement streaming of scene camera video and mapping to surface yourself, as these are not included in NeonXR.
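For the Python-side version of that workflow, here is a rough sketch following the real-time-screen-gaze README (class and attribute names are taken from that README and should be verified against the version you install; the marker pixel coordinates and screen size below are placeholders):

```python
from pupil_labs.realtime_api.simple import discover_one_device
from pupil_labs.real_time_screen_gaze.gaze_mapper import GazeMapper

device = discover_one_device()
calibration = device.get_calibration()  # scene camera intrinsics from Neon
gaze_mapper = GazeMapper(calibration)

# AprilTag corner positions in screen pixels, one entry per marker id
marker_verts = {
    0: [(32, 32), (96, 32), (96, 96), (32, 96)],           # top-left marker
    1: [(1824, 32), (1888, 32), (1888, 96), (1824, 96)],   # top-right marker
    # ... one marker per screen corner
}
screen_size = (1920, 1080)
screen = gaze_mapper.add_surface(marker_verts, screen_size)

while True:
    frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
    result = gaze_mapper.process_frame(frame, gaze)
    for surface_gaze in result.mapped_gaze[screen.uid]:
        # Gaze mapped into the screen surface's coordinate system
        print(surface_gaze.x, surface_gaze.y)
```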
Hey, I'm kind of a newbie and have been lurking on the forums. I'm considering purchasing your Neon glasses, but I was wondering if the above design is available in your Alpha Labs or if I would need to create my own version using the real-time API? Any chance this is built in Unity, or is it using Three.js? (Thought this feature was cool AF and want to try it out)
Hi @user-c0d491 👋. These visualisations are built on our real-time API. Aside from the scene video on the left, they're not publicly available. We do have an Alpha Lab in the works that will show users how to generate similar visualisations using Python tools, though. I expect that to be out in the next couple of weeks 🙂
Hi, I have a setup controlling a Pupil Core arm from a low-powered, offline machine, using the Pupil Labs Python library to do so. I am simply capturing .mp4 videos with this camera. I was wondering if there is any way I could use Pupil Player (or something equivalent) to do eye feature tracking post hoc instead of live? Then, I would like to export these features to another file. Ideally, I would also like to control this process programmatically. Naturally, the files I have do not fit the format to be dragged into Pupil Player, but I am open to any equivalents or any solutions I can use to make it play nicely.
Hi @user-ffc425 , that would require replicating Pupil Capture's file format outputs & timestamp conventions. You can inspect a standard Pupil Capture recording to see how those are structured, or even inspect the source code of the Pupil Player software, to see how they are loaded, if you want to go this route.
I am not exactly sure what is meant by "export the features to another file" or "control the process programmatically", but you can also try applying our Python library, pye3d, directly to your videos and see if that is sufficient for your use case. For example, this script.
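A minimal sketch of that route, running a 2D pupil detector and pye3d over an exported eye video (the focal length, resolution, and fallback frame rate below are placeholders; substitute your eye camera's actual values):

```python
import cv2
from pupil_detectors import Detector2D
from pye3d.detector_3d import CameraModel, Detector3D, DetectorMode

# Placeholder intrinsics: use your eye camera's real focal length and resolution
camera = CameraModel(focal_length=140.0, resolution=(192, 192))
detector_2d = Detector2D()
detector_3d = Detector3D(camera=camera, long_term_mode=DetectorMode.blocking)

cap = cv2.VideoCapture("eye_video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 200.0  # fall back to an assumed frame rate
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    result_2d = detector_2d.detect(gray)
    result_2d["timestamp"] = frame_idx / fps  # pye3d expects a timestamp per datum
    result_3d = detector_3d.update_and_detect(result_2d, gray)

    # result_3d holds the fitted pupil/eye model, e.g. confidence and 3D diameter
    print(frame_idx, result_3d["confidence"], result_3d["diameter_3d"])
    frame_idx += 1

cap.release()
```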
@user-f43a29 One follow up. This was a great help and got me to my next point. Is there any way using pupil labs code to get eye opening / eyelid angle? And any way to identify glint of the IR LED on the eye?
In principle, this can be done, but we only provide eye openness measures for Neon.
Identifying the glint is technically possible as well, but it will require looking at the literature and third-party open-source implementations to see how others have achieved that.
Okay! Thank you so much for your fast response
No problem. Perhaps something like a blob or circle detector, after passing the image through a binary threshold stage, would be a start, but I am unable to provide more dedicated assistance on implementing glint detection.
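Something along those lines, as a rough OpenCV sketch (not a Pupil Labs implementation; the threshold and blob-size limits are assumptions to tune against your IR eye images):

```python
import cv2

def detect_glints(eye_gray, threshold=230, min_area=2, max_area=80):
    """Find bright specular reflections (glints) in a grayscale eye image."""
    # Glints are near-saturated, so a high binary threshold isolates them
    _, binary = cv2.threshold(eye_gray, threshold, 255, cv2.THRESH_BINARY)

    # Connected components act as a simple blob detector
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    glints = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:
            glints.append(tuple(centroids[i]))  # (x, y) in pixels
    return glints
```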
Hi, I've got a question regarding the eye gaze module cameras. We noticed, after analyzing recordings we made, that quite a few dropped frames are reported during a 20-minute video. Is this normal? And would this interfere with the gaze data that is being calculated from the cameras?
Hello, @user-23177e! Can you share some more details about the frame drops? Are you referring to gaze data?
Yes, referring to the gaze data camera recordings. Running an analysis on the recorded gaze data video with ffprobe reports several dropped frames throughout the recording. If this is indeed correct, is this being logged by the companion app somewhere with the gaze data?
@nmt this is a report made from an analysis with ffprobe
@user-23177e - moving to this channel. This doesn't look like a standard ffprobe output. Can you please elaborate your methodology?
I've created a script that includes ffprobe for reporting:
"ffprobe", "-v", "0", "-select_streams", "v:0", "-show_entries", "stream=r_frame_rate,avg_frame_rate,duration", "-of", "default=noprint_wrappers=1:nokey=1", video_file
Then I try to extract timestamps:
"ffprobe", "-v", "error", "-select_streams", "v:0", "-show_entries", "frame=pts_time", "-of", "csv=p=0", video_file
and that list of timestamps is then being analysed for valid intervals
Full script (with some help from ChatGPT):
If this is not a correct way to determine frame drops, please let me know.
Thanks for sharing. Using ffprobe is valid. However, it looks like you have some incorrect assumptions baked into the Python script. Briefly, the script calculates the time difference (interval) between each consecutive pair of frames and then analyses these intervals to find anomalies. The key assumption, I think, is that it determines the "normal" time between frames by calculating the median interval, then flags any interval that is significantly longer than expected (e.g. 1.5x longer) as indicating one or more missing frames. In practice, you might find a frame-to-frame variability of ~0.2 ms, so if some fast frames come in, the median will drop and valid frames will be incorrectly flagged as dropped. I think your logic should be adapted to account for this variability to yield a more accurate outcome.
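For example, a sketch along these lines (the nominal frame rate and tolerance are assumptions to adjust for your recording):

```python
import subprocess
import numpy as np

def count_dropped_frames(video_file, nominal_fps=200.0, tolerance=1.5):
    """Estimate dropped frames from per-frame timestamps, comparing gaps
    against a fixed nominal frame duration so small jitter does not skew
    the baseline."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pts_time", "-of", "csv=p=0", video_file],
        capture_output=True, text=True, check=True,
    ).stdout
    pts = np.array([float(t) for t in out.split() if t.strip() and t.strip() != "N/A"])
    intervals = np.diff(pts)

    nominal = 1.0 / nominal_fps
    long_gaps = intervals[intervals > tolerance * nominal]
    # Each long gap hides roughly (gap / nominal - 1) frames
    dropped = int(np.round(long_gaps / nominal - 1).sum())
    return dropped, len(pts)
```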
Hi, sorry, I am trying to run the different scripts in the pyflux project. However, I am not able to determine which library the ExportPoissonMesh function in the run_nerfstudio script is from. I assume it is from colmap. Installing colmap via the conda-forge command did not work, so I tried to install both pycolmap and colmap using vcpkg. Neither of them is working. Could you let me know how to fix this issue?
Edit: I managed to fix the import issues and run it. I'm facing other issues now, but thank you. It was from the nerfstudio folder.
@nmt After some adjustments to the interval logic, the number of dropped frames is reduced, but they are still apparent. For the same video, there are now 371 frames reported as dropped out of a total of 510384. Is this within expected margins for the gaze module cameras?
Can you please share your updated method?
Hi, @user-23177e! Yes, I think this is closer to an accurate measurement and is within expected tolerance.