Using the ROI and Algorithm mode to get less noisy, more stable eye data
Hello, is it possible to get the Core in separate parts to attach to sports equipment? I would like to mount the eye cameras to a helmet instead of having subjects wear the helmet with my Neon underneath. Also, does the Core output a similar file type to the Neon, so that gaze can be mapped onto a separate POV camera using your script? Thanks
Hi @user-f0ea5b! We can provide a disassembled Pupil Core; please write an email to [email removed] with that query and they will be able to provide you with a quote.
That said, I wouldn't recommend it for sports usage. Pupil Core needs to be tethered to a laptop or single-board computer (SBC); it's not as robust against outdoor environments or slippage as Neon; it also lacks sensors like the IMU (which Neon uses in its fixation detector to account for head movements); and it requires calibration.
Mapping gaze to an egocentric camera is technically possible with Pupil Core too, but the existing code was developed specifically for Neon.
Could you share a bit more about your use case? My impression is that Neon might be a better overall fit, and we might have a frame for your needs:
- The Ready-Set-Go frame can be worn under a helmet
- The Bare Metal frame allows for custom mounting
Alternatively, if you need different camera positioning and plan to go beyond prototyping, our integration page may be worth a look
This way, we can better guide you toward the setup that balances flexibility with reliability.
Also, does the Core only run with a laptop, or can it run on the Neon Companion app?
Hello @user-d407c1. Tethering the unit to a laptop is not a problem. My lab at the university currently has a Neon that I have been using in my setup, but the 1.8 degrees of error is something I would like to reduce. Does the Core function under the principle that the eyes do not move relative to the cameras after calibration? If so, that would be the only problem I see, but it should not be too difficult to prevent slippage in my setup. I am operating in an environment identical to the paper by Kishita et al., 2020 (head and eye movements of elite baseball players), but I am using advances in dynamic ROI mapping to automate the measurement of gaze tracking error. It works fine with the Core and my setup, but the accuracy of 1.8 degrees will be the biggest bottleneck. Please let me know what you think.
With Neon, by performing the offset correction you can typically bring the general accuracy down to ~1.3°. Have you already tried that?
If your target is indeed tracking a baseball, I'd say the bigger bottleneck may not be accuracy but rather the scene camera frame rate and the size of your stimulus in the FOV. At 30 FPS, the ball at higher speeds may appear motion-blurred and too small to be tracked by traditional computer vision models, unless you're already remapping onto an external action camera.
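For a rough sense of those numbers, here is a back-of-the-envelope sketch in Python. The ball diameter, pitch speed, viewing distance, and camera FOV/resolution are illustrative assumptions, not values from your setup or from any specific camera:

```python
import math

# Illustrative assumptions (not measured values from this setup):
ball_diameter_m = 0.074   # regulation baseball is ~7.4 cm across
ball_speed_mps = 40.0     # ~90 mph pitch
distance_m = 10.0         # assumed eye-to-ball distance at some point of flight
fov_deg = 100.0           # assumed horizontal FOV of the scene camera
resolution_px = 1600      # assumed horizontal resolution
fps = 30.0                # scene camera frame rate

# Apparent angular size of the ball
ball_angle_deg = math.degrees(2 * math.atan(ball_diameter_m / (2 * distance_m)))

# Rough pixels-per-degree across the FOV (small-angle approximation)
px_per_deg = resolution_px / fov_deg

# Angular displacement between two consecutive frames
# (worst case: motion perpendicular to the line of sight)
travel_per_frame_m = ball_speed_mps / fps
travel_angle_deg = math.degrees(math.atan(travel_per_frame_m / distance_m))

print(f"Ball subtends ~{ball_angle_deg:.2f} deg (~{ball_angle_deg * px_per_deg:.0f} px)")
print(f"Ball moves ~{travel_angle_deg:.1f} deg (~{travel_angle_deg * px_per_deg:.0f} px) per frame at {fps:.0f} FPS")
print(f"Compare with a gaze accuracy of ~1.3-1.8 deg (~{1.3 * px_per_deg:.0f}-{1.8 * px_per_deg:.0f} px)")
```

With these assumed numbers, the ball spans only a handful of pixels but jumps several degrees per frame, which is why frame rate and apparent size can dominate over gaze accuracy.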
Regarding Pupil Core, like any other wearable eye tracker that requires calibration, it assumes a relatively constant relationship between the eyes and the scene camera. It can compensate slightly (thanks to pye3d), but it won't be as robust as NeonNet. Core can achieve better accuracy under highly controlled conditions, but those are usually difficult to achieve and maintain in dynamic setups like yours.
Side note: Kishita et al., 2020 seem to have used a regular Pupil Core and OptiTrack to record head movements.
I replaced the scene camera with a GoPro, so the YOLO model is pretty effective at detecting the baseball. I haven't done the constant offset yet, so I'll try it out. Is there any information on how much slippage the pye3d approach can compensate for?
Great to hear the YOLO model works well with the GoPro! Would you be able to share some of the footage?
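As an aside, for anyone following along, a detection loop like the one described above might look roughly like the sketch below. This assumes the ultralytics package; the weights file and video path are placeholders, not the poster's actual pipeline:

```python
# Minimal sketch of running a YOLO detector over action-camera footage.
# The weights path, video path, and confidence threshold are placeholders.
import cv2
from ultralytics import YOLO

model = YOLO("baseball_finetuned.pt")      # hypothetical fine-tuned weights
cap = cv2.VideoCapture("gopro_clip.mp4")   # hypothetical GoPro recording

detections = []  # (frame_index, x_center, y_center, confidence)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, conf=0.25, verbose=False)
    for box in results[0].boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        detections.append((frame_idx, (x1 + x2) / 2, (y1 + y2) / 2, float(box.conf[0])))
    frame_idx += 1

cap.release()
print(f"Found {len(detections)} ball detections")
```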
I wanted to chime in on the Core for baseball topic, having used Core quite a bit over the years.
It's true that under ideal conditions, Core can be calibrated to within 0.6 degrees. However, that's using the 2D pipeline, and the results would likely become meaningless for baseball since there's no slippage compensation.
In practice, the 3D calibration pipeline (with slippage compensation) gives a more realistic accuracy of 1.5 to 2.5 degrees. It would still degrade over time with the dynamic movements of batting, so repeated calibrations would be necessary. You can read more about Core's accuracy in this section of the docs.
Using Neon, on the other hand, already gets you a better gaze accuracy of 1.3 degrees, and this accuracy will persist throughout your entire batting session.
On top of that, you'd have to rewrite the egocentric video mapper Alpha Lab tool to be compatible with Core recordings. This seems like a lot of effort for no real gain.
If you're still keen on testing the Core pipeline, you could try running Neon with it using this Alpha Lab guide.
Hello Neil, thanks for the pointers. It's always great to talk to people who share excitement for eye tracking. What would be the best way to send you the videos? I spent several months coming up with a protocol to get the recordings and YOLO detections, so I'm a bit hesitant to just show them to anyone.
No problem! If you don't want to share them publicly, you could always send them via DM
Hi, I am working with Pupil Core to get the eye's 3D pose. When I use Pupil Capture to access it, the gaze vector appears to be highly dependent on the eye camera's position and orientation. How can I access the true 3D eye pose in its own eyeball coordinate system instead of the camera's coordinate system?
Hi @user-ff00f7. If you're working with Pupil data, these are indeed relative to the eye camera coordinate system. I'm not 100% sure what you mean by the eyeball's coordinate system. Perhaps you could elaborate. In any case, you can read more about Core's coordinate systems in this section of the docs, and you might wish to look at a description of all the data streams here if you haven't already.
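If it helps, here is a minimal sketch of reading the 3D eye-model data over Pupil Capture's Network API. The port and field names follow the documented defaults, but please treat it as an illustration and verify against your Capture version:

```python
# Minimal sketch of subscribing to 3D pupil data from Pupil Capture's Network API
# (default Pupil Remote port 50020). Field names follow the documented pupil datum
# format; verify against your Capture version.
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to pupil data from both eyes
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.unpackb(payload)
    if "circle_3d" not in datum:
        continue  # 2D-only datum, no eye model
    # These values are expressed in the *eye camera's* 3D coordinate system (mm):
    eyeball_center = datum["sphere"]["center"]   # modelled eyeball centre
    optical_axis = datum["circle_3d"]["normal"]  # unit vector along the optical axis
    print(topic.decode(), eyeball_center, optical_axis, datum["confidence"])
```

Note that these values are relative to the eye camera; if you want gaze expressed in the scene (world) camera coordinate system instead, you would subscribe to the gaze topic and use its 3D gaze fields, as described in the data streams documentation.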
Hi, why do camera modules that are UVC-compatible end up not working in Pupil Capture, especially at higher resolutions (e.g., 1920 x 1080)? The app keeps throwing an error stating there are no camera intrinsics available.
Hi @user-4a1dfb , to clarify, are you not seeing any video feed at all in Pupil Capture?
The missing camera intrinsics message is not so much an error as a diagnostic warning. Pupil Capture will fall back to default assumed intrinsics in that case. Just note that this will have an effect on data quality, so you will probably want to run the Camera Intrinsics Estimation plugin first.
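For context, camera intrinsics are essentially a camera matrix plus distortion coefficients. The sketch below shows a generic OpenCV checkerboard calibration that produces those quantities; it is not the Capture plugin's implementation (which has its own patterns and workflow), and the pattern size and image paths are placeholders:

```python
# Generic OpenCV sketch of what intrinsics estimation produces
# (camera matrix + distortion coefficients). Not the Pupil Capture plugin's
# implementation; checkerboard size and image folder are placeholders.
import glob
import cv2
import numpy as np

pattern_size = (9, 6)     # inner corners of an assumed checkerboard
square_size_mm = 25.0     # assumed square edge length

# 3D reference points of the checkerboard corners (z = 0 plane)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size_mm

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calibration_images/*.png"):  # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None
)
print("RMS reprojection error:", rms)
print("Camera matrix:\n", camera_matrix)
print("Distortion coefficients:", dist_coeffs.ravel())
```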
May I also ask how you are using the device in your custom scripting situation? Do you mean change the source code of Pupil Capture to support other camera types?
Additionally, is it possible to script in our own camera solution or not?
Hello everyone. I am currently performing data collection for a study with the Pupil Core system. I observe that in some subjects, calibration and data collection run perfectly, while in others, no matter what I try, the calibration does not improve. I tried instructing the subjects to move their eyes, which usually narrows down the blue circle around the eye but often does not help with the calibration, and I tried re-adjusting the glasses and cameras as well as the screen... does anyone have any advice for me? In those subjects, I observe up to 70% NaNs. In the subjects that calibrate nicely, it's 4-6%, which is totally fine.
Hi @user-138bf5, would you be able to share an example recording of poor data quality with [email removed]? Then, we can provide more direct feedback. You can share it via Google Drive, for example.
@user-f43a29 thank you guys again for the chat today at ECVP. And in general, thanks for the active support here on Discord!
Nice meeting you, @user-84387e! Good luck with your project!