Hi there, I have used pyuvc to record egocentric and eye videos from the Pupil Core glasses. The recordings are simply the 3 videos, and I have saved the original frame timestamps of each camera. Is there any way to now run the Pupil Labs gaze-tracking algorithms on these videos? I need to get the 2D normalized gaze positions. I was thinking of loading them into Pupil Player and doing the post-hoc calibration, but since they were not recorded with Pupil Capture, I was not sure if I can. Thank you in advance!
Hi @user-0b1050 👋 ! You won't be able to load them into Pupil Player as-is. You could modify Capture to trick it, but perhaps the easiest way would be for you to try https://github.com/pupil-labs/pupil-detectors
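For example, here is a minimal sketch based on the example in the pupil-detectors README, assuming your eye videos are plain video files (the filename `eye0.mp4` is just a placeholder for one of your recordings):

```python
import cv2
from pupil_detectors import Detector2D

detector = Detector2D()
cap = cv2.VideoCapture("eye0.mp4")  # placeholder name for one eye video

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # The detector expects a grayscale uint8 image.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    result = detector.detect(gray)
    # result["confidence"] is in [0, 1]; result["ellipse"]["center"]
    # is the detected pupil center in eye-image pixel coordinates.
    print(result["confidence"], result["ellipse"]["center"])

cap.release()
```

You would run this once per eye video and pair each detection with the frame timestamps you saved.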
Thank you @user-d407c1! Just to make sure I understand: this pupil detector will provide me with the detected pupils, but I would still need a gaze mapper to map the detections from the two eyes to a final 2D coordinate point. Is this gaze-estimation part of the algorithm also available?
Yes, the whole codebase of Pupil Capture and Player is open source. You can modify it to detect the calibration markers on the scene camera, build the calibration, and use a gaze mapper: https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_modules/gaze_mapping
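One detail worth knowing if you go that route: Pupil works in normalized image coordinates, with the origin at the bottom left and y pointing up, so the y axis is flipped relative to pixel coordinates. A small sketch of that convention (the function below is my own illustration, mirroring what the pupil repo's normalize helper does):

```python
def normalize(pos_px, frame_size, flip_y=True):
    """Convert pixel coordinates to Pupil-style normalized coordinates.

    Pupil's normalized space runs from (0, 0) at the bottom left to
    (1, 1) at the top right, so y is flipped relative to image pixels.
    """
    x, y = pos_px
    width, height = frame_size
    x = x / width
    y = y / height
    if flip_y:
        y = 1.0 - y
    return x, y

# e.g. a pupil center detected at (96.0, 48.0) in a 192x192 eye image:
print(normalize((96.0, 48.0), (192, 192)))  # -> (0.5, 0.75)
```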
But at that point, I wonder whether you would not rather collect the data with Capture in the first place.
Thanks! We were running a multimodal data collection for a research study and we didn't have enough power on the wearable PC to run Capture, so we decided to just record the videos with pyuvc. Now I am trying to figure out the easiest way to extract the gaze coordinates from those. You also mentioned modifying Capture to trick it. Could you elaborate on that process?
You can fake the video cameras using the ones recorded. It's involved, as you would need to modify the code to allow it, but it is possible. The only concern here is whether you collected the data with proper timestamps and sync.
If resources are a concern, you can disable all but the minimal plugins in Pupil Capture, or you can stream to a separate computer running Pupil Capture.
Thank you, although I'm afraid it is too late for this now. I think I will try to use the gaze_mapping code to process the videos. It just didn't look straightforward, so I was wondering if there is an easier approach.
Hi @user-0b1050 👋. If you haven't seen it already, this section of the docs details the files Pupil Player will expect, if you want to load those videos into it.
Thank you @user-4c21e5! Indeed, I think I found a way to load my videos into Pupil Player by creating all the required files.
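In case it helps others, here is roughly what that looked like for me. This is a sketch, not the exact format spec: the folder name, timestamp file, and field values are illustrative, and the authoritative list of required files and fields is in the docs section linked above.

```python
import json
import os
import uuid
import numpy as np

# Target layout (per the recording-format docs):
#   recording/
#     world.mp4, world_timestamps.npy   scene video + float64 timestamps
#     eye0.mp4,  eye0_timestamps.npy
#     eye1.mp4,  eye1_timestamps.npy
#     info.player.json
os.makedirs("recording", exist_ok=True)

# Convert the timestamps saved during the pyuvc recording
# (assumed here to be a plain text file, one timestamp per line).
ts = np.loadtxt("world_timestamps.txt")
np.save("recording/world_timestamps.npy", ts.astype(np.float64))

info = {
    "duration_s": float(ts[-1] - ts[0]),
    "meta_version": "2.3",                  # illustrative values below
    "min_player_version": "2.0",
    "recording_name": "pyuvc_import",
    "recording_software_name": "Pupil Capture",
    "recording_software_version": "3.5.0",
    "recording_uuid": str(uuid.uuid4()),
    "start_time_synced_s": float(ts[0]),
    "start_time_system_s": float(ts[0]),
    "system_info": "custom pyuvc rig",
}
with open("recording/info.player.json", "w") as f:
    json.dump(info, f, indent=4)
```

Do the same for each eye camera's timestamps, then drag the folder onto the Pupil Player window.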
Is it not possible to pull in info.json and labels.csv through this endpoint? I'm getting 404 for both
Hi @user-3ff3c1 , the team reports that labels.csv is not available through this endpoint, but info.json should be. Could you open a Support Ticket in 🛟 troubleshooting?
I will check with the team about that.
Hi, I am developing an app with the goal of automating pulling recorded videos from the app to a research server. Is it possible to do this from outside the Pupil Labs app? Would there be any restrictions I need to be aware of?
Hi @user-bbbd3b , what kind of restrictions might you be referring to? The data are open and saved as normal files on the Android phone's hard drive, and our principle is that your data is your data. We already offer an export option for saving the data locally, if you want to save development effort.
You can work with them locally with either Neon Player or our Python library, pl-neon-recording.
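For reference, a minimal sketch of reading an exported recording with pl-neon-recording. The path is a placeholder, and the exact entry point and stream attributes may differ by library version, so please check the project's README for the current API:

```python
import pupil_labs.neon_recording as nr

# Load an exported recording folder (path is a placeholder).
# Note: newer library versions may use a different entry point
# than nr.load; the README documents the current one.
recording = nr.load("path/to/exported/recording")

# The recording exposes the individual data streams, e.g. gaze
# samples with their timestamps.
for sample in recording.gaze:
    print(sample)
```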
These solutions still seem to require going into the app and hitting export. Is it possible to get access to the files as soon as they are done being recorded, while they are still in the app, so a user doesn't have to hit export?
Hi @user-bbbd3b! Hitting the export button is currently required. However, you can select multiple recordings and do a batch export to speed things up.