Hey @papr , we've noticed that the 3D model fit for eye0 (the right eye) is consistently worse for many subjects. We're using HMD data, and I'm not sure whether the same holds for data collected with the Core. This could be a camera-specific issue (aberration?) ...but it seems unlikely, because the 2D pupil segmentation looks great, and the issue persists even when we use a more reliable custom ML segmentation algorithm, like RITNet or EllSeg. I wonder, have you noticed this issue yourself?
Because I'm convinced it's not an issue with 2D pupil segmentation, I'm currently digging around in pye3d_plugin.py to see if any properties passed into pye3d are off.
To back the claim that the pupil fits are great: I believe the long-term model fit filters out 2D pupils with confidence < 0.8. Well, the average of pupil confidence values > 0.8 is 0.935 for eye0 and 0.925 for eye1.
This suggests that the input data is comparable or better for eye0.
Despite that, the model confidence is lower.
Calculating those vals now, if that would help...
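In case it's useful, here is a minimal sketch of how I'd reproduce those per-eye confidence averages from a Player export. The column names ("eye_id", "confidence") are the ones I expect in pupil_positions.csv, and the 0.8 cutoff mirrors the long-term model filter mentioned above; the small DataFrame stands in for pd.read_csv("pupil_positions.csv").

```python
import pandas as pd

def mean_high_confidence(df, threshold=0.8):
    """Mean 2D confidence per eye, counting only datums above threshold."""
    high = df[df["confidence"] > threshold]
    return high.groupby("eye_id")["confidence"].mean()

# Tiny synthetic example standing in for pd.read_csv("pupil_positions.csv")
df = pd.DataFrame({
    "eye_id": [0, 0, 0, 1, 1],
    "confidence": [0.95, 0.92, 0.5, 0.93, 0.91],
})
print(mean_high_confidence(df))
```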
@user-b14f98 The code does not differentiate between left and right eye camera. One reason I see might be different eye camera positions (relative to average viewing direction)
I know that fitting the eye model is more difficult if there are many pupil ellipses with high circularity
Right.
Perhaps I can check the aspect ratio.
Strange thing, papr. I just calculated the model confidence, and it's ...
0.1 for eye 0, and 0.99 for the other eye.
Getting data from pupil_positions.csv.
Let me try that again to make sure there isn't human error.
That's zany, though.
Yeah, that's the result.
In what code is this confidence value calculated?
Which pye3d version are you using?
I pulled the most recent version of pupil core about two days ago, and installed the latest dependencies, I believe.
Looking for version info
version_installed = getattr(pye3d, "__version__", "0.0.1")
version_supported = "0.3.0"
That seems old. I'm running off of a fresh conda env., and I used the Windows requirements.txt to install dependencies.
@user-b14f98 https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/detector_3d.py#L560-L573
Is that version number the latest, 0.0.1?
Latest is 0.3.0 https://github.com/pupil-labs/pye3d-detector/blob/master/CHANGES.rst#030-2021-11-10
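To double-check which pye3d pip actually installed (the getattr fallback in Pupil's plugin code only means "the module has no __version__ attribute", not that 0.0.1 is really installed), a quick sketch:

```python
import importlib.metadata

def installed_version(dist_name):
    """Return the installed version string, or None if not installed."""
    try:
        return importlib.metadata.version(dist_name)
    except importlib.metadata.PackageNotFoundError:
        return None

print(installed_version("pye3d"))
```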
Alright, I'll update.
Yep, pip says that's what I have
Strange that the code still has that line, "version_installed = getattr(pye3d, "__version__", "0.0.1")"
That code is pupil code and is not pye3d specific
in any case, I'll check the pupil aspect ratio
Well, there you go.
Have you guys ever speculated on a fix for this? Sadly, you can't update the model on earlier frames based on data from later in the recording
Usually, the recommendation is to have an explicit model fitting phase to sample ellipses with sufficiently different aspect ratios to fit a good model.
You can always write your own post-processing where you hand select the pupil data and fit a single model.
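A minimal sketch of that idea: hand-pick 2D datums whose ellipses span a wide range of aspect ratios, then feed only those into the model fit. Everything here (field names, bin counts, the select_fit_samples helper) is illustrative, and the actual refit with pye3d is left out because its API isn't shown in this thread.

```python
def select_fit_samples(datums, n_bins=10, per_bin=5):
    """Bucket datums by ellipse aspect ratio and keep a few per bucket,
    so the selected set spans circular and highly elliptical pupils."""
    bins = {}
    for d in datums:
        minor, major = sorted(d["ellipse_axes"])
        ratio = minor / major
        key = min(int(ratio * n_bins), n_bins - 1)
        bins.setdefault(key, [])
        if len(bins[key]) < per_bin:
            bins[key].append(d)
    return [d for bucket in bins.values() for d in bucket]

datums = [
    {"ellipse_axes": (20.0, 40.0)},  # ratio 0.5
    {"ellipse_axes": (30.0, 32.0)},  # ratio ~0.94
    {"ellipse_axes": (30.0, 31.0)},  # ratio ~0.97, same bucket as above
]
selected = select_fit_samples(datums, per_bin=1)
print(len(selected))  # one datum kept per occupied aspect-ratio bucket
```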
But I could double check that on the whiteboard
A test we could do is to swap the names of eye0's and eye1's files, so eye0 is processed as eye1 and vice versa. Then we could see whether it actually is a hardware issue
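The swap could be scripted along these lines. The file names follow the standard Pupil recording layout; extend the list if your recording has more per-eye files (e.g. intrinsics).

```python
import os

def swap_eyes(recording_dir, names=("eye{}.mp4", "eye{}_timestamps.npy")):
    """Swap eye0 and eye1 files in place via a temporary name."""
    for pattern in names:
        a = os.path.join(recording_dir, pattern.format(0))
        b = os.path.join(recording_dir, pattern.format(1))
        tmp = a + ".swap_tmp"
        os.rename(a, tmp)   # eye0 -> temp
        os.rename(b, a)     # eye1 -> eye0
        os.rename(tmp, b)   # temp -> eye1
```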
I would be very surprised if you didn't get the same results as before.
Do that and maybe the vertical flip, but I share papr's instinct here.
Just as a visualization, I would like to see all high confidence 2d ellipses rendered together, each with very slight alpha, into one frame.
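Something like this could produce that overlay: draw every high-confidence ellipse into a float accumulator with a small alpha, then clip. This sketch uses pure NumPy (parametric ellipse outlines) so it doesn't depend on OpenCV; with OpenCV you'd draw with cv2.ellipse onto a float image instead.

```python
import numpy as np

def accumulate_ellipses(ellipses, shape=(192, 192), alpha=0.05):
    """ellipses: iterable of (cx, cy, a, b, angle_deg) in pixel coords.
    Returns a [0, 1] float image where bright pixels are hit by many outlines."""
    canvas = np.zeros(shape, dtype=np.float64)
    t = np.linspace(0.0, 2.0 * np.pi, 360)
    for cx, cy, a, b, angle_deg in ellipses:
        phi = np.deg2rad(angle_deg)
        # Parametric points of a rotated ellipse
        x = cx + a * np.cos(t) * np.cos(phi) - b * np.sin(t) * np.sin(phi)
        y = cy + a * np.cos(t) * np.sin(phi) + b * np.sin(t) * np.cos(phi)
        xi = np.clip(np.round(x).astype(int), 0, shape[1] - 1)
        yi = np.clip(np.round(y).astype(int), 0, shape[0] - 1)
        canvas[yi, xi] += alpha
    return np.clip(canvas, 0.0, 1.0)

img = accumulate_ellipses([(96, 96, 40, 25, 0), (96, 96, 40, 25, 30)])
```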
Gotta run - being rude to company
@user-b14f98 @user-3cff0d You have been using the post-hoc hmd gazer branch extensively, correct? Can you confirm that it is working better than the default post-hoc gazer for VR recordings?
We can confirm that it's still not great. Happy to say more. Kevin has put a lot of time into that.
Hey All, I have a question regarding the gaze estimation after the pupil detection step. Is there a way for me to import a custom pupil_positions file that can then be used for the gaze mapping? I can create a file identical to the pupil_positions.csv that is exported by the player.
Yes. There are multiple options here. Can you tell us more about how you do pupil detection?
I have a pretrained neural network model https://github.com/openPupil/PyPupilEXT that I am using to detect the pupil positions for both eye0.mp4 and eye1.mp4 that were recorded. All of the processing is post hoc. Real time is not a requirement.
The proper way to integrate your data would be to write a pupil detection plugin that wraps the neural network and performs the detection within Capture or Pupil Player https://docs.pupil-labs.com/developer/core/plugin-api/#pupil-detection-plugins
If you want to load the data from disk, then you have two options: 1. (recommended) Save the data correctly in the native format 2. Write a data provider plugin that loads the custom csv pupil data, similar to this https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_producers.py#L349 (This class is not a good example of what needs to happen, as the data transformation is fairly implicit)
To make use of the custom depth data, you will need to write a custom gazer class that configures the calibration estimation according to your needs. You can find the gazer classes here https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_modules/gaze_mapping
Instead of performing the calibration in Player, you might also be able to use this instead https://github.com/papr/pupil-core-pipeline
The model gives me pupil center, diameter, axis, angle and confidence. I am hoping this is sufficient. I also have a depth estimate for every world view frame. That is, I know what depth the two gaze vectors must be triangulated onto. If possible I would like to use this depth instead of the fixed depth that was set in pupil capture.
Thanks a lot for your help @papr I will look into these methods and get back to you if I get stuck. For clarification, by native format you mean the pupil.pldata file needs to be replaced? If that is done can I run the pupil player as is with the new data?
pupil.pldata
and pupil_timestamps.npy
, but yes
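For reference, here is a minimal sketch of writing pupil.pldata in what I understand to be the native format: a msgpack stream of (topic, msgpack-encoded datum) pairs, with the timestamps stored in a matching pupil_timestamps.npy. If you work inside the pupil repo, prefer its file_methods.PLData_Writer, which handles this for you; the example datum below only shows a minimal field set, real 2D pupil datums carry more (ellipse, norm_pos, diameter, method, ...).

```python
import msgpack
import numpy as np

def write_pldata(directory, name, datums):
    """Write datums as a msgpack stream of (topic, payload) pairs plus
    a <name>_timestamps.npy file, mirroring Pupil's native layout."""
    timestamps = []
    with open(f"{directory}/{name}.pldata", "wb") as f:
        packer = msgpack.Packer(use_bin_type=True)
        for datum in datums:
            payload = msgpack.packb(datum, use_bin_type=True)
            f.write(packer.pack((datum["topic"], payload)))
            timestamps.append(datum["timestamp"])
    np.save(f"{directory}/{name}_timestamps.npy", np.asarray(timestamps))

# Minimal example datum (illustrative field set only)
example = {"topic": "pupil.0.2d", "timestamp": 1000.0, "confidence": 0.9, "id": 0}
```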
Thanks a lot. So I can use this to generate a new gaze_positions.csv using only Pupil Player?
Yes, that should work
Great. Thank you so much. Will try this.
If I were interested in researching, gathering data only on pupil diameter changes and not the object the viewer is fixated on, which product should I get? Pupil Core might do too much for what I'm looking for, but the add-on may not be enough.
The only differences between the add-on and the Core headset are: 1. The Core headset has an additional scene camera 2. The add-on needs a VR headset to attach to. The Core headset can be worn by the subject without further hardware.
By the way, I have been able to install the develop branch using the requirements.txt on both PC and M1 mac
Hi! I was wondering if there was a Python version of the Pupil Labs 2D pupil detector! It would be beneficial if there was!
Is it OK if I ask questions about it here, then?
2D pupil detector
Pro tip: use the eye video overlay to verify whether the timing of your pupil data is correct
Thanks for this tip. The timing is way off. My pupil.pldata file is way shorter than it should be
How do we create the pupil.pldata exactly? Here are the steps that I am following: 1. Load the pupil.pldata file. 2. Use the timestamp in the pupil datum to search the eye timestamp for the corresponding eye id. 3. Use the eye frame corresponding to the least abs difference 4. Perform pupil detection on the frame and add datum to file 5. Continue till end of pupil.pldata or end of eye video. But this is giving me a much shorter file than I need
Are you generating the file externally or modifying it in Player?
Also, I recommend using pyav and not OpenCV to decode the video
Will do. Thanks
Okay. So you mean I should just loop through the eye_timestamp.npy files and create a pupil.pldata file based on that?
Yeah, exactly. The datums for each eye do not need to be interleaved within the file. Player will sort them on load
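The structure being suggested could look roughly like this: drive the loop from eye<N>_timestamps.npy (one timestamp per video frame), run the detector per frame, and collect one datum per frame. detect_pupil here stands in for your network and is not a real API; the dummy run at the bottom just checks the plumbing.

```python
import numpy as np

def build_datums(timestamps, frames, eye_id, detect_pupil):
    """One datum per eye-video frame, stamped with that frame's timestamp."""
    datums = []
    for ts, frame in zip(timestamps, frames):
        result = detect_pupil(frame)  # your detector; returns a dict or None
        if result is None:
            continue
        result["topic"] = f"pupil.{eye_id}.2d"
        result["timestamp"] = float(ts)
        result["id"] = eye_id
        datums.append(result)
    return datums

# Dummy run: 3 "frames", a detector that always succeeds
ts = np.array([10.0, 10.01, 10.02])
datums = build_datums(ts, [None] * 3, 0, lambda f: {"confidence": 1.0})
print(len(datums))  # 3
```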
Oh cool. Let me try that
I ran the code for both eyes. The eye overlay is still racing ahead. May I share my code here?
Custom pupil.pldata