Hi all,
I have been advised to include an overview of how much pupil size data I had to reject per participant, i.e., data labeled as outliers (biologically invalid samples, blinks, samples with high dilation speed, etc.).
Do you have an example in mind of a paper that reports this, or a standard measure for doing it?
If I want to go into detail, it seems I have to repeat all my preprocessing steps and keep a record of how many samples are rejected, interpolated, or otherwise compensated for.
Hi @user-98789c 👋 . Reporting the % rejection should be enough (in published papers, you will usually just find the mean rejection rate across all participants). As for how to detect/measure your outlier/blink data, here are some options (though I'm sure there are more out there that I'm not aware of).
For blinks, you can calculate the rejection rate based on your system's blink detector (e.g., how many samples were flagged as blinks and rejected out of participant X's pupil timeseries; see the sketch below). There are also more involved ways of detecting/removing blinks. For example, you can estimate the pupil response to blinks using deconvolution and then regress the blink-related responses out of your pupil timeseries. See here for an example in Python: https://github.com/tknapen/FIRDeconvolution/blob/master/test/pupil_preprocess_python.ipynb - and a similar approach in MATLAB (specifically the scripts blink_interpolate.m and blink_regressout.m): https://github.com/anne-urai/2017_Urai_pupil-uncertainty/tree/master/Analysis
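To illustrate the bookkeeping, here's a minimal Python sketch of computing per-participant and mean rejection rates. The toy data and the zero-means-blink convention are just assumptions for the example, so check how your eye tracker actually labels invalid samples:

```python
import numpy as np

def rejection_rate(invalid_mask):
    """Percentage of samples flagged as invalid for one participant."""
    return 100.0 * np.mean(invalid_mask)

# Toy data: here a pupil size of 0 marks a blink. That convention is just
# an assumption for this example -- check how your tracker labels blinks.
per_participant = {
    "p01": np.array([3.1, 3.2, 0.0, 0.0, 3.3, 3.2]),
    "p02": np.array([2.9, 0.0, 3.0, 3.1, 3.0, 3.1]),
}

rates = {pid: rejection_rate(p == 0.0) for pid, p in per_participant.items()}
print(rates)  # per-participant rejection rates
print(f"mean: {np.mean(list(rates.values())):.1f}%")  # the number papers report
```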
For detecting pupil dilation speed outliers, you can refer to this paper (specifically Eqs. 1-3) and calculate the rejection rate after detecting and removing those outliers: https://link.springer.com/article/10.3758/s13428-018-1075-y
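If useful, here is a rough Python sketch of that MAD-based dilation speed filter, following my reading of Eqs. 1-3. The function name and the n=16 default are my own choices, not taken verbatim from the paper's toolbox:

```python
import numpy as np

def dilation_speed_outliers(t, p, n=16):
    """Boolean mask of dilation speed outliers, after Eqs. 1-3 of
    Kret & Sjak-Shie (2019). `n` scales the MAD threshold; 16 is a
    commonly used default, but tune it for your own data."""
    # Eq. 1: per-sample dilation speed = max absolute speed relative to
    # the preceding and the succeeding neighbour
    step = np.abs(np.diff(p) / np.diff(t))
    to_next = np.full(p.shape, np.nan)
    to_next[:-1] = step
    to_prev = np.full(p.shape, np.nan)
    to_prev[1:] = step
    speed = np.fmax(to_next, to_prev)  # NaN-aware element-wise max
    # Eq. 2: median absolute deviation (MAD) of the dilation speeds
    mad = np.nanmedian(np.abs(speed - np.nanmedian(speed)))
    # Eq. 3: rejection threshold
    threshold = np.nanmedian(speed) + n * mad
    return speed > threshold  # NaNs compare False, so they are not flagged

t = np.arange(0, 1, 0.01)            # 100 Hz timestamps (seconds)
p = 3 + 0.05 * np.random.randn(100)  # fake pupil trace
p[40] = 6.0                          # inject a speed artifact
mask = dilation_speed_outliers(t, p)
print(f"rejected: {100 * mask.mean():.1f}%")
```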
Hope this helps!
Thanks a lot @user-480f4c, this was helpful 👌