Hi, can we use this to extract saccades?
Hi, unfortunately not, but it looks like this might https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing
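If you'd rather roll your own, a bare-bones velocity-threshold pass over Pupil Player's gaze_positions.csv export might look something like the sketch below. The confidence cutoff and the velocity threshold (in normalized screen units per second) are placeholders I picked for illustration, not a vetted detector, so treat it as a starting point only:
```python
import csv
import math

# Rough velocity-threshold pass over a Pupil Player gaze export.
# Column names follow the usual gaze_positions.csv export; the confidence
# cutoff and velocity threshold are placeholders, not validated values.
def flag_saccadic_samples(path, vel_threshold=1.5, min_confidence=0.6):
    samples = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if float(row["confidence"]) >= min_confidence:
                samples.append((float(row["gaze_timestamp"]),
                                float(row["norm_pos_x"]),
                                float(row["norm_pos_y"])))
    flags = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        speed = math.hypot(x1 - x0, y1 - y0) / dt if dt > 0 else 0.0
        flags.append(speed > vel_threshold)  # True = sample pair flagged as saccadic
    return samples, flags

samples, flags = flag_saccadic_samples("gaze_positions.csv")
print(f"{sum(flags)} of {len(flags)} sample pairs above threshold")
```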
Hi,
I would be grateful if someone could shed some light on this issue. We would like to measure pupil size in our experiment to predict cognitive load; we are using Pupil Core.
Since pupil size is also affected by ambient light:
(1) How do we differentiate and factor in the change in pupil size due to ambient light versus that due to the difficulty of the actual task? Does the Pupil Core software account for ambient light?
(2) Can ambient light somehow be adjusted for post hoc?
(3) We have tried to control the ambient lighting, but our light readings (lux) show that there is always some fluctuation. Is there a mathematical formula describing the relationship between ambient light and pupil size, i.e., how much a given increase in ambient light changes the pupil size?
Thanks for helping.
PS: Not sure which channel this post belongs in.
Hello again (just a regular user here), your best bet is to control ambient light as much as possible, or at least try to make sure it doesn't change during your region of interest (e.g., when the stimulus appeared). Watson and Yellott (2012) have a formula for light-adapted pupil size, which may be of interest. There is also this community contribution which could be relevant: https://github.com/pignoniG/cognitive_analysis_tool
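Just to sketch what that formula looks like in practice: the core of the Watson & Yellott (2012) unified model is the Stanley & Davies (1995) equation, which maps corneal flux density (luminance times field area) to pupil diameter. Something like the Python below, with the age and monocular/binocular corrections from the paper left out, and the 100 cd/m² / 60° example values as placeholders:
```python
import math

def stanley_davies_diameter(luminance_cdm2, field_area_deg2):
    """Pupil diameter in mm from the Stanley & Davies (1995) formula,
    the core of Watson & Yellott's (2012) unified model."""
    # Corneal flux density = luminance (cd/m^2) x field area (deg^2)
    flux = luminance_cdm2 * field_area_deg2
    term = (flux / 846.0) ** 0.41
    return 7.75 - 5.75 * (term / (term + 2.0))

# Example: a 100 cd/m^2 display filling a circular field 60 deg in diameter
field_area = math.pi * (60.0 / 2.0) ** 2
print(round(stanley_davies_diameter(100.0, field_area), 2))
```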
Thanks! I will look at those repos.
Hey, what are some common steps people use to preprocess eye-tracking data? I've just been looking at basic data quality of the recordings and exported data, but I've been wondering if people go above and beyond that.
@user-908b50 if you could share a bit about your experiment/type of data, the community might be better able to provide you with a more targeted/concrete response.
My experiment looks at the effects of sensory feedback on eye movements during slot machine play. We had participants wear the eye tracker (recording from the right eye only) and play for about 20 minutes. Our sample size is around 156 participants. We have looked at about one-third of the video files to see what can be salvaged. Since we are only interested in the eye movements on the surface of the slot machine screen, I am making sure the video files are intact (i.e., looking for consistent pick-up of fixations and a complete video that was properly calibrated). Our basic decision is to not use files where the world video does not capture the whole screen, there were issues with calibration (based on RA notes and a lack of markers during offline processing of surface data), the visualized data looks visibly shifted/skewed, or the recording was done in a single session. What do people think about our decisions? There seems to be no way to salvage incorrectly calibrated data or use data where half the screen is missing.
@wrp The @everyone in the announce channel gives me this:
Thanks @user-e637bd - link works for me - try https://link.springer.com/article/10.3758%2Fs13428-021-01657-8
Hm, yeah, that works. Not sure what happened, maybe a temporary Springer glitch.
I updated the announcements channel links accordingly. Thanks for the feedback!
Hi, I had asked this question in the 'core' channel but am putting it here too, hoping someone can point me in the right direction. I am trying to use the 'unified formula for light-adapted pupil size' (https://jov.arvojournals.org/article.aspx?articleid=2279420) to factor in the change in pupil size due to ambient light. One of the parameters the formula requires is the field diameter in degrees, and I have some doubts regarding this:
1) Should I assume the field diameter is the same as the FOV? Or is the field diameter the diameter of the largest circle that can fit in the FOV? The FOV of a camera is represented by three parameters (horizontal, vertical, and diagonal), but the formula requires a single value for field diameter. How do I get a single value from these three?
Thanks.
@user-562b5c A good question, and I'm guessing that this is a challenge of extending from the lab to the real world (where the FOV is not a circle). Perhaps it won't have a large influence on the results of the model, in which case you won't have to worry. For some insight, use Watson's tool to measure diameter according to the unified model across variation in field sizes in the upper ranges - say, 130 degrees (around the upper limit of the human vertical FOV) to 200 degrees (around the upper limit of the human horizontal FOV), and calculate the variation in pupil size (sum-squared error, or something like that). If there is little variation, then you can choose an arbitrary but empirically motivated value within the range I suggested, report the results of your test, and hopefully convince your audience that other choices within the range would produce similar results.
if the test shows a good deal of variation, well, then you'll have to come up with a theory-driven means of nailing down your choice of FOV.
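If it helps, a rough Python version of that sensitivity check could look like the following. This is not Watson's actual tool, just the Stanley-Davies core of the unified model with the field diameter converted to a circular area; the luminance value is a placeholder and the age/monocular corrections from the paper are omitted:
```python
import math

def stanley_davies_diameter(luminance_cdm2, field_area_deg2):
    # Core of the unified model (no age or monocular corrections)
    flux = luminance_cdm2 * field_area_deg2
    term = (flux / 846.0) ** 0.41
    return 7.75 - 5.75 * (term / (term + 2.0))

luminance = 100.0  # placeholder cd/m^2 -- substitute your own measured value
diameters = []
for field_deg in range(130, 201, 10):        # 130 to 200 deg, the range suggested above
    area = math.pi * (field_deg / 2.0) ** 2  # treat the field as a circle of that diameter
    diameters.append(stanley_davies_diameter(luminance, area))

mean_d = sum(diameters) / len(diameters)
sse = sum((d - mean_d) ** 2 for d in diameters)  # spread across field sizes
print(f"{min(diameters):.3f}-{max(diameters):.3f} mm across field sizes, SSE = {sse:.5f} mm^2")
```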
Thanks @user-b14f98 for the insightful idea, I will try out what you suggested and see how it turns out.