🔬 research-publications


user-562b5c 01 August, 2021, 21:38:27

Hi, can we use this to extract saccades?

user-430fc1 02 August, 2021, 08:27:31

Hi, unfortunately not, but it looks like this might https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing
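
(Not from that repo, just to illustrate the general idea: a minimal velocity-threshold sketch, assuming gaze positions already converted to degrees of visual angle; the 30 deg/s threshold and 10 ms minimum duration are placeholder values you would tune to your own data.)

```python
import numpy as np

def detect_saccades(t, x, y, velocity_threshold=30.0, min_duration=0.01):
    """Toy velocity-threshold saccade detector (I-VT style).

    t    : sample timestamps in seconds
    x, y : gaze position in degrees of visual angle
    Returns a list of (start_time, end_time) tuples.
    """
    # Sample-to-sample angular velocity in deg/s
    dt = np.diff(t)
    velocity = np.hypot(np.diff(x), np.diff(y)) / dt

    saccades, start = [], None
    for i, fast in enumerate(velocity > velocity_threshold):
        if fast and start is None:
            start = t[i]
        elif not fast and start is not None:
            if t[i] - start >= min_duration:
                saccades.append((start, t[i]))
            start = None
    if start is not None and t[-1] - start >= min_duration:
        saccades.append((start, t[-1]))
    return saccades
```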

user-562b5c 01 August, 2021, 22:06:39

Hi,

I would be grateful if someone could shed some light on this issue. We would like to measure pupil size in our experiment to predict cognitive load; we are using Pupil Core.

Since pupil size is also impacted by ambient light,

(1) How do we differentiate between the change in pupil size due to ambient light and the change due to the difficulty of the actual task? Does the Pupil Core software account for ambient light?

(2) Can ambient light somehow be adjusted for post hoc?

(3) We have tried to control the ambient lighting, but our light readings (lux) show that there is always some fluctuation. Is there a mathematical formula describing the relationship between ambient light and pupil size, i.e., how much a given increase in ambient light changes the pupil size?

Thanks for helping.

PS: Not sure which channel this post belongs in.

user-430fc1 02 August, 2021, 08:33:20

Hello again (just a regular user here). Your best bet is to control ambient light as much as possible, or at least to make sure it doesn't change during your region of interest (e.g., when the stimulus appeared). Watson and Yellott (2012) have a formula for light-adapted pupil size, which may be of interest. There is also this community contribution, which could be relevant: https://github.com/pignoniG/cognitive_analysis_tool
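
In case a concrete starting point helps, here is a small sketch of the Stanley & Davies (1995) component that the Watson & Yellott unified model builds on (the full unified formula adds corrections for age and for monocular vs. binocular viewing; see the paper for those). Note that it takes luminance in cd/m² and field area in deg², not illuminance in lux, so lux readings would need converting first.

```python
import math

def stanley_davies_pupil_diameter(luminance_cd_m2, field_area_deg2):
    """Light-adapted pupil diameter in mm (Stanley & Davies, 1995).

    luminance_cd_m2 : adapting luminance in cd/m^2
    field_area_deg2 : adapting field area in square degrees
    """
    f = (luminance_cd_m2 * field_area_deg2 / 846.0) ** 0.41
    return 7.75 - 5.75 * (f / (f + 2.0))

# Example: a circular 60-degree field at 100 cd/m^2 (illustrative values only)
area = math.pi * (60.0 / 2.0) ** 2
print(stanley_davies_pupil_diameter(100.0, area))
```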

user-562b5c 02 August, 2021, 09:52:32

Thanks! I will look at those repos.

user-908b50 03 August, 2021, 23:32:44

Hey, what are some common steps people use to preprocess eye-tracking data? I've just been looking at the basic data quality of the recordings and exported data, but I've been wondering whether people go above and beyond that.

wrp 05 August, 2021, 07:42:13

@user-908b50 if you could share a bit about your experiment/type of data, the community might be better able to provide you with a more targeted/concrete response.

user-908b50 28 October, 2021, 03:02:41

My experiment looks at the effects of sensory feedback on eye movements during slot machine play. We had participants wear the eye tracker (recording from the right eye only) and play for about 20 minutes. Our sample size is around 156 participants. We have looked at about a third of the video files to see what can be salvaged. Since we are only interested in the eye movements on the surface of the slot machine screen, I am making sure the video files are intact (i.e., looking for consistent pick-up of fixations and a complete, properly calibrated video). Our basic decision is to not use files where the world video does not capture the whole screen, there were issues with calibration (based on RA notes and a lack of markers during offline processing of surface data), the visualized data looks visibly shifted/skewed, or the recording was done in a single session. What do people think about our decisions? There seems to be no way to salvage incorrectly calibrated data or to use data where half the screen is missing.

user-e637bd 09 August, 2021, 11:55:58

@wrp The @everyone in the announcements channel gives me this:

Chat image

wrp 09 August, 2021, 12:02:00

Thanks @user-e637bd - link works for me - try https://link.springer.com/article/10.3758%2Fs13428-021-01657-8

user-e637bd 09 August, 2021, 12:03:29

Hm, yeah, that works. Not sure what happened; maybe a temporary Springer glitch.

wrp 09 August, 2021, 12:05:17

I updated the 📯 announcements links accordingly. Thanks for the feedback.

user-562b5c 16 August, 2021, 23:21:56

Hi, I asked this question in the 'core' channel but am posting it here too, hoping someone can point me in the right direction. I am trying to use the 'unified formula for light-adapted pupil size' (https://jov.arvojournals.org/article.aspx?articleid=2279420) to factor in the change in pupil size due to ambient light. One of the parameters the formula requires is the field diameter in degrees, and I have some doubts about this:

1) Should I assume the field diameter is the same as the FOV, or is the field diameter the diameter of the largest circle that can fit inside the FOV? The FOV of a camera is given as three values (horizontal, vertical, and diagonal), but the formula requires a single value for the field diameter. How do I get a single value from these three?

Thanks.

user-b14f98 17 August, 2021, 13:52:44

@user-562b5c A good question, and I'm guessing that this is a challenge of extending from the lab to the real world (where the FOV is not a circle). Perhaps it won't have a large influence on the results of the model, in which case you won't have to worry. For some insight, use Watson's tool to compute the predicted pupil diameter from the unified model across a range of field sizes at the upper end, say 130 degrees (around the upper limit of the human vertical FOV) to 200 degrees (around the upper limit of the human horizontal FOV), and calculate the variation in predicted pupil size (sum-squared error, or something like that). If there is little variation, then you can choose an arbitrary but empirically motivated value within the range I suggested, report the results of your test, and hopefully convince your audience that other choices within the range would produce similar results.

user-b14f98 17 August, 2021, 13:53:04

If the test shows a good deal of variation, then you'll have to come up with a theory-driven way of nailing down your choice of FOV.
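
A rough sketch of that sensitivity check (using only the Stanley & Davies core of the unified model, with the age and monocularity terms omitted; the 100 cd/m² adapting luminance is a placeholder value):

```python
import numpy as np

def pupil_diameter_mm(luminance_cd_m2, field_area_deg2):
    # Stanley & Davies (1995) light-adapted pupil formula, the core of the
    # Watson & Yellott unified model (age/monocularity terms omitted here).
    f = (luminance_cd_m2 * field_area_deg2 / 846.0) ** 0.41
    return 7.75 - 5.75 * (f / (f + 2.0))

luminance = 100.0                              # cd/m^2, placeholder value
diameters_deg = np.linspace(130, 200, 50)      # candidate field diameters
areas_deg2 = np.pi * (diameters_deg / 2) ** 2  # treat the field as circular

predicted = pupil_diameter_mm(luminance, areas_deg2)
print(f"spread across field sizes: {predicted.max() - predicted.min():.3f} mm")
```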

user-562b5c 18 August, 2021, 09:45:31

Thanks @user-b14f98 for the insightful idea. I will try out what you suggested and see how it turns out.

End of August archive