research-publications


user-98789c 02 September, 2022, 12:31:48

Hi all,

I have a decision-making experiment in which my participants see gray-scale (normalized-luminance) cues on the monitor and associate them with the four conditions of the experiment via keyboard input, while I record their pupil size.

After my pilot recordings, I see a general pupil constriction in all experimental conditions, but its magnitude differs between conditions (and differentiates the four conditions as it should).

I have been advised that normalizing the luminance of my cue images alone is not enough, and that the cues should also be iso-luminant with respect to the background (which is gray and the same throughout all runs).

  1. I did some searching about iso-luminance, and it seems I might need a photometry device to take care of this.
  2. On the other hand, I also got the comment that because my cue images are counterbalanced across conditions and participants, I would not have to enforce iso-luminance.

Do you have any comments on these two points? Has anyone dealt with this in their Pupil Core recordings?

Thank you!
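(For what it's worth, the software-side half of point 1 can be approximated by shifting each cue so its mean pixel intensity matches the background gray. This is only a rough sketch under my own assumptions, not a substitute for true iso-luminance: monitor output is not linear in pixel value, which is why a photometer is usually needed.)

```python
import numpy as np

def match_mean_luminance(img, bg_level):
    """Shift a gray-scale cue image so its mean pixel intensity
    equals the background gray level.

    img      : 2-D float array of pixel values in [0, 1]
    bg_level : background gray level in [0, 1]
    """
    shifted = img + (bg_level - img.mean())
    # Clip to the displayable range; heavy clipping would change the
    # mean again, so check the result for cues near black or white.
    return np.clip(shifted, 0.0, 1.0)

# Example: match a cue whose mean is 0.5 to a 0.6 gray background
cue = np.linspace(0.3, 0.7, 100).reshape(10, 10)
matched = match_mean_luminance(cue, bg_level=0.6)
```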

user-42a104 05 September, 2022, 19:52:04

Hi all, there was this LPW dataset (https://arxiv.org/abs/1511.05768) collected with Pupil Pro a few years back. Does anyone know how to get the camera parameters used for that work? @marc

user-9429ba 06 September, 2022, 10:35:51

Hi 👋 Relative changes in pupil size (i.e. across your counterbalanced conditions) could be enough. It depends on the questions you are trying to answer and on your baselining. The attached paper is a good reference.

Pupilommetry.pdf
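To make the baselining point concrete, here is a minimal sketch (the function name and window length are assumptions of mine) that expresses each trial as change from a pre-stimulus baseline:

```python
import numpy as np

def baseline_correct(pupil, sample_rate, baseline_s=0.5):
    """Express a pupil-size trace as relative change from a
    pre-stimulus baseline (subtractive correction).

    pupil       : 1-D array of pupil sizes, trial-locked so stimulus
                  onset falls at index baseline_s * sample_rate
    sample_rate : samples per second
    baseline_s  : length of the pre-stimulus window in seconds
    """
    n_base = int(baseline_s * sample_rate)
    baseline = np.nanmean(pupil[:n_base])  # mean size before onset
    return pupil - baseline                # relative change per sample

# Example: a trial sampled at 120 Hz with a 0.5 s baseline,
# where the pupil constricts from 3.0 to 2.6 mm after onset
trial = np.concatenate([np.full(60, 3.0), np.full(120, 2.6)])
corrected = baseline_correct(trial, sample_rate=120)
```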

user-98789c 06 September, 2022, 14:39:31

thank you @user-9429ba

user-6e3d0f 06 September, 2022, 12:23:08

Hey, we recently released a paper using Pupil Core that deals with speeding up and automating the evaluation of eye-tracking data during a mirror-exposure experiment. Feel free to ask me any questions, or just have a look 🙂

https://www.bibsonomy.org/bibtex/2338d04965adadc2c33b593fe0362d180/hci-uwb

user-9429ba 06 September, 2022, 13:19:22

@user-98789c If you search for pupillometry under keywords in our publications database, you can find example papers using Pupil Core: https://pupil-labs.com/publications/

user-98789c 14 September, 2022, 10:44:57

Hi all, do you know of any papers that have used pupil-size "area under the curve" and "peak amplitude" in their analysis? I'm looking for justification for using these parameters.

user-98789c 19 September, 2022, 12:55:48

hi all, no comments on my question? 🙂

user-9429ba 20 September, 2022, 07:21:20

Hey @user-98789c If you search pupillometry "area under the curve" in Google Scholar there are a number of hits. This one looks quite promising: https://www.frontiersin.org/articles/10.3389/fneur.2011.00010/full Perhaps you can draw more from the references contained in this paper too. You may already be aware, but PyPlr gives a lot of specific advice on coding and pupillometry data with Pupil Core! https://pyplr.github.io/cvd_pupillometry/04a_pyplr_and_pupil_core.html
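For reference, both metrics are straightforward to compute from a baseline-corrected trace. A hypothetical sketch (the function name and the rectangular-rule AUC are my own choices, not from any of the papers above):

```python
import numpy as np

def pupil_metrics(trace, sample_rate):
    """Peak amplitude and area under the curve (AUC) of a
    baseline-corrected pupil trace (0 = baseline, so constrictions
    are negative and dilations positive).
    """
    peak = trace[np.argmax(np.abs(trace))]  # signed largest deviation
    auc = trace.sum() / sample_rate         # rectangular-rule area, size * s
    return peak, auc

# Toy 1 s constriction that dips to -0.4 and recovers, at 100 Hz
t = np.linspace(0.0, 1.0, 100)
trace = -0.4 * np.sin(np.pi * t)
peak, auc = pupil_metrics(trace, sample_rate=100)
```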

user-98789c 21 September, 2022, 09:31:59

thanks @user-9429ba

user-dad280 20 September, 2022, 16:32:54

Hi, I have conducted fieldwork with Pupil Invisible glasses and am trying to use BeGaze for some analysis. However, I cannot export large files (a 45-minute recording) using the export function in Pupil Player v3.0.7; the export crashes and shuts down Pupil Player. Is it just that my PC is not powerful enough? (I use an i5 laptop.)

End of September archive