Hello! I submitted an article about my project where I used Pupil Core for some experiments, but they replied asking for permission to use one of the pictures. I guess it could be the figure that I took from your technical aspects section (the one with the glasses and the corresponding components)... I put a reference to the website, but it seems that is not enough. Is it possible to get written permission from you? Thank you!
Please contact [email removed] in this regard.
Thank you! I need this today
I will make sure that you will get a response within the next 2 hours.
THANK YOU! I'm writing right now
We have received your email.
Hi @papr When processing the pupil_position data exported from Pupil Player, I exclude data that has a Pupil_confidence or Model_confidence below 0.6, as your website says that data with confidence below 0.6 is unreliable. However, this has resulted in a big loss of data, because for some of my trials the Model_confidence for one eye (or sometimes both eyes) is consistently low (<0.5, often <0.4-0.3) even though the Pupil_confidence is high (>0.9). I know that a low Pupil_confidence means that the pupil tracking was poor, and it is often possible to see and fix this in real time using Pupil Capture during the experiment, but what causes the Model_confidence to be low and how can I prevent this problem from happening?
Many thanks!
Hi, low model confidence means that the 3d eye model is not well fit. Its main requirement is high 2d pupil confidence. Secondly, it requires pupil data from different viewing angles in order to triangulate the eye model correctly, e.g. by rolling your eyes
Okay, so in my case, given that the pupil confidence is high, the issue is probably caused by a lack of data at different viewing angles?
Correct. Does your experiment require you to fixate a single point for longer periods?
This pilot study does yes.
Does it follow a repeated measure design? And if yes, how long are your trials?
The subjects are asked to fixate on a fixed target under different viewing conditions, with breaks at set intervals. Each trial (and thus recording) lasts about 2 mins, but the whole experiment takes about 40 min depending on the number and length of breaks the subject chooses to take.
ok, great. I suggest asking the subjects to roll their eyes before each trial, making sure the 3d eye model fits well, and freezing the model for the duration of the trial. You can freeze/unfreeze programmatically using the network API.
Freezing the eye model makes it susceptible to slippage. But if your subject is keeping their head still during the trial, slippage probability is low
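For reference, a minimal sketch of how freezing/unfreezing could be triggered over the network API via Pupil Remote. The notification subject ("pupil_detector.set_properties") and the property name ("is_long_term_model_frozen") are assumptions based on the pye3d detector, so please verify them against the detector properties of your Capture version.

```python
import zmq
import msgpack

# Connect to Pupil Remote (default port 50020)
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")


def notify(notification):
    """Send a notification to Capture via Pupil Remote and wait for the reply."""
    topic = "notify." + notification["subject"]
    payload = msgpack.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()


def set_model_frozen(eye_id, frozen):
    # Subject and property names are assumptions (pye3d detector);
    # double-check them against your Capture version.
    return notify(
        {
            "subject": "pupil_detector.set_properties",
            "eye_id": eye_id,
            "values": {"is_long_term_model_frozen": frozen},
        }
    )


# Example: freeze both eye models for the duration of a trial
set_model_frozen(0, True)
set_model_frozen(1, True)
```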
Cool! Thanks! How do I check the model's fit in real time? I might be missing something, but I can't see the model confidence displayed in Capture?
And what is the advantage of freezing the model once it has a good fit (i.e. I thought it only updates if slippage occurs)?
If you look straight ahead, triangulation is difficult, causing model estimation inaccuracy in the z-dimension. To the 3d detector this can look like slippage, causing it to adjust the model. Freezing the model prevents any model changes.
model confidence is not displayed explicitly. You can either subscribe to model confidence and plot in realtime using e.g. python, or inspect the eye model visually. https://docs.pupil-labs.com/core/#_3-check-pupil-detection
A green circle around the eyeball and a red circle around the pupil with a red dot in the center. Additionally, starting with Capture 2.0, the red and blue circles should overlap as much as possible for as many viewing angles as possible.
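In case it helps, here is a minimal sketch of how model confidence could be monitored in real time with Python by subscribing to pupil data over the network API. The "model_confidence" key is part of the 3d pupil datum; field names may differ slightly between versions.

```python
import zmq
import msgpack

# Ask Pupil Remote for the SUB port and subscribe to pupil data
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")  # all pupil data, both eyes, 2d and 3d

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    # 3d datums carry the eye model confidence in addition to the 2d pupil confidence
    if "model_confidence" in datum:
        print(
            f"eye {datum['id']}: "
            f"pupil confidence {datum['confidence']:.2f}, "
            f"model confidence {datum['model_confidence']:.2f}"
        )
```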
Thanks very much @papr.
could Pupil enable my father to use his eyes to control a Windows PC and/or Android device? Or is there something you folks who understand this tech recommend for someone who has paralyzed arms? I'm sorry if this is the wrong channel to be asking this.
Hi @user-049674. Pupil Core is a head-mounted system that requires set-up and calibration each time it's worn. It's possible to control a mouse via gaze collected by the headset, but it requires some technical knowledge to get set up, and is more for research than an end-user accessibility tool.
Hey, I'm trying to use Pupil Core on a Windows computer and I keep getting a "device has malfunctioned" pop-up
Never mind, it worked
@papr hello, I was wondering how we can limit the information we are trying to gather from Pupil Core to the dilation of both eyes, their azimuth, and their vergence (if that is calculated by the machine).
Are you referring to real time data collection or data recording by Capture?
For the former, you can specify which data you are interested in by subscribing to specific topics https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
For dilation of both eyes and their azimuth you need to subscribe to pupil. Vergence requires calibrated gaze data, which you can get by subscribing to gaze and successfully calibrating.
The recorder does not offer many customization options for which data is recorded. You can turn off eye video recordings and disable active plugins to reduce the recorded data. Generally, I recommend recording as much raw data as possible to perform post-hoc processing should that become necessary.
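If you do go the post-hoc route, here is a rough sketch of how diameter, azimuth, and vergence could be derived from the Player exports (pupil_positions.csv and gaze_positions.csv). The column names are from the Raw Data Exporter; the azimuth convention (angle of the optical axis in the eye camera's x-z plane) and the export paths are my own choices, so adapt them as needed.

```python
import numpy as np
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")
gaze = pd.read_csv("exports/000/gaze_positions.csv")

# Keep only the 3d-model based pupil rows (the 2d rows have no diameter_3d / circle_3d fields)
pupil_3d = pupil[pupil["method"].str.contains("3d")].copy()

# Pupil dilation in mm per eye
dilation = pupil_3d[["pupil_timestamp", "eye_id", "diameter_3d"]]

# Azimuth of the optical axis from the circle_3d normal (eye camera coordinates).
# Convention here (assumption): angle in the camera's x-z plane.
pupil_3d["azimuth_deg"] = np.degrees(
    np.arctan2(pupil_3d["circle_3d_normal_x"], pupil_3d["circle_3d_normal_z"])
)

# Vergence: angle between the two eyes' visual axes (scene camera coordinates)
n0 = gaze[["gaze_normal0_x", "gaze_normal0_y", "gaze_normal0_z"]].to_numpy()
n1 = gaze[["gaze_normal1_x", "gaze_normal1_y", "gaze_normal1_z"]].to_numpy()
n0 = n0 / np.linalg.norm(n0, axis=1, keepdims=True)
n1 = n1 / np.linalg.norm(n1, axis=1, keepdims=True)
cos_angle = np.clip(np.sum(n0 * n1, axis=1), -1.0, 1.0)
gaze["vergence_deg"] = np.degrees(np.arccos(cos_angle))  # NaN where only one eye contributed
```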
Can I apply post-hoc calibration from Pupil Player to the next Pupil Capture recording session?
Hey, this is not possible.
ok thanks!
Hi, could I get your advice please? I need to record 2 world videos simultaneously with different recording frequencies. Then I want to match the times of the videos and present them next to each other. Is there something that I can use for the time matching?
I use pupil capture 3.1.16 on a win10 system.
I am assuming that you are talking about recording two different scene cameras.
The easiest way (if you do not need eye video or pupil data) would be to select the second scene camera as an eye camera and use Player's Eye Video Overlay.
If you need these recordings to be complete, i.e. including pupil detection and eye video, use two Capture instances (they need to be on the same computer, as Time Sync is currently not working). Afterward, you can use Player's Video Overlay plugin to render one video into the other.
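Since both Capture instances run on the same computer, their recordings share the same clock, so you can also match frames across the two recordings yourself by nearest timestamp. A rough sketch, assuming the standard world_timestamps.npy files inside each recording folder:

```python
import numpy as np

ts_a = np.load("recording_A/world_timestamps.npy")  # e.g. the higher-frequency video
ts_b = np.load("recording_B/world_timestamps.npy")

# For every frame in A, find the index of the closest-in-time frame in B
insert_idx = np.searchsorted(ts_b, ts_a)
insert_idx = np.clip(insert_idx, 1, len(ts_b) - 1)
left = ts_b[insert_idx - 1]
right = ts_b[insert_idx]
closest_b = np.where(ts_a - left < right - ts_a, insert_idx - 1, insert_idx)

# closest_b[i] is the frame index in recording B matching frame i of recording A
print(list(zip(range(10), closest_b[:10])))
```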
Hi all, I'm trying to set up Pupil Labs 3.x with my VR application using LSL. I'm well aware that Pupil Capture needs to be calibrated before any data will be put out on the streams. However, I'm not interested in the gaze data - only pupil diameter, which as far as I know doesn't need depth gaze calibration to work. Are there any plans to allow the LSL plugin to output pupil diameter data without calibration in the future?
I know I can include a calibration procedure in my VR application but it is already quite taxing to begin with so I'd like to avoid this.
Hi, currently we are not planning any changes to the plugin. But technically, it is possible to implement your requirements.
This is the line that accesses the gaze data: https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/pupil_capture_lsl_relay.py#L48
The first step would be to access "pupil" data. Afterward, you only need to adjust the setup-channel and extract-data functions to match the pupil format.
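Alternatively, instead of modifying the plugin, a small standalone relay could forward pupil diameter to LSL without any calibration, by subscribing to pupil data over the network API and pushing samples into a pylsl outlet. A rough sketch; the stream name and channel layout are arbitrary choices of mine:

```python
import zmq
import msgpack
from pylsl import StreamInfo, StreamOutlet

# Subscribe to pupil data via Pupil Remote
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")

# One channel per eye; samples are pushed irregularly (nominal_srate=0)
info = StreamInfo(
    name="pupil_diameter", type="PupilDiameter", channel_count=2,
    nominal_srate=0, channel_format="float32", source_id="pupil_diameter_relay",
)
outlet = StreamOutlet(info)

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    sample = [float("nan"), float("nan")]
    # diameter_3d (mm) needs the 3d detector; fall back to the 2d diameter in pixels
    sample[datum["id"]] = datum.get("diameter_3d", datum.get("diameter", float("nan")))
    # Timestamping happens on arrival at the LSL clock; for tighter sync you would
    # need to convert Pupil time to LSL time like the official plugin does.
    outlet.push_sample(sample)
```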
Thank you for your prompt response papr! Right now I've rolled back to 1.23 but I feel like I'm missing out on new juicy updates. I'll see what I can do with implementing it myself - Thank you!
Hello, the Pupil software does not save any annotations I made. The annotation.pldata file does not save or change even though the annotations pop up when I look through the video. Any ideas?
@user-7bd058 If your annotations pop up when you look through the recording, they are saved. Run the exporter to view the annotations in a .csv file: https://docs.pupil-labs.com/core/software/pupil-player/#annotation-export
Thank you for your rapid answer. Pupillabs crashed and therefore the annotation file did not update. That was the problem...
Should you be able to reproduce the crash, please share the player.log file (directly after; without restarting Player) with us such that we can check what caused it.
hi, does the output data mean normalized data?
I am sorry, but I do not fully understand your question. Could you try to elaborate on what you want to know?
@papr
See also my response to your previous question as reference https://discord.com/channels/285728493612957698/285728493612957698/806941783175069706
Hi @user-7daa32 In the debug window, all three pye3d model outlines are rendered. The lines correspond to: Green: short term model Yellow: mid-term model Dark Red: long-term model
Please, what do these lines mean? Only eye 0 has the red circular line.
The calibration doesn't cover the whole screen
I need the latter. Would you know how I can use the PL world camera with any other recording software? I'll record the picoflexx video with Pupil Capture.
I am able to record the scene camera with Quicktime Player on macOS. On windows, you might need to disable the libusbk drivers in order to make the camera accessible to other recording software.
Hi, little update: somehow I got 2 sessions of Pupil Capture running on my laptop, one recording the world view with your high-speed cam and one with eye tracking and the picoflexx plugin.
So I can use the timestamps to map both videos. Matching the field of view... maybe Adobe will sort that out for me.
If you want to post-process the videos in Adobe, I highly recommend exporting the videos with Player first, as the intermediate video files do not set the media-internal timestamps correctly. Adobe won't know anything about our external timestamps.
Did you perform a screen-based calibration, or did you use the physical marker (in the right of your picture)? If you did a screen-based calibration and the physical marker was also visible, this will throw off your calibration.
I realized that and I recalibrated. I did the single marker calibration.
Hey, I'm just wondering if the LSL plugin has changed substantially. I went to use the most recent version today (2.1 I think?) with the newest version of Pupil Capture and the newest version of Lab Recorder and found that it is not recording any pupil position data, but the outlet still is open. It passes all of the metadata about the outlet, but there are no time stamps or timeseries data
You need to calibrate first
Oh, I don't think I had to do that before
Even if I just want pupil location data - not gaze?
Yes, see this message for reference https://discord.com/channels/285728493612957698/285728493612957698/829283882300997674
Ok thanks
@user-ecbbea @user-af5385 I will note this down, though. Maybe we can create a pupil-data only LSL plugin since there seems to be demand for it.
Very cool. thank you. I will keep an eye out for this.
Yeah, I'm confused, maybe I was just dreaming it, but I'm almost positive I was able to get pupil position data out before. Was this changed?
I would super appreciate it - in the meantime I'll try to hack together a pupil only version
It was changed
ah ok. yeah maybe either a separate version - or just split the gaze and pupil data into two streams. that way you can access both streams (or just one) and still be synchronized to LSL's clock
The design was based on https://github.com/sccn/xdf/wiki/Gaze-Meta-Data which requires pupil and gaze data to be matched.
ah I see. seems like a compromise could be to push pupil data as normal and fill in nans for gaze data if a good calibration is not available. but maybe I'm thinking too simply
Well, pupil data is not matched by default. But I guess we can make something work in that sense. Unfortunately, I won't have time to look at that in next 3-4 weeks. Probably early May again. I noted it down and will let you know once we have any relevant changes for you.
An easier workaround might be a dummy gaze plugin that only matches pupil data for that purpose
it just wasn't clear I had to calibrate beforehand
yeah no worries - I know you're probably busy. In the meantime, do you have any quick tips for generating the dummy gaze values? Can I just hardcode them into the original script?
Not sure what you mean. This is what gaze data looks like internally: https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format You can instantiate such a datum as a Python dictionary in your script and pass it in this line https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/pupil_capture_lsl_relay.py#L49
Is this what you were looking for?
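In case it helps, a rough sketch of what such a hardcoded dummy datum could look like, based on the gaze datum format linked above. Treat the exact field set as an assumption and cross-check it against the documentation for your version.

```python
def make_dummy_gaze(pupil_datum):
    """Wrap a pupil datum in a minimal gaze-like dict with placeholder gaze fields."""
    return {
        "topic": "gaze.3d.0.",                  # fake topic, eye 0 only
        "timestamp": pupil_datum["timestamp"],  # keep the pupil timestamp
        "confidence": pupil_datum["confidence"],
        "norm_pos": [0.5, 0.5],                 # placeholder gaze position
        "gaze_point_3d": [0.0, 0.0, 500.0],     # placeholder, mm in scene camera coords
        "base_data": [pupil_datum],             # the real pupil data you care about
    }
```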
Until two days ago, the glasses were working well, but now they are not working anymore. The two cameras that track the iris are fine, but the camera on top is not recognized. I just can't figure out why. Please help me. (PS: I am using Windows.)
Hey, this looks like a hardware issue with your realsense camera. Please contact [email removed] in this regard.
Is there a possibility to find out how long someone looked into someone else's eyes when doing the analysis? It's a bit complicated to attach a marker at each side of the eyes.
And how high is the measurement precision of a Pupil Labs Core? That is, how big is the deviation between the actual gaze and the measured gaze point, normally given in degrees?
The latter depends on your calibration. See https://docs.pupil-labs.com/core/best-practices/#calibrating
The former requires custom post-processing. You could run a face-detection algorithm on the scene video that also annotates the eye region. Afterward, you could check if the gaze is mapped onto that area for a given frame or not.
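A rough sketch of that post-processing idea, using OpenCV's bundled Haar cascade eye detector on exported scene frames and checking whether the gaze point falls inside a detected eye box. The cascade choice is illustrative; any face/eye detector would do.

```python
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")


def gaze_on_eyes(frame_bgr, gaze_norm_pos):
    """Return True if the gaze point lands inside any detected eye region."""
    h, w = frame_bgr.shape[:2]
    # norm_pos has its origin in the bottom-left corner, so flip y for pixel coordinates
    gx = int(gaze_norm_pos[0] * w)
    gy = int((1.0 - gaze_norm_pos[1]) * h)

    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return any(x <= gx <= x + ew and y <= gy <= y + eh for (x, y, ew, eh) in eyes)
```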
Hi @papr , I have one query regarding the data output from PL software. 'circle_3d_normal's and 'gaze_normal's. They both are normals originating from the sphere center or eyeball center. 'circle_3d_normal' originates at sphere center and passes through pupil center in 3D and refers to the optical axis whereas 'gaze_normal's refer to visual axis but this also originates from sphere center. Is this correct?
Correct. They are equivalent in their meaning; the representation of the visual axis in two different coordinate systems (eye vs scene camera; the latter requires calibration and is therefore less accurate)
Is there any way to get the matrix that converts eye coordinate space (ECS) to scene coordinate space (SCS) from the export files only? Is this transformation stored anywhere, or do I need to do the calibration again?
It is not being exported but should be part of the notify.pldata file if it was made before or during the recording. Post-hoc calibrations are stored in the offline_data folder.
Are you referring to the calibration points or the matrix?
I must have mistaken the points for the gaze mapper parameters. But you could pass the calibration data to a new Gazer3D instance https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L327 (inherits __init__() from below)
This will trigger a "calibration" https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_base.py#L204
and announce the result using a notification https://github.com/pupil-labs/pupil/blob/eb8c2324f3fd558858ce33f3816972d93e02fcc6/pupil_src/shared_modules/gaze_mapping/gazer_base.py#L325-L336 (It looks like this notification is being recorded v2.0)
I only see the calibration points in these files
It would probably be easiest to write a Capture plugin that loads the calibration data from a given recording, initializes the gazer and writes the captured calibration result to disk.
Actually, you do not even have to write a plugin. This works with an external script and the network API alone.
Thank you @papr. I will have a look at it.
I want to know what information the output data (like gaze_positions.csv) can deliver after you use Pupil Capture to record.
Please see our documentation for reference https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter
Hi, I am using Pupil Mobile + Pupil Player 3.2. I found that at one timestamp, Player crashes, and the world_lookup PTS resets at that point.
Hey, the PTS are extracted from the video when you open the recording in Player. Usually, PTS need to be monotonically increasing. Therefore, it looks like something is wrong, but it is difficult for me to tell which part. Would it be possible for you to share the recording with [email removed] such that we can have a look?
The error message is like this. Any help?
2021-04-12 09:09:22,727 - player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "video_capture/file_backend.py", line 535, in seek_to_frame
  File "video_capture/file_backend.py", line 185, in seek
  File "av/stream.pyx", line 146, in av.stream.Stream.seek
  File "av/container/core.pyx", line 166, in av.container.core.ContainerProxy.seek
  File "av/utils.pyx", line 105, in av.utils.err_check
av.AVError: [Errno 22] Invalid argument

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "launchables/player.py", line 690, in player
  File "video_overlay/plugins/eye_overlay.py", line 57, in recent_events
  File "video_overlay/workers/overlay_renderer.py", line 51, in draw_on_frame
  File "video_overlay/workers/frame_fetcher.py", line 34, in closest_frame_to_ts
  File "video_overlay/workers/frame_fetcher.py", line 43, in frame_for_idx
  File "video_capture/file_backend.py", line 537, in seek_to_frame
video_capture.file_backend.FileSeekError
2021-04-12 09:09:22,727 - player - [INFO] launchables.player: Process shutting down.
Please I have questions
The green lines must be within the ROI, right? And should be smaller? I am still trying to look for the range of values that are ideal for accuracy and precision.
After calibration, must the field cover all the stimulus areas? I am going to send a picture to help explain this last question.
The green lines must be within the ROI right? No, the green eye model outline should be slightly smaller than the eye ball. The ROI only informs the pupil detection where to look for the pupil. Your models look fairly well fit.
Must this cover the whole range of the visual stimulus? The participant will not move the head during the tracking.
Gaze estimation will be most accurate in that region. When the subject is free to move their head, calibrating the subject's field of view is sufficient, as humans tend to move their head rather than make extreme eye movements. If the subject's head is fixed, e.g. using a head-rest, you should make sure the calibration area covers your stimulus, yes.
Thanks. Is this possible when the visual stimulus is very large, like 95 inches? The visual stimulus is placed on the wall. Is single marker calibration good for such a calibration?
You should calibrate using the viewing angles you expect to record in your experiment. Yes, you can use the single marker, as it is able to calibrate at small or large viewing angles depending on how you use it.
Hi! I have the following issue with the surface mapper and I would like your advice. I have an A4 paper with 4 april tags in the corners. I have defined a surface on pupil player, with all 4. However, it does not detect the surface when only one is within view. I thought the surface works even with a subset of the apriltags visible. Is there some kind of rule of thumb? 2/4 minimum or so?
edit: to clarify, when there are 2-3 markers visible the surface is detected and has the annotation <surface name> 3/4
Thanks
Please see our (recently added) note in the documentation:
Note that the markers require a white border around them for robust detection. In our experience, this should be at least equal to the width of the smallest white square/rectangle shown in the Marker. Please ensure you include a sufficient border around when displaying markers on screen or printing the markers.
Can I ask how to get the gaze coordinates of the calibration points? Is that possible? I want to draw a bounding box as a limit for my experiment.
That is possible. When stopped, the calibration choreography starts the selected gazer plugin using a notify.start_plugin message. It includes the pupil and reference data necessary to calibrate.
Thank you very much for your response! Can you please give me some directions on how to do that? Can I use the pupil player to do it after the recording session?
Can Pupil Core record audio? For example, so participants are able to indicate when they actually see the target? This will help us know what is correct and incorrect.
Since Capture v2, Pupil Core does not record audio. You could use the annotation plugin for this purpose, whereby participants press a hotkey when they see a stimulus if that is what your experiment requires: https://docs.pupil-labs.com/core/software/pupil-capture/#annotations
Do we have a filter for setting a threshold for what should be considered fixations or saccades?
We implement a dispersion-based fixation filter, with the exact procedure differing dependent on whether fixations are detected in an online or offline context. Read more about that here: https://docs.pupil-labs.com/core/terminology/#fixations The filter does not classify saccades. This community example implements a saccadic filter which might be of interest to you: https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing
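For anyone who wants to experiment with their own thresholds offline, here is a simplified, generic I-DT style dispersion filter. This is not the exact Pupil implementation; it just illustrates the duration/dispersion logic on gaze samples that are already expressed in degrees.

```python
import numpy as np


def detect_fixations(timestamps, gaze_deg, max_dispersion=1.5, min_duration=0.1):
    """Generic I-DT sketch: returns a list of (start_ts, end_ts) fixation intervals.

    timestamps: 1d array of seconds; gaze_deg: Nx2 array of gaze angles in degrees.
    """
    fixations = []
    start = 0
    n = len(timestamps)
    while start < n:
        end = start
        # Grow the window while its dispersion stays below the threshold
        while end + 1 < n:
            window = gaze_deg[start : end + 2]
            dispersion = (window.max(axis=0) - window.min(axis=0)).sum()
            if dispersion > max_dispersion:
                break
            end += 1
        if timestamps[end] - timestamps[start] >= min_duration:
            fixations.append((timestamps[start], timestamps[end]))
            start = end + 1  # fixations do not overlap
        else:
            start += 1
    return fixations
```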
Ah, my apologies, I was assuming you wanted to do that in real time. You wanted to do that post-hoc only?
If that is possible yes
I am not 100% sure on that.
I'm using the recorded gaze data.
Where do you want to draw the calibration area?
And do you use post-hoc calibration? Or the recorded gaze data?
Regarding the real-time option, do I have to enable a plugin?
No, you would need to run a script in parallel, that is subscribed to the notification and extracts the relevant data from it. But the information might be stored in the recording. Let me check.
Check out this quick notebook I made https://nbviewer.jupyter.org/gist/papr/ad50c1146d297deef9a1738a4731eb45
Many thanks!!!
Thanks, that's not the issue. The individual marker is detected fine, but when it detects only 1 out of 4 (the others are out of view) it does not 'show' the surface.
Ah, ok. Pupil Player requires at least 2 visible markers to detect a surface successfully.
I see!
In another setup we tried (successfully, I think) to define a surface from a single marker. This seems to work fine.
Is it possible that this setup was made in Pupil Cloud?
no, it also worked on the pupil player
I can pm you some examples
I was just able to reproduce it. My apologies.
So, it seems like one-marker surfaces need to be set up as such. Surfaces defined with more than one marker require at least 2 visible markers.
any chance we can tweak this in the plugin?
it would make detections somewhat smoother
I am not exactly sure why and how the software differentiates these cases. We will have to look into that.
ok
let me know if you find out anything
Thanks again! Because Pupil Core makes a binocular recording, does that mean we will have two sets of data, one for the right eye and the other for the left eye? If yes, how do we decide which data to choose? The right eye, or an average of the two?
For the offline fixation detection what does this statement mean? " Instead of creating two consecutive fixations of length 300 ms it creates a single fixation with length 600 ms. Fixations do not overlap"
Is the minimum duration threshold of 100-200 ms proposed by Salvucci for online fixation detection?
You can read about choosing fixation filter thresholds here: https://docs.pupil-labs.com/core/best-practices/#fixation-filter-thresholds. Choosing the optimum threshold is dependent on your experimental setup.
https://www.frontiersin.org/articles/10.3389/fnhum.2020.589502/full#h3
In this paper, the authors used screen marker calibration for an experiment that doesn't involve the use of a computer screen. I want to be sure whether it is possible to use the 5-point screen marker calibration for an experiment where the visual stimulus is placed on the wall.
Hello! I am using the Pupil Core eye tracker and since my last recording the world camera does not work anymore. When I start Pupil Capture, the screen for the world camera stays grey. I tried to reinstall the camera driver but it didn't help. I guess the problem could be a loose connector which connects the world camera to the USB hub. The other 2 cams for the eyes work fine. I am grateful for every tip!
@user-7daa32 that depends on what outcomes you are interested in. If you simply want to know where people are looking, the gaze point is estimated from both eyes automatically (binocular). If you want to examine the specific gaze direction for each eye individually, you can also do that by looking at 'gaze_normal*_x,y,z': https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter
For annotation data, it usually comes twice or multiple times for a single press of a hotkey. For example, when the participant fixates on a particular AOI and presses the E annotation hotkey once, we expect a single annotation displayed in Player and one annotation for the AOI in the export file.
Please contact [email removed] in this regard.
ok thanks!
@user-7daa32 here are a few example scenarios: 1) Your stimulus is 2x2 meters in size. The participant stands 2 meters away from the stimulus. In order to look at the whole stimulus, the participant will 'point' their head towards where they are looking. Therefore, the degree of eye rotation in the head won't be big. 2) The same conditions. However, you instruct the participant to keep their head still. Therefore, in order to look at the whole stimulus, they will need bigger eye rotations in the head. The calibration procedure you choose should replicate the eye rotations. So if your experiment aligns with scenario 1 (i.e. the participants are free to move their head) a screen-based calibration will probably be sufficient. If you experiment aligns with scenario 2 (i.e. the participants head must be still) you might need to use a single marker in order to calibrate at the bigger eye rotations.
To be sure, you should run a pilot study. You should try out both calibration methods, then confirm which one was optimal for your experimental setup. Test them by instructing the participant to look at a specific region and checking that the gaze point is in the expected place.
Great! Thanks again. The visual stimulus is on the wall, and I thought we should use single marker calibration (physical marker display) since we are not using the computer screen. We have not decided yet whether there will be head movements or whether the head will remain still during visual search, but I have been learning the experiment with the head kept still. During the single marker calibration (physical), a spiral head movement was made while looking at the physical calibration marker placed at the center of the stimulus. I was just thinking that the calibration method (screen display) used in this paper shouldn't be single marker calibration (screen display), because the computer wasn't used to display the stimulus. Most literature doesn't report detailed methodology.
I want to ask: during single marker calibration (physical marker display mode), the subject must look at the marker while moving the head, right?
Hi, a couple of questions. I have a Pupil Labs 200 Hz Core eye camera.
I would like it to be positioned pointing straight at the eye and quite close (60 mm away or so), but I am having some issues getting a good image - perhaps the software is not expecting a head-on view of the eye? I had a brief look, but I don't really understand the plug-ins and whether or not an algorithm optimised for a head-on view of the eye exists.
My user will have glasses on so perhaps the glasses are causing issues?
Perhaps the fixed focus distance of the 200 Hz camera is too far away - what is the design focal distance? I know it's not recommended, but could I adjust the lens focal position (or change the lens)? I assume it is a standard M12 mount lens, and the glue fixing the lens could be scraped off? (I accept all risk of destroying the lens)
What is the optimum situation for maximum contrast (i.e. dark pupil and white iris) - to have a high level of uniform illumination on the eye to increase the contrast? (while keeping eye safety)
Does the software modulate the LEDs at all for glint detection (i.e. to distinguish between the glints from the two LEDs)?
I assume the LEDs are at 850 nm, but I can still see a red glow from the LEDs which surprises me (perhaps the lower wavelength tail of the LED?)
I assume the Pupil Labs camera has an IR pass + visible block filter in it, and hence any replacement M12 lenses would need this too?
Thanks!
Hi @user-0c4111, pointing the eye camera straight at the eye should not be a problem. However, the eye cameras should have an unobstructed view of the eyes. If they are pointed through external glasses lenses, that is likely to be one reason why you can't detect the pupils. However, 60 mm could also be too far. The standard headset positions the eye cameras around 25 mm from the pupils. The focus is fixed at 15 mm - 80 mm.
We employ 'dark pupil' detection that doesn't require corneal reflection or glints. The IR illumination is not modulated. The best situation is a uniformly illuminated pupil. The standard positioning of the cameras is usually optimal in my experience.
I would try again without glasses obstructing the camera.
I do not recommend changing the eye camera lens. But perhaps @papr or @user-755e9e is better placed to advise on that.
Hello! Is it possible to run the Fixation Detector in a console (not via the interface of Pupil Player)? If yes, please tell me how (I cannot find the scripts in the pupil repo).
A single press of a hotkey will only generate one annotation. Are you sure the hotkey wasn't held down or pressed twice? Incidentally, I would avoid using 'E' as a hotkey, as 'e' is actually used to trigger a data export in Pupil Player
Just an example. I am not using "e"... I think the hotkey was held down. Keyboard keys can be so sensitive that one doesn't realize when a key is held down.
When using a single marker, yes you can move the head, e.g. in a spiral pattern. Alternatively, you could move the marker around whilst keeping the head still. The most important thing is to cover a similar area of the visual field that you will record in your experiment. The 5-point screen-based calibration can only cover a relatively small area that encompasses the computer screen. With the single marker, you can cover a much larger range of the visual field as you can move it or your head as much as needed.
Thanks
It looks like my camera intrinsics calibration may have been off during data recording. The displayed image looked correct (all parallel lines parallel), but when I go to undistort the frames in post-processing, the undistortion fails. If I load the default intrinsics I am able to somewhat undistort the image, while if I load the calibrated intrinsics I just get a black image when I display the undistorted image. If I recalibrate, is there a way I can still use the gaze projections? Or will these now be off for all data collected? (I am only interested in 2d gaze data for the moment)
2d gaze data is not affected by the intrinsics as it is mapped into the distorted camera image.
Thank you very much. Another question I have come across during my debugging: when I get good coverage of the entire field of view during my intrinsics estimation, I then get a blank image when I go to perform undistortion (sample coverage when taking the 10 images is shown in the attached image). However, if I use the default intrinsics, or an estimation of intrinsics when the 10 images were all fairly centered, I am able to get a result from the undistortion, but the edges are quite unclear. Is there a reason why re-projection of points may be failing when the coverage of the 10 images includes the edge points? Thank you very much for your time and help.
Hey, which camera model have you chosen for the intrinsics estimation? Radial or fish eye? Please use the latter for the 1080p resolution.
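As a quick sanity check of a set of intrinsics, you could undistort a frame directly with OpenCV's fisheye model. K is the 3x3 camera matrix and D the four distortion coefficients from your intrinsics estimation; the values below are placeholders.

```python
import cv2
import numpy as np

frame = cv2.imread("world_frame.png")            # a 1080p scene frame
K = np.array([[829.0, 0.0, 960.0],               # placeholder camera matrix
              [0.0, 829.0, 540.0],
              [0.0, 0.0, 1.0]])
D = np.array([[-0.04], [0.1], [-0.2], [0.09]])   # placeholder fisheye coefficients

# Keeping the same K as the new camera matrix roughly preserves the field of view;
# a bad K/D combination typically shows up here as a mostly black output image.
undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
cv2.imwrite("world_frame_undistorted.png", undistorted)
```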
Hi, yes I am using fisheye for the 1080p resolution. I have gotten it working better by increasing the number of images taken (from 10 to 15), but am still not able to get good images taken with the calibration pattern at the edges of the frame. However, the new camera intrinsics calculated seem to work ok (images included below)
Hi, does anyone have the wiring diagram of the Core? I need to change the connector of the right camera.
Hi all, we're using the core to run an experiment in the light and in complete darkness, but in darkness we are struggling to get a reasonable confidence level, and we are missing a lot of data. Our hunch is that the pupil is probably too large, but I thought it would be a good idea to see if there are recommended settings for collecting data in darkness in case we are doing something silly! In the light we have nice data, so the only difference is in darkness and we're a bit stumped. Any advice would be very much appreciated, thanks.
Please see this issue for reference https://github.com/pupil-labs/pupil/issues/1370
Thanks for this info. If I understood correctly, this refers to changing the 2D/3D pupil detection (i.e., ROI, intensity range etc.), and adjusting the resolution of the camera? We have been playing with these settings, but unfortunately there doesn't seem to be a setting that will get a reasonable detection when in total darkness. Is it just a case of playing with the settings, or is there something obvious I need to select when setting up the recording?
Hi is there a link to latest update for Pupil Core? Thanks Gary.
Here you go https://github.com/pupil-labs/pupil/releases/latest
Thanks Papr, hope you are all well.
Thanks, I am. I hope the same for you.
Yes, just trying to get my brain working again after the long gap
Sorry to bother you! But how can I get concrete bounding boxes on the images from the gaze data, pupil data, and other data?
@user-a9e72d what are you looking to achieve? Are you looking to map gaze data to surface/image in the real world? If so, you might want to see: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
@wrp Sorry to bother you again. After I got the gaze position and pupil position, I want to use the data to get the area which has been looked at by users. (Can the position be mapped to the screen?)
Yes, that is possible! But for that you need to set up your screen as a "surface". See the linked documentation on how to do that.
I did not find the related documentation!
Please follow this link https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
Thank you for your kind help
Hi all,
I was wondering if someone has already had trouble with the sampling rate of the Pupil glasses. I use LSL to record pupil and EEG data at the same time, but the sampling rate of my Pupil Core changes constantly (from 180 to 260 Hz!), while the EEG sampling rate stays constant (around 512 Hz). Has anyone already had this kind of problem?
Which version of the LSL plugin are you using?
Good question, I'm not sure but I use an old version of pupil Capture (version 2.1 or 2.2).
LSL calculates the sampling rate based on the data timestamps. In the first plugin version, these were not measured at frame exposure but after performing pupil detection. The variable duration between frame exposure and completed computation caused the uneven timestamp distribution, and therefore the variable "effective" sampling rate. The latest LSL plugin attempts to use the frame exposure timestamp instead, yielding a much stabler sampling rate calculation.
The Capture version is secondary here. The plugin version is more important. But given that Capture version, I am assuming that you are using the old plugin as well.
Ok, I understand, thank you for the explanation! Where can I download the latest version of the plugin?
https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture Please note, that this version requires a valid calibration before it can publish data to LSL.
Yes, my participants perform a calibration before starting the recording. But what is a valid calibration? Is there a threshold that indicates if the calibration is good or bad?
I meant valid in the sense of "successful" (no errors reported by Capture).
Thanks for the link
Ah yes, I see, thank you. I have another question regarding the "NaN" values. In the time series data I sometimes have "NaN"; does it mean that the glasses cannot capture the pupil at that particular moment? What is the impact of the calibration on the quantity of NaN values in the data?
The pipeline is: Pupil detection -> pupil data matching -> gaze mapping (requires calibration). NaN values come from the first two steps.
Capture attempts to map gaze based on the data from both eyes. Sometimes this is not possible. In these cases, the values are set to NaN. The calibration does not have an impact on the number of NaN values.
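For what it's worth, a short pandas sketch of how those NaN and low-confidence samples are typically handled in the exported pupil data. Column names are from pupil_positions.csv; the 0.6 threshold is the commonly recommended cut-off, and the export path is an assumption.

```python
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")

before = len(pupil)
clean = pupil.dropna(subset=["diameter_3d"])   # drop samples without a 3d diameter
clean = clean[clean["confidence"] >= 0.6]      # drop low-confidence detections
print(f"kept {len(clean)}/{before} samples "
      f"({100 * len(clean) / before:.1f}%)")
```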
Ok, this is the reason why I have NaN values in the diameter data, probably because the device cannot detect the pupil.
I have a quick question about the dispersion filters to be fed into the fixation detection algorithm. I have seen a lot of papers that talked about the duration threshold, which depends on the research question, nature of the task, stimulus design, field, literature, or the researcher's discretion, but I have not found an explanation for the dispersion threshold. I read that the degree of visual angle depends on the tracker resolution, but I don't know which ranges of spatial dispersion should be taken as fixations or saccades, especially for Pupil Core.
For example, in chemistry education, most visual stimuli have multiple representations required to solve a task, and so the duration threshold is usually set at 100 ms.
Do detect eye0 and detect eye1 need to be checked while recording? Or are they just used to position the eye cameras correctly?
You can turn them off once you have verified that pupil detection works correctly. You can do the detection post-hoc in Pupil Player.
No eye recordings seem to have been made when they are unchecked.
Apologies, I need to correct myself. detect eye0/1 need to be enabled to process and record the eye videos. Turning off pupil detection via the UI is currently not working, but will be fixed in our next release.
When would the next release be? Are there any releases that I would be able to use in the meantime? The laptop that we are using is maxed out when they are on.
We are planning the next release for the first half of May. @user-764f72 Could you please check which is the latest release without this issue?
Thank you for your help!
Hi @papr,
What would be the minimum specs for running the software?
We have an i7-7500U with 8GB ram, but we are having issues with this
Main factor for good pupil detection speed is CPU frequency. We are also aware of calculation speed issues on 3.x that we are actively working on.
The coarse detection is part of the 2d detector. It is disabled at the lowest eye camera resolution (eye window -> video source menu -> resolution). If that does not help, please share an example recording with [email removed] such that we can have a look.
Thanks! We are already recording at the lowest resolution, so I'm not sure that's the issue. I'll have another look tomorrow and may well take you up on the offer of having a look at an example. Thanks so much for your help.
Is there any update on this by any chance? It turned out that participants look at the stimulus from closer up, so some tags are out of the field of view; there is at least one visible though, and it would be great if we could still use it to detect the interaction with the surface.
There is no update in this regard yet
Hello! I have a question about the frame rate. I exported the fixation data, and I believe the frames used there are at 30 Hz, is that correct? Could you briefly explain how it goes from the 120 Hz sampling rate of the world camera and the 200 Hz sampling rate of the eye cameras to a fixation frame rate of 30 Hz? And is that modifiable? Thank you so much!
"I exported the fixation data, and I believe the frames used there are at 30 Hz, is that correct?" Could you elaborate on how you came to this conclusion? I am not exactly sure what you mean by that.
I'm trying to do some analysis on the video image statistics at each fixation. Since I have the start frame and end frame of the fixations, I can just pull the frame from a simple video split function. However, I need to know what is the sampling rate of the frames indicated in the fixation file. Does that make sense?
See this conversation for reference https://discord.com/channels/285728493612957698/446977689690177536/814228154364985414
The frame indices in the exported file refer to frames in the scene video. These are independent of the recording frame rate. I highly recommend using this tutorial to extract single frames https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb or pyav to access the frame data directly https://github.com/PyAV-Org/PyAV/blob/main/examples/numpy/barcode.py
It is possible that frames are skipped/dropped during extraction if the software is not configured correctly. In this case, the scene frame indices would be invalid
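For completeness, a minimal pyav sketch for grabbing a single scene frame by index. Decoding from the start is slow but avoids the seeking pitfalls mentioned above; the path assumes an exported world video.

```python
import av


def extract_frame(video_path, target_index):
    """Decode frames in order and return frame `target_index` as a BGR numpy array."""
    with av.open(video_path) as container:
        for index, frame in enumerate(container.decode(video=0)):
            if index == target_index:
                return frame.to_ndarray(format="bgr24")
    raise IndexError(f"video has fewer than {target_index + 1} frames")


img = extract_frame("exports/000/world.mp4", 4)
```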
great! I will look into what you have sent me!
Okay, so if I understand correctly: if I extract frame index 4 through the process outlined here: https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb, and then get the fixation xy coordinates at frame 4 in the fixation file, then I should have a good correspondence between frame and fixation, right?
I have the feeling I am missing something. Is "frame to fixation" a metric? A fixation can span multiple frames, i.e. it is possible the frame content of the fixation can change.
Yeah, it is true a fixation can last many frames, but let's say, for the sake of argument, that a fixation happens in frame 4 and only in frame 4. Then I can use the method described in that tutorial to pull that frame from the video, correct? Thank you so much for helping me figure this out
That is correct
Hello
Thanks again for the help.
I have another questions about calibration. Can I just have like 4 to 6 calibration markers placed on the wall instead of moving a single marker around during calibration?
How big does a surface (AOI) need to be to have actual or correct fixations within it and not in whitespace or uninteresting areas? Participants claim they are looking at A, for example, but the fixations seen in Player are on B or in between A and B.
If we have the alphabet as the stimulus, for example, and participants are to search for a particular letter on command, how big do you think each letter should be? How big with respect to the tracker resolution?
Hi everyone, I am currently working with the Surface Tracker plugin to detect AprilTags. To detect AprilTags, it is mentioned that you need a grayscale image of type numpy.uint8. I took a look at the code of the plugin and it appears to receive a "frame" which must be of numpy.uint8 type. So I'm wondering where this "frame" comes from and what its real type is.
Hey, I am not sure about the context. Are you using the surface tracker as part of the Core desktop applications or are you attempting to run the marker detection (a subcomponent of surface tracking) using the corresponding python package? (if so, please link to the code that you are referring to)
Please, could someone help me with that?
Thanks for answering quickly. Sorry if I was unclear, I am attempting to run the marker detection. I saw it is using this code to detect April tags : https://github.com/pupil-labs/apriltags/blob/master/src/pupil_apriltags/bindings.py line 400 for the detection part
ok, so you need this in realtime?
And I saw that finally the image which is needed comes from here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface_tracker_online.py but I don't know where this frame comes from
So I'm trying to get the image from the glasses and directly give it to the detector.
Capture uses https://github.com/pupil-labs/pyuvc to access the cameras here https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/uvc_backend.py#L459
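A rough sketch of that real-time path, grabbing frames with pyuvc and feeding the grayscale image straight to the pupil-apriltags detector. Device selection and the frame mode line are illustrative and depend on your setup and pyuvc version.

```python
import uvc
from pupil_apriltags import Detector

# Pick the scene camera from the list of connected UVC devices
devices = uvc.device_list()
print(devices)  # find the entry for your world camera
cap = uvc.Capture(devices[0]["uid"])
# Select a resolution/frame rate; the attribute is spelled "avaible_modes" in older pyuvc versions
cap.frame_mode = cap.available_modes[0]

detector = Detector(families="tag36h11")  # the marker family used by Pupil surfaces

while True:
    frame = cap.get_frame_robust()
    # frame.gray is already a uint8 grayscale numpy array, which is what the detector expects
    detections = detector.detect(frame.gray)
    for det in detections:
        print(det.tag_id, det.center)
```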
Yes that would be better
Thanks ! I'll take a look at it
This happens a lot lately with my recordings: for some time during my experiment, the cameras don't record anything and then suddenly need to re-init. How can I prevent this?
Hi, I'm new to pupil. Is there any webcam support?
Capture supports cameras fulfilling these requirements: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498
Please note that Pupil Core is a head-mounted eye tracker and its software cannot be used for remote eye tracking.
Am I the only one who has a problem with Pupil Player not responding, even though CPU usage is only at 13.1%? All other programs are closed and this always happens. I managed to export data once. The video is only 2.5 minutes long (a test video), but even this is too much? Help please
This is likely the issue that we fixed in v3.2-16 https://github.com/pupil-labs/pupil/releases/tag/v3.2
I use Pupil Player v3.2.15
Funnily enough, I just opened v3.1.16 and there it works
Looks like a physical disconnect of the cable since all three cams disconnected at the same time.
True, I had connected Pupil Core through a USB extension cable and that was the cause.
Hi everyone, is logitech c270 compatible with Pupil Capture v3.2.20?
If it was compatible with older versions of Capture, it will be compatible with this release, too.
It's the first time I've tried it; it doesn't see the webcam at all
On which operating system are you?
@papr Win10
See these instructions (steps 1-7) then https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
Yes, I am following those instructions; as soon as I solve an error with Python in the last step I will have my answer
You do not need to run the last step (8) if you run Pupil Capture from bundle.
I'm running the compiled version named "Pupil Capture v3.2.20"
Yes, that is what I referred to as "bundle". So there is no need to run the python-related changes from step 8
After following all the steps in the guide for a custom setup, the world and eye0 cameras can't seem to recognize the C270; it says "unknown" 3 times.
I'm sure I've installed the drivers manually using Zadig
Ok I figured out what I did wrong:
When using Zadig, once I found the correct composite devices, I did not manually set the driver to install to "libusbK (v3.0.7.0)" (my bad). To everyone having issues with drivers resulting in 3 "unknown" cameras: this could be the mistake.
Now I have another issue: I have 2 C270s to use; is this a problem, given that the 2 cameras are basically the same? Because both World and Eye0 use the same one pointed at my eye, and the one looking forward is not used.
Unfortunately, there is no good way for the software to differentiate between the two. The cameras should be listed twice if Manual camera selection is enabled. In the world window, try selecting the cameras until the correct one is selected.
That would not be a problem, but in my case both world and eye0 (or eye0 and eye1) open only 1 of them. Basically they take only one of the 2 I have. I have installed the correct drivers on both (I've checked to be sure).
It is possible that the cameras change order in the listing. I know this is far from optimal
I understand, but I've tried everything. Anyway, the front camera is used only for the eye tracking feature, right? I am more interested in a pupillometer.
Please, I have trouble with the gaze not hitting the AOI being looked at by the viewer. For example, let's say we have the alphabet A to Z. Each letter is an AOI. Each AOI is made of a letter surrounded by a 5" x 5" square black outline and again by a 1.2" x 1.2" whitespace margin. I guess you get it!
All are placed side by side in alphabetical order. By doing so, the whitespace between two 5" x 5" squares will be 2.4 inches.
The question I want to ask is this: how big or small can the size of an AOI be for it to be hit and not another AOI or the whitespace? What size can account for the tracker error?
Capture turns off immediately after opening... I don't know what to do, please help.
Correct, you do not need a scene camera if you want to estimate pupil size.
ok, thank you for the help.
You can calculate the stimulus size s using your calibration accuracy E and the target distance d.
Thank you. You mean the size of the AOI?
Whatever you want to identify reliably. In your case, target stimulus = AOI = letter
Thanks again. I will try it out
Sorry, is d the size of the whole visual stimulus, like the size of the plasma screen?
No, d is the distance from the subject to the AOI
The further the subject sits away from the stimulus, the bigger it needs to be.
I am asking a lot of questions... Is the angle twice E? Or does the 2 in the formula account for that?
E is the accuracy as reported by the Accuracy Visualizer plugin. You need to multiply by 2, else the result is the AOI radius, not its diameter.
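In other words, a small worked example of that relationship. This is my reading of the rule of thumb; it matches the ~3.5 cm figure mentioned a bit further down in this conversation.

```python
import math

d = 100.0  # distance from the subject to the AOI, in cm
E = 1.0    # assumed worst-case angular error, in degrees

s = 2 * d * math.tan(math.radians(E))   # minimum AOI diameter
print(f"minimum AOI size: {s:.1f} cm")  # ~3.5 cm
```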
Thanks. I am currently considering that I might have the stimulus on the screen instead of on the wall. Regarding the AOI size we are talking about from that calculation: is it the size of each feature on the stimulus, or of the surface to be created using the AprilTags in Player? I don't like creating surfaces in Capture.
The equation above does work for both use cases, stimulus on wall and stimulus on screen.
The equation above gives you the size for the gazed at stimulus that you want to identify. For example, let's say you present a simple sentence to the subject:
Hello, nice to meet you.
Now it depends on your research question: (a) are you interested in which single letter the subject is looking at, or (b) are you interested in which word the subject is looking at?
For (a), the equation gives you the size of each letter. For (b), the equation gives you the size of each word. In case (b), the letters can be smaller because you don't care about whether the subject is looking at the H or the o in Hello. You are interested in the fact that they are looking at Hello, not nice.
Regarding the surface setup, there are two possible setups: 1. One surface including all visible stimuli (easy to set up, more difficult to analyse) 2. One surface for each stimulus (more difficult to set up, easier to analyse)
There is the actual physical stimulus size, and there is the AOI size placed on the stimulus using the AprilTags.
Thank you. Is it the size of the tangible word on the screen or wall before data collection, or the size of the word in Capture or Player created with AprilTags? I am talking about creating a surface.
Are you referring to the size of the created surface?
Or the size of the individual representations that form part of the stimulus?
The equation refers to the real-world size of the stimulus. So assuming a distance of 1 m = 100 cm (d = 100), and an angular error of 1 degree (E = 1), the stimulus needs to be ~3.5 cm in diameter.
The surface creation is independent of that equation. The surface should simply cover the stimulus.
Regarding "I have trouble with the gaze not hitting the AOI being looked upon": this sounded like the real-world stimulus size was too small, and that is why I posted the equation.
Hello, could anybody tell me what kind of connector is used to connect the eye camera to the USB cable coming out of the Pupil Core frame?
You might be referring to the JST connector posted in this message: https://discord.com/channels/285728493612957698/285728493612957698/701705600979042336
Thanks!
Thanks. I guess the angular error is already known based on the tracker used, right?
It is known after the calibration and validation (https://docs.pupil-labs.com/core/best-practices/#calibrating). I recommend assuming a worst-case angular error that defines the size of the stimulus. Should the validation yield a larger angular error, the subject needs to calibrate again.
I refer to the whole setup as the stimulus, and I refer to a relevant feature of the stimulus where a surface will be created as the AOI. For example, the whole A-Z is the stimulus; A is an AOI if a surface will eventually be created on A.
So is "your stimulus" the whole area of interest?
ok, thanks for defining the terms. The stimulus I was referring to is a single letter, a single AOI in your case. So to correct myself:
Regarding "I have trouble with the gaze not hitting the AOI being looked upon": it sounds like the real-world size of your AOI is too small.
After the calibration and validation, will I change the size based on the error?
What if the size is fixed and can't be changed?
I recommend assuming a worst-case angular error that defines the size of the stimulus. Should the validation yield a larger angular error, the subject needs to calibrate again. This way you do not need to vary d. If the error reported after calibration and validation is smaller than assumed, you are fine to continue.
Maybe if the size is fixed, then I will have to vary d, right?
Would the calibration remain valid ?
Another question I came across: I have noticed that in the pupil and gaze exports, some frames are occasionally skipped (e.g. I have an instance where I jump from world index 5460 to 5462, and a similar occurrence happens ~10 times in my 11 minute sample collection). The world frames are correctly captured (e.g. frame 5461 exists in my frames export, but no pupil data is associated with that frame). Is this expected behavior, or could something be going wrong with my system?
Pupil data is associated by time. Can you check the exported world_timestamps.csv for frames 5460-5462 and see how much time lies between them?
They all have similar time between them, ~0.0335 s. For my recordings, I include the calibration in them as per guidance I saw in the documentation. I notice that when the calibration is taking place, gray frames appear in the world camera, during which the world timestamps continue to be collected, but pupil data stops recording for a few frames.
Hi, I have issues installing dependencies: "Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: 'c:\program files\python36\lib\site-packages\pip-18.1.dist-info\entry_points.txt' Consider using the --user option or check the permissions." Is there a solution?
Generally, on Windows, it is recommended to run python -m pip ... instead of pip ... directly.
Which command were you executing?
Just out of interest, what are your reasons for running from source instead of from the bundle? Installing the dependencies on Windows is quite tricky, and you can do 99% of customizations via plugins, i.e. running from source is often not necessary.
Yeah, please run python -m pip install -r requirements.txt
I want to get data from pupil core for my app
Good news, in this case you do not need to run the application from source. You can download a bundled release here: https://github.com/pupil-labs/pupil/releases/latest#user-content-downloads
gaze data
@papr thanks
Friendly reminder to Pupil Core users & developers about some useful Github repositories:
Hey, I'm seeing a pretty big offset of ~60 seconds between timestamps made on two different PCs on the same network, using LSL. One PC is running a custom LSL outlet which uses liblsl.local_clock() and the other has Pupil Capture with the LSL plugin running. Is this a known issue? I don't believe any interfering Pupil Capture plugins are running. Data is recorded with LabRecorder and parsed with Matlab's open_xdf.
Which version of the capture plugin are you using?
Also, what reference do you use to know that they are out of sync?
Ok, this is driving me insane: no matter how I position the left eye camera, I get pupil dropout for no discernible reason. I've played with the intensity / min / max settings and it doesn't seem to make a difference. What am I doing wrong?
Please try reducing the exposure time in the video source menu. It looks like it is quite dark on the right side of the eye1 video
I'll give that a try - I've set them both to auto, which I thought would fix that
It is possible that one of the IR LEDs is no longer working, hence the uneven illumination
I think that may be the case, as reducing the exposure seems to make it worse
Apologies for the second message; I was just wondering if you may have any further tips for debugging this issue. Thank you so much for your help.
Feel free to share the recording with [email removed] so we can have a look next week.
So, dropped scene images during calibration are expected, as the calibration calculation is blocking the main process.
I took a picture, and both of the IR LEDs seem to be working... ignore the mocap marker
I've been playing around with this for hours, using different computers, trying all of the angles I can and I keep getting the same problem... I'm not even really going into the extremes of my eyes. It looks like the IREDs are working, but I'm not 100% sure..
Hi @user-ecbbea. Please contact info@pupil-labs.com in this regard, and include your original order id.