@papr I have issues trying to make pupil capture recognize the pupil
You do not seem to have an IR filter installed. It makes detecting the pupil a lot easier. Also, in your case, the reflection is occluding the pupil.
is there a guide which covers all aspects like resolution of the image to adjust...?
how are the world_timestamps.npy files in the primary recording folder and the exports folder related if I am exporting the entire video? For some reason mine are different lengths (the npy file in the exports folder has more timestamps).
EDIT: I found the difference but have a follow-up question. I realized the difference is because the main process stops during calibration, but during the Pupil Player export, gray frames are included at these timepoints. Since the main process stops during calibration, does the pupil process continue? In the exported files, it seems that there are still pupil readings.
The pupil process continues. It is possible that, if the world process stops for too long, the pupil data receiving queue fills up and data is dropped temporarily.
Player interpolates gaps with gray frames. These are also exported and therefore you have more exported timestamps than in the original
@papr my markers did not work well during the experiment, is there a way I could still add surfaces post hoc using Pupil Player?
You can add and detect surfaces post-hoc in Player. But it uses the same algorithm for detection, i.e. it is likely that the markers will not be recognized either. Have you ensured that there is sufficient whitespace around the markers? It is essential for good detection.
I have one question. If I want to output Eye Camera (both eyes) and World Camera video at 120Hz, what kind of PC specs (CPU, GPU, RAM) do I need?
After validation, the covered visual space was reduced. Would you expect the angular accuracy and precision from the validation step to be smaller than those from the calibration?
Angular errors for accuracy and precision usually go up in the validation because it maps new data. The covered area is independent of that. It is ok if the validation area is not the same as the calibration area.
It's App-Pupillabs 2.0, so it's not too old. I'll try and update and see if that makes a difference
I have 2 other streams (3 in total), where I can see both of their timestamps running between 0 and 300 seconds, whereas Pupil Capture's stream's timestamps run between 6000 and 6300 seconds. One LabRecorder starts recording the three streams at the same time; the first two streams are running on one PC and the third (Pupil Capture) is running on another PC (same network, of course). One of the first two streams is continuously pushing data at 100 Hz.
You are comparing the timestamps recorded by LSL Recorder, correct?
Yes
I'm currently doing a side-by-side comparison of the versions. I tried on two different machines and saw no offset; those were running the 2.1 plugin.
In 2.0 and later, App-PupilLabs syncs Pupil time to pylsl.local_clock(). LSL time sync is then applied on the receiving side (LSL Recorder).
Thank you. What's now the essence of validation? Just to see how much error has accumulated?
It is a concept from machine learning: We need to calibrate because we do not know the relationship (mapping) between the pupil positions (pupil) and the subject's gaze location (ref):
ref = mapping(pupil)
During calibration, ref and pupil are known, mapping is unknown. mapping is inferred based on ref and pupil.
Afterward, when mapping is known, the calibration error (training error) is calculated based on the same data:
training_ref = mapping(pupil)
error = angle(ref, training_ref)
The question is, how big is the error for unknown data, or better: how well does mapping generalize? For that, we need the validation. We collect new ref_new and pupil_new data, but do not change mapping. Afterward, we perform the same math to calculate the error:
validation_ref = mapping(pupil_new)
validation_error = angle(ref_new, validation_ref)
This error is more meaningful, as it is computed on data independent of the data from which mapping was inferred. It tells you how well mapping works for new data.
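The steps above can be sketched with a toy example (illustrative only; Pupil's real mapping is not a 1-D polynomial, and the angle() error is replaced here by a simple absolute difference):

```python
import numpy as np

# Illustrative sketch of calibration vs. validation error.
rng = np.random.default_rng(0)

# Calibration data: pupil positions and known gaze targets (ref, in degrees)
pupil = rng.uniform(0, 1, 50)
ref = 30 * pupil + rng.normal(0, 0.5, 50)

# Infer `mapping` from the calibration data
mapping = np.poly1d(np.polyfit(pupil, ref, deg=1))

# Training (calibration) error: evaluated on the same data
training_error = np.abs(ref - mapping(pupil)).mean()

# Validation: collect new data, but do NOT change `mapping`
pupil_new = rng.uniform(0, 1, 50)
ref_new = 30 * pupil_new + rng.normal(0, 0.5, 50)
validation_error = np.abs(ref_new - mapping(pupil_new)).mean()
```

The validation error is the more honest estimate because the mapping never saw the validation data during fitting.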
I tried replicating the issue without luck. I suppose that is good, but still concerning that I don't know the source of the issue.
You have not been running other plugins or scripts, e.g. Time Sync, in parallel, have you?
Accuracy Visualizer and Pupil Remote only
Which commands have you sent via Pupil Remote?
The T command conflicts with the LSL time sync.
It seems I am actually sending 'T' somewhere in my app!
I guess I will add that as an example conflict here https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture#lsl-clock-synchronization
ok, this explains the offset then.
Okay, thank you! You should make it bold, because it can cause a subtle but quite devastating problem if the timestamps are incorrect. Perhaps it's possible to record "T" events, and compensate for the reset when you're pushing the samples on the LSL stream
This sounds like you are attempting to sync Pupil time to two separate clocks. This is not supported. Instead, I suggest following the Pupil clock in your script to avoid conflicts. See our new time sync example here: https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py
Somehow accumulating the "T" reset timestamps to get an accurate local_clock()
Well, the intention is to sync LSL time to two separate clocks, which is partially the purpose of that platform, and I was under the impression that the LSL plugin for Pupil Capture would support this.
I was not clear. The purpose of the LSL clock is to sync multiple independent, local clocks to one target clock (LSL Recorder). For that to work, the sending software needs to use the local clock provided by the lsl framework.
Pupil Capture uses a fixed clock, uvc.get_time_monotonic, and adjusts to other clocks by applying an offset, g_pool.timebase.value. https://github.com/pupil-labs/pupil/blob/master/pupil_src/launchables/world.py#L237-L241
g_pool.get_timestamp() is used to calculate timestamps all over the software.
App-PupilLabs sets this offset on init: g_pool.timebase.value = g_pool.get_now() - lsl.local_clock()
https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/pupil_capture_lsl_relay.py#L34-L35
Sending T x will change g_pool.timebase.value such that g_pool.get_timestamp() returns the time since x. x and lsl.local_clock() behave like two separate clocks. And this is what I meant by:
This sounds like you are attempting to sync Pupil time to two separate clocks. This is not supported.
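The offset mechanism described above can be sketched like this (a hypothetical toy Timebase, not the actual Pupil implementation):

```python
import time

# Toy model of an offset-based timebase: timestamps come from one fixed
# monotonic clock minus an adjustable offset.
class Timebase:
    def __init__(self):
        self.offset = 0.0  # plays the role of g_pool.timebase.value

    def get_timestamp(self):
        return time.monotonic() - self.offset

tb = Timebase()

# Sync to clock A (e.g. lsl.local_clock()): pick the offset such that
# get_timestamp() follows clock A from now on.
clock_a_now = 1000.0  # assumed current reading of clock A
tb.offset = time.monotonic() - clock_a_now

# A later `T x` command would overwrite the offset to follow a second
# clock B, silently breaking the sync with clock A (the conflict above).
```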
Ah right okay. Anyways, removing the section sending the "T" command did solve the issues, everything is synced up nicely now. Thank you for the assistance!
Thanks
So the calibration accuracy is what I will use in calculating the stimulus size, right? While the difference in accuracy between the C and V steps just tells us the error accrued? Or how well the mapping works for new data?
No, you should use the validation accuracy because it is more meaningful for the new data recorded during your experiment.
"how well the mapping works for new data" That is exactly why you want to know the V error.
I decided on an angular accuracy error of 1.5 degrees and a distance of 75 cm. I used these to calculate the stimulus size, which is 1.96 cm.
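As a quick sanity check of that arithmetic, assuming the stimulus size is the full visual angle subtended at the viewing distance:

```python
import math

# Size of a stimulus subtending a full angle theta at distance d:
# size = 2 * d * tan(theta / 2)
d = 75.0     # viewing distance in cm
theta = 1.5  # angular accuracy in degrees
size = 2 * d * math.tan(math.radians(theta / 2))
print(round(size, 2))  # 1.96 cm
```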
Calibration Accuracy: 0.96, Precision: 0.082
Validation
Accuracy: 1.76, Precision: 0.083
The angular accuracy after validation has gone above 1.5.
My understanding is that it must be equal or lower than 1.5.
Another way I thought I should solve it: create surfaces that satisfy the new accuracy and precision or even enough margin using discretion
Whatever value you choose, the V error should be equal to or below it. Beware that subjects usually are not used to the calibration procedure, so it is possible that their C/V errors are higher than yours.
"My understanding is that it must be equal or lower than 1.5." Correct!
"Another way I thought I should solve it: create surfaces that satisfy the new accuracy" Correct. You could go for 2 degrees instead and increase the stimulus size accordingly.
Thank you
I actually need to keep the stimulus size constant. What other things need to be varied easily among subjects ?
I read a paper that varied 60-65 cm distance
Do you mean constant during one experiment session or constant as in it needs to be 1.96cm?
Constant throughout the experiment for all participants. Whatever distance I decided... I might likely use 2degree instead of 1.5
I mean all participants get the same stimulus size throughout the experiment
just run some test calibrations with friends or colleagues and check if they can reach 2 degrees consistently.
Reducing the distance if the error is higher is possible of course, but this only works to some degree. If the viewing distance is 10cm, it might become uncomfortable for the participant
Thank you. I will try that. To my understanding, I can keep the 2 degrees, the distance, and the size fixed for varying numbers of participants, as long as the validation goes well, as we said.
A quick question please.
What do you think might have gone wrong when a video in Player blanks out?
This happens during disconnects or if the world process is busy calculating, e.g. shortly after the calibration is over
@user-ae6ade Please see this conversation ☝️
That's right !
Thanks. I will get back later
The AOIs are almost hit perfectly but I will still need to add a little margin
The new version of Pupil player (3.2.20) doesn't seem to read fixations from Core recordings. Drag & drop of the recording loads it, but I get the "Fixation detection - [WARNING] fixation_detector: No data available to find fixations" warning. I would appreciate your help. Thanks.
The fixation detector requires gaze data. Are you using Gaze from recording or post-hoc calibration to create gaze data?
I'm using gaze from recording, so directly the raw recording. This worked perfectly a few days ago with v2.6
Also, please be aware that you need to calibrate to get gaze data.
Can you see the green gaze circle when playing back the recording?
No. It was there when I played the recording in the older player. Calibration was always done before the recording.
Ok, so this is a recording made with v2.6, that you are now opening in v3.2.20, and the gaze is no longer displayed?
It was actually 2.3 (recording software version). Exactly, I'm opening them with 3.2.20 and fixations are not detected. "fixations.pldata" and "gaze.pldata" are all there.
Can you check the file size of gaze.pldata?
For most recordings, around 10 MB.
Are all of your recordings no longer showing gaze? Or only one specifically?
unfortunatelly, all of them
ok, could you please share one of them with [email removed] such that we can try to reproduce the issue? In the meantime, you should be able to open the recordings with your previous Pupil Player version.
I'd be happy to. Can I still find the older versions on GitHub?
The old version should still be installed, unless you uninstalled it explicitly. You can find all releases here https://github.com/pupil-labs/pupil/releases
I'll install it again. Thanks a lot for the support. I'll send a recording so that the (presumably small) bug can be solved.
I had the same issue and thought I must be the problem
Thanks for letting us know! Feel free to share a recording as well, such that we can check if it is indeed the same underlying issue.
Hi people, I have some questions about the exported data. I am using AprilTags to create a surface, and it seems like 2 out of 4 were visible and detected during the session. Now when I export the gaze data, I can't tell if any of the gaze positions x and y are in the reference frame of the tags.
The exported data should have a column indicating if the gaze is within the surface or not (on_surf, I believe).
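A minimal sketch of that check (the column names, e.g. on_surf, x_norm, y_norm, are assumed from a typical surface export and may differ):

```python
# Filtering exported surface-gaze rows by the on_surf flag.
# Rows here stand in for lines read from the surface export CSV.
rows = [
    {"gaze_timestamp": 0.00, "x_norm": 0.4, "y_norm": 0.5, "on_surf": "True"},
    {"gaze_timestamp": 0.01, "x_norm": 0.5, "y_norm": 0.6, "on_surf": "True"},
    {"gaze_timestamp": 0.02, "x_norm": 1.3, "y_norm": -0.2, "on_surf": "False"},
]
on_surface = [r for r in rows if r["on_surf"] == "True"]
# x_norm / y_norm are surface-normalized coordinates; rows flagged False
# fall outside the surface (e.g. when the markers were not detected).
```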
Do I expect to see a specific variable name for the gaze position in the reference frame of the surface?
ah!! I see that it is FALSE for the entire dataset
I wonder if that is because only two tags were visible
To improve marker detection, please ensure a sufficiently large white border around them. Also make sure the scene is sufficiently illuminated.
they are actually on a screen with a black background. Do you recommend that I add a white frame to the images of the markers? Also, do you recommend that I sharpen the markers (by repeating the pixels) or using them blurry?
The edges should be as sharp as possible. The recorded contrast between the white area/border and the black area should be as strong as possible. If you printed the markers without the white border I highly recommend including it. The detection (green border around detected markers) should be very stable in the preview.
got you. Thanks for your help.
Is anybody else encountering gray parts in the world video? as if the video wasn't recorded, just the whole screen gray. the fixations and the eye videos work nevertheless. Is this just about a very slow device or has this other reasons? I encountered that with the newest version (where the fixations don't work for me) and with v3.1.16 (which I now used because of the missing fixations in v3.2.20).
It's mostly around 1.5 minutes in. But the calibration part also takes about 45 seconds because it's so slow, so you might be right here, too. Thank you.
Hi, I wanted to know in which coordinate space the gaze point and gaze direction data provided by gazedata.cs are?
Gaze is mapped into the Unity main camera
See https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#map-gaze-data-to-vr-world-space on how to transform it into unity world coordinates
Thanx!!
I have an issue using two computers. The Annotation plugin doesn't work when two computers are used.
Keypresses are only recorded in the Pupil Capture instance in which the key was pressed. I am not sure if you are expecting the keypress to be transferred to the other computer. Please provide more details about the setup, the performed actions, and the expected outcome.
Hey, i am using the pupil labs wearable eye tracker on windows, and when i use the pupil capture program, it tells me that it cannot connect to device
And when i open the device manager of the device, it tells me that the installed driver is the best one
@user-42a6f7 What version of Windows? What version of Pupil Capture?
Device manager: you should see Pupil Cam devices listed under libusbK devices. Do you see any Pupil Cam devices listed in Cameras or Imaging Devices category?
I am using windows 8
And pupil capture v3.2.20
Pupil Core software is only supported on Windows 10. Please try to use a machine with Windows 10.
Okaaay great
I will test that, thanks @wrp
But libusbK does not appear in the device manager on windows 8
Is it due to windows 8 also?
Driver installer was compiled/programmed for Win 10 if I recall correctly.
Okay great
I have a laptop and a desktop. Both connected by HDMI cable. There were key presses during the exercise. The Annotation commands on Capture and Player are the same. The issue is that it shows 0 annotations on players but there were actually annotations. The keys were pressed
So, to clarify, you are using an external display, not two separate computers, correct? It is important that Capture World window is focused when pressing the keys. If it is not, the keys cannot be recorded.
Please do you have any reasons why one can end up having zero Annotations even when the hot keys were pressed ?
Hi, we have a problem with the calibration - single marker
Hi @user-8e5c72. Please can you explain your problem in more detail so we can be in a better position to offer advice?
following up on susi5's comment: We use the single marker calibration but it doesn't turn blue when fixated; only the borders stay lit green - is that normal? Also, the calibration is very bad. Compared to previous studies, the gaze is still flickering around after calibration. Can that be due to the camera used? This time we are using the fish-eye camera - last time it was the other one.... Thank you!
Is it possible that you have been using the "Manual marker" calibration previously?
No, last time we used the screen marker calibration. but it's not possible this time.
ok, thanks for letting me know. And you are using the physical marker mode in the Single Marker calibration, correct?
Yup.
It is normal that the world window shows a green circle around the detected marker. I am not sure why you would expect it to become blue. Did you see this effect somewhere else before or in a video tutorial?
ah yes I saw it in a video that's why I asked myself if the marker might not be working properly. But obviously it works the way it should, thanks for clarifying! but do you think the bad calibration results are because the single marker is just not as good as screen marker calibration or because of the different camera or what could be the problem?
It is important to understand that the single marker calibration requires a different procedure than the screen marker calibration. The latter expects the subject to fixate the different targets while keeping their head still. The former has two options: 1. Fixate a still target while moving the head around. 2. Keep the head still and follow a moving gaze target.
Which of those two are you employing?
we tried both.
ok, could you please share a Capture recording of you or someone else performing the single marker calibration with [email removed] This way we will be able to give concrete feedback.
Thanks
At what resolution would you make an image, like a 3x3 square, appear clearly in Player? Images in Player are usually small and/or blurry.
done. Appreciate your help!
Hi @user-0cc7c5. I have responded by email with feedback about using the single marker to calibrate (and post-hoc calibration and gaze mapping).
Yes I just saw it. Thank you very much!
The display resolution of your stimuli is not primarily relevant. Instead, check the scene camera resolution and make sure it is focused correctly.
Focused, with a resolution of 1080x720
Is there a recommended chinrest for head mounted eye tracker?
What about ideas on how to get the participant to click the space bar or some other device, like the mouse, when they find the target... clicking to align with the annotations?
Hello, the right eye camera of my Pupil Core (Pupil w120 e200b) does not respond. I have measured impedance at S1 and S2 in both, and you can see the difference in the photograph. I also measured pins at the SN9C5256AJG (camera controller) with low resistances (3.3 ohms... 3.1 ohms...). Could you help me with this?
Please contact info@pupil-labs.com in this regard
I want to keep using it monocular, but pupil player is not working. Is there a way solve that?
Quick question about the fixation report csv from pupil player, is start_frame_idx-end_frame_index an inclusive or an exclusive range? Thank you!
inclusive 🙂
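Since both endpoints are inclusive, selecting a fixation's frames in e.g. Python requires an end + 1 slice:

```python
# A fixation spanning start_frame_idx=3 to end_frame_index=6 covers
# frames 3, 4, 5, and 6 (both endpoints included).
frames = list(range(10))  # stand-in for world video frame indices
start_frame_idx, end_frame_index = 3, 6
fixation_frames = frames[start_frame_idx : end_frame_index + 1]  # end inclusive
```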
hey, a few questions - however we/I are really new to eye-tracking and procedures
We are aware of the implications if we were to use screen calibration - such as keeping movement as minimal as possible (however, our participants will stand in front of a mirror, so there will be natural head movement).
Is there any possibility to improve the single marker calibration? Average accuracies right now are always >2.5 at least. Would "natural features" be more promising (e.g. calibrating on body features)?
However, we noticed that in every recording there is a slight left shift, such that even with good calibration + validation (accuracy <1) the fixations are systematically shifted left relative to the intended positions.
I would be very grateful for suggestions!!
1) Single marker calibration is very dependent on how one performs the procedure. Feel free to share a Pupil Capture recording of you performing it with [email removed] such that we can give feedback. You should be able to achieve comparable accuracies with screen and single marker calibration. After any calibration, it is important to avoid slippage. But you can use the screen marker calibration and then let the subject look somewhere else.
Natural features calibration is less reliable as it requires good communication between subject and operator.
2) Usually the offset is systematic but not in a linear fashion. So it is difficult to correct it. Nonetheless, the post-hoc calibration in Pupil Player allows you to recalculate the calibration using a manual (linear) offset.
Could you let us know what is not working for you specifically?
When I drop the folder into Pupil Player, it opens and closes immediately. I copied the data from the camera that is ok and renamed it as if it were from the missing camera, so that I could see the data without using post-hoc calibration. But I cannot use the calibration and validation in a monocular way (selecting just one eye's data). Is there a way to do that?
Hi, I have two questions regarding using Pupil Core with the newest release of Pupil software on Win10.
1) I’m using a second monitor for calibration, as my experiment involves visual stimuli on a monitor. Once the calibration is done, it remains frozen with a black color. Any idea what this could be? It stays this way until Pupil capture is closed and makes me unable to perform the experiment as I cannot show anything on the screen.
2) Are there any recommended settings in regards to frame rate and resolution on both world and eye camera? Couldn’t seem to find anything about this in the docs.
Thanks!
Edit: The frozen black screen does not happen when running calibration on the primary screen on the laptop.
We are testing a workaround for the black-screen-on-exit issue https://github.com/pupil-labs/pupil/pull/2144
Did you know that you can run the calibration in window mode, too? Is this an acceptable solution for now? I am trying to assess how quickly we need to release this bugfix.
I was able to reproduce the issue. I will let you know on Monday when we will be able to fix it
Player should be supporting monocular recordings. Just don't copy the video. Leave it as it is. Should you continue having issues with it, please share it with data@pupil-labs.com such that we can try to reproduce the problem.
👍 I will try again, thanks
Thanks for reporting the issue and the detailed context information. We will try to reproduce the issue.
Re 2) we recommend the lowest resolution and highest frame rate that your computer allows without dropping frames
Thank you for your quick answer. This happened in v. 3.0.7 as well, but I updated today to see if it worked with a newer version - without luck. Nothing in the .exe terminal implies that there is an error, and once I calibrate again or validate the black screen goes away and the choreography is launched.
@papr how do we validate the calibration result? I noted it indicates % dismissed for accuracy below 0.8. How much % dismissed would mean a failed calibration? Thanks.
The most important metric is the accuracy. Which threshold to use, depends on your experiment/setup.
How do i delete an annotation within pupil player ( i annotated a section of my video with the wrong annotation so I need to delete and correct it)
I have Pupil Invisible eye-tracking glasses. I had an individual wear them in an arena, and the signs that were 150 feet away are blurry; you cannot see what the signage says. Is there any way to fix this to make the video footage sharp?
Thank you @papr. Where can i see the accuracy metric ? I can only see % dismissed for accuracy below 0.8 confidence.
Check out the accuracy visualizer menu. It shows the accuracy and precision for the latest calibration/validation.
Hello! We have tried recording eye movements in paragliders and skiers. The problem is that, due to the sun or snow, the eyes light up and the pupils are not visible. Is it possible to somehow change the settings to fix this? Or is the eye tracker unusable in high-light conditions?
Hi, unfortunately, there is nothing one can do post-hoc in Pupil Player to fix the overexposed eye images. 😕 We highly recommend checking camera exposures during the recordings.
Thanks, I see. So the problem is just the exposure of the eye cameras, and it's possible to change it in the software before recording, right? I didn't pay attention to such a function before.
Correct. See the "Video source" menu of the eye windows.
Thank you! I'll check it.
I have one question. I am doing research using Pupil Core. Is there a setting to disable gaze measurement in real time?
If you do not calibrate, gaze will not be estimated. If you are talking about pupil detection, that can be disabled, too.
Thank you for your reply. I have an additional question. Currently, I have the World/Eye cameras set to output at 120 Hz, but the frame rate of the cameras does not reach 120 Hz. I'm looking for a solution to this problem; other than increasing the specs of my PC, is there any other way to set the frame rate of the video (to lighten the processing)?
Disabling the pupil detection (without closing the eye window) should solve the issue. Unfortunately, the UI for disabling the pupil detection does currently not work correctly. We will release an update shortly that fixes this issue.
Let's try the experiment without calibration. Thank you very much.
A red dot appears on the screen even though I have not calibrated. Does this mean that it is estimating the line of sight? If so, how can I stop it from estimating gaze?
This comes from the fact that Capture has restored the previous calibration. You can start with default settings in the general settings to reset it.
Hi folks, one question. Is information regarding an analysis done in Pupil Player (i.e. specifically intensity range) stored somewhere in an export/offline_data? Just wondering if this is available somewhere (to ensure e.g. reproducible analyses). Thanks!
No, the pupil detection configuration is not stored as part of the data.
Is there any way to do it – or would you recommend taking a screenshot?
Either that, or you can use the real-time pupil detector API to read out the configuration and store it programmatically. https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_detector_network_api.py
You can use the API to set the properties, as well.
Hi, just wondering what sort of filtering is done to the pupil confidence signal to detect blinks? I see it turns drops into positive peaks and gains in confidence into negative peaks. Is this some sort of median filter? Thanks!
Hi @user-acc963. We run the 2D detector on every frame of eye data and assign a confidence value to show how well the pupil was detected. Our blink detector leverages the fact that the confidence drops rapidly in both eyes during a blink. It processes both eyes' confidence values by convolving it with a filter whose resulting values (called filter response or activity) spike the sharper the confidence drop/increase is. The onset of a blink is defined when the filter response rises above the onset confidence threshold (which corresponds to a drop in the 2D pupil detection confidence). The offset of a blink is defined when the filter response falls below the offset confidence threshold (which corresponds to a rise in the 2D pupil detection confidence).
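The filtering idea can be sketched as follows (an illustrative toy filter, not the actual blink detector implementation):

```python
import numpy as np

# Toy version of the described idea: convolving the confidence signal
# with a step-shaped kernel turns sharp confidence drops into positive
# peaks (onset) and sharp rises into negative peaks (offset).
def filter_response(confidence, half_width=5):
    kernel = np.concatenate([-np.ones(half_width), np.ones(half_width)])
    kernel /= kernel.size
    # pad with edge values to avoid spurious peaks at the boundaries
    padded = np.pad(confidence, half_width, mode="edge")
    return np.convolve(padded, kernel, mode="same")[half_width:-half_width]

conf = np.array([1.0] * 10 + [0.1] * 10 + [1.0] * 10)  # simulated blink
resp = filter_response(conf)
onset = int(np.argmax(resp))    # confidence drop -> positive peak
offset = int(np.argmin(resp))   # confidence rise -> negative peak
```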
Please, do you mean we can use the screen marker calibration but look elsewhere outside the screen ?
Correct
I have chosen (changed) to have the stimulus inside the screen for similar reasons as @user-597293 pointed out. Our original plan was to have it placed on the wall. I never thought it was reasonable to calibrate that way. It still looks unreasonable to me, because the subject must look at the calibration marker.
I think we misunderstood each other. Of course the subject needs to look at the screen during the calibration. But afterwards, the subject can also look somewhere else without losing the validity of the calibration. Remember: Gaze is calibrated in the coordinate system of the scene camera, not the coordinate system of the screen or the wall.
Any news on this @papr? Have you been able to reproduce the bug, or may it be a local problem with my computer?
If a (potential) bug fix is far away I might implement single marker calibration by showing the markers directly in the stimuli software.
Thanks a lot ☺️
Unfortunate, no news yet. I will be able to update you by tomorrow.
Hi @papr, I would like to ask whether the head pose tracking plugin needs the camera parameters and AprilTag parameters to be adjusted. When I was using AprilTag, the detection result was related to the camera intrinsics and the size of the AprilTag.
Apriltag detection in the head pose tracker should work out of the box. Are you having issues with the marker detection?
I wonder whether, if I adjust the resolution of the camera, the detection result will be the same (RealSense 1280x720 and 640x480).
I do not know that for sure. I assume apriltag detection will be better at higher resolution.
Thanks a million.
Hi, I'm part of a group of students that is meant to use a pupil core headset for a small university project. We currently have the problem that Pupil Capture does not recognize the world camera of our hardware. Both Eye Tracker cameras work fine, and the PC also recognizes the camera (my colleague can use it like a normal webcam) but the Pupil Capture software only shows three "unknown @ Local USB"-devices that can not be selected (tying to do so results in the error message "WORLD: The selected camera is already in use or blocked."). We are running the software under Win 10 and have already tried uninstalling and reinstalling the drivers once or twice. Do you guys have any idea what could be wrong?
Sounds like the drivers are indeed not being installed correctly. To debug driver installation, could you please do the following:
1. Unplug Pupil Labs hardware
2. Open Device Manager
2.1. Click View > Show Hidden Devices
2.2. Expand the libUSBK devices category and the Imaging Devices category within Device Manager
2.3. Uninstall/delete drivers for all Pupil Cam 1 ID0, Pupil Cam 1 ID1, and Pupil Cam 1 ID2 devices within both the libUSBK and Imaging Devices categories
3. Restart the computer
4. Start Pupil Capture
4.1. General Menu > Restart with default settings
5. Plug in the Pupil headset - please wait, drivers should install automatically (you might be asked for admin privileges automatically)
Hm, we did that now (twice, we also deleted camera drivers on a second attempt), but the problem still seems to persist...
Could you please check the Cameras and Imaging Devices categories in the device manager? What camera names do you see there?
Cameras:
Integrated Camera
Intel(R) RealSense(TM) 3D Camera (R200) Depth
Intel(R) RealSense(TM) 3D Camera (R200) Left-Right
Intel(R) RealSense(TM) 3D Camera (R200) RGB
Microsoft(R) LifeCam VX-2000
Pupil Cam1 ID0
Pupil Cam1 ID1
Imaging Devices:
WSD Scan Device
libusbK Devices:
Pupil Cam1 ID0
Pupil Cam1 ID0
Please delete the Pupil Cam drivers from the Cameras category. Leave the libusbk category as it is. ~~Please be aware that you need to use a Pupil version with pyrealsense support or the third-party backend plugin to access the realsense camera as scene camera https://gist.github.com/romanroibu/c10634d150996b3c96be4cf90dd6fe29~~
You will need to use a very old Pupil version if you want to use that R200 as scene camera
Ah i see.... where can i get that? 😄
You can find all releases at https://github.com/pupil-labs/pupil/releases
v1.9 might be a working version
Okay, we will see if that works and keep you updated on how its going 🙂
Hello, Capture is crashing after I open it. The world window and one of the camera windows go off suddenly after I open Capture.
Please share the capture.log file.
Sorry, where is the file location?
Home -> pupil_capture_settings -> capture.log
The capture file is the correct one.
We got it to work on version 1.2.7 now, thanks a lot for the help!
Does anyone know the meaning of Pupil_timestamp? Can it be converted to UTC?
Hi @user-555c9d. You can read about pupil's timing conventions here: https://docs.pupil-labs.com/core/terminology/#timing
At recording start, the current time of the pupil clock is written into the info.player.json file of the recording under the “start_time_synced_s” key. Additionally, the current system time is saved as well under the “start_time_system_s” key.
You can convert Pupil timestamps to System time with some quick conversions. Here is an example of how to do this with Python using the system/synced start times and an example first Pupil timestamp:
import datetime
start_time_system = 1533197768.2805 # unix epoch timestamp
start_time_synced = 674439.5502 # pupil epoch timestamp
offset = start_time_system - start_time_synced
example_timestamp = 674439.4695
wall_time = datetime.datetime.fromtimestamp(example_timestamp + offset).strftime("%Y-%m-%d %H:%M:%S.%f")
print(wall_time)
# example output: '2018-08-02 15:16:08.199800'
Hi,
We are conducting an experiment with Pupil Labs using the Pupil Mobile app. When I first plugged in, there was an error: "error: shoutAttachedsensor is not available KeySensor..." After that I unplugged and plugged it in again, and there were no errors, so I started the experiment.
There was a recording approximately 40 minutes long, but when I copy the data and analyze it with Pupil Player, there is data for only 3:56 minutes. Any ideas? Where can I look for a log/error history etc. on the phone? The cable connection might be the problem, because the metal surrounding of the cable seems damaged. When trying again to connect with Pupil Capture, it cannot connect to the device and does not show the eye images. Using a Samsung Galaxy S9+ with Android version 10.
Hello, I'm writing a report for my project, and since I use Pupil Core, I also have to document some of its key components. I have a question about the pupil detection method Pupil Core uses. In the docs as well as in Pupil Player, it's apparent that the device runs 2 methods at the same time: 2D detection and 3D detection. The 2D detection is quite clear to me, as it detects the pupil location in the camera image, and the algorithm is based more or less on the one described in the "Pupil Detection Algorithm" section of this paper (https://arxiv.org/pdf/1405.0006.pdf). But how about 3D detection? It uses a 3D model of the eye(s) that updates based on observations of the eye (https://docs.pupil-labs.com/core/software/pupil-capture/#pupil-detection), but what kind of 3D model of the eye is it talking about here? How is it going to detect the pupil based on this model? Are there any papers or docs where I can find more information about this?
Thank you so much. Also, one more thing: where can I find more info about the gaze mapping function? In the same paper I sent earlier, it says "The user specific polynomial parameters are obtained by running one of the calibration routines", and there are 4 of them: Screen Marker, Manual Marker, Natural Features and Camera Intrinsics. So, I guess each routine has its own way of doing gaze mapping, right? Or maybe there's still an underlying function/method, and depending on the routine, there will be something additional built on top? I can only find this code for it on Github (https://github.com/jeffmacinnes/mobileGazeMapping), but it seems not enough, and I still struggle to find the docs on the website.
Please note, the camera intrinsics estimation is not a gaze calibration. It is just meant for estimating camera specific parameters, e.g. focal length. Capture comes with a set of default intrinsics such that running the estimation is usually not necessary.
Screen/single marker/natural feature calibration only refers to the choreography which defines how reference locations are being collected. The actual calibration (2d or 3d) happens afterward, based on the collected reference and pupil data.
2d calib. uses polynomial regression. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_2d.py#L72-L73
3d calib uses bundle adjustment to estimate the physical relationship/transformation between eye and scene cameras. The results are two matrices (one for each eye) that you can use to transform eye to world coordinates. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L207-L209
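To make the 2d case concrete, here is a hedged sketch of gaze calibration via polynomial regression. It is an illustration only, not Pupil's actual implementation (see gazer_2d.py for that); the feature set and polynomial degree are assumptions:

```python
import numpy as np

# Illustrative 2d gaze calibration: fit a polynomial mapping from
# normalized pupil positions (x, y) to normalized gaze positions in the
# scene image. Feature set and degree here are assumptions.

def poly_features(px, py):
    # second-order polynomial terms of the pupil position
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

def fit_gaze_mapper(pupil_xy, ref_xy):
    """Least-squares fit of polynomial coefficients, one column per output axis."""
    A = poly_features(pupil_xy[:, 0], pupil_xy[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, ref_xy, rcond=None)
    return coeffs  # shape (6, 2)

def map_gaze(coeffs, pupil_xy):
    return poly_features(pupil_xy[:, 0], pupil_xy[:, 1]) @ coeffs

# synthetic check: recover a known quadratic mapping from noise-free data
rng = np.random.default_rng(0)
pupil = rng.uniform(0, 1, size=(50, 2))
target = np.column_stack([
    0.1 + 0.8 * pupil[:, 0] + 0.05 * pupil[:, 0] ** 2,
    0.2 + 0.7 * pupil[:, 1] + 0.03 * pupil[:, 0] * pupil[:, 1],
])
c = fit_gaze_mapper(pupil, target)
pred = map_gaze(c, pupil)
print(np.abs(pred - target).max())  # ~0 for noise-free synthetic data
```

In the real pipeline the reference points come from the calibration choreography (e.g. screen markers) and the pupil positions from the detector, matched by timestamp.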
thank you so much, that's very helpful for me 👍
Hi, can I use core with glasses?
Hi @user-8e2443, the eyeglasses would need to go on top of the Core headset, with the eye cameras adjusted such that they capture the eye region from below the glasses frames. This is not an ideal condition, but it does work for some people, depending on physiology and eyeglasses size/shape. The goal is to get a direct image of the eye (not through the lenses).
It has not worked well for me using glasses
I still don't know why I am getting zero annotations, even though I was clicking the right hotkeys
Hello. We've been running two instances of Capture (two headsets) on one computer for about 12 months. We've been using the Groups plugin and Time Sync to mirror recording on both headsets. Two days ago, though, we ran into an issue where Groups no longer mirrors activity, and we are forced to hit Record on both instances of Capture. Any solutions to this? Or an explanation as to why this has suddenly become an issue?
This is a known bug that we fixed in our latest v3.3 release. :)
This might seem impossible, but I have a recording of it. While clicking the annotation hotkeys with the world window in view, the annotations show up in Player after loading, but when I switch the scene to the visual stimulus, the annotations made at that time won't show in Player.
For example, if I click the annotation hotkey during recording with the world scene in view, I will have loaded annotations in Player. But the annotations I make while looking at the stimulus instead of the world camera scene won't be loaded when viewed in Player.
Again, the Capture application window needs to be in focus/active. The operating system is designed such that keyboard input can only go to the active/focused window. For example, right now, Discord is in focus for me as I type this message. Once I focus a different app, e.g. Spotify, I can no longer type into Discord. Windows highlights the active app in the task bar. When you switch to your stimulus software, you are taking the focus away from Capture, and therefore it no longer receives your keyboard input and cannot record any annotations.
I don't know what's happening
The stimulus is just a picture, a JPEG file... Do you mean I shouldn't minimize the Capture window? I don't think I do. I just start recording and then click the JPEG stimulus picture on the task bar to bring it onto the screen
Exactly. The app showing the picture takes the focus in this case!
I'm using two computers and both are in duplicate mode. What appears on the desktop also appears on the laptop
Then how would I bring the stimulus into view... ? I have not been having this trouble
I do not know your requirements. I guess one screen is for the subject and one for the operator? One idea would be to not mirror the screens. Instead, show the stimulus on the subject-screen and Capture on the operator screen. Make sure to refocus Capture such that it can process the keyboard input after you opened the stimulus
I would like to do the calibration on the stimulus screen (desktop). The calibration doesn't work and the markers don't come up during calibration
Both computers are connected by an HDMI cable. What appears on the desktop must come from the laptop. The screen can either be duplicated or transferred. The desktop can't function on its own. When I was in duplicated mode, the calibration worked but annotations didn't. Now I am trying the other mode to check the annotations, and the calibration refuses to work. I don't know if you get this
Have you chosen full-screen or window mode? If full-screen mode, make sure that the display is correctly set. If it is window mode, please check if the calibration window is hidden behind another window. Also, tip: From the top bar of that window, you can see that this window is not active/focused. It is greyed out.
I understand that the calibration is not showing. I am asking for these setting such that we can figure out possible causes for the markers not showing.
Screen marker calibration menu:
Have you chosen full-screen or window mode?
Under screen marker calibration, there is no "full screen or window mode" option written here
WOW
I switched it to the generic PNP monitor (1) one and it worked. I have to check the Annotations now
ok, great. Make sure the Capture window is active
The Capture window can't be in focus at the same time as the stimulus
Correct. That is what I was saying before. Capture needs to be in focus.
How would you do that ?
You can put the world window into focus by clicking on it. You should see how the text in the window bar at the top changes colors.
Meaning the stimulus should not be in full screen while the Capture window is in full screen? What about a user viewing natural features without using the screen?
I feel like I do not know enough about your setup to be able to help you here. I can only tell you what the requirements are if you want to create annotations via the keyboard: The world window needs to be in focus.
Also, the stimulus is not shown during the calibration, correct? Because the subject needs to focus the markers, not the stimulus. I do not know if natural feature calib would solve your problem.
If you are having trouble with the full-screen mode in the screen-marker calibration, you can turn it off. See your last picture -> disable Use fullscreen
. This way you get a window that you can position where ever you like.
I'm not saying the stimulus is not showing... The stimulus doesn't need to show during calibration. I am concerned about having zero annotations even though I pressed the hotkeys during recording.
You said for me to have annotations, capture window must be in focus.
Then I asked how can it be in focus when the stimulus should be in focus during recording?
Maybe we are having different terminology here? Multiple windows can be visible but only one can be in focus.
Also, focus does not mean that the subject needs to be looking at it. Focus is a technical term that determines to which window the operating system sends the keyboard input.
@papr I just sent my capture file to your email address since it is crashing again
ok, I will have a look at it on Monday.
Thanks @papr. Where can I see the results from the accuracy visualiser? I don’t seem to see any from the player. Thanks again
I think it's in Capture, but you need to enable this plugin
@papr I went over all your responses and thought about them. This made me understand what is meant by Capture being in focus. I know what went wrong. I wasn't having issues getting annotations before. I used to calibrate using the single marker, and when I switched to screen marker calibration, I had both the Capture window and the stimulus on the desktop. So if I minimized Capture on the desktop to have the stimulus on the screen, it wouldn't be in focus anymore.
Here is how I solved it.
I moved the Capture window from the laptop to the desktop so I could calibrate. After calibration, I moved it back to the laptop so that Capture can be in focus (active). The desktop has the stimulus in view, and I don't need to minimize Capture (it's already showing on the laptop).
I hope you get it? Thanks for your responses.
Yes, sounds like you got it right! 🙂
@user-d1072e is right. I was referring to Capture. You can get these values in Player, too, if you do post-hoc calibration. Each gaze mapper entry has a submenu validation that calculates the accuracy/precision for the corresponding section.
Thank you @user-d1072e and @papr. If I have enabled the accuracy visualizer in Capture, can I still get the results from Player? Thanks again
I had a look at the file. There is no indication for a crash 😕 Do you continue having this issue?
I have experienced it twice, and it went back to normal after I switched it or my computer off for a while.
Quick question on fixation detection algorithm
I saw some recommendations on max duration on the Pupil Labs website, and I understand that whatever one chooses depends on a lot of factors.
Please, do you have any resources that talk about best practices for a variety of tasks?
I am doing a visual search task for example
Maybe a weird question: I will use two eye trackers on different computers. Are the timestamps of those two recordings comparable? How do I synchronize them when they are not in the same room? I want to find out if they look at the same AOI at the same time (on a screen).
Checkout the Capture Time Sync plugin https://docs.pupil-labs.com/core/software/pupil-capture/#network-plugins
Yes, see this message https://discord.com/channels/285728493612957698/285728493612957698/843760243611140116
Thanks @papr How do I perform post hoc calibration? Thanks again.
Hi
I have started data collection with the pupil core. While playing around with some early data I plotted the results (i.e. the angle moved by the eye within a period of time) from the SAME recording based on data obtained from: 1) real time pupil detection in capture v2.5, 2) posthoc pupil detection in player v2.5, and 3) posthoc pupil detection in player v3.2. While I thought there might be some small differences between the results from these different sources I was not expecting them all to be so different. I expected the data processed by v3.2 to differ somewhat since it uses the new pye3d method but there was even a difference between 1 and 2 which I found very surprising given that I would assume they should yield the same, or very similar, results.
To try and figure out if the difference between 1 and 2 was because I might have used different settings when running the post-hoc pupil detection, I decided to run the post-hoc pupil detection in Player v2.5 on a separate recording twice with exactly the same settings and exported the data from each. Although the results were more similar, there were still some large differences that would definitely impact my study.
Do you know why there is such a large amount of variability in the data from the same recording but from repeated posthoc pupil detection? Any help getting to the bottom of this would be amazing as this has really cast some doubt on the reliability of using the pupil position data for my current study.
Many thanks.
Which data are you looking at exactly? And would it be possible for you to share a raw data export for each version that exhibits this high variability? The email for sharing would be [email removed]
Hi Pupil Community, I have a question very specific for my use case. I need to be able to calibrate eyes individually, that is 1) Calibrate left eye while right eye is covered, then 2) Calibrate right eye while eye is covered. When performing gaze mappings, I need to have mappings based on step 1) for left eye and based on step 2) for right eye. Is there a plugin for that? I know there is this one: https://gist.github.com/papr/5e1f0fc9ef464691588b3f3e0e95f350 It does monocular mappings, but eyes are not calibrated individually. Am I missing something here? Many thanks in advance!
You should talk to @user-a09f5d 🙂
I will have a look at this tomorrow
Thanks @papr
Hi @user-7ff310 I needed to do exactly the same thing in the past so I'd be happy to chat with you. In a nutshell (and @papr please correct me if this is no longer the case), as far as I am aware it is not currently possible to calibrate each eye individually within the same recording. In the end I found a way to use the raw data from the eye cameras to avoid the calibration step altogether. Still work in progress but hopefully it will workout.
Thanks! I was thinking to do the same thing, basically to develop my own calibration procedure outside of pupil core. Unfortunately, I am very limited in time and don't have it enough to dive into development.
I use the mean of Circle_3d_normal_x,y,z at time 1 and the mean at time 2 and calculate the change in eye position (in degrees) between the two time points
Absolutely, happy to share. Should I send just the exported files or do you need the whole recording?
The exported files should be sufficient for a first look. 🙂 If you say the mean, what values are you averaging? The normals from both eyes or do you average multiple normals from one eye in a given time period?
I do the latter. For the left eye I average the normals in a 2 sec time window when both eyes are viewing the target (time 1- binocular viewing) and compare it to the mean of the normals during a 2 sec window when the right eye is covered (time 2- monocular viewing). I then calculate the angle between the position at time 1 and 2.. I then do the same for the right eye (except time 2 is when the left eye is covered).
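The averaging-and-angle analysis described above can be sketched in a few lines. This is a hedged illustration; the column names follow Pupil's pupil_positions.csv export, and the vectors here are synthetic:

```python
import numpy as np

# Sketch of the analysis described above: average the circle_3d_normal
# vectors within two time windows and compute the angle (in degrees)
# between the mean directions.

def mean_direction(normals):
    """Normalized mean of an (N, 3) array of unit vectors."""
    m = np.asarray(normals).mean(axis=0)
    return m / np.linalg.norm(m)

def angle_between_deg(normals_t1, normals_t2):
    cos = np.clip(np.dot(mean_direction(normals_t1),
                         mean_direction(normals_t2)), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# two synthetic clusters of normals, exactly 10 degrees apart
theta = np.radians(10.0)
window1 = np.tile([0.0, 0.0, 1.0], (100, 1))
window2 = np.tile([np.sin(theta), 0.0, np.cos(theta)], (100, 1))
print(round(angle_between_deg(window1, window2), 3))  # 10.0
```

In practice one would first filter the exported rows by timestamp range and confidence before averaging each window.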
ok, please include the timestamp ranges for time 1 and 2 in the shared recording.
They are all listed in the annotation file.
I'll try to get that sent over to you ASAP. Thanks @papr.
Since it is fairly late over here, I won't have time to look at it today. So no hurry 🙂
Reading the docs and having little experience with Pupil Core, I am wondering what the red overlay on the eye camera (a red circle and a dot) means, in comparison with the blue overlay described here https://docs.pupil-labs.com/core/software/pupil-capture/#pupil-detector-2d-settings
Hi @user-3bb37c, the red overlay represents output of the 3d pupil detector; the blue overlay represents output of the 2d detector. You can read more about that here: https://docs.pupil-labs.com/core/terminology/#pupil-positions (with links to detection methods)
maybe I am missing more detailed documentation
Hello. I am working with the gaze report (the recording was binocular). Gaze positions from both eyes are included in the gaze report, aren't they? Is it possible to get eye positions for each eye separately?
Hi, the Pupil Capture world video window fails to appear quite frequently. How do I fix this issue on Mac? Thank you
Hello @user-189029, gaze_positions does contain metrics from both eyes, yes. You might be interested in the gaze_normals. These describe the visual axis that goes through the centre of each eyeball and the object that’s looked at. But there are other possibilities. We report two kinds of data with Core recordings: Pupil data (this refers to any data that is relative to the eye camera coordinate system); and Gaze data (this refers to data that has been mapped to the world camera coordinate system following calibration). You can read more about the data made available here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv
Thank you! I have another question. I am analyzing timestamps in the gaze report for one eye and see that sometimes the lag between successive observations is higher or lower than the sampling interval (not approximately 5 ms as I expected). Is it because samples are missing, or because the samples are delayed/shifted?
Hi :) Just to clarify, the whole window is missing? Or only the video feed?
Thank you, I've solved that problem. I have another question: there is no red ring around the pupil, and the green outer ring sometimes appears and sometimes disappears
The cameras do not have a fixed sampling rate. In other words, it is expected that the samples are not recorded exactly every 5 ms. In addition, if the computer does not have sufficient computational resources, it is possible that Capture drops frames in order to keep up with the real-time data. As the last point, I would like to note that the gaze positions are composed of three different streams: 1. Monocular left 2. Monocular right 3. Binocular
Wether a gaze datum is mapped monocularly or binocularly depends on the confidence of the underlying pupil data.
So, in total, the gaze data usually has a higher frequency than 200 Hz.
Thank you again! Is it possible to get information about how the mapping takes place (monocularly left / monocularly right / binocularly) for a particular recording?
Which Capture version do you use?
3.3.0
Green and red circle represent the pye3d pupil result. Green is the eye model outline, red the estimated pupil detection. For this to work well, you need to fit the model. This can be done by simply rolling the eyes. Make sure that the blue ellipses (pye3d input) fit the pupil well, first though.
Once the model is stable, you should see the green circle around the eye ball and the red and blue circles should match for various gaze angles
Thank you for your patience. I will try again
https://docs.pupil-labs.com/core/best-practices/#fixation-filter-thresholds
I have actually read that multiple times before today. It seems this depends a lot on the researcher's discretion, taking the nature of the task into consideration. Researchers might also check the literature. This is because you need to know that the chosen duration/dispersion filter is appropriate... the dispersion might be hard to get... The research on chess is not going to be a one-day read to grasp
I have been thinking about getting the values too. Why post-hoc calibration?
Would this work if calibration was not included in the recording file?
No, Player needs reference data in order to calculate the accuracy post-hoc.
Please take a look at this picture. I'm in this situation right now. I can't align my pupils
The eye camera is adjustable
I see. Are you using a Pupil Labs Core headset?
Yeah. That was not the case last night, but it has been the case today 😅
@user-7daa32 is right. Check this example camera adjustment video https://www.youtube.com/watch?v=7wuVCwWcGnE&t=13s
This is a paper that used Pupil Labs, and here is what they said:
Maximum dispersion of 3 degrees would probably suit an experiment whereby the wearer is moving around a lot. Such movements would elicit larger eye rotations in order to fixate static regions of the environment. We don't specifically recommend any particular parameters in general. Many researchers examine their respective datasets and tweak the thresholds until reasonable fixations are evident. Relying on existing literature to define thresholds can be problematic due to experimental/participant differences.
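To make the interplay of the two thresholds concrete, here is a hedged sketch of a textbook dispersion-threshold (I-DT style) fixation filter. It is not Pupil Player's exact implementation, and the threshold values are placeholders:

```python
import numpy as np

# I-DT style fixation detection: grow a sample window while its spatial
# dispersion stays under max_dispersion_deg; accept it as a fixation if
# it also lasts at least min_duration_s.

def detect_fixations(t, x, y, max_dispersion_deg=1.5, min_duration_s=0.1):
    """t in seconds, x/y in degrees; returns (start, end) sample index pairs."""
    fixations = []
    i, n = 0, len(t)
    while i < n:
        j = i
        # grow the window while its dispersion stays below the threshold
        while j + 1 < n and (
            (x[i:j + 2].max() - x[i:j + 2].min())
            + (y[i:j + 2].max() - y[i:j + 2].min())
        ) <= max_dispersion_deg:
            j += 1
        if t[j] - t[i] >= min_duration_s:
            fixations.append((i, j))  # window qualifies as a fixation
            i = j + 1
        else:
            i += 1  # too short: slide the window start forward
    return fixations

# synthetic demo: two stable ~0.5 s periods separated by a saccade-like jump
t = np.arange(100) * 0.01
x = np.concatenate([np.zeros(50), np.full(50, 5.0)])
y = x.copy()
print(detect_fixations(t, x, y))  # [(0, 49), (50, 99)]
```

Raising max_dispersion_deg merges nearby samples into longer fixations; raising min_duration_s discards short ones, which is why the right values depend on the task.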
That's true. In my case, participants will use a chinrest and the stimulus is a static 2D image. It's a visual search task. I will definitely read more literature to see if anyone gave a rationale for their selection. Eye tracking research is hard to replicate. Many studies hide a lot of details, and some just state the manufacturer's recommendations or what they thought the values were, for example the tracker accuracy and precision.
So, can I safely record with only a stable red overlay? I am running experiments with head movements.
In addition, I have trouble setting the auto-brightness option of the world camera. There are 4 options, but only "Manual" can be set in my case. The other options fail with an error saying "Could not set values...". Because of this, when my subject enters or exits a room, it seems the world camera is not adjusting brightness correctly. There are several options not listed in the online documentation. Is there a more detailed manual so I don't have to ask too much in here?
The scene camera performs auto-exposure in its default setting (see screenshot). If you have trouble with that functionality, please share two short recordings where you change from a bright to a dark environment, and vice versa. Please make sure the exposure is ok during the start and that the exposure mode is set up as indicated in the screenshot (you can use the Restart with default settings button in general settings to get these settings, too).
If different participants view the same experimental material, can the data results be superimposed for statistical analysis?
This seems to be the key. If the blue overlay depends on the 2d model, then the output of the 2d model is the input to the 3d model? Should I adjust the fixation filter thresholds first of all? The weird thing is that I have a lot of data with a confident red overlay but the blue one missing (or hidden behind the red overlay if they match?)
I am running vehicle driving experiments
If the 3d model is well-fitted, characterised by a robust red ellipse overlaying the pupil and a green circle that outlines the eyeball, you likely will not notice the blue ellipse as it will match the red one (like in this video: https://www.youtube.com/watch?v=7wuVCwWcGnE&t=13s).
Yes, this information is included in the base_data column. It includes 1-2 entries per row, where each entry has the format <eye timestamp>-<eye id>. Two entries indicate a binocular mapping.
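Given that format, classifying each exported gaze datum is a one-liner away. A hedged sketch (the sample base_data strings are made up, and the assignment of eye id 0/1 to a particular eye is a convention worth double-checking against the docs):

```python
# Classify each gaze datum as monocular or binocular from the base_data
# column of a gaze_positions.csv export. Entries have the format
# "<eye timestamp>-<eye id>"; two space-separated entries mean binocular.

def classify_mapping(base_data):
    entries = str(base_data).split()
    if len(entries) == 2:
        return "binocular"
    eye_id = entries[0].rsplit("-", 1)[1]
    return "monocular eye0" if eye_id == "0" else "monocular eye1"

rows = ["100.1234-0 100.1250-1", "100.1300-0", "100.1350-1"]
print([classify_mapping(r) for r in rows])
# ['binocular', 'monocular eye0', 'monocular eye1']
```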
Thank you
I have a question please..
Still on Annotations
The participants are going to click a hotkey when they find the target. Are there any USB-connected devices I can use instead of the keyboard? For example, a device like a mouse where a left-click means start and a right-click means end.
If using two hotkeys would be confusing to the users because they might make mistakes in the clicking, the space bar could be used (how can we set it to the spacebar?), or just one side of the mouse.
Using letter keys can be distracting
With the natural features calibration, can the subject move the head between fixations? Or should they keep the head still between fixations for this type of calibration?
Head movement between fixation points is allowed. This way you can perform a choreography similar to the single-marker calibration, where you only have one static target and the subject looks at it from different angles.
I would also be willing to fund development of a plugin that would do this. Is there someone in this community interested in doing that? I would pay for development and code would be released back to the community under open source license. Please message me directly if interested.
Plugin should enable calibrating eyes individually, that is 1) Calibrate left eye while right eye is covered, then 2) Calibrate right eye while left eye is covered. Both 3d and 2d models should be possible to use. Thanks software-dev core
I have an idea on how to implement that. What is your time frame?
couple of weeks to a month would be perfect
There are external USB devices that have only a few buttons, specifically made for experiments. They behave like a keyboard. I think one can configure which letter they generate. Unfortunately, I do not have a link to a specific product right now
Thanks. I just checked online and noticed there are mini keyboards. I saw ones with 2 and 3 keys. These have no letters on them; they are completely blank. They are programmable for OSU something and gaming. I saw one with ON and OFF written on it. The issue is how I can program them to work with Capture. Maybe one that has letter or number keys will work just like the number pad
I am starting to get things working now. Thanks for your answers, I will soon reduce my questions frequency.
Now I'd like to know 2 things regarding screen calibration. When does the center of the marker turn green? Is it when the world cam detects it correctly? I have noticed that too much exposure saturates the world camera, and only by reducing screen brightness (so that it matches the brightness outside the screen) can I get the center of the marker to turn green.
I know that theoretically, if I am interested in natural world viewing, I should try the natural features calibration, but sometimes the screen calibration extrapolates the gaze outside of the screen quite precisely.
I would always use the screen marker calibration choreography. The natural features one is mostly for the case when you do not have access to a marker. And yes, it turns green when the marker is being detected.
As long as they behave like a keyboard, you should have no problem in Capture. Just program them to emit the letters u and i, for example, and set up the annotations to use these keys.
Since we don't know how we can do that, I am going to look for the ones that have letter or numeric keys just like the keyboard
4-key USB Keyboard Mini Keyboard DIY Custom Shortcuts Keyboard
I think this means we can easily program it
Ok, I think we can provide that in this time frame.
that's great, thanks!
Hi, we have a quick question with regards to versioning improvement of algorithms, and older recordings. Can we think of the raw video recordings made by 2.6 and 3.x as essentially the same (so we can think of the video hardware capture is constant), and that the later pupil player versions doing posthoc analysis are improved capture algorithms? For example a session recorded by 2.6 analyzed by 3.3 player is the same as one captured by 3.3 and analyzed by 3.3? And that one recorded by 3.3 but posthoc analyzed by player 2.6 would be the same one captured by 2.6 and posthoc analyzed by player 2.6? Thanks!
Everything is correct, but this detail:
And that one recorded by 3.3 but posthoc analyzed by player 2.6 would be the same one captured by 2.6 and posthoc analyzed by player 2.6? Recordings are not backwards compatible. Only forward compatible.
Thanks papr! Is backwards incompatibility inherent to the video recording itself, or to the supporting files? For example, if for some scientific reason we wished to use the 3.3 video recordings in an older algorithm, it would theoretically be possible, but not practical because manually editing the supporting files would be too onerous?
The supporting files, correct.
Thanks for looking into this. I also realized that a quick fix is to use the primary screen for the participant. Thus fullscreen screen marker calibration could be launched on top of the fullscreen stimuli software without getting the black screen, as on the second monitor 👨🏼💻
For me the issue triggers independently of primary/secondary displays. It triggers for me if the fullscreen window is on a different screen than the world window.
https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection
Thank you!!
Weird. I actually changed computer today, from a laptop to a more powerful desktop PC. And the problem actually seems to be gone, and full screen calibration can be done on both monitors 🤷🏼♂️ Regardless of where the world window is.
Maybe it is dependent on the windows version? Can you check which one you have on your machines?
I am on 20H2 (OS Build 19042.985)
Yes. I am on the same version and build as you - for both PCs. The only difference is that the laptop where the issue originated runs Win10 Home, whereas the desktop PC runs Win10 Education.
ok, thanks for letting us know.
Hey folks, can you describe to me how pupil confidence influences the pye3d model fit during pupil detection? Is there a hard threshold? Are individual frames weighted by confidence?
I am actually not quite sure about that. Let me come back to you on this on Monday
Ok. Thanks!
Hi @papr, would you tell me what the latest Pupil software version that supports RealSense is?
"Perform another validation. This can give you an estimate of how much slippage error you accumulated during the experiment. If you are recording multiple blocks with a non-fixed calibration setup, you can also re-use the calibration session of the following block as post-validation for the previous block." I'm trying to understand this statement. How do you re-use a calibration setup?
Hi @user-7daa32 , I would highly recommend you replicate the steps in these videos: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection.
Problem with pupil-video-backend on Raspberry Pi. Hi. I bought a Pupil Core with 120 Hz eye cameras and would like to use it for remote capture, so I connected my Pupil Core to a Raspberry Pi 4 Model B. I want to use pupil-video-backend to stream the video of the 3 cameras from the Raspberry Pi to another computer where Pupil Capture is running. Things worked well when I streamed only the world camera. When I streamed only 1 eye camera, the FPS in Pupil Capture was unstable (staying around 100-120). When I tried to stream 2 eye cameras, the FPS for both eyes in Pupil Capture dropped significantly (one around 60-70, the other around 30-40). When I streamed all 3 cameras, the FPS for the 2 eye cameras was around 30-40, and the FPS for the world camera was less than 10 in Pupil Capture. I also noticed 2 abnormal things. Firstly, when I streamed just one camera, the upload speed of my Raspberry Pi already reached 9 MB/s, but neither the world camera nor the eye cameras should take so much bandwidth. Secondly, when I streamed a single eye camera, the CPU usage of the capturing computer already reached 50%, but processing video at 192*192 resolution and 120 Hz should not take so much CPU. Does anyone know where the problem is? Thanks for any help in advance. Pupil Capture runs on Win10. The Raspberry Pi runs Ubuntu 20.10 64-bit, with Linux kernel 5.8.1-1020-raspi aarch64 and MATE 1.24.1. I installed the dependencies and ran pupil-video-backend through PyCharm.
It is possible that the backend sends the raw bgr buffer which indeed could generate high bandwidth usage. The limited bandwidth would also explain the lower fps when streaming more cameras. I suggest contacting the backend's author directly.
BTW, the frame read latency and frame send latency were fine on my Raspberry Pi
Also, sometimes the pupil-video-backend will just crash and say TypeError: None does not provide a buffer interface. Usually after such a crash, the code cannot recognize an eye camera.
Hey all, I have generated some extremely long pupil capture recordings by mistake (4-5 hours long). Is there any way to cut such a long recording in smaller bites? Including the video
Unfortunately, there is currently no tool to do that.
What does "at the population level" mean here: the pye3d detector is able to provide gaze-angle-independent pupil size estimations at the population level.
To be fair, I haven't had a good chance to go through the py3D code. I'm not afraid to, but was checking to see if you had an easy/quick answer.
I had a quick look and was not able to find a place where we filter for confidence when it comes to model fitting. It might make sense to do that though. We will have a closer look and discuss this.
pye3d requires some physiological parameters to work, e.g. eyeball radius, refraction parameters, etc. Since it is difficult to measure them for each subject, it uses average values that have been measured by other researchers in the past. Therefore, the correction is not correct on the subject level (per subject) but should be correct on average (population level). In other words, the correction only works well if you aggregate data from multiple subjects. As an example, pye3d might overestimate the required correction for one subject but underestimate it for another.
very interesting, thanksss 👍
In a meeting now, but we should talk! It must be filtering blinks, at least.
Found it. It is a bit hidden. This is done internally by the observation storage classes, e.g. https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/observation.py#L127 These are the thresholds used:
threshold_short_term=0.8, threshold_long_term=0.98
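As a rough illustration of how such thresholds gate which pupil observations feed the eye model, here is a minimal sketch. The function name and storage labels are hypothetical; only the two threshold values are taken from the message above, and the real pye3d storage classes do more than this.

```python
# Sketch: gate pupil observations by 2d confidence before they enter a
# short-term or long-term observation storage. Names are illustrative;
# the threshold values are the ones quoted from pye3d above.
THRESHOLD_SHORT_TERM = 0.8
THRESHOLD_LONG_TERM = 0.98


def accept(confidence: float, storage: str) -> bool:
    """Return True if an observation passes the given storage's confidence gate."""
    threshold = THRESHOLD_SHORT_TERM if storage == "short_term" else THRESHOLD_LONG_TERM
    return confidence >= threshold


accept(0.95, "short_term")  # passes the 0.8 gate
accept(0.95, "long_term")   # rejected by the 0.98 gate
```

This also illustrates the point raised below: if a detector reports uniformly high confidence (e.g. during blinks), nothing is filtered out and poor samples reach the 3d model fit.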
I have reason to think this is the case - we've been trying to implement our own segmentation method, and we see it explode during blinks. In part, this is because all our samples seem to produce a high confidence value.
I'll check in again later so I can answer questions etc.
Hi @papr, I found the same issue last night. The raw bgr buffer really takes a lot of bandwidth. A single eye camera already takes 13 MB/s. All 3 cameras combined take 33.45 MB/s, which is quite high. I'm considering sending out compressed images to reduce the bandwidth requirement to a reasonable level. Does Pupil Capture support any frame format other than bgr?
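The observed 13 MB/s per eye camera is consistent with an uncompressed BGR stream; a quick back-of-the-envelope check (3 bytes per pixel, protocol overhead ignored):

```python
# Raw BGR bandwidth for one 192x192 eye camera at 120 Hz:
# 3 bytes per pixel (B, G, R), no compression, overhead ignored.
width, height, channels, fps = 192, 192, 3, 120
bytes_per_second = width * height * channels * fps
mb_per_second = bytes_per_second / 1_000_000  # ~13.3 MB/s
```

This matches the measured per-camera upload rate, which supports the raw-buffer explanation.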
Apologies, I confused this with the frame publisher. In theory yes, the backend can be extended to compress the image buffers before sending them. But this is purely up to the backend. Capture does not have any influence on it.
~~Yes, it supports sending the jpeg payloads that it receives from the cameras.~~
If the backend compresses the image, then Capture must know how to decompress it into the original image. Does Pupil Capture support decompression of images?
The backend has two components. The OpenCV camera reader running on the PI and the "backend" plugin running in Capture. Capture just receives the raw buffer from the backend. How the backend gets the buffer is up to the backend. You would have to implement the compression on the PI side and implement the decompression on the backend side.
Or do I have to write my own decompressor and pass the decompressed image to the Pupil Capture?
OK. So I need to write the compressor on the raspberry pi that sends out the image, and write the decompressor on the computer where Pupil Capture is running, right?
Could you please tell me where I can find the "backend" plugin running in Capture?
Apologies, I just noticed that it uses the built-in HMD_Streaming_Source backend. It does not support any compression, but you should be able to extend it by adding your compressed frame class to FRAME_CLASS_BY_FORMAT. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/hmd_streaming.py#L121
Capture will load all Python files in ~/pupil_capture_settings/plugins/. Create a Python file in there with the following code:
from video_capture.hmd_streaming import FRAME_CLASS_BY_FORMAT, Uint8BufferFrame

class CompressedFrame(Uint8BufferFrame):
    ...  # TODO implement decompression

FRAME_CLASS_BY_FORMAT["compressed"] = CompressedFrame
See the Uint8BufferFrame class for details.
On the sending side, adjust the following:
- pass "compressed" as the format for the payload here: https://github.com/Lifestohack/pupil-video-backend/blob/e185aee94818b6fddb472ffc7c3359980ff4436b/video_backend.py#L149
- pass the compressed buffer instead of the image here: https://github.com/Lifestohack/pupil-video-backend/blob/e185aee94818b6fddb472ffc7c3359980ff4436b/video_backend.py#L166
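To make the compression idea concrete, here is a minimal, self-contained sketch using lossless zlib compression of the raw buffer. The function names are illustrative only; in practice the decompression would live inside the custom frame class described above, and a JPEG codec would compress camera images far better than zlib.

```python
import zlib


def compress_buffer(raw: bytes) -> bytes:
    """Pi side: compress the raw BGR buffer before handing it to the sender."""
    return zlib.compress(raw)


def decompress_buffer(payload: bytes) -> bytes:
    """Capture side (inside the custom frame class): restore the original buffer."""
    return zlib.decompress(payload)


raw = bytes(192 * 192 * 3)  # dummy 192x192 BGR eye frame buffer
payload = compress_buffer(raw)
assert decompress_buffer(payload) == raw  # lossless round trip
```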
I see. Thank you!
Another, related question. We've found that the confidence values for our custom segmentation plugin are uniformly high when using the most recent build of Pupil Player. However, when using a Python port of your confidence metric we made about two years ago, the confidence values cover a much greater range, and more accurately reflect the quality of the pupil fit. Any insight into why confidence is uniformly high when using the new version, but not the old?
The 2d confidence measurement did not change in years. Are you talking about the 3d confidence?
Did pye3d introduce a landmark change to the algorithm used to calculate pupil fit confidence? ...or, perhaps there have just been changes in some parameters? It seems like we're going to have to dig into the code to understand, but I thought I would hit you up for any general advice.
By uniformly high, I mean all >.98, which is why I asked the question before. A blink destroys the 3d fit.
No, I'm talking about 2D confidence, which I assume is the confidence used to threshold out which samples affect the 3D eye model fit.
Correct. That one did not change.
...as you were so helpful to point out before.
Interesting. So, this could be an issue with the masks we're sending, or... ? That's a very helpful answer, even if it leaves me scratching my head. 🙂 Thanks.
FWIW, I suggest you guys document the role of confidence thresholds in the portion of the docs dealing with pupil plugins.
It's a pretty critical point.
I also suggest that you guys open that parameter up via slider 🙂
I wonder if that used to be the mysterious slider that determined the frequency of model switching, but is now missing?
The long-term threshold.
You should be able to set the thresholds via the network pupil detector API. I do not think that exposing all parameters via the UI is a good idea; the UI is already too complex as it is. 🙂 Since pye3d changed its approach to slippage compensation, there is no need for the slider anymore.
Ok, that's reasonable, as long as this is documented. Should I open an issue for you?
Feel free to create a PR to pye3d repository to include the info in the README. I guess it could use a flow diagram documenting what happens in which order.
Ok, thanks. I also suggest adding a mention / link in the pupil plugin portion of the docs.
This is not really related to the plugins but pye3d specifically.
I'll try and do this later in the week.
I understand. The issue is that a plugin that is trained to produce pupil-shaped estimates (as most machine learning approaches will be) is generally going to score high when using your metric, which really tests for contiguous contours of an ellipse.
I believe the metric is (# of pixels that form contiguous contours, i.e. support pixels) / (ellipse circumference, in pixels)
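A sketch of that ratio, i.e. support pixels over ellipse circumference. This is an illustration of the described metric, not the actual pupil-detectors implementation; the circumference uses Ramanujan's approximation, and the names and clamping to 1.0 are assumptions.

```python
import math


def ellipse_circumference(a: float, b: float) -> float:
    """Ramanujan's approximation for an ellipse with semi-axes a, b (in pixels)."""
    h = ((a - b) ** 2) / ((a + b) ** 2)
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))


def confidence(support_pixel_count: int, a: float, b: float) -> float:
    """Ratio of supporting contour pixels to the fitted ellipse's circumference."""
    return min(1.0, support_pixel_count / ellipse_circumference(a, b))


# A fully supported circular pupil of radius 30 px scores near 1.0;
# a half-occluded one scores near 0.5.
full = confidence(188, 30.0, 30.0)
half = confidence(94, 30.0, 30.0)
```

This makes the discussion below concrete: a segmentation network that hallucinates a full ellipse during a blink keeps the support count high, so the metric stays near 1.0 even though the fit is poor.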
Well, if the pupil is partially occluded, the metric should result in lower values.
Not if your ML network is robust to occlusion, and produces masks that are full ellipses despite occlusion.
This has nothing to do with how the ellipse is fitted. If your contours form a square and you fit an ellipse to them, it will only partially overlap. If the pupil is occluded, only parts of the ellipse will overlap with the contour. Or am I misunderstanding you?
Give us a partial ellipse, and we'll produce the rest of it 🙂
Consequently, when there is a blink, confidence remains high, and our samples are used to fit the 3D model even if they are poor.
Maybe, it could make sense to use a different confidence metric in your case?
Now, this is the behavior when using the confidence metric bundled in with the most recent version of PL, but not the old version.
You're telling me theres no difference, and I believe you.
So, this means that something with our "old version" is off, or something with the way we're sending our data to the new version is off.
We have work to do.
https://arxiv.org/abs/2007.09600 if you're interested. The figures should help where my description might falter.
It may. I think you've helped as much as you can until we figure out what the differences are between the old / new metrics are.
Changes in parameters? A mistake in the way we are preparing our mask for ellipse fitting?
Our approach is to convert the eye image into a mask by passing it through our network before passing it off to detector_2d_plugin.py .
Ah, I thought your detector generated 2d ellipses on its own.
Using our network almost as a pre-processing step. This allows a more direct comparison between the native algorithms and our own.
It does, in the form of a mask. We are not using an ellipse fitting algorithm that generates the centroid and axes. Instead, we are letting detector_2d_plugin.py do that on the output of our mask.
We CAN regress the ellipse using our network, but we've turned that off for now to allow for a more direct comparison between algorithms.
Right now, our network is really replacing only the histogram bisection you guys do to find the pupil.
And you are saying that for blinks, the 2d detector generates ellipses with high confidence using your masks?
Yes
Can you share an example 2d detection of a mask during a blink?
In a few days, yes 🙂
I'm very willing to share, but it seems like I'm asking you to do work that we should handle first. It's the obvious next step.
If you want to see examples once we're sure everything is working well, I can DM you some.
but I don't want to waste your time and send them before that.
I'm more convinced there's a bug now than before the start of this conversation.
ok. I will sanity check my claim tomorrow.
that the metric hasn't changed?
that one
or a different claim?
Ok, thanks!
I wouldn't do much more than ask your peers if you're forgetting something. My graduate student, Kevin, will also begin comparing the new vs old code this week. It will be good for him.
Ok, my dog says I've spent too much time here. Thanks again! I'll let you know if we learn anything interesting, and I'll send a few masks your way if they are interesting.
Would be nice to see them. But I can wait a few days, too
Anyone having trouble with camera intrinsics calibration? I've covered most of the FOV from different angles (see image), but get the following warnings:
2021-05-25 22:36:29,510 - world - [WARNING] camera_intrinsics_estimation: Camera calibration failed to converge!
2021-05-25 22:36:29,510 - world - [WARNING] camera_intrinsics_estimation: Please try again with a better coverage of the cameras FOV!
This is followed by an endless stream of OpenGL errors:
2021-05-25 22:36:35,706 - world - [DEBUG] root: No more errors found in OpenGL error queue!
2021-05-25 22:36:35,741 - world - [ERROR] root: Encountered PyOpenGL error: GLError(
err = 1281,
description = b'invalid value',
baseOperation = glOrtho,
cArguments = (-0.0, 0.0, 0.0, -0.0, -1, 1)
Thanks 🙂
Hi @papr, there is a question related to pixel coordinates: how do I convert norm_pos_x and norm_pos_y to pixel coordinates?
@user-a98526 Multiply them by the resolution of the camera
Is it like this:
fix_x = fix_x * 1270
fix_y = (1.0 - fix_y) * 720
@user-26fef5 is right. You might find this tutorial helpful: https://discord.com/channels/285728493612957698/285728493612957698/771023287555326012
The resolution of my camera is 1270*720, @user-26fef5.
It's helpful, thank you @nmt.
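The two lines above can be wrapped into a small helper. A sketch: Pupil's normalized coordinates have their origin at the bottom left, while image pixel coordinates have theirs at the top left, hence the y flip; the function name is illustrative.

```python
def norm_to_pixel(norm_x: float, norm_y: float, width: int, height: int):
    """Convert normalized coords (origin bottom-left) to pixel coords (origin top-left)."""
    px = norm_x * width
    py = (1.0 - norm_y) * height  # flip the y axis
    return px, py


# Center of a 1280x720 world frame maps to (640.0, 360.0)
norm_to_pixel(0.5, 0.5, 1280, 720)
```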
For which resolution did you try to calibrate? And which distortion model did you use?
It happened with many of the uncalibrated resolutions. But I realize now it was due to me not setting the distortion model to Radial before calibration, as it should be for the narrow-FOV lens. Maybe one could add something along the lines of "Have you selected the right distortion model?" to the warning? It might be helpful if one forgets it prior to calibration - like myself 🤷‍♂️
By the way, while cycling through the different resolutions, I found a potential bug. When resolution is set to (1024, 768), it seems like auto exposure doesn't work. It is "stuck" with the auto exposure from when it first was selected. See images. Does not matter to me, as I won't be using this resolution. Auto exposure works for all other resolutions.
Thanks again for your helpful and quick support 👍
Newbie question - what's the max FPS the current release of the eye tracking code supports? Looking at prototyping an app that may require up to 400 FPS. Thanks, Mike Brown
Hi @user-9b5cc8 👋 . Pupil Core can sample at up to 200 Hz (read the tech specs here: https://pupil-labs.com/products/core/tech-specs/)
Hi. Is there a way to use Pupil Core over prescription glasses?
Hi @user-e0d63b 👋 , You can try to put on the Pupil Core headset first, then eyeglasses on top. The eye cameras should be adjusted such that they capture the eye region from below the glasses frames. This is not an ideal condition but does work for some people but is dependent on physiology and eye glasses size/shape. The goal is to get a direct image of the eye (not through the lenses).
@user-b14f98 @user-3cff0d This is the line that calculates 2d confidence. As you can see, this has not been changed in 2 years https://github.com/pupil-labs/pupil-detectors/blob/master/src/pupil_detectors/detector_2d/detect_2d.hpp#L222
thanks!
Hey guys! I got really excited about the Pupil Core DIY set and wanted to get started building one myself, but was wondering what the scope of the statement "The Pupil DIY Kit is not for commercial use or commercial clients." is. So of course, just double checking, if I work for a startup, which I understand is a commercial client, but which does not intend to ship anything eye tracking related and I would use it just to see what it feels like to use eye tracking with the startup's technology (a clicker), would that still be an improper use of the open source kit? I'm aware the startup is a "commercial client" but just double checking!
Hi @user-9541d1 👋 Yes, startup = commercial use. We would request that Pupil Core DIY is not used within commercial context.
BTW - what is a clicker? Maybe we can get in touch via email - info@pupil-labs.com - to discuss the use case further and provide you with some feedback. Or we can continue the conversation here if desired.
Thanks for confirming @wrp ! Sure thing, I'll shoot that e-mail a message :)
Hello, I have the following situation: I have the exact same videos twice, at home and at work (duplicated). The thing is that the videos at work already have surfaces added, which makes our 30-minute videos so heavy to compute that my laptop at home cannot handle them.
In addition to surfaces, we want to annotate manually. As I said, at home I have the same videos. If I annotate them, I get an annotation_player.pldata file. Is it enough to copy this file over later in order to transfer the annotations? This would help me a lot as I'm often at home at the moment.
Thank you.
You need to copy annotation_player.pldata and annotation_player_timestamps.npy, but yes, that should work.
thank you 🙂