@wrp @papr I am using Pupil with an HMD, and in GazeData.cs there is this code at line 163: if (dictionary.ContainsKey("norm_pos")) { Debug.Log("here"); NormPos = Helpers.Position(dictionary["norm_pos"], false); } I want to know what norm_pos signifies and what it does.
@user-6cbd99 please check the documentation on the data format: https://docs.pupil-labs.com/#data-format
@wrp @papr where can i read about how pupil capture performs mapping in 2d and 3d
and why do the csv files look so different for 2d and 3d when i export them from player
@user-6cbd99 Re csv export: The documentation that I linked above states
When using the 3D gaze mapping mode the following keys will additionally be added: ....
See these papers on how we perform 2d and 3d pupil detection:
- 2D Pupil Detection - Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction: https://arxiv.org/pdf/1405.0006.pdf
- 3D Pupil Detection - A fully-automatic, temporal approach to single camera, glint-free 3D eye model fitting: https://www.researchgate.net/profile/Lech_Swirski/publication/264658852_A_fully-automatic_temporal_approach_to_single_camera_glint-free_3D_eye_model_fitting/links/53ea3dbf0cf28f342f418dfe/A-fully-automatic-temporal-approach-to-single-camera-glint-free-3D-eye-model-fitting.pdf
The 3d pupil detection is based on results of the 2d detector, i.e. we always perform 2d detection, and if 3d detection is enabled, we additionally process the 2d datum into a 3d datum.
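The pipeline described above (2d detection always runs, 3d fitting optionally builds on the 2d result) could be sketched roughly like this. All function names here are hypothetical stand-ins for illustration, not the actual Pupil API:

```python
# Hypothetical sketch of the detection pipeline: 2D detection always runs;
# 3D fitting, when enabled, consumes the 2D datum and augments it.
def detect_2d(frame):
    # stand-in for the real 2D pupil detector
    return {"ellipse": "...", "confidence": 0.9}

def fit_3d(datum_2d):
    # stand-in for the 3D eye-model fit built on top of the 2D result
    return {**datum_2d, "sphere": "...", "circle_3d": "..."}

def detect(frame, use_3d=False):
    datum = detect_2d(frame)
    if use_3d:
        datum = fit_3d(datum)
    return datum
```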
Hi, may I ask if there is an open source repo for the pupil-mobile project? Or if there is any example android project for developers? Thanks.
@user-5e6759 sorry for the short answer, I am currently on my mobile. the Pupil Mobile repository is not open source
@papr Got it. Thanks for the swift reply.
What does this code do?
# The theoretical response maximum is +-0.5
# Response of +-0.45 seems sufficient for a confidence of 1.
filter_response = activity @ blink_filter / 0.45
I can't seem to use it in a separate compiler
confidence = min(abs(filter_response), 1.0) # clamp conf. value at 1.
As far as I can tell it creates an array of confidences
@user-adf88b hey, what do you mean by
I can't seem to use it in a separate compiler
When I try to do
test = testvar @ nestvar2 / 0.45
@user-adf88b which language do you use? If you use python and you get a Syntax error, then you might be using a version older than python 3.6
Hmm, let me check
Python 3
I'm not sure what the @ operator does and I can't find any sources for it
It is a matrix multiplication
Oooh
@user-adf88b https://www.python.org/dev/peps/pep-0465/
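For illustration, a minimal example of the @ operator introduced by PEP 465 (it requires Python 3.5+; for 1-D arrays it computes the dot product):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# For 1-D arrays, @ is the inner (dot) product: 1*4 + 2*5 + 3*6
result = a @ b  # 32.0
```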
Looks like you are using 3.4 or older
Please upgrade to Python 3.6 if you want the full functionality of our code base
Yea I just updated looks like the @ operator works
So as a question, what size is the filter_response list expected to be?
I would assume it's going to be size 2 (a value for each eye)
Blink detection basically works with simple signal detection. We have an on-off filter that results in a high value on sharp drops (blink onset) and a low negative value on sharp confidence increases (blink offset)
The filter response is a single value
@ in this specific case is a scalar product that is equivalent to a convolution
Since our filter has the same length as the signal
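The idea above could be sketched as follows. Note this is a simplified illustration, not the exact blink detector code; the filter values and window size are made up, chosen only so that the stated ±0.5 theoretical maximum holds:

```python
import numpy as np

# On-off step filter, normalized so the theoretical response maximum is +-0.5:
# first half +1/N (blink onset weight), second half -1/N (blink offset weight).
filter_size = 6
blink_filter = np.ones(filter_size) / filter_size
blink_filter[filter_size // 2:] *= -1

# Example "activity": a window of recent confidence values with a sharp drop,
# as would happen at a blink onset.
activity = np.array([0.9, 0.9, 0.9, 0.1, 0.1, 0.1])

# Filter and signal have equal length, so @ is a scalar (dot) product here,
# equivalent to a single convolution step.
filter_response = activity @ blink_filter / 0.45  # ~0.889 for this window
confidence = min(abs(filter_response), 1.0)       # clamp conf. value at 1
```

A flat confidence signal yields a response near zero; a sharp drop yields a large positive response (onset) and a sharp rise a large negative one (offset).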
So how would I check or register changes in confidence in a single eye?
@user-adf88b in the same way. Activity contains confidence values either from one or two eyes
Oooh
But it is more reliable if it contains binocular data
Binocular data? As in being the product with the filter?
Right, I'll try to modify my script and post it in a bit
ty papr
https://gist.github.com/Ramit110/9d291f3d11565231ff720d24b7ead228 Would this work in any way?
@user-adf88b binocular data means confidence data from both eyes in the case of the blink detector
Right, so comparing the confidence of both eyes.
@user-adf88b in your case you will have to split activity into two lists, one for each eye
Also, in your script above you do not actually calculate the filter response as the blink detector does.
Do not forget, confidence can drop for many reasons. The blink detector is based on the phenomenon that during blinks both eyes close and open in parallel, resulting in a clear filter response. Using the same approach for winks won't work as well, since confidence drops in a single eye are common and you wouldn't be able to differentiate between a wink and a different reason, e.g. looking to the left
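Splitting the activity into one list per eye, as suggested above, might look like the following sketch. The datum layout here (an "id" field of 0 or 1 plus a "confidence" field) is an assumption for illustration:

```python
# Hedged sketch: split a stream of pupil datums into per-eye confidence
# lists, so a filter response can be computed for each eye separately.
def split_by_eye(pupil_data):
    """Return {eye_id: [confidence, ...]} from a list of pupil datums."""
    per_eye = {0: [], 1: []}
    for datum in pupil_data:
        per_eye[datum["id"]].append(datum["confidence"])
    return per_eye

data = [
    {"id": 0, "confidence": 0.95},
    {"id": 1, "confidence": 0.90},
    {"id": 0, "confidence": 0.10},
]
per_eye = split_by_eye(data)  # {0: [0.95, 0.10], 1: [0.90]}
```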
Hmm. I can add some c# validation and checks, I intended for the program to only respond to multiple wink inputs so it might be fine, I'll need to test.
Would it be correct to assume that the filter response that doesn't fold into a single value would be filter_response = activity * blink_filter / 0.45?
Also, would it make sense for even indexes would align with one camera input and odd ones to be the other?
Also, would it make sense for even indexes would align with one camera input and odd ones to be the other?
This is not guaranteed. You would have to sort explicitly.
Not sure what the * operator is supposed to do in this case.
Oh, well I'll see how my current code goes
Thanks again
@papr @fxlange can you refer me to a document that explains, or just give me a brief explanation of, how calibration works in an HMD and on what principles?
3D gaze mapping - This method uses bundle adjustment to estimate the physical relationship between eye and world camera during calibration.
In case of hmd-eyes, it tries to estimate the physical relationship between eye cameras and the virtual 3d space that is defined through the sent reference locations.
Hi all. What would be the best way to go about trouble-shooting a mechanical failure of the eye cameras? Should we contact support, or is there any info online?
@user-6512e6 please contact info@pupil-labs.com
Ok, thanks!
Hello everyone. All of a sudden, neither of the cameras on my hmd tracker is initializing in Pupil Capture. I have tested the tracker on multiple devices and get the error every time. Any ideas on why this would happen?
@user-4f56f8 looks like a physical connection issue. Please contact [email removed]
@papr ok, will do.
Hey
Is it possible to use cross as a gaze visualizer in Pupil Capture instead of a dot? Just like vis-cross plugin in Pupil Player. Thanks in advance.
@user-d28f08 not without changing the source code 😐
@papr ok, thanks for info
Hi, is it possible to find information in Player about the size of defined surfaces? I want to make my own heatmaps, and I need to know the width and height of each surface 😉
@user-a6cc45 do you want to implement it as a plugin or based on the surface tracker's csv export? @user-764f72 could you provide details for both cases? In the first case, a small example how to read the surface definitions file should be sufficient. In the latter case, I do not know if we export the surface sizes, too. If not, we should do this for a future release.
@papr for now I use the information in the exported csv files, but there is no info about the maximum/minimum values of the x and y coordinates of each surface, so my heatmaps don't have the correct width and height (I'm making a plugin for Pupil Player)
@user-a6cc45 you can use the normalized coordinates
They have a min value of zero and a max value of 1. If the mapped gaze coordinates exceed these values, they are outside of the surface.
Also, you can use the normalized and scaled values to calculate the surface size.
Do I need special algorithm to do that? @papr
@user-a6cc45 No.
Original calculation: norm_pos_x * surface_width = scaled_pos_x
Therefore: scaled_pos_x / norm_pos_x = surface_width
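The relation above can be applied directly; a minimal sketch (example values are made up, and a guard is needed when the normalized coordinate is near zero):

```python
# Recover surface size from a pair of normalized and scaled gaze positions:
# norm_pos * surface_size = scaled_pos, therefore
# surface_size = scaled_pos / norm_pos.
def surface_size(norm_pos, scaled_pos):
    nx, ny = norm_pos
    sx, sy = scaled_pos
    if nx == 0 or ny == 0:
        raise ValueError("normalized coordinate must be non-zero")
    return sx / nx, sy / ny

width, height = surface_size((0.5, 0.25), (320.0, 60.0))  # (640.0, 240.0)

# Gaze outside the surface is recognizable from normalized coordinates
# falling outside the [0, 1] range.
def on_surface(norm_pos):
    nx, ny = norm_pos
    return 0.0 <= nx <= 1.0 and 0.0 <= ny <= 1.0
```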
@papr thank you very much! I just want to be 100% sure 😃
@papr @fxlange In Pupil HMD, under the calibration settings, we have "samples per target". Could you please tell me what it does? I have read the docs but need a better explanation, e.g. what calibration points are and what they do. Thanks in advance.
@user-5fa537 you can use this as template for your plugin https://gist.github.com/papr/0f13943e2aebd768ab6b1508d466caae
can anyone tell me where I can download the most recent versions of pupil capture for Mac? thank you very much
@user-478e7e I would assume scroll to the end here https://github.com/pupil-labs/pupil/releases/tag/v1.16 and download the macos_x64.zip?
@user-31df78 that is correct, thanks! @user-478e7e the latest version of Pupil Core software will always be available at https://github.com/pupil-labs/pupil/releases/latest and scroll down to assets
and download the macos_*.zip
Hi, I have a couple of questions about hardware timestamps on Windows systems. Pupil Labs' own cameras will always use hardware timestamps, right?
As I understand it, software timestamps for other cameras are more reliable but come with a small offset. Can you give me a rough idea how large the offset is and how constant it is?
I try to estimate the influence on the matching timestamp for the picoflexx point cloud data.
Hi @user-14d189, we got multiple reports in the past of issues with timestamp synchronization on Windows. When investigating, we found that hardware timestamps on Windows will sometimes drift apart very slowly, which can be a problem in longer recordings. This is not the case every time, though. We are investigating the cause of this issue, but until then we have disabled the use of hardware timestamps on Windows in the current v1.16 release. So if you want Windows hardware timestamps disabled, we recommend you try out v1.16. The offsets are relatively constant for a given resolution and frame rate. In the current version we did some measurements across different settings and came up with a good offset setting that works well for all resolutions. If you are experiencing any problems, you might want to try to adjust these offsets. Here is the code from the corresponding pull request: https://github.com/pupil-labs/pupil/pull/1621/files#diff-2683a5b0c67fdcd1b8dcdd5aaadc66a2R199-R209
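The offset correction described above could be sketched as follows. This is not the actual Pupil implementation; the offset values and the per-resolution lookup are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical sketch: software timestamps are taken on frame arrival, so
# they lag the true capture time by a roughly constant transmission offset
# that depends on resolution (and, to a lesser degree, frame rate).
# Subtracting a measured per-setting offset approximates the capture time.
TIMESTAMP_OFFSETS_S = {
    # (width, height): measured offset in seconds -- values are made up
    (1280, 720): 0.020,
    (640, 480): 0.010,
}

def corrected_timestamp(software_ts, resolution):
    """Subtract the measured transmission offset for this resolution."""
    return software_ts - TIMESTAMP_OFFSETS_S.get(resolution, 0.0)
```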
@user-c5fb8b I definitely will try v1.16, thank you. I assume by resolution you refer to the world camera image and/or the eye camera, in relation to the frame rate and the resulting amount of data that needs to be processed. The Picoflexx point cloud has a low resolution of 224x171, but the point cloud is about 2.5-3 MB per image, and we use a 5-10 Hz recording frequency. I will try it out and let you know. Thanks heaps!
@user-14d189 Exactly, the offsets depend on the time needed for transmission of the data. The frame rate has a very minor effect compared to the frame size though.
Hi, a quick question: Where can I download pyav-0.4.3?
Related to this question: https://github.com/pupil-labs/pupil/issues/1572
@user-5e6759 hey, my apologies, I see that we have a draft release for it, but have never published it. I will do so now.
@user-5e6759 please give it a try and let me know if there are any problems.
@papr It works great! Thanks for the swift response!