any idea why the surfaces folder in my export is coming up empty?
No files at all?
nope :(
now I'm half remembering some problem with onedrive? was it when exporting?
but annotations are exporting fine, just surfaces aren't. folder is there but nothing in it
Yeah, there are known issues with it. Mostly related to reading files. But I recommend moving the recording out of there and trying again
scratch that, files are being created very slowly! since they didn't appear when the process seemed to be finished, i assumed it hadn't worked and deleted the folder. thanks!
it's hard to know when it's finished though
I hear you.
I have been away for too long. I am sure I missed a lot of interesting stuff and updates. Right now I am searching for a way to plot a heat map. Anyone with ideas ? I simply have average Saccade length data
Hey there, please help me out. One of the eye cameras is suddenly not working. Like showing nothing but all black. But another eye camera just works fine. Anybody has any ideas on this?
Hey, black or gray?
It's black. I have a screenshot if it helps.
If it is fully black try restarting with default settings in the general settings
It worked. Thanks!
Alright. I will try out. Thank you.
Hey @user-7daa32, welcome back! Is Saccade length data really all you have? I'm not sure how a heatmap would work in that context, since presumably the length data has no directionality? Maybe a histogram/descriptive statistics would make more sense.
Heatmap created in player .... Thanks
Thank you
Not the Saccade length. It is a derived average using the Saccade lengths. What about using the gaze data? I was able to create a heat map in the Player, but the visual stimulus shape is distorted.
hi, can I ask an ELI5-type question as I haven't used the software yet? I have skimmed through the docs and couldn't quite figure it out. So if you wanted to define an AOI post-recording (i.e. without markers during recording) - I see you can add a surface with the surface tracker plugin - but how does this work with continuous video with a dynamic world and/or head? Would you need to modify the surface for each frame of the video, assuming a shift of the AOI in the camera image?
"without markers during recording" - that does not work, unfortunately. You can detect the markers post-recording but they need to be visible in the scene video.
ok - lets say I wanted to have an AOI as a car that moves across someone's view - would this be possible in post-processing analysis (using your software)?
If you placed a huge marker on it, maybe. But this is probably not what you are looking for. In your case, I would run a third-party object detector on the scene video and check whether the exported raw gaze data falls onto any of the detected objects.
ok thanks - is there software that you would recommend that I could have a look at - ideally free?
There are many neural networks that have been published over the years. I am not up-to-date in this regard. The choice would also heavily depend on what you need, e.g. if you only need a rough outline (a rectangle) or an accurate per-pixel map.
This is it! I am not sure if I would need to re-export the data again. I am using version 2.5.0. https://pupil-labs.com/releases/core/v3.3/
You are not affected. The bug only applies to 3.0-3.2
yeah there has been a lot going on with automatic scene recognition - ok so I think I get it - the analysis is very geared up around the markers - cheers
I have a question about the demo workspace. I tried playing around with the videos and things but realized that most of the templates and videos there were locked. Is the demo workspace designed to be locked to edits, or is there a way around this to allow for edits?
How does the natural calibration algorithm work? How does the code match the gaze point with the red point in the natural scene? What algorithms are used to deal with it?
Could you please explain the running logic of the natural calibration algorithm? Pictures would help!
Hello, I've been using the LSL Relay plugin to stream data to LSL while also using the lsl_inlet.py example in pupil helpers to export the data to a .csv file. How would I go about modifying the lsl_inlet.py example in order to display the computer's local clock for the timestamp, using datetime for example?
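For reference, a minimal sketch of that datetime idea, assuming the inlet runs on the same machine as Capture and that the relay's stream is named "pupil_capture" (check the actual name in your setup). This is not the official example, just one way it could be done:
```
from datetime import datetime
import time

import pylsl

# Resolve the stream created by the LSL Relay plugin (stream name is an assumption).
streams = pylsl.resolve_byprop("name", "pupil_capture", timeout=10)
inlet = pylsl.StreamInlet(streams[0])

# pylsl.local_clock() and time.time() use different epochs; estimate the offset once.
offset = time.time() - pylsl.local_clock()

while True:
    sample, lsl_ts = inlet.pull_sample()
    wall_clock = datetime.fromtimestamp(lsl_ts + offset)
    print(wall_clock.isoformat(), sample)
```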
Ah, you need to run the commands in the command prompt, not within Python. The commands will start their own Python instance to run the code in isolation.
I'm still having issues when entering the script in different ways. Would you be able to list the steps I need (e.g. packages I need to download, code I need to enter beforehand) to make this script work? Thanks!
@user-e3f20f (Pupil Labs) hi
Hi there. Just a quick question as I am writing a paper - when one does the eye tracking calibration in Pupil Capture, an error message can appear when the calibration is considered not sufficient (i.e. low data confidence, notably, I imagine?). What threshold / error of measure does Pupil Capture automatically use? In other words, what data quality is considered acceptable for the software to consider the calibration sufficient? Thank you so much for your answer.
Hi! 2d or 3d calibration?
Hi there ! 3D calibration please !
Hi, I'm facing some issues with one or the pupil camera
Please contact info@pupil-labs.com in this regard
okay
In that case it will only fail for sure if there was no pupil or no reference data collected. Otherwise it will attempt to run the calibration which is a complex optimization function. There is no clear cut value that tells you beforehand if it will converge or not. Even if it converges, it might be very inaccurate. This is why we have the accuracy visualizer which applies the estimated calibration to the recorded Pupil data and compares it to the reference data. This tells you how good the fit is in visual angle error. Depending on your use case, you can decide to repeat the calibration or proceed
Thank you!
And what about the 2D calibration ? Does that have a threshold ?
You are running both commands as one, which confuses the command line. Enter each line one by one.
I'm still getting errors
Hi @user-d90133! You are missing git https://git-scm.com/. Install it, make sure it is in your PATH and run the command again. Check out https://www.activestate.com/resources/quick-reads/pip-install-git/
Hi, so I installed Git, and attempted to run the lines again. Are the errors due to an incorrectly set up PATH?
hi @user-e3f20f (Pupil Labs) When I run this code, I get an error and can't install it
Before I run the code, I have downloaded the 'requirements.txt' file
@user-d407c1
Hi, I compiled Pupil Core on my Nvidia Xavier NX (ARM-based). If I try to run Pupil Capture I get the error glfw.GLFWError: (65542) b'GLX: No GLXFBConfigs returned'
This looks like an issue with one of our dependencies. See https://www.glfw.org/
i can run only the service app, but the fps rate is very slow (3-5 fps...)
Pupil Capture requires a fair amount of CPU. It is possible that this device is not able to deliver it. Note, Pupil Core software does not make explicit use of the GPU.
hi, i have a question regarding exporting multiple recordings at once in Pupil Player. I automated Pupil Capture to trigger a recording if certain events happened. Now I have about 200 single recordings and I need to export the 3d pupil diameter with Pupil Player... Is there any faster way than dragging and dropping every single folder into it and pressing e?
Hi! Are you only interested in the recorded pupil data, without re-processing the eye videos?
If that is the case, check out https://gist.github.com/papr/743784a4510a95d6f462970bd1c23972
is there any explanation on how to use this right? I didn't seem to get it working...
Yes, perfect, thank you very much! That's exactly what I need!
i have another problem: if I capture the world camera with uv4l the image is distorted, like a fisheye but only on the left part
Solved the fisheye problem. But Linux detects the 2 eye cameras as video devices; if I try to capture the video from the world camera it works, but from the 2 eye cameras it doesn't. Maybe this is the reason why the fps is so slow (pretty much blocked...)?
Just to clarify, you selected the world camera in the world process and the eye cameras in the eye processes, correct?
I launch sudo python3 main.py service
the 2 eye cameras start, display a single image, then freeze
Right! If you close one of the eye windows, does it work?
no, everything is frozen, I need to kill the process
i get this error
Assertion failed: ok (src/mailbox.cpp:99)
Are you running from source? Which branch are you running from?
and a lot of cython error log...
i cloned the master branch and recompiled everything from source
Hi all, is there a post hoc kind of way to know with what sampling frequency I recorded pupil size using the Pupil Core?
You can simply subtract neighbouring pupil timestamps (grouped by eye id) to get the inter-sample duration. 1 / inter-sample duration gives you a per-sample estimate of the sampling frequency. I recommend averaging the values over time.
Ah yes, thanks Pablo, I just remembered that I actually had to write a little script to do this, because the missing data were messing with my annotations. But if I have not changed any settings on Pupil Capture while recording, the sampling frequency should be around 200Hz, right?
The default setting is 400x400 resolution at 120 Hz
perfect, thank you
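A small sketch of the subtraction described above, assuming the standard pupil_positions.csv export with its pupil_timestamp and eye_id columns:
```
import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")

for eye_id, group in df.groupby("eye_id"):
    # inter-sample duration between neighbouring timestamps of the same eye
    dt = group["pupil_timestamp"].sort_values().diff().dropna()
    print(f"eye {eye_id}: ~{(1.0 / dt).mean():.1f} Hz on average")
```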
Any recommendations as to what confidence level to use as a cut off point? I want to filter out noisy data and keep only the samples where gaze is good, but I don't know much is enough. Thanks in advance!
Hi, did you use only the confidence level as a filter or did you enter other thresholds for example on diameter, smooth and more? How did you filter the signal?
Hi @user-75df7c!
I recommend you discard data points with a confidence level lower than 0.6
Thanks! I was using 0.8 until now and I was worried I was not being strict enough! Any reason for this particular number?
It is difficult to evaluate this number quantitatively. It's more of a qualitative estimate of what is accurate "enough" and what is not. In the end, this threshold is a trade-off between "how good do you want your data to be" and "how much data are you ready to discard". If you have clean data, you can afford to increase the threshold. If your data is noisy, you might need to reduce the threshold to have any data left. Ideally, you check that pupil detection works well before you start recording. But one does not always do their own data collection...
Absolutely. Ultimately I just wondered if there was an official recommendation from you guys so I can defend whatever number I choose with my supervisors haha
hehe, I get that. 0.6 is that number.
Thank you!
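As a practical footnote to this thread, applying that confidence cutoff to an export is a one-liner (a sketch, assuming the standard gaze_positions.csv export with a confidence column):
```
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
clean = gaze[gaze["confidence"] >= 0.6]  # keep samples at or above the threshold
print(f"kept {len(clean)} of {len(gaze)} samples")
```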
Hello, I have a question about the 3D gaze origin. I understand the origin of the x and y axes but not the origin of the z axis. What does gaze_normal_z mean? If it was 1, where should I think a person is looking?
Hi, the z-axis points forward
Thank you for your fast answer. But I want to know the origin of the z value. I think if it is 0, a person sees something close. But when it is 1, I don't know how far away they are looking.
The origin of the gaze coordinate system is the scene camera, i.e. (0, 0, 0) is the center of the scene camera sensor.
The gaze_normal0/1 vectors are only directional vectors. They originate at the eye_center0/1 points. Together, they define a line of sight for each eye
Oh, thank you. I did misunderstand. So, the z value means the world camera's vectors right?
The z value alone is nearly meaningless.
Imagine two 3d lines. One per eye. Each goes through the center of the eye ball and the center of the pupil. These lines point towards what you are looking at. Each line is defined by a point (eye_center0/1) and a direction (gaze_normal0/1).
We try to find the intersection of those lines, which corresponds to gaze_point_3d.
Is this clearer?
Then the z value is the difference from the gaze determined by x and y?
Sorry. I became more confused. Is gaze_normal0/1 not enough to know the wearer's gaze? This question originated from this reference picture. I think I can know direction only with x and y, but I don't know what z means.
You can't interpret z independently of x and y in this case.
Note, gaze_normal0/1 are normalised. That means that the length of this vector will always be 1. It describes the rotation of one eye ball.
"Then the z value is the difference from the gaze determined by x and y?" If you so want, z depends on the values of x/y, yes.
Until I asked, I thought z was the distance to the object.
Thank you very much. Then, how can I interpret z with x and y?
As mentioned above, x/y/z describe a direction. Specifically, the direction in which one eye ball is rotated towards. The direction alone is not so useful, unless you want to know by how much the eye ball rotated in comparison to a second eye ball direction.
Thank you very much. Looking back on your answer, I think I was confused with gaze_point. Now, It is clear.
Hello, I am having troubles connecting to pi.local:8080
Can I ask you one last question? I want to know the unit of each gaze_point value.
Note, the length (the distance to the gazed-at object) of the gaze_point_3d vector is known to be inaccurate for distances >70-100 cm.
The unit for that coordinate system is millimeters
Then, is the z value distance to the object?
Not z alone. But the length of the whole vector is.
I see! thank you. Have a nice day.
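To make the "length of the whole vector" point concrete, a tiny sketch (assuming the exported gaze_positions.csv with gaze_point_3d_x/y/z columns, values in mm):
```
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
xyz = gaze[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()

# Distance from the scene camera to the estimated gaze point, in mm.
# As noted above, values beyond roughly 70-100 cm are known to be inaccurate.
distance_mm = np.linalg.norm(xyz, axis=1)
print(distance_mm[:5])
```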
Hi everyone, I would like to use my Pupils Lab Core to determine how often a gaze direction change occurs between three monitors, measure fixations and make a heatplot. I now have a setup that allows me to do robust tracking. Despite the good documentation, I still have some questions: In which format are the timestamps? For example, what does "9245888951" mean under "world timestamp"? Under pupil_positions.csv is also the variable "diameter". This is shown to me with e.g: "2292571258544920". How is this number to be interpreted? Is there a more detailed documentation for the beginner questions? Thank you!
Both world timestamp and diameter should be decimal point values. The first in seconds, the second in pixels.
Hey, are looking at the exported csv in Excel?
Hi! Yep
Excel is known to misinterpret the exported data, depending on the language settings. e.g. 2.5 (2,5 in German notation) is interpreted as 2500 because the . in German is the separator for large numbers.
Thanks for your quick and good explanation, but could you please explain it to me again? When I click in the "diameter" field in Excel, it shows me "2292571258544920". Now I understand that it is pixels, right? How many pixels would that be and can I convert that to mm in a reasonable way?
2) check out https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter If you want the diameter in mm, use the diameter_3d column (which is likely affected by issue (1), too)
1) Excel is displaying the value incorrectly. The value is likely 22.92571258544920 in the original csv file. You will need to reimport the CSV with the correct language settings s.t. these values are displayed correctly.
Great. Now I got it. Thanks!
Hello folks, I'm planning to buy a new PC with a new CPU for my experiments, as the current CPU cannot keep up with recording even at 30 Hz. I found this old post saying that the Pupil Labs bundle does not support Xeon processors. Is this still an issue? Should I get i7 or i9 processors? https://discord.com/channels/285728493612957698/285728493612957698/578844966638452749
That is still correct
Thank you so much!
Hi, can I ask a question? What is the difference between the norm_pos_x/y values in pupil_positions and gaze_positions?
The frame of reference is the difference. pupil norm_pos refers to a point in the corresponding eye video camera. gaze norm_pos refers to a point in the scene camera.
Thank you so much!
Hi, I need to use pupil diameter. But blinks affect pupil diameter. Do you have any reference code or resources to get rid of the blink points and interpolate new values?
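There is no direct reply to this in the thread, so purely as a starting point: one common approach is to mask samples around low-confidence stretches (blinks typically show up as confidence drops) and interpolate over the gaps. The sketch below assumes the standard pupil_positions.csv export; the 0.6 threshold and the 5-sample padding are only illustrative:
```
import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")
# note: if the export contains both 2d and 3d detector rows, also filter by the "method" column
df = df[df["eye_id"] == 0].sort_values("pupil_timestamp").reset_index(drop=True)

# Mark low-confidence samples (likely blinks) as missing...
mask = df["confidence"] < 0.6
# ...and widen the mask a few samples on each side to catch blink on-/offsets.
pad = 5  # samples; purely illustrative, adjust to your sampling rate
mask = mask.rolling(2 * pad + 1, center=True, min_periods=1).max().astype(bool)

diameter = df["diameter_3d"].where(~mask)
# Linear interpolation over the masked gaps, using the timestamps as the index.
diameter_clean = diameter.set_axis(df["pupil_timestamp"]).interpolate(method="index")
```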
hi, can I copy the calibration data from one PC to another? I can launch the Capture app on my PC and calibrate the glasses, but I need the Service app on my SBC with Linux. The problem is the SBC can only run the Service app and the calibration doesn't work there, but I need IPC gaze messages (no messages without calibration...)
If you are ok without the scene video, what is it that you need the gaze data for? In other words, what kind of data are you interested in in particular?
i need gaze data; i need to capture the world camera and then store where the user is looking
on the SBC I use the Service app, but without a calibration it doesn't send me data
Let's continue in the thread of the other day
Hello folks, can anyone help here? One of the eye cameras stopped functioning, showing 0 fps. And the eye camera window is grey. How can I fix it??
Any answer is appreciated.
please contact info@pupil-labs.com in this regard
Okay. Thanks. Will shoot them an email then.
does Pupil Invisible work with iOS???
Hi, no it does not. It only works on OnePlus 6 and OnePlus 8/8T with Android
Hello!
I just got the glasses and companion device
The glasses do not connect to the companion device
Hi! The first steps will always be to connect the glasses via the included USB cable to the Companion device / phone. Have you done that already?
How can i connect them?
Does it need to be connected the whole time?
When you want to use it, yes! It gets its power from the phone and the gaze estimation happens in the phone, not the glasses themselves
That's not what is advertised. This is a major issue
Can you point me to the specific advertisement that you are referring to?
https://pupil-labs.com/products/invisible/ Can you show me where it is written?
In all the pictures where people are wearing the glasses, you can see the cable. We never advertise that the glasses can be used wirelessly
Hey @user-72f9ba. Does your use-case strictly exclude usb connectivity? Is the issue cable management? You can also reach out to info@pupil-labs.com to discuss product fit and options
Hello Dear Pupil Labs Team,
We want to mention an issue I have encountered with Pupil Capture using the Core product.
I use a new computer with a Ryzen 7 and a 3000-series Nvidia GPU. On this device, the frequency of the eye cameras is fairly suitable for my new purpose, 120+ Hz. However, the world camera gives nearly 15 FPS, with a huge latency, and is sometimes frozen. I guess due to some reasons related to the world camera process, I sometimes got a Blue Screen error on Windows 10. When the error didn't occur, gaze became really noisy.
I would like to hear your suggestions about the relevant jumping data. I know that the issue is not clear, but maybe some aspects can be enough to predict it.
Have a good day!
Hello developers. I am a university student in Japan. I am using Pupil Core in my research and I have done the following work to enable LSL. 1. copy the pylsl and pupil_capture_lsl_relay directories to pupil_capture_settings/plugins 2. run Capture with sudo
However, when I try to activate LSL, I get the following error which I cannot resolve.
world - [WARNING] plugin: Failed to load 'pupil_capture_lsl_relay'. Reason: 'liblsl library '/Users/tappun/pupil_capture_settings/plugins/pylsl/lib/liblsl.dylib' found but could not be loaded - possible platform/architecture mismatch.
You can install the LSL library with conda:
conda install -c conda-forge liblsl
or with homebrew:
brew install labstreaminglayer/tap/lsl
or otherwise download it from the liblsl releases page assets: https://github.com/sccn/liblsl/releases
On modern macOS (>= 10.15) it is further necessary to set the DYLD_LIBRARY_PATH environment variable, e.g.
DYLD_LIBRARY_PATH=/opt/homebrew/lib python path/to/my_lsl_script.py

world - [WARNING] plugin: Failed to load 'pylsl'. Reason: 'liblsl library '/Users/tappun/pupil_capture_settings/plugins/pylsl/lib/liblsl.dylib' found but could not be loaded - possible platform/architecture mismatch.
I ran the following, but it made no difference.
brew install labstreaminglayer/tap/lsl
I don't have a good understanding of DYLD_LIBRARY_PATH, so I tried specifying it as follows.
DYLD_LIBRARY_PATH="/opt/homebrew/lib/python3.9/site-packages/pylsl/pylsl.py" It would be helpful to know if there are any mistakes.
How can I do this? The OS used is macOS 13.0 (22A380), Python 3.9.
The bundle ships the intel-x86_64 python that is emulated on m1 macs. You installed the native M1 pylsl which is why you get the emulation error.
Regarding the above problem, we considered the possibility that liblsl does not support the M1 Mac. Using a MacBook Pro (macOS 11.6) equipped with an Intel CPU, the error did not occur. Therefore, we will use that machine for development for a while. If you know how to run it on the latest macOS and M1 Mac, I would appreciate it if you could let me know.
Thanks.
I recommend to do so on the develop branch. See the special note about creating an intel-x86 virtual environment https://github.com/pupil-labs/pupil/tree/develop#installing-dependencies-and-code
Analyze the data
Hi all, does anyone have a detailed analysis of how to analyze the data exported from the system? With the codes and parameters that can be obtained? I am new here and I wanted to understand how they work. Also I wanted to know if the tests can be done directly from the software you download or if you need to implement them with Python or Matlab codes, thanks!!!
Hi @user-e91538. Welcome to the community! You can find a detailed overview of analysis and visualisation plugins available in Pupil Player (our free desktop analysis software) here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-player. You don't need to use coding to work with these tools. That said, you can export the results into .csv files, which can of course be processed with Python or Matlab, if that's your interest. If you can share details of your use-case, we can try to follow up with more concrete guidance/examples!
and how does it analyze the data? Python, Matlab, C++?
For example, if you wanted to graph how the diameter of the pupil changes when you change the difficulty of the action taking place? Can it be done directly from the software or do you need to export the data and write special code? Is there perhaps some ready-made code to practice with? Sorry for the several questions
That's no problem at all! You can see a very basic trace of pupil size data over time in our Software. But to extract insights about changes due to cognitive load, you'd need to export the data and do some further processing. Here are a few resources: 1. Best practices for doing pupillometry with Pupil Core: https://docs.pupil-labs.com/core/best-practices/#pupillometry 2. Third-party Python tools for evaluating pupil size changes (should be compatible with Core recordings): https://pyplr.github.io/cvd_pupillometry/index.html 3. Pre-processing pupil size data: https://discord.com/channels/285728493612957698/285728493612957698/854346611076235274 I hope this helps!
Hello everybody! I was wondering if there is a 32bit version of pupil_core software for linux
To try to run it in a raspberry pi
Hi, what you specifically need for that are arm-based versions of the dependencies. Check out the #1039477832440631366 thread history.
Ok, thank you. I will take a look.
On the other hand, I guess I will have to use a similar approach if I need to run it on Windows 11, right?
Do you mean Windows 11 on the rpi?
no, on another machine
We offer a pre-built bundle for Windows on Intel/AMD 64 systems
OK, thanks, it's up and running now on Windows 11. I will try to build it on the rpi tomorrow. Thank you for your time.
Maybe an additional note: Pupil Core requires a fair amount of CPU. It is likely that the rpi won't be able to provide sufficient speed to run the pupil detection at higher frame rates. You can use it to record the video and run the detection post-hoc though.
I have another question: does the Pupil Player software export the data already filtered and smoothed, or do we then need to process it ourselves through a filter to eliminate the sources of noise?
Hi! Pupil Player does not filter or smooth the data. We recommend discarding samples with a confidence of 0.6 or lower before preprocessing the exported data any further (see the third point listed in Neil's message)
Hi there! I am discussing a research study that used Pupil Labs CORE glasses. This is my students' first conversation about eye tracking details in my psychology of music class. I am looking for a post-hoc video that might illustrate fixations, saccades, etc. There are several videos on the Pupil Labs channel of post-hoc analysis and rendering, but there was no audio in the videos. Thanks in advance! I downloaded the demo videos from this link, https://pupil-labs.com/products/core/tech-specs/, but I cannot get them to load in any video apps that I have. Any help is appreciated!
Please open the demo recordings in Pupil Player and export them. The export will include the raw data as csv and the scene video with overlayed gaze. Check out our documentation for instructions.
are the demo recordings titled world.mp4, eye1.mp4, eye0.mp4? Thanks for the help. Looking to do work in eye tracking with Pupil SOON!
These are the raw camera recordings. You can play them back one by one using e.g. VLC media player.
@user-e3f20f I've finished the recording. How do I export the (x, y) coordinates of the gaze marker in the world camera?
The (x,y) coordinates of the red point
Open the recording folder in Pupil Player and press the export button. This will save the points as csv
I have finished
What is the name of the (x,y) coordinate of the red dot?
Variable name (x,y)
norm_pos
Excuse me, are these two picture?
Norm_pos_x/y in the gaze position csv file
These are the coordinates of Norm_pos_x/y
The y value is too small
My pupil line of sight moves within the frame of the screen
Just like the red circle
@user-e3f20f
from which file did you load the data?
pupil_positions.csV
pupil != gaze data. What you are looking at is the pupil position within the eye video, not the scene video. You need to look at the norm_pos in gaze_positions.csv
Thank you. I found it!!!
I want to use the heat map function
You can read about the setup in our documentation https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
@user-e3f20f
How do I do that
Excuse me, how to turn off the effect of this highlight?
You can turn off visualizations via the plugin manager (for unique visualizations) or in their corresponding menu (for non-unique ones).
ok! Thank you. I did it successfully
I can't get a heat map
Because you have not setup the markers and not defined a surface yet
I've turned on the heat map
What should I do
Please read the documentation that I linked above
Can natural calibration generate heat map
Calibration and surface tracking are two different steps. You need to calibrate first. This gives you gaze in scene camera coordinates. Then you need surface tracking to map the gaze onto the surface. Afterward, you can generate the heatmap.
Or is the heat map generated only by screen calibration?
I've done the natural calibration
Then, you only need to setup the AprilTag markers and define a surface as described in the documentation.
Then I recorded the video and the data
What is surface?
The surface is the area of interest onto which you want to generate the heatmap. Pupil Capture and Player need the markers in order to track the surface. for example your computer screen.
As the error message says, you need to setup markers first. The markers are displayed and linked in the documentation. https://docs.pupil-labs.com/core/software/pupil-capture/#markers
Right, I need to calibrate the heat map using the computer screen
May I ask, do I need to print marker?
You can either display them digitally on your screen, or print them and attach them to your screen, yes.
https://raw.githubusercontent.com/wiki/pupil-labs/pupil/media/images/pc-sample-experiment.jpg
The only requirement is that you do not use the same marker more than once. Otherwise, you can choose any number (tag id) that you like
Is there any requirement for the number?
Make sure to include sufficient white border when displaying or cutting out the markers!
ok
I'll do the experiment now
after step 1, you can check if the markers are detected in Pupil Capture in realtime. Enable the surface tracker plugin. The markers should be marked in color. Then add a surface. You can edit the surface to fit your screen. https://docs.pupil-labs.com/core/software/pupil-capture/#preparing-your-environment
Is this step correct?
@user-e3f20f
Is the surface tracker plugin enabled in capture exe?
not by default. you can turn it on in the plugin manager
OK
Is this all right?
@user-e3f20f
Looks good to me. Please confirm that the detection works by running Pupil Capture and pointing the scene camera to the screen.
How should I marker the colors?
Pupil Capture will overlay color in the video preview on the markers if they are detected. (Surface tracker needs to be running)
"The markers should be marked in color."
ok
In capture exe, the heat map is in motion
But in record exe, the heat map is not moving
I am not sure what you mean by "record exe". Do you mean Pupil Capture? Also, what type of movement do you expect?
It's the player
In Capture, the heatmap is calculated based on the last x seconds. In Player, the heatmap is calculated based on the data within the trim marks. The trim marks are the small controls on the left and right of the timeline.
In the player, the heat map is always in place
It doesn't follow the eye movement
But in capture exe, the heat map follows eye movements
yes, because in player, the heatmap is calculated on all recorded gaze data.
okk
Excuse me, I want to export the x, y, and fixation values of the heat map data, and then draw the heat map of the points in MATLAB.
Which variables should I use?
Which file is it in?
If you have the surface setup in Player, hit export. There is a surfaces folder. Look at the fixations_on_surface_...csv file. Note that there is a difference between fixation and gaze data. The heatmap in Player is based on gaze data.
"The heatmap in Player is based on gaze data"
i have the surfaces folder.
but the "fixations_on_surface_...csv" is only 1 KB in size
it's blank, only the column titles
That means that you have not run the fixation detector yet. Look at the gaze_on_surface...csv data instead
The table has data
Can I use these two columns? x: x_norm; y: y_norm
yes
Where should I look for gaze data?
@user-e3f20f
Mm-hm, yes.
Where should I look for gaze data?
you are looking at it already
What should the name of the file be?
Load the data in Matlab and use this https://de.mathworks.com/matlabcentral/fileexchange/66629-2-d-histogram-plot to generate the heatmap.
x_norm/y_norm
Is it confidence?
okk
I need one more variable. This variable is used to draw the color of the heat map. The degree of this variable determines the color.
That variable depends on how you aggregate the data. matlab should be doing this for you
The value of this variable determines the color of the heat map.
I want to use this variable : "gaze data"
gaze data are x/y locations on the surface. this is the x/y_norm data.
Can the software export gaze counts?
you can simply count the rows
ok
I'd like to ask you again. What variable did you use for the color of the heat map you generated?
the variable is not exported directly. This is how we calculate the heatmap https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface.py#L616-L640
I now want to verify the accuracy in matlab.
I want to use the same variables to generate the same heat map as you do
Thank you very much
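In case a Python cross-check is easier than Matlab, the same idea is a 2-D histogram of the surface-normalized gaze followed by a blur. The sketch below assumes a surface export file named gaze_positions_on_surface_Screen.csv (the actual name depends on your surface name) and illustrative bin/smoothing parameters, so it will not reproduce Player's heatmap exactly:
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

# File name depends on how you named the surface in Player.
gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Screen.csv")
gaze = gaze[gaze["on_surf"].astype(str) == "True"]

# 2-D histogram of gaze over the surface, then a blur to smooth the counts.
counts, _, _ = np.histogram2d(
    gaze["y_norm"], gaze["x_norm"], bins=(36, 64), range=[[0, 1], [0, 1]]
)
heatmap = gaussian_filter(counts, sigma=2)

plt.imshow(heatmap, origin="lower", extent=[0, 1, 0, 1], cmap="jet")
plt.show()
```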
Hi! Just wondering if there is a way to change the background colour of the on-screen calibration and test routines in Capture? The current white is far too bright compared to our stimuli.
Hi, unfortunately, no. You might want to use the natural features calibration instead.
That said, why do you need the calibration routine to be of the same brightness as your stimulus?
We don't need it per se, but because our stimulus is a rather dark (air traffic control) radar display, the bright calibration in between blocks is less pleasant on the eyes
Natural features would indeed work, but I was rather fond of the automated nature of the on-screen markers.
Hm, thinking about natural features I might have gotten an idea. We can embed the markers in our radar display using some custom logic and then run the on-screen marker calibration from Capture in parallel, while not actually displaying its markers on the screen.
In that case, I recommend the single marker calibration in physical mode. It will expect that another source displays the marker and only performs detection (without displaying a marker on its own)
I'll run some tests with that as well. Thanks for the swift replies!
Hello Pupil Labs,
We have the Pupil Core Child Version. We will have active child participants in a live playroom environment. Can we download the software to a phone or tablet instead of a laptop to run Pupil Capture and Player?
Hello everyone, has anyone ever tried to install the pupil software source code on a Raspberry Pi? I got several errors after typing this line of instruction: pip install -r requirements.txt So I decided to install each package individually. But I am stuck with the package cysignals. I always receive the error message: "failed building wheel for cysignals". Any ideas? Thanks in advance for your help.
Let's continue the discussion in #1039477832440631366
Pupil Core cannot be run from a phone. You can use a small form-factor tablet-style PC running Pupil Capture to make Pupil Core more portable. If the specifications of such a device are low-end, you can record the experiment with real-time pupil detection disabled to help ensure a high sampling rate. Pupil detection and calibration can then be performed in a post-hoc context. But it is important you establish a good eye model fit prior to recording. Alternatively, you can connect the laptop over the network, place it in a backpack, and control Pupil Capture from a separate machine using the network API
hi everyone! Does this mean Pupil Core must always be connected to a computer?
Thank you very much!
@user-2196e3 see https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv and https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector
@user-e3f20f Thank you! That's very helpful!
When I started my two devices (pupil invisible) they went into a sync mode and I cannot access the cloud or any of the menu items on the app. It did allow me to take eye-tracking data on a previously entered wearer name but both apps on each device are now stuck and I have recordings on each device that will not upload to my cloud account. Please help..
Could you please try logging out and back in again?
Will do, standby
That did it, thank you
hello! I am working on Reference Image mapper and I am wondering how to add a sample to be mapped. I used a reference picture and video but I now want to apply that enrichment to recordings. How can I do that? Thanks
Add all the recordings that you want to have mapped to the project in which the enrichment is defined.
great! Thanks
Hi all, is there anyone who applied a filter after exporting the data? Can you share the code if possible? I would like to get feedback; for now I have read an article that was recommended to me above but it is a bit confusing! I would like to compare notes with someone who is processing the exported data! Thank you
and also, does anyone have photos of the setup during the experiment?
Hi, I have 2 questions. 1. Where and how can we download the recording folder? We only have the recording and pupil data, but no recording folder. 2. If we don't have the recording folder, we don't have info.player.json. Is there any way we could convert the pupil data timestamps to real time correctly? Thank you!
Could you share a Screenshot of the file list?
And .MP4 video. That's all
It looks like you are working with exported data only, without access to the raw data. I fear that there is no possibility for you to reconstruct the date times of the recording.
We are still use pupil core device to do more experiments. Is it possible to get raw data for new experiment? How to get access to raw data?
Are those files from a recent recording? Pupil Capture creates a folder that you would open in Pupil Player. That is the raw data. You might be using a very old Pupil Capture version.
Yes, these data are from recent recordings. They were collected within the last 3 months.
Just to clarify, you are using Pupil Capture to record the data, correct? Which version are you using?
I will ask the person who did that. This is not done by me.
The latest one, 3.5.
By the way, which version do you recommend to use?
Can someone explain to me what timestamp is? It's in seconds but what does it mean? Is it seen as time for each sample?
"Is it seen as time for each sample?" It is.
More specifically, the time at which the sample was generated. Not its duration.
Maybe this tutorial can give you a more concrete idea of how "Pupil time" relates to the normal system clock https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
So anyway, do I have to convert it, for example if I wanted to see the trend of the diameter with respect to time?
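A rough sketch of that conversion, following the idea in the tutorial linked above and assuming a recording whose info.player.json contains the start_time_system_s and start_time_synced_s fields (very old recordings may differ):
```
import json

import matplotlib.pyplot as plt
import pandas as pd

with open("recording/info.player.json") as f:
    info = json.load(f)

# Offset between "Pupil time" and the system's Unix clock at recording start.
offset = info["start_time_system_s"] - info["start_time_synced_s"]

pupil = pd.read_csv("recording/exports/000/pupil_positions.csv")
pupil["datetime"] = pd.to_datetime(pupil["pupil_timestamp"] + offset, unit="s")

pupil.plot(x="datetime", y="diameter_3d")
plt.show()
```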
Hi all, I want to use a Raspberry pi to stream video to a computer. Where the computer runs pupil capture. I found and tried to install using this link https://github.com/Lifestohack/pupil-video-backend
But I got an error message "ModuleNotFoundError: .... cv2"
When I start to run python main.py
Pip install opencv-python
Thank you, it works. I can imagine the problem is about OpenCV. But the instructions on the page already ask to run the command: sudo apt install -y python-opencv. So is it a different OpenCV?
sudo apt install -y python3-opencv
This installs Opencv in the global environment for the default python.
pip install opencv-python
This installs in the current environment
@user-e3f20f I modified the text to use the English alphabet instead of the native language, but the output is a box
I think the included font only has English characters. It might be possible to load a custom font but I would need to look that up next week.
Hello team,
We are currently using Pupil Core to do realtime eye tracking (gaze on surface) via the Network API. We would like to get surface data at a higher frequency (at least 120Hz) to detect saccades. We noticed the surface datum is 30Hz, so we subscribed to both surface & gaze, and tried to map the gaze ourselves using img_to_surf_trans (which updates at 30Hz). However, applying the homographic transformation to gaze data (norm_pos in gaze) does not give us the same surface data. I saw from a previous post that it is actually more complicated since you map gaze in undistorted camera space. I'm still not sure how to apply img_to_surf_trans in undistorted camera space to get surface data at a higher frequency. I would very much appreciate any suggestions or pointers to existing examples. Thank you in advance!
Do I understand it correctly that you would prefer mapping gaze using an older surface location instead of waiting for the newest one?
The surface datum should contain transformation matrices for both, distorted and undistorted, spaces.
I think that is correct. To get the surface datum at 120Hz, we need to map gaze using the nearest older transformation. If we wait for the newest data, we won't be able to collect mapped gaze data in realtime at 120Hz. I took a screenshot of the conversation from your channel before, and I am encountering the same issue as this person did. However, I'm still not sure how to solve this issue yet. Any suggestions are appreciated!
@user-e3f20f Thank you very much!!
hello
i'm using Pupil Core but I have a question
my pupil is very sensitive
which makes it harder to use
can I handle this in software?
@user-e3f20f Looking forward to your reply!!
Hello, where can I get the format/description of the data that can be collected via pupil labs on HTC VIVE?
It is the same as for the Pupil Core format. https://docs.pupil-labs.com/core/software/recording-format/
You can download an example raw recording here https://drive.google.com/file/d/11J3ZvpH7ujZONzjknzUXLCjUKIzDwrSp/view?usp=sharing
Using Pupil Player, you can export the data to csv https://docs.pupil-labs.com/core/software/pupil-player/#export
Thank you for your reply. This is really helpful. I have looked up the materials you provided. I am now processing a gaze dataset that was collected via pupil labs. The gaze data and its description are in Figure 1, and I print the data of the first five timestamps in Figure 2. I have a few questions about the dataset and hope you could explain it for me.
It looks like this dataset not only contains the center/normal data of the left and right eye, but also has "3D Gaze Position in relation to HMD/world (X,Y,Z)". I don't find any relationship of this data with the left/right data. Moreover, it looks like in the documentation, pupil labs collects the gaze data from the left and right eyes separately. So I don't know how to get this "integrated 3D gaze position" data and the meaning of it. Could you explain it for me?
Based on the documentation, it looks like the gaze data collected via pupil labs is in world coordinates. But in the dataset, it looks like most of the data is collected in HMD coordinates. I was wondering, can pupil labs also collect the data in HMD coordinates, and how can I convert it to world coordinates?
Hello all, I used the pupil-video-backend on a Raspberry Pi and a PC. The Raspberry Pi captures the video stream and the PC uses the video stream to compute the pupil detection. Everything looks right on the Raspberry Pi side.
However, I cannot see the video on the PC
I just saw a "HMD Streaming" on the video source/settings
When I expanded the list under "Active Device", I only saw "Local USB"
I use Windows PC
Have you tried toggling on the "enable manual camera selection" on that panel? the icon will become green and you will be able to select other sources
I already tried enabling the button; another source appears in the list. But when I select the new source, nothing happens
Actually, I cannot select the new source
Are you able to access the streaming (eye camera) from other apps (e.g. OBS, Meet,...)?
I never tried, how can I do that?
hi i have a question, is there a fast way to extract blink count data from about 500 recordings? Without using PupilPlayer?
hi @&288503824266690561 - I am looking at your DIY core solution. Is the published BOM (google sheets: Pupil Headset BOM : Sheet 1) the most up-to-date version? I see that the base compatibility requirements are UVC 1.1 - are there any additional limitations with the DIY solution and the available github pupil labs software?
Please see the first 5 points https://gist.github.com/papr/b258e0e944604375752eae502b4ad3d5 (i.e. the camera needs to support mjpeg, not h264)
Hi, just a general question for people using the surface tracker: what do you do if participants moved a lot? This led to the surface moving quite a lot in our case and I am not sure what to do. Please help! Also, for some videos we had trouble getting the plugin to work. Support appreciated!
Is the issue that the surface markers are no longer being recognized due to motion blur?
Yes, for some videos, the surface seems to be unstable and varies because of movement. In other cases, the videos don't load. Or the markers aren't recognized. I am trying to salvage the "lost data". I am also wondering what the best steps are for using/not using data where there might have been clear issues with gaze reliability during calibration on certain parts of the screen. Do we use that data with a caveat?
The issue should be solvable. Let me write up an example.
Thank you for the prompt response! Please let me know when the example is ready. I really appreciate your help!
Hello
I have 1 Q
When I do calibration after that
Can see a square box in monitor
Which color does it have? Is it green?
What is that??
I don't have photo.... but hmm...
Can see in capture
Like brown
Could you please share a screenshot of what you are referring to?
I will check now
Can you see that box??
Yes, that is the box that I had in mind. This is the calibration area. The area in which the gaze estimation will be most accurate.
Aha...okay
And could you please give a short description of that graph?
What can I do when I see that...
I am not sure what you are referring to. This sounds like you are seeing an error in the picture. But in the picture, the software looks normal to me.
Yes it's correct.
I need to describe it to a customer
About software
So nowadays I do operations
Hello!!~~
Hi all, I currently use an old eye tracker with only 2 cameras: left eye and right eye. So, I would like to know if it is possible to have the old algorithm that you used to calculate the direction of the gaze, when you used only 2 cameras? Thanks in advance.
The new eye tracker also uses two eye cameras. If you are referring to the old 3d pupil detection algorithm, you should use Pupil Core software version 2. Note, the 3d pupil detection works per eye camera, i.e. it is agnostic to the number of cameras
I forgot to mention, the eye tracker is without the world camera. Thank you, I will look at version 2 then.
Gaze mapping can happen without the world camera. You just need a way to provide calibration targets, e.g. virtual targets. Typically, this corresponds to the use case we have in VR. Pupil Core glasses without a scene camera are mostly used for pupillometry-only research.
But where can I find an older version of the pupil software?
So is it possible to provide the calibration target from eye tracker 1, then use the calibration with another eye tracker 2?
That would only work if both eye trackers had the same exact camera positions. So likely, no
Hello, I have a question about the core eyetracker. Is there any analysis software from pupil labs that can automatically evaluate the recordings made? The recordings are around 2 hours long and an automatic evaluation would be a significant advance over doing it by hand. Or are there any recommendations for such a project? Thanks for an answer
Hi, what kind of metrics/evaluations are you looking for?
We want to measure the time the user is looking at a certain area. Like for example monitors (up to 6) and then per monitor a time as output value.
Have you been using the surface tracker in realtime/Pupil Capture?
Yes
Using this script, you can extract the realtime-surface-mapped gaze https://gist.github.com/N-M-T/b7221ace2e7acf0c0c836773a3b4cf7c and automate your analysis
I'll try that, thanks
What does this do that Pupil Player doesn't already do?
It extracts the real-time recorded surface-mapped gaze. Player redetects all markers and surfaces and remaps all gaze.
If you want, share one of those with data@pupil-labs.com and we will try to help you recover as much as possible
Just done! I have only 3 so far because of space limitations. If the solution tried for this works with the other files also, then I won't need to share those. Thank you!
Hello, I am using the Pupil Core glasses with Unity, and have used scripts from this resource: https://github.com/pupil-labs/hmd-eyes
I am aware this works for the HMD, but I was wondering if it works directly with the Pupil Core glasses?
Okay, we have also been using the surface tracker and looking at fixations in certain regions. So I am wondering if there is any advantage to re-mapping the gaze. Does it give better surface tracking?
Unless you want to use the Post-hoc pupil detection and calibration, there is no advantage
Hello. I'm using the pupil core headset. I have a question about gaze point 3D values in the gaze recordings. What is the datum of the 3d coordinates, since there are negative values? and what's their units? Thanks!
Hi @user-040866! Please check out our 3D camera space coordinates https://docs.pupil-labs.com/core/terminology/#coordinate-system, which is the convention gaze_point_3d follows. I also recommend you take a look at the reference system here too https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html. As you can see, (x, y) can be negative depending on whether you look right-left or up-down. The z-axis should always be positive
The units are in mm
Hello. The client asked for an explanation of the software menu. Do you have an explanation for the menu? Not the data on the homepage.
Hey, the documentation should explain the general menus. Then, each plugin has their own menu. We don't have a description of each single ui element, but the documentation has a section on each plugin. That helps understanding the menu.
I got it. Thank you for always responding kindly and quickly.
Thank you. Another question, can 3d gaze points and 2d gaze be converted to each other? How? Many thanks!
Do you mean to the scene camera coordinates? You already have this value, normalised, as norm_pos_x and norm_pos_y https://docs.pupil-labs.com/core/software/pupil-player/#gaze-positions-csv
Hi, i have a question: is it possible to use eye trackers while wearing glasses or is it better to take them off?
Hi @user-e91538! Sometimes it's possible to put the Core headset on first and then the eyeglasses, ensuring that the eye cameras capture the eyes from below the glasses frame. Note that it is not an ideal condition but it does work for some people, depending on eye physiology and eyeglasses shape/dimensions.
Hello everyone ! I have a question. Is it possible using Pupil Core to get the head positions (coordinates of the head or head mouvements) ? Thank you in advance and have a nice day !
Hi @user-6586ca! Check out the head-pose-tracking plugin https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
Thank you very much
Hello, I have some concerns about the eye safety of the IR LED. I saw in the BOM sheet that Core uses an SFH 4050 with 4 mW/sr radiant intensity. Currently, I can't find this LED at a local store. They have other alternatives. Can I consider any IR LED around 4 mW/sr - 14 mW/sr to be safe?
Found an interesting document about eye safety.
Hi @user-78c370. The IR LED listed in the DIY bill of materials should be available from online vendors. Note that Core conforms with the EU IEC-62471 standard for eye safety. We wouldn't be able to comment on the safety of third-party IR LEDs I'm afraid.
Got it. Thanks for answering!
Hi,
I've noticed that for some of my participants, the edge of the pupil and iris is blurred in the eye camera. At first I thought it's a camera issue but I've swapped cameras and the problem persists.
I also used the same eyetracker/cameras on myself in the same lighting conditions and the problem wasn't reproduced.
My participants' eyes look normal to me, so it wasn't a result of some eye defect. I noticed that the two participants with the same problem had green irises, but not sure about the others who didn't have the problem. I've dark brown eyes, but I doubt everyone else I used the eye-tracker on successfully has dark brown eyes.
(I notice that this person has residual mascara on, but I don't think it would cause what I described -- but I could be wrong)
May I know what's causing this and how to fix it, please? Thanks!
Can you suggest some automatic pipeline method or code to get saccades from Pupil Core recorded data?
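This question does not get a direct answer here, so purely as a starting point: a common approach is a simple velocity-threshold (I-VT) classifier on the exported gaze. The sketch below works in normalized scene-camera coordinates, so the threshold value is only a placeholder; for real use you would convert to degrees of visual angle first and take a threshold from the literature:
```
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze = gaze[gaze["confidence"] >= 0.6].sort_values("gaze_timestamp")

dt = gaze["gaze_timestamp"].diff()
dx = gaze["norm_pos_x"].diff()
dy = gaze["norm_pos_y"].diff()
speed = np.sqrt(dx**2 + dy**2) / dt  # normalized scene units per second

THRESHOLD = 1.5  # placeholder, not a validated value
is_saccade = speed > THRESHOLD

# Count saccades as contiguous runs of above-threshold samples (rising edges).
n_saccades = int((is_saccade & ~is_saccade.shift(fill_value=False)).sum())
print(n_saccades)
```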
Hi @user-89d824 !
Mascara and eye cosmetics can influence the lipid layer of the tear film. This can be why you might see this blurred area, as the tear film becomes oily. You can find more about how eye cosmetics affect the tear film here: https://www.dovepress.com/investigating-the-effect-of-eye-cosmetics-on-the-tear-film-current-ins-peer-reviewed-fulltext-article-OPTO
If you have physiological serum at hand, one way could be to "clean" the tear film before measuring.
That said, here are some steps you can take to improve pupil detection: 1. Position the eye camera to minimise 'bright pupil' and/or glare. Specifically, try to reduce the bright spot you can see near the pupil's edge. 2. Manually change the exposure settings to optimise the contrast between the pupil and the iris: https://drive.google.com/file/d/1SPwxL8iGRPJe8BFDBfzWWtvzA8UdqM6E/view?usp=sharing 3. Set the ROI to only include the eye region, excluding the dark corners of the image. Note that it is important not to set it too small (watch to the end): https://drive.google.com/file/d/1NRozA9i0SDMe_uQdjC2jIr000iPjqqVH/view?usp=sharing 4. Modify the 2D detector settings: https://docs.pupil-labs.com/core/software/pupil-capture/#pupil-detector-2d-settings 5. Adjust gain, brightness, and contrast: In the eye window, click Video Source > Image Post Processing Let us know how it goes!
Hi again,
I've resumed data collection this week and 3 out of 6 of my participants have the same problem (blurred border between the pupil and iris). One of them had some residual mascara on her lashes, the other had 'permanent mascara' (see photo) that they cannot remove, and the last one did not seem to have residual makeup on her eyes but applies makeup daily otherwise.
Previously I have discussed with my supervisor about getting physiological serum but they weren't very keen on that because the ethical approval process might take too long.
I have tried the steps you proposed last time but none of them worked. That said, it's possible that I haven't done them correctly, especially Step 1. I have tried adjusting the cameras in all sorts of directions but the pupil still remains undetected. I don't think I know how to remove the 'bright pupil' problem
I've also tried Steps 2 to 5 that didn't help.
My sample consists of mostly female participants and I believe a majority of them wear some form of eye make-up daily. It doesn't seem feasible to only recruit those who don't wear makeup (regularly) -- I would have a difficult time finding participants.
Here's a video recording of another one of my participants with this problem: https://youtu.be/unmY9iWN8ks
I'm guessing that this problem has to be quite common given how many women wear makeup?
Hi,
I had another participant with the same problem but I didn't manage to improve pupil detection based on the instructions given -- maybe I need more practice with that.
Nevertheless, I might want to try getting some physiological serum. How do you 'clean' the tear film with that? Do you soak a cotton ball with the serum and gently dab the eyeball? Or do you just pour it directly into the eyes and dab it dry?
Thanks a lot! I'll implement these steps if and when I get another participant with the same issue
Hello, how can I extract the surface identified by the markers in order to overlay the heatmap?
Hi @user-e91538! You can use our Surface Tracker plugin to obtain gaze data relative to the surface and generate the heatmap. Please take a look here for further details https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
Is there a size of Pupil Core glasses for children aged 4?
Hi @user-e91538. We offer a child-sized frame (not listed on the store) suitable for 3-9 year olds at the same cost as the adult-sized headset. If you would like to get a quote or place an order for a child-sized frame, just leave a note in the order form or reach out to [email removed]
Yes thank you, I had already seen this but I would like a heatmap on the surface itself, overlapping the two
It's possible to generate a heatmap overlapping the surface both in real-time and post-hoc. In Pupil Capture, enable the "Show Heatmap" toggle in the Surface Tracker plugin to visualize a real-time heatmap of gaze over the surface. In Pupil Player, we do offer a post-hoc version of the Surface Tracker plugin to create and export heatmaps within the defined surface. More details here: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker
thank you very much, I have one last question: is it possible to use the Surface Trackers while the user is on the move? I tried to use them and the heatmap is deformed and not very precise. I would need it for a walking assignment
Unfortunately, motion blur induced by head movements may lead to failure in markers detection - resulting in a deformed surface. To improve detection, please ensure a sufficiently large white border around markers, make sure the scene is sufficiently illuminated and try to avoid abrupt movements that could cause motion blur.
Hi, I have a question regarding the message timestamps. I am using the Network API to read the pupil messages in real-time, but I have observed that the pupil messages I receive have timestamps which are behind the current pupil time. Here's the code I am using:
```
while True:
    socket.send_string('t')
    curr_time = float(socket.recv())
    logging.info(f'Before : Current time {curr_time}')

    topic, payload = subscriber.recv_multipart()
    msg = msgpack.loads(payload, raw=False)
    timestamp = msg['timestamp']
    logging.info(f'Msg timestamp {timestamp}')

    socket.send_string('t')
    curr_time = float(socket.recv())
    logging.info(f'After : Current time {curr_time}')
```
And an example of the output I observe:
```
INFO: Before : Current time 8087.21867836
INFO: Msg timestamp 8087.211949
INFO: After : Current time 8087.227443357
```
Is this expected? I'd expect the message timestamp to be in between the `Before` and `After` current time in this case.
Maybe I am doing something wrong here?
Pupil data inherits their timestamps from the eye images which are in turn timestamped at their exposure. The time difference between now and the pupil datum corresponds to its processing time, including the transfer over the network.
Note, that your way to estimate the current pupil time is not as accurate as it could be. See https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py
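For completeness, the idea behind the linked helper, as a minimal sketch: measure the round-trip of the 't' request, assume the reply corresponds to its midpoint, and keep a local offset instead of asking for the time around every message. The port and variable names are assumptions:
```
import time
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote's default port; adjust if changed


def estimate_offset(req_socket):
    """Offset such that pupil_time ~= time.monotonic() + offset."""
    t_before = time.monotonic()
    req_socket.send_string("t")
    pupil_time = float(req_socket.recv_string())
    t_after = time.monotonic()
    midpoint = (t_before + t_after) / 2.0
    return pupil_time - midpoint


offset = estimate_offset(req)
# Later, for any received datum `msg`:
#     delay = (time.monotonic() + offset) - msg["timestamp"]
```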
Hello Pupil team, this is Amber again. I recently asked about using the nearest older transformation to get surface data at a higher sampling rate, and you mentioned that you would write up an example. I'm not sure if I missed the example because I couldn't find it on Discord anymore. Any pointers will be very much appreciated!
Here you go! https://gist.github.com/papr/da7b17c165ccbfa6a6835e261607c51e
Hey, sorry for the delay. I will try to get this done today
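The gist linked above is the authoritative version; the rough idea, heavily simplified, is to cache the most recent surface transform and apply it to each incoming gaze datum yourself. The sketch below deliberately glosses over the crucial step (gaze must first be converted into the scene camera's undistorted pixel space using its intrinsics, which is exactly what the gist handles), so treat it as structure only, not a drop-in solution:
```
import cv2
import numpy as np

latest_img_to_surf = None  # 3x3 homography from the most recent surface datum


def on_surface_datum(surf):
    """Call this for every incoming surface message; caches its transform."""
    global latest_img_to_surf
    latest_img_to_surf = np.array(surf["img_to_surf_trans"], dtype=np.float64)


def map_gaze(gaze_px_undistorted):
    """gaze_px_undistorted: (x, y) in *undistorted* scene-camera pixels (see note above)."""
    if latest_img_to_surf is None:
        return None
    pt = np.array([[gaze_px_undistorted]], dtype=np.float64)  # shape (1, 1, 2)
    mapped = cv2.perspectiveTransform(pt, latest_img_to_surf)
    return mapped[0, 0]  # normalized surface coordinates
```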
Hello Pupil team, I am trying to do an experiment using pupil core to measure people's gaze while they are doing their daily activities. By daily activities, I mean for example, cooking, watching TV, etc. What exactly do I need to do when conducting such an experiment? For example, do we need to use printed calibration points instead of using the computer screen for calibration?
Hi, is there a reason for using Pupil Core? Pupil Invisible might be much better suited for this use case. That said, you can use printed markers if you want, but you can similarly use the screen marker calibration. That is because Pupil Core calibrates gaze relative to the scene camera, not the environment.
I understand about the calibration. Thank you. Due to budget and experiment schedule, it is difficult to buy Pupil Invisible now. If the examinee moves around somewhat, is it ok to use a longer USB cable instead of the included one? I also have a question about syncing with external sensors. I understand that the timestamp in gaze_positions.csv represents UNIX time. The attached picture would be the timestamp when recorded on 2022/09/13 04:22 (UTC), but this number does not seem to represent the correct time. Could you please give me your opinion on this?
Hi, thanks for your reply. I did follow the script you have linked but, correct me if I am wrong, isn't that just for synchronizing clocks? I am just concerned about the processing time at this point as my RTT over the network seems significantly less than that (RTT ~ 1e-5s) Does it usually take ~ 0.01s for the processing?
To estimate the correct delay, you need to know the pupil time at arrival of datum in question. This is what the linked script is all about. The t request is not instant and includes process/transfer time as well.
10ms does not sound that unlikely. Camera latency alone is 8.5ms
Thank you so much!! I will work on it and post here if I have follow up questions.
Hello Pupil team, My colleagues and I are wanting to measure aspects of the vestibular ocular reflex with the Pupil Core system; specifically, ocular torsion. I was wondering if that was possible, and if so, which output measure should we look at?
Hi @user-bdb6e6! Pupil Core does not provide cyclotorsion measurements. To collect these measurements, you would need to get an iris pattern in the eye camera and check the following frames for rotations.
While you can change the eye camera resolution to 400 by 400 px, it might not be enough to detect features of the iris, especially if the subject's iris does not have many salient features to match (like freckles or distinctive collagen patterns). Potentially, one could use a cosmetic contact lens with an irregular pattern, as they may have more salient patterns to match, but that contact lens needs to be fitted by a professional, and you will have to account for movement of the contact lens. Astigmatic (toric) contact lenses have a specific marking that optometrists use to know whether the lens is correctly placed, yet those markings would most probably not be visible to the eye camera, as they are quite subtle.
Thank you, I appreciate the help!