any idea why the surfaces folder in my export is coming up empty?
No files at all?
I have been away for too long. I am sure I missed a lot of interesting stuff and updates. Right now I am searching for a way to plot a heat map. Anyone with ideas? I simply have average saccade length data
Hey @user-7daa32, welcome back! Is saccade length data really all you have? I'm not sure how a heatmap would work in that context, since presumably the length data has no directionality? Maybe a histogram/descriptive statistics would make more sense.
Hey there, please help me out. One of the eye cameras is suddenly not working. Like showing nothing but all black. But the other eye camera works just fine. Does anybody have any ideas on this?
Hey, black or gray?
It's black. I have one screenshot if it helps.
If it is fully black try restarting with default settings in the general settings
Thank you
Not the Saccade length. It is a derived average using the Saccade lengths. What about using the gaze data, I was able to create a heat map in the player but the visual stimulus shape is distorted.
hi can I ask a ELI5 type question as I haven't used the software yet - I have skimmed through the docs and couldn't quite figure it out. So if you wanted to define an AOI post-recording (i.e. without markers during recording) - I see you can add a surface with the surface tracker plugin - but how does this work with continuous video with a dynamic world and/or head - would you need to modify the surface for each frame of the video, assuming a shift of the AOI in the camera image?
"without markers during recording": that does not work, unfortunately. You can detect the markers post-recording but they need to be visible in the scene video.
ok - let's say I wanted to have an AOI as a car that moves across someone's view - would this be possible in post-processing analysis (using your software)?
If you placed a huge marker on it, maybe. But this is probably not what you are looking for. In your case, I would run a third-party object detector on the scene video and check whether the exported raw gaze data falls onto any of the detected objects.
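A minimal sketch of that check, assuming you have already converted the exported norm_pos gaze to scene-camera pixels and have bounding boxes from some off-the-shelf detector (the box format and values here are purely illustrative):

```python
def gaze_in_box(gaze_px, box):
    """Return True if a gaze point (in scene-camera pixels) falls
    inside a detector bounding box (x_min, y_min, x_max, y_max)."""
    x, y = gaze_px
    x_min, y_min, x_max, y_max = box
    return x_min <= x <= x_max and y_min <= y <= y_max

# Hypothetical detection of a car in a 1280x720 scene frame
car_box = (350, 250, 600, 500)
print(gaze_in_box((400, 300), car_box))  # True
print(gaze_in_box((100, 100), car_box))  # False
```

Running this per video frame over all detections gives you a "gaze on object" time series.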
ok thanks - is there software that you would recommend that I could have a look at - ideally free?
There are many neural networks that have been published over the years. I am not up-to-date in this regard. The choice would also heavily depend on what you need, e.g. if you only need a rough outline (a rectangle) or an accurate per-pixel map.
yeah there has been a lot going on with automatic scene recognition - ok so I think I get it - the analysis is very geared up around the markers - cheers
I have a question about the demo workspace. I tried playing around with the videos and things but realized that most of the templates and videos there were locked. Is the demo workspace designed to be locked to edits, or is there a way around this to allow for edits?
How does the natural calibration algorithm work? How does the code match the gaze point with the red point in the natural scene? What algorithms are used to deal with it?
Could you please explain the running logic of the natural calibration algorithm? With pictures, please!
Hello, I've been using the LSL Relay plugin to stream data to LSL while also using the lsl_inlet.py example in pupil helpers to export the data to a .csv file. How would I go about modifying the lsl_inlet.py example in order to display the computer's local clock for the timestamp, using datetime for example?
@papr (Pupil Labs) hi
Hi there. Just a quick question as I am writing a paper - when one does the eye tracking calibration in Pupil Capture, an error message can appear when the calibration is considered not sufficient (i.e. low data confidence, notably, I imagine?). What threshold / error of measure does Pupil Capture automatically use? In other words, what data quality is considered acceptable for the software to consider the calibration sufficient? Thank you so much for your answer.
Hi! 2d or 3d calibration?
Hi, I'm facing some issues with one of the pupil cameras
Please contact info@pupil-labs.com in this regard
okay
In that case it will only fail for sure if there was no pupil or no reference data collected. Otherwise it will attempt to run the calibration which is a complex optimization function. There is no clear cut value that tells you beforehand if it will converge or not. Even if it converges, it might be very inaccurate. This is why we have the accuracy visualizer which applies the estimated calibration to the recorded Pupil data and compares it to the reference data. This tells you how good the fit is in visual angle error. Depending on your use case, you can decide to repeat the calibration or proceed
Thank you!
And what about the 2D calibration ? Does that have a threshold ?
hi @papr (Pupil Labs) When I run this code, I get an error and can't install it
Before I run the code, I have downloaded the 'requirements.txt' file
@user-d407c1
Hi, I compiled Pupil Core on my Nvidia Xavier NX (ARM based). If I try to run Pupil Capture I get the error glfw.GLFWError: (65542) b'GLX: No GLXFBConfigs returned'
This looks like an issue with one of our dependencies. See https://www.glfw.org/
i can run only the service app, but the fps rate is very slow (3-5 fps...)
Pupil Capture requires a fair amount of CPU. It is possible that this device is not able to deliver it. Note, Pupil Core software does not make explicit use of the GPU.
hi, i have a question regarding exporting multiple recordings at once in Pupil Player. I automated Pupil Capture to trigger recording if certain events happened. Now i have about 200 single recordings and i need to export the 3d pupil diameter with Pupil Player... Is there any faster way than dragging and dropping every single folder into it and pressing e?
Hi! Are you only interested in the recorded pupil data, without re-processing the eye videos?
If that is the case, check out https://gist.github.com/papr/743784a4510a95d6f462970bd1c23972
Yes, perfect, thank you very much! That's exactly what I need!
i have another problem, if i capture the world camera with uv4l the image is distorted, like a fisheye but only on the left part
Solved the fisheye problem. But Linux detects the 2 eye cameras as video devices; if i try to capture the video from the world camera it works, if i try to get the video from the 2 eye cameras it doesn't. Maybe this is the reason why the fps is so slow (pretty blocked...)?
Just to clarify, you selected the world camera in the world process and the eye cameras in the eye processes, correct?
the 2 eye cameras start, display a single image, then freeze
Right! If you close one of the eye windows, does it work?
no, everything is frozen, i need to kill the process
i get this error
Assertion failed: ok (src/mailbox.cpp:99)
Are you running from source? Which branch are you running from?
and a lot of cython error log...
i cloned the master branch, and i recompiled everything from source
Hi all, is there a post hoc kind of way to know with what sampling frequency I recorded pupil size using the Pupil Core?
You can simply subtract neighbouring pupil timestamps (grouped by eye id) to get the inter-sample duration. 1 / inter-sample duration
gives you a per-sample estimate of the sampling frequency. I recommend averaging the values over time.
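A quick sketch of that calculation with pandas, assuming a `pupil_positions.csv` export with `eye_id` and `pupil_timestamp` (seconds) columns, as in recent Pupil Player exports (the synthetic data at the bottom is just for demonstration):

```python
import pandas as pd

def sampling_frequency(df):
    """Per-eye mean sampling frequency from a pupil_positions export."""
    result = {}
    for eye_id, group in df.groupby("eye_id"):
        # Inter-sample durations from neighbouring timestamps
        dt = group["pupil_timestamp"].sort_values().diff().dropna()
        # 1/dt is the per-sample frequency estimate; average it over time
        result[eye_id] = float((1.0 / dt).mean())
    return result

# Synthetic example: two eyes, each sampled at 120 Hz
ts = [i / 120 for i in range(240)]
df = pd.DataFrame({"eye_id": [0] * 240 + [1] * 240,
                   "pupil_timestamp": ts + ts})
print(sampling_frequency(df))  # ≈ {0: 120.0, 1: 120.0}
```

With a real export you would replace the synthetic frame with `pd.read_csv("pupil_positions.csv")`.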
Any recommendations as to what confidence level to use as a cut off point? I want to filter out noisy data and keep only the samples where gaze is good, but I don't know much is enough. Thanks in advance!
Hi @user-75df7c!
I recommend you discard data points with a confidence level lower than 0.6
Hi, did you use only the confidence level as a filter or did you enter other thresholds for example on diameter, smooth and more? How did you filter the signal?
Hello, I have a question about the 3D gaze origin. I understand the origin of the x and y axes but not the origin of the z axis. What is the meaning of gaze_normal z? If it were 1, where should I think a person is looking?
Hi, the z-axis points forward.
Oh, thank you. I had misunderstood. So, the z value refers to the world camera's coordinate system, right?
The z value alone is nearly meaningless.
Imagine two 3d lines, one per eye. Each goes through the center of the eye ball and the center of the pupil. These lines point towards what you are looking at. Each line is defined by a point (eye_center0/1) and a direction (gaze_normal0/1).
We try to find the intersection of those lines, which corresponds to gaze_point_3d.
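A minimal numpy sketch of this idea (not Pupil's actual implementation, which handles more edge cases): two 3D lines rarely intersect exactly, so a common approach is to take the midpoint of the shortest segment between them.

```python
import numpy as np

def nearest_intersection(p0, d0, p1, d1):
    """Midpoint of the shortest segment between two 3D lines.

    p0/p1: a point on each line (e.g. the eye ball centers),
    d0/d1: direction vectors (e.g. the gaze normals).
    """
    d0 = d0 / np.linalg.norm(d0)
    d1 = d1 / np.linalg.norm(d1)
    b = np.dot(d0, d1)
    w = p0 - p1
    denom = 1.0 - b * b  # zero if the lines are parallel
    # Parameters of the closest point on each line
    t0 = (b * np.dot(d1, w) - np.dot(d0, w)) / denom
    t1 = (np.dot(d1, w) - b * np.dot(d0, w)) / denom
    c0 = p0 + t0 * d0
    c1 = p1 + t1 * d1
    return (c0 + c1) / 2

# Example: the z-axis and a line through (1,0,0) with direction (-1,0,1)
# meet at (0,0,1)
print(nearest_intersection(np.array([0.0, 0, 0]), np.array([0.0, 0, 1]),
                           np.array([1.0, 0, 0]), np.array([-1.0, 0, 1])))
```

The distance to the resulting gaze point is then the length of the whole vector, e.g. `np.linalg.norm(gaze_point_3d)`, not the z component alone.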
Is this clearer?
Then the z value is the difference from the gaze determined by x and y?
Until I asked, I thought z was the distance to the object.
Thank you very much. Then, how can I interpret z with x and y?
As mentioned above, x/y/z describe a direction. Specifically, the direction in which one eye ball is rotated towards. The direction alone is not so useful, unless you want to know by how much the eye ball rotated in comparison to a second eye ball direction.
Hello, I am having troubles connecting to pi.local:8080
Then, is the z value distance to the object?
Not z alone. But the length of the whole vector is.
I see! thank you. Have a nice day.
Hi everyone, I would like to use my Pupils Lab Core to determine how often a gaze direction change occurs between three monitors, measure fixations and make a heatplot. I now have a setup that allows me to do robust tracking. Despite the good documentation, I still have some questions: In which format are the timestamps? For example, what does "9245888951" mean under "world timestamp"? Under pupil_positions.csv is also the variable "diameter". This is shown to me with e.g: "2292571258544920". How is this number to be interpreted? Is there a more detailed documentation for the beginner questions? Thank you!
Hey, are you looking at the exported csv in Excel?
Both world timestamp and diameter should be decimal point values. The first in seconds, the second in pixels.
Hello folks, I'm planning to buy a new PC with a new CPU for my experiments, as the current CPU cannot keep recording even at 30Hz. I found this old post saying that the Pupil Labs bundle does not support Xeon processors. Is this still an issue? Should I get i7 or i9 processors? https://discord.com/channels/285728493612957698/285728493612957698/578844966638452749
That is still correct
Hi, can I ask a question? What is the difference in the value that norm_pos_x/y between pupil_positions and gaze_positions?
The frame of reference is the difference. pupil norm_pos refers to a point in the corresponding eye video camera. gaze norm_pos refers to a point in the scene camera.
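To make the frames of reference concrete, both are normalized coordinates, so converting to pixels depends on which camera's resolution you use. A small sketch (the resolutions here are just example values; norm_pos is assumed to have its origin at the bottom left, image pixels at the top left):

```python
def norm_to_pixels(norm_x, norm_y, frame_width, frame_height):
    """Convert a norm_pos coordinate (origin bottom-left, range 0..1)
    to image pixel coordinates (origin top-left)."""
    return norm_x * frame_width, (1.0 - norm_y) * frame_height

# pupil norm_pos -> use the eye video resolution,
# gaze norm_pos  -> use the scene video resolution
print(norm_to_pixels(0.5, 0.5, 1280, 720))  # center of a 1280x720 scene frame
```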
Hi, I need use pupil diameter. But blinks affect pupil diameter. Do you have any reference code or resources to get rid of the blinks points and interpolate new values?
hi, can I copy the calibration data from one PC to another? I can launch the Capture app on my PC and calibrate the glasses, but i need the Service app on my SBC with Linux. The problem is the SBC can only use the Service app, and the calibration doesn't work, but i need IPC gaze messages (no messages without calibration...)
If you are ok without the scene video, what is it that you need the gaze data for? In other words, what kind of data are you interested in in particular?
i need gaze data, i need to capture the world camera and then store where the user is looking
on the SBC i use the service app, but without a calibration it doesn't send me data
Let's continue in the thread of the other day
Hello folks, can anyone help here? One of the eye cameras stopped functioning, showing 0 fps. And the eye camera window is grey. How can I fix it?
please contact info@pupil-labs.com in this regard
Any answers are appreciated.
does Pupil Invisible work with iOS???
Hi, no it does not. It only works on OnePlus 6 and OnePlus 8/8T with Android
Hello!
I just got the glasses and companion device
The glasses do not connect to the companion device
Hi! The first steps will always be to connect the glasses via the included USB cable to the Companion device / phone. Have you done that already?
How can i connect them?
Does it need to be connected the whole time?
When you want to use it, yes! It gets its power from the phone and the gaze estimation happens in the phone, not the glasses themselves
That's not what is advertised. This is a major issue
Can you point me to the specific advertisement that you are referring to?
https://pupil-labs.com/products/invisible/ Can you show me where it is written?
In all the pictures where people are wearing the glasses, you can see the cable. We never advertise that the glasses can be used wirelessly
Hey @user-72f9ba. Does your use-case strictly exclude USB connectivity? Is the issue cable management? You can also reach out to info@pupil-labs.com to discuss product fit and options
Hello Dear Pupil Labs Team,
We want to mention an issue I have encountered in Pupil Capture using the Core product.
I use a new computer with a Ryzen 7 and a 3000-series Nvidia GPU. On this device, the frequency of the eye cameras is fairly suitable for my new purpose, 120+ Hz. However, the world camera gives nearly 15 FPS, with a huge latency, and is frozen sometimes. I guess due to some reasons related to the world camera process, I sometimes got a Blue Screen error on Windows 10. When the error didn't occur, gaze became really noisy.
I would like to hear your suggestions about the relevant jumping data. I know that the issue is not clear, but maybe some aspects can be enough to predict it.
Have a good day!
Hello developers. I am a university student in Japan. I am using Pupil Core in my research and I have done the following work to enable LSL. 1. Copy the pylsl and pupil_capture_lsl_relay directories to pupil_capture_settings/plugin. 2. Run Capture with sudo.
However, when I try to activate LSL, I get the following error which I cannot resolve.
world - [WARNING] plugin: Failed to load 'pupil_capture_lsl_relay'. Reason: 'liblsl library '/Users/tappun/pupil_capture_settings/plugins/pylsl/lib/liblsl.dylib' found but could not be loaded - possible platform/architecture mismatch.
You can install the LSL library with conda:
conda install -c conda-forge liblsl
or with homebrew: brew install labstreaminglayer/tap/lsl
or otherwise download it from the liblsl releases page assets: https://github.com/sccn/liblsl/releases On modern macOS (>= 10.15) it is further necessary to set the DYLD_LIBRARY_PATH environment variable, e.g. DYLD_LIBRARY_PATH=/opt/homebrew/lib python path/to/my_lsl_script.py
world - [WARNING] plugin: Failed to load 'pylsl'. Reason: 'liblsl library '/Users/tappun/pupil_capture_settings/plugins/pylsl/lib/liblsl.dylib' found but could not be loaded - possible platform/architecture mismatch.
I ran the following, but it made no difference.
brew install labstreaminglayer/tap/lsl
I don't have a good understanding of DYLD_LIBRARY_PATH, so I tried specifying it as follows.
DYLD_LIBRARY_PATH="/opt/homebrew/lib/python3.9/site-packages/pylsl/pylsl.py" It would be helpful to know if there are any mistakes.
How can I do this? OS used is MacOS 13.0 (22A380) Python 9.0.
The bundle ships the intel-x86_64 python that is emulated on m1 macs. You installed the native M1 pylsl which is why you get the emulation error.
Regarding the above problem, we considered the possibility that liblsl does not support M1 Mac, and by using a macbookpro (ver. 11.6) equipped with an intel CPU, the error did not occur. Therefore, we will use that machine for development for a while. If you know how to run it on the latest macOS and M1 Mac, I would appreciate it if you could let me know.
Thanks.
I recommend to do so on the develop branch. See the special note about creating an intel-x86 virtual environment https://github.com/pupil-labs/pupil/tree/develop#installing-dependencies-and-code
Analyze the data
Hi all, does anyone have a detailed explanation of how to analyze the data exported from the system? With the codes and parameters that can be obtained? I am new here and I wanted to understand how they work. Also I wanted to know if the analyses can be done directly from the software you download, or if you need to implement them with Python or Matlab code, thanks!!!
Hi @user-beb3db. Welcome to the community! You can find a detailed overview of analysis and visualisation plugins available in Pupil Player (our free desktop analysis software) here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-player. You don't need to use coding to work with these tools. That said, you can export the results into .csv files, which can of course be processed with Python or Matlab, if that's your interest. If you can share details of your use-case, we can try to follow up with more concrete guidance/examples!
and how does it analyze the data? Python, Matlab, C++?
Hello everybody! I was wondering if there is a 32bit version of pupil_core software for linux
Hi, what you specifically need for that are arm-based versions of the dependencies. Check out the #1039477832440631366 thread history.
To try to run it in a raspberry pi
On the other hand, I guess I will have to use a similar approach if I need to run it on Windows 11, right?
Do you mean Windows 11 on the rpi?
no, on another machine
We offer a pre-built bundle for Windows on Intel/AMD 64 systems
OK, thanks, it's up and running now on Windows 11. I will try to build on the rpi tomorrow. Thank you for your time.
Maybe an additional note: Pupil Core requires a fair amount of CPU. It is likely that the rpi won't be able to provide sufficient speed to run the pupil detection at higher frame rates. You can use it to record the video and run the detection post-hoc though.
Hi there! I am discussing a research study that used Pupil Labs CORE glasses. This is my students' first conversation about eye tracking details in my psychology of music class. I am looking for a post-hoc video that might illustrate fixations, saccades, etc. There are several videos on the Pupil Labs channel of post-hoc analysis and rendering, but there was no audio in the videos. Thanks in advance! I downloaded the demo videos from this link, https://pupil-labs.com/products/core/tech-specs/, but I cannot get them to load in any video apps that I have. Any help is appreciated!
Please open the demo recordings in Pupil Player and export them. The export will include the raw data as csv and the scene video with overlayed gaze. Check out our documentation for instructions.
are the demo recordings titled world.mp4, eye1.mp4, eye0.mp4? Thanks for the help. Looking to do work in eye tracking with Pupil SOON!
These are the raw camera recordings. You can play them back one by one using e.g. VLC media player.
@papr I've finished the recording. How do I export the pupil marking the (x, y) coordinates of the world cam?
The (x,y) coordinates of the red point
Open the recording folder in Pupil Player and press the export button. This will save the points as csv
I have finished
What is the name of the (x,y) coordinate of the red dot?
Variable name (x,y)
norm_pos
Excuse me, are these the two pictures?
Norm_pos_x/y in the gaze position csv file
These are the coordinates of Norm_pos_x/y
The y value is too small
My pupil line of sight moves within the frame of the screen
Just like the red circle
@papr
from which file did you load the data?
pupil_positions.csv
pupil != gaze data. What you are looking at is the pupil position within the eye video, not the scene video. You need to look at the norm_pos in gaze_positions.csv
Thank you. I found it!!!
I want to use the heat map function
You can read about the setup in our documentation https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
@papr
How do I do that
Excuse me, how to turn off the effect of this highlight?
You can turn off visualizations via the plugin manager (for unique visualizations) or in their corresponding menu (for non-unique ones).
ok! Thank you. I did it successfully
I can't get a heat map
Because you have not setup the markers and not defined a surface yet
I've turned on the heat map
What should I do
Please read the documentation that I linked above
Can natural calibration generate a heat map?
Calibration and surface tracking are two different steps. You need to calibrate first. This gives you gaze in scene camera coordinates. Then you need surface tracking to map the gaze onto the surface. Afterward, you can generate the heatmap.
Or is the heat map generated only by screen calibration?
I've done the natural calibration
Then, you only need to setup the AprilTag markers and define a surface as described in the documentation.
Then I recorded the video and the data
What is surface?
The surface is the area of interest onto which you want to generate the heatmap. Pupil Capture and Player need the markers in order to track the surface. for example your computer screen.
As the error message says, you need to setup markers first. The markers are displayed and linked in the documentation. https://docs.pupil-labs.com/core/software/pupil-capture/#markers
Right, I need to calibrate the heat map using the computer screen
May I ask, do I need to print the markers?
You can either display them digitally on your screen, or print them and attach them to your screen, yes.
https://raw.githubusercontent.com/wiki/pupil-labs/pupil/media/images/pc-sample-experiment.jpg
The only requirement is that you do not use the same marker more than once. Otherwise, you can choose any number (tag id) that you like
Is there any requirement for the number?
Make sure to include sufficient white border when displaying or cutting out the markers!
ok
I'll do the experiment now
after step 1, you can check if the markers are detected in Pupil Capture in realtime. Enable the surface tracker plugin. The markers should be marked in color. Then add a surface. You can edit the surface to fit your screen. https://docs.pupil-labs.com/core/software/pupil-capture/#preparing-your-environment
Is this step correct?
@papr
Is the surface tracker plugin enabled in capture exe?
not by default. you can turn it on in the plugin manager
OK
Is this all right?
@papr
Looks good to me. Please confirm that the detection works by running Pupil Capture and pointing the scene camera to the screen.
How should I marker the colors?
Pupil Capture will overlay color in the video preview on the markers if they are detected. (Surface tracker needs to be running)
"The markers should be marked in color."
ok
In capture exe, the heat map is in motion
But in record exe, the heat map is not moving
I am not sure what you mean by "record exe". Do you mean Pupil Capture? Also, what type of movement do you expect?
It's the player
In Capture, the heatmap is calculated based on the last x seconds. In Player, the heatmap is calculated based on the data within the trim marks. The trim marks are the small controls on the left and right of the timeline.
In the player, the heat map is always in place
It doesn't follow the eye movement
But in capture exe, the heat map follows eye movements
yes, because in player, the heatmap is calculated on all recorded gaze data.
okk
Excuse me, I want to export the x + y + fixation values of the heat map data, and then draw the heat map of the points in MATLAB.
If you have the surface setup in Player, hit export. There is a surfaces folder. Look at the fixations_on_surface_...csv file. Note that there is a difference between fixation and gaze data. The heatmap in Player is based on gaze data.
Which variables should I use?
Which file is it in?
i have the surfaces folder.
but the "fixations_on_surface_...csv" is only 1KB in size
it's blank, only the header row
That means that you have not run the fixation detector yet. Look at the gaze_on_surface...csv data instead
The table has data
Can I use these two columns? x: x_norm; y: y_norm
yes
Where should I look for gaze data?
@papr
Yes, yes.
Where should I look for gaze data?
you are looking at it already
What should the name of the file be?
x_norm/y_norm
Load the data in Matlab and use this https://de.mathworks.com/matlabcentral/fileexchange/66629-2-d-histogram-plot to generate the heatmap.
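If you prefer Python over Matlab, the same heatmap can be sketched with numpy's 2D histogram (column names as in the surface export; the bin count and the synthetic data are arbitrary choices for illustration):

```python
import numpy as np

def surface_heatmap(x_norm, y_norm, bins=32):
    """2D histogram of gaze positions on the surface (0..1 range).

    Each cell counts how many gaze samples fell into it; that count is
    what drives the color when you plot the result (e.g. with imshow).
    """
    heatmap, _, _ = np.histogram2d(x_norm, y_norm,
                                   bins=bins, range=[[0, 1], [0, 1]])
    return heatmap

# Synthetic example: 1000 gaze points clustered at the surface center
rng = np.random.default_rng(0)
x = np.clip(rng.normal(0.5, 0.1, 1000), 0, 1)
y = np.clip(rng.normal(0.5, 0.1, 1000), 0, 1)
hm = surface_heatmap(x, y)
print(hm.shape, hm.sum())  # (32, 32) 1000.0
```

With a real export, x and y would come from the x_norm/y_norm columns of gaze_positions_on_surface_...csv.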
Is it confidence?
okk
I need one more variable. This variable is used to draw the color of the heat map. The degree of this variable determines the color.
That variable depends on how you aggregate the data. matlab should be doing this for you
The value of this variable determines the color of the heat map.
I want to use this variable : "gaze data"
gaze data are x/y locations on the surface. this is the x/y_norm data.
Can the software export gaze counts?
you can simply count the rows
ok
I'd like to ask you again. What variable did you use for the color of the heat map you generated?
the variable is not exported directly. This is how we calculate the heatmap https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface.py#L616-L640
I now want to verify the accuracy in matlab.
I want to use the same variables to generate the same heat map as you do
Thank you very much
Hi! Just wondering if there is a way to change the background colour of the on-screen calibration and test routines in Capture? The current white is far too bright compared to our stimuli.
Hi, unfortunately, no. You might want to use the natural features calibration instead.
That said, why do you need the calibration routine to be of the same brightness as your stimulus?
Natural features would indeed work, but I was rather fond of the automated nature of the on-screen markers.
Hm, thinking about natural features I might have gotten an idea. We can embed the markers in our radar display using some custom logic and then run the on-screen marker calibration from Capture in parallel, while not actually displaying its markers on the screen.
In that case, I recommend the single marker calibration in physical mode. It will expect that another source displays the marker and only performs detection (without displaying a marker on its own)
Hello Pupil Labs,
We have the Pupil Core Child Version. We will have active children participants in a live play room environment. Can we download the software to a phone or tablet, rather than a laptop, to run Pupil Capture and Player?
Pupil Core can not be run from a phone. You can use a small form-factor, tablet-style PC running Pupil Capture to make Pupil Core more portable. If the specifications of such a device are low-end, you can record the experiment with real-time pupil detection disabled to help ensure a high sampling rate. Pupil detection and calibration can then be performed post hoc. But it is important that you establish a good eye model fit prior to recording. Alternatively, you can connect the laptop to the network, place it in a backpack, and control Pupil Capture from a separate machine using the network API
Hello everyone, has anyone ever tried to install the pupil software source code on a raspberry? I got several errors after typing this line of instruction: pip install -r requirements.txt. So I decided to install each package individually. But I am stuck with the package cysignals. I always receive the error log message: "failed building wheel for cysignals". Any ideas? Thanks in advance for your help.
Let's continue the discussion in #1039477832440631366
@user-2196e3 see https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv and https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector
@papr Thank you! That's very helpful!
When I started my two devices (pupil invisible) they went into a sync mode and I cannot access the cloud or any of the menu items on the app. It did allow me to take eye-tracking data on a previously entered wearer name but both apps on each device are now stuck and I have recordings on each device that will not upload to my cloud account. Please help..
Could you please try logging out and back in again?
Will do, standby
That did it, thank you
hello! I am working on Reference Image mapper and I am wondering how to add a sample to be mapped. I used a reference picture and video but I now want to apply that enrichment to recordings. How can I do that? Thanks
Add all the recordings that you want to have mapped to the project in which the enrichment is defined.
Hi all, is there anyone who applied a filter after exporting the data? Can you share the code if possible? I would like to get feedback. For now I have read an article that was recommended to me above, but it is a bit confusing! I would like to compare notes with someone who is processing the exported data! Thank you
and also, does anyone have photos of the setup during the experiment?
Hi, I have 2 questions. 1. Where and how can we download the recording folder? I found we only have recording and pupil data, but we don't have a recording folder. 2. If we don't have the recording folder, we don't have info.player.json. Is there any way we could convert the pupil data timestamps to real time correctly? Thank you!
Could you share a Screenshot of the file list?
We are still use pupil core device to do more experiments. Is it possible to get raw data for new experiment? How to get access to raw data?
Are those files from a recent recording? Pupil Capture creates a folder that you would open in Pupil Player. That is the raw data. You might be using a very old Pupil Capture version.
By the way, which version do you recommend to use?
Can someone explain to me what timestamp is? It's in seconds but what does it mean? Is it seen as time for each sample?
"Is it seen as time for each sample?" It is.
Maybe this tutorial can give you a more concrete idea of how "Pupil time" relates to the normal system clock https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
More specifically, the time at which the sample was generated. Not its duration.
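As a sketch of how "Pupil time" can be mapped onto the system clock: a recording's info.player.json contains the recording start expressed both in Pupil time and in Unix time, and their difference is the clock offset (key names as in recent recordings; the numeric values below are made up):

```python
import datetime

def pupil_to_wallclock(pupil_ts, start_time_synced_s, start_time_system_s):
    """Convert a Pupil timestamp to wall-clock time (UTC).

    start_time_synced_s (Pupil time) and start_time_system_s (Unix time)
    describe the same instant, so their difference is the clock offset.
    """
    offset = start_time_system_s - start_time_synced_s
    return datetime.datetime.fromtimestamp(pupil_ts + offset,
                                           tz=datetime.timezone.utc)

# Made-up values for illustration
print(pupil_to_wallclock(9245888951.123, 9245888000.0, 1660000000.0))
```

The linked tutorial covers the same conversion in more detail, including drift between the clocks.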
Hi all, I want to use a Raspberry pi to stream video to a computer. Where the computer runs pupil capture. I found and tried to install using this link https://github.com/Lifestohack/pupil-video-backend
But I got an error message "ModuleNotFoundError: .... cv2"
Pip install opencv-python
When I start to run python main.py
sudo apt install -y python3-opencv
This installs OpenCV in the global environment for the default Python.
pip install opencv-python
This installs in the current environment
@papr I switched to the English alphabet instead of the native language. But the output is a box
I think the included font only has English characters. It might be possible to load a custom font but I would need to look that up next week.
Hello team,
We are currently using Pupil Core to do realtime eye tracking (gaze on surface) via the network API. We would like to get surface data at a higher frequency (at least 120Hz) to detect saccades. We noticed the surface datum is 30Hz, so we subscribed to both surface & gaze, and tried to map the gaze ourselves using img_to_suf_trans (which updates at 30Hz). However, applying the homographic transformation to the gaze data (norm_pos in gaze) does not give us the same surface data. I saw from a previous post that it is actually more complicated, since you map gaze in undistorted camera space. I'm still not sure how to apply img_to_suf_trans in undistorted camera space to get surface data at a higher frequency. I would very much appreciate any suggestions or pointers to existing examples. Thank you in advance!
Do I understand it correctly that you would prefer mapping gaze using an older surface location instead of waiting for the newest one?
The surface datum should contain transformation matrices for both, distorted and undistorted, spaces.
@papr Thank you very much!!
hello
i'm using pupil core but i have a question
my pupil detection is very sensitive
which makes it harder to use
can i handle this in software?
@papr Looking forward to your reply!!
Hello, where can I get the format/description of the data that can be collected via pupil labs on HTC VIVE?
It is the same as for the Pupil Core format. https://docs.pupil-labs.com/core/software/recording-format/
You can download an example raw recording here https://drive.google.com/file/d/11J3ZvpH7ujZONzjknzUXLCjUKIzDwrSp/view?usp=sharing
Using Pupil Player, you can export the data to csv https://docs.pupil-labs.com/core/software/pupil-player/#export
Hello all, I used the pupil-video backend on a Raspberry Pi and a PC. The Raspberry Pi captures the video stream and the PC uses that stream to run the pupil detection. Everything looks right on the Raspberry Pi side.
However, I cannot see the video on the PC.
I just see an "HMD Streaming" entry in the video source settings.
Have you tried toggling on the "enable manual camera selection" on that panel? the icon will become green and you will be able to select other sources
When I expanded the "Active Device" list, I only saw "Local USB".
I use Windows PC
Actually, I cannot select the new source
Are you able to access the streaming (eye camera) from other apps (e.g. OBS, Meet,...)?
I never tried, how can I do that?
Hi, I have a question: is there a fast way to extract blink count data from about 500 recordings, without using Pupil Player?
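Pupil's offline blink detector works on drops in pupil-detection confidence, so one option is to loop over the recording folders in a script and apply a similar heuristic to each recording's pupil confidence values. The sketch below is a deliberately simplified stand-in for that filter, not Pupil's actual algorithm; the threshold and minimum-duration values are made up and would need tuning:

```python
def count_blinks(confidence, threshold=0.5, min_len=3):
    """Count runs of consecutive low-confidence samples as blink candidates.

    confidence: sequence of per-sample pupil confidence values (0..1).
    A run of at least `min_len` samples below `threshold` counts as one blink.
    """
    count = 0
    run = 0
    for c in confidence:
        if c < threshold:
            run += 1
        else:
            if run >= min_len:
                count += 1
            run = 0
    if run >= min_len:  # handle a dip that reaches the end of the data
        count += 1
    return count

# two clear dips below 0.5 -> two blink candidates
print(count_blinks([0.9, 0.9, 0.1, 0.1, 0.1, 0.9, 0.9, 0.2, 0.1, 0.2, 0.1, 0.9]))
```

In practice you would read the confidence column of each recording's pupil data and call this once per folder, collecting the counts into one table.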
hi @&288503824266690561 - I am looking at your DIY core solution. Is the published BOM (google sheets: Pupil Headset BOM : Sheet 1) the most up-to-date version? I see that the base compatibility requirements are UVC 1.1 - are there any additional limitations with the DIY solution and the available github pupil labs software?
Please see the first 5 points https://gist.github.com/papr/b258e0e944604375752eae502b4ad3d5 (i.e. the camera needs to support mjpeg, not h264)
Hi, just a general question for people using the surface tracker: what do you do if participants moved a lot? In our case this led to the surface moving quite a bit, and I am not sure what to do. Please help! Also, we had trouble getting the plugin to work for videos. Support appreciated!
Is the issue that the surface markers are no longer being recognized due to motion blur?
Hello
I have one question.
After I do the calibration,
I can see a square box on the monitor.
Which color is it? Is it green?
What is that??
I don't have a photo... but hmm...
I can see it in Capture.
It looks brown.
Could you please share a screenshot of what you are referring to?
I will check now
Can you see that box??
Yes, that is the box that I had in mind. This is the calibration area. The area in which the gaze estimation will be most accurate.
Aha...okay
And could you please give a short description of that graph?
What should I do when I see that?
I am not sure what you are referring to. This sounds like you are seeing an error in the picture. But in the picture, the software looks normal to me.
Yes it's correct.
I need to describe the software
to a customer.
So these days I'm the one operating it.
Hello!!~~
Hi all, I currently use an old eye tracker with only 2 cameras: left eye and right eye. So, I would like to know if it is possible to have the old algorithm that you used to calculate the direction of the gaze, when you used only 2 cameras? Thanks in advance.
The new eye tracker also uses two eye cameras. If you are referring to the old 3d pupil detection algorithm, you should use Pupil Core software version 2. Note that the 3d pupil detection works per eye camera, i.e. it is agnostic to the number of cameras.
Hello, I have a question about the core eyetracker. Is there any analysis software from pupil labs that can automatically evaluate the recordings made? The recordings are around 2 hours long and an automatic evaluation would be a significant advance over doing it by hand. Or are there any recommendations for such a project? Thanks for an answer
Hi, what kind of metrics/evaluations are you looking for?
We want to measure the time the user is looking at a certain area. Like for example monitors (up to 6) and then per monitor a time as output value.
Have you been using the surface tracker in realtime/Pupil Capture?
Yes
Using this script, you can extract the realtime-surface-mapped gaze https://gist.github.com/N-M-T/b7221ace2e7acf0c0c836773a3b4cf7c and automate your analysis
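For the aggregation step afterwards, a minimal sketch: assuming you have collected, per recording, a time-sorted list of (timestamp, surface_name, on_surf) tuples (this structure is illustrative, not the script's exact output format), total gaze time per monitor is a simple accumulation:

```python
from collections import defaultdict

def dwell_times(samples):
    """samples: time-sorted list of (timestamp, surface_name, on_surf).

    Attribute the interval up to the next sample to the current surface
    whenever gaze was on that surface.
    """
    totals = defaultdict(float)
    for (t0, name, on_surf), (t1, _, _) in zip(samples, samples[1:]):
        if on_surf:
            totals[name] += t1 - t0
    return dict(totals)

samples = [
    (0.0, "monitor_1", True),
    (0.1, "monitor_1", True),
    (0.2, "monitor_2", False),
    (0.3, "monitor_2", True),
    (0.5, "monitor_2", True),
]
print(dwell_times(samples))
```

With one tracked surface per monitor (up to the six mentioned above), this yields one dwell-time value per monitor.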
I'll try that, thanks 🙂
Hello, I am using the Pupil Core glasses with Unity, and have used scripts from this resource: https://github.com/pupil-labs/hmd-eyes
I am aware this works for the HMD, but I was wondering if it works directly with the Pupil Core glasses?
Hello. I'm using the pupil core headset. I have a question about gaze point 3D values in the gaze recordings. What is the datum of the 3d coordinates, since there are negative values? and what's their units? Thanks!
Hi @user-040866! Please check out our 3D camera space coordinate system https://docs.pupil-labs.com/core/terminology/#coordinate-system, which is the one `gaze_point_3d` follows. I also recommend taking a look at the reference system here: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html. As you can see, x and y can be negative depending on whether you look right/left or up/down. The z-axis should always be positive.
The units are in mm
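As a small illustration of that coordinate system (x right, y down, z forward, all in mm), a `gaze_point_3d` vector can be turned into horizontal and vertical gaze angles. This helper is not part of the Pupil API, just a worked example:

```python
import math

def gaze_angles(x, y, z):
    """Convert a 3D gaze point (mm, camera coordinates, z pointing forward)
    into (horizontal, vertical) angles in degrees relative to the camera's
    optical axis."""
    horizontal = math.degrees(math.atan2(x, z))
    vertical = math.degrees(math.atan2(y, z))
    return horizontal, vertical

# a point straight ahead at 500 mm is at (0, 0) degrees
print(gaze_angles(0.0, 0.0, 500.0))
```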
Hello. The client asked for an explanation of the software menus. Do you have an explanation of the menus? Not the data documentation on the homepage.
Hey, the documentation should explain the general menus. Beyond that, each plugin has its own menu. We don't have a description of every single UI element, but the documentation has a section on each plugin, which helps with understanding the menus.
Hi, i have a question: is it possible to use eye trackers while wearing glasses or is it better to take them off?
Hi @user-beb3db 🙂 Sometimes it's possible to put the Core headset on first and then the eyeglasses, ensuring that the eye cameras capture the eyes from below the glasses frame. Note that this is not an ideal condition, but it does work for some people, depending on eye physiology and eyeglasses shape/dimensions.
Hello everyone ! I have a question. Is it possible using Pupil Core to get the head positions (coordinates of the head or head mouvements) ? Thank you in advance and have a nice day !
Hi @user-6586ca! Check out the head-pose-tracking plugin https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
Hello, I have some concerns about the eye safety of the IR LED. I saw in the BOM sheet that Core uses the SFH 4050 with 4 mW/sr radiant intensity. Currently, I can't find this LED at a local store. They have other alternatives; can I consider any IR LED around 4-14 mW/sr to be safe?
Hi @user-78c370 🙂. The IR LED listed in the DIY bill of materials should be available from online vendors. Note that Core conforms with the EU IEC-62471 standard for eye safety. We wouldn't be able to comment on the safety of third-party IR LEDs, I'm afraid.
Found an interesting document about eye safety.
Hi,
I've noticed that for some of my participants, the edge of the pupil and iris is blurred in the eye camera. At first I thought it's a camera issue but I've swapped cameras and the problem persists.
I also used the same eyetracker/cameras on myself in the same lighting conditions and the problem wasn't reproduced.
My participants' eyes look normal to me, so it wasn't a result of some eye defect. I noticed that the two participants with the same problem had green irises, but not sure about the others who didn't have the problem. I've dark brown eyes, but I doubt everyone else I used the eye-tracker on successfully has dark brown eyes.
(I notice that this person has residual mascara on, but I don't think it would cause what I described -- but I could be wrong)
May I know what's causing this and how to fix it, please? Thanks! 🙂
Hi @user-89d824!
Mascara and eye cosmetics can influence the lipid layer of the tear film. This can be why you might see this blurred area, as the tear film becomes oily. You can find more about how eye cosmetics affect the tear film here: https://www.dovepress.com/investigating-the-effect-of-eye-cosmetics-on-the-tear-film-current-ins-peer-reviewed-fulltext-article-OPTO
If you have physiological serum at hand, one way could be to "clean" the tear film before measuring.
That said, here are some steps you can take to improve pupil detection:
1. Position the eye camera to minimise 'bright pupil' and/or glare. Specifically, try to reduce the bright spot you can see near the pupil's edge.
2. Manually change the exposure settings to optimise the contrast between the pupil and the iris: https://drive.google.com/file/d/1SPwxL8iGRPJe8BFDBfzWWtvzA8UdqM6E/view?usp=sharing
3. Set the ROI to only include the eye region, excluding the dark corners of the image. Note that it is important not to set it too small (watch to the end): https://drive.google.com/file/d/1NRozA9i0SDMe_uQdjC2jIr000iPjqqVH/view?usp=sharing
4. Modify the 2D detector settings: https://docs.pupil-labs.com/core/software/pupil-capture/#pupil-detector-2d-settings
5. Adjust gain, brightness, and contrast: in the eye window, click Video Source > Image Post Processing.
Let us know how it goes!
Can you suggest an automatic pipeline or code to extract saccades from Pupil Core recorded data?
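A common starting point for this is a velocity-threshold (I-VT) classifier over the exported gaze samples. The sketch below is a generic version of that idea, not a Pupil Labs tool; the threshold value and the choice of coordinates (ideally degrees of visual angle rather than raw positions) are assumptions you would need to adapt:

```python
import math

def detect_saccades(times, xs, ys, vel_thresh=300.0):
    """Naive I-VT: flag samples whose point-to-point velocity exceeds
    vel_thresh (units depend on your coordinates) and group consecutive
    flagged samples into (start_time, end_time) saccade candidates."""
    saccades = []
    in_saccade = False
    start = None
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        v = math.hypot(xs[i] - xs[i - 1], ys[i] - ys[i - 1]) / dt if dt > 0 else 0.0
        if v > vel_thresh:
            if not in_saccade:
                in_saccade, start = True, times[i - 1]
        elif in_saccade:
            saccades.append((start, times[i - 1]))
            in_saccade = False
    if in_saccade:  # saccade still ongoing at the end of the data
        saccades.append((start, times[-1]))
    return saccades
```

Real data would also need low-confidence samples filtered out first, and a minimum-duration criterion to suppress noise spikes.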
Hello, how can I extract the surface identified by the markers in order to overlay the heatmap?
Hi @user-beb3db 🙂 You can use our Surface Tracker plugin to obtain gaze data relative to the surface and generate the heatmap. Please take a look here for further details: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
Is there a size of the Pupil Core glasses for children aged 4?
Hi @user-515175 🙂. We offer a child-sized frame (not listed on the store) suitable for 3-9 year olds at the same cost as the adult-sized headset. If you would like to get a quote or place an order for a child-sized frame, just leave a note in the order form or reach out to [email removed]
Hi, I have a question regarding the message timestamps. I am using the Network API to read the pupil messages in real time, but I have observed that the pupil messages I receive have timestamps which are behind the current pupil time. Here's the code I am using:
```python
while True:
    socket.send_string('t')
    curr_time = float(socket.recv())
    logging.info(f'Before : Current time {curr_time}')

    topic, payload = subscriber.recv_multipart()
    msg = msgpack.loads(payload, raw=False)
    timestamp = msg['timestamp']
    logging.info(f'Msg timestamp {timestamp}')

    socket.send_string('t')
    curr_time = float(socket.recv())
    logging.info(f'After : Current time {curr_time}')
```
And an example of the output I observe:
```
INFO: Before : Current time 8087.21867836
INFO: Msg timestamp 8087.211949
INFO: After : Current time 8087.227443357
```
Is this expected? I'd expect the message timestamp to be in between the `Before` and `After` current times in this case. Maybe I am doing something wrong here?
Pupil data inherits their timestamps from the eye images which are in turn timestamped at their exposure. The time difference between now and the pupil datum corresponds to its processing time, including the transfer over the network.
Note that your way of estimating the current pupil time is not as accurate as it could be. See https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py
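The approach in that helper, roughly: measure the clock request's round-trip time and assume the remote clock was sampled at its midpoint, keeping the estimate from the fastest round trip. A hedged sketch, where `request_pupil_time` stands in for the actual REQ-socket `'t'` request (not shown here):

```python
import time

def estimate_offset(request_pupil_time, trials=10):
    """Estimate (pupil_clock - local_clock) from round-trip midpoints.

    request_pupil_time: callable that requests the remote pupil time
    and returns it as a float (stands in for the ZMQ REQ call).
    """
    best_rtt, best_offset = float("inf"), 0.0
    for _ in range(trials):
        t0 = time.monotonic()
        pupil_t = request_pupil_time()
        t1 = time.monotonic()
        rtt = t1 - t0
        if rtt < best_rtt:
            # assume the remote clock was read at the round trip's midpoint
            best_rtt = rtt
            best_offset = pupil_t - (t0 + rtt / 2)
    return best_offset
```

The current pupil time is then `time.monotonic() + offset`, so a datum's processing latency is that value minus the datum's timestamp.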
Hello Pupil team, this is Amber again. I recently asked about using the nearest older transformation to get surface data at a higher sampling rate, and you mentioned that you would write up an example. I'm not sure if I missed the example, because I couldn't find it on Discord anymore. Any pointers will be very much appreciated!
Hey, sorry for the delay. I will try to get this done today
Here you go! https://gist.github.com/papr/da7b17c165ccbfa6a6835e261607c51e
Hello Pupil team, I am trying to run an experiment using Pupil Core to measure people's gaze while they are doing daily activities, for example cooking, watching TV, etc. What exactly do I need to do when conducting such an experiment? For example, do we need to use printed calibration markers instead of the computer screen for calibration?
Hi, is there a reason for using Pupil Core? Pupil Invisible might be much better suited for this use case. That said, you can use printed markers if you want, but you can similarly use the screen marker calibration. That is because Pupil Core calibrates gaze relative to the scene camera, not the environment.
To estimate the correct delay you need
Thank you so much!! I will work on it and post here if I have follow up questions.
Hello Pupil team, My colleagues and I are wanting to measure aspects of the vestibular ocular reflex with the Pupil Core system; specifically, ocular torsion. I was wondering if that was possible, and if so, which output measure should we look at?
Hi @user-bdb6e6, Pupil Core does not provide cyclotorsion measurements. To collect these measurements, you would need to capture the iris pattern in the eye camera and check subsequent frames for rotations.
While you can change the eye camera resolution to 400 by 400 px, that might not be enough to detect features of the iris, especially if the subject's iris does not have many salient features to match (like freckles or distinctive collagen patterns). Potentially, one could use a cosmetic contact lens with an irregular pattern, as it may have more salient patterns to match, but such a lens needs to be fitted by a professional, and you would have to account for movement of the lens on the eye. Astigmatic (toric) contact lenses have a specific marking that optometrists use to check whether the lens is correctly placed, yet those markings would most probably not be visible to the eye camera, as they are quite subtle.
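To illustrate the "check the next frames for rotations" idea: if you sample iris intensity along a ring around the pupil into a 1-D profile per frame, torsion becomes a circular shift between profiles, which circular cross-correlation can recover. A toy sketch; the ring-sampling step is omitted and all parameters are hypothetical:

```python
import numpy as np

def torsion_shift(profile_ref, profile_cur):
    """Estimate the circular shift s (in samples) such that
    np.roll(profile_ref, s) best matches profile_cur."""
    ref = profile_ref - profile_ref.mean()
    cur = profile_cur - profile_cur.mean()
    # circular cross-correlation computed via FFT
    corr = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(cur))).real
    k = int(np.argmax(corr))
    n = len(ref)
    if k > n // 2:  # map the argmax into the range (-n/2, n/2]
        k -= n
    return -k
```

With 360 samples per ring, one sample of shift corresponds to 1 degree of torsion; real iris images would additionally need pupil-center tracking and noise handling.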